11860815
It is to be appreciated that elements in the figures are illustrated for simplicity and clarity. Common but well-understood elements that are useful or necessary in a commercially feasible embodiment are not shown in order to facilitate a less hindered view of the illustrated embodiments. DETAILED DESCRIPTION A layout diagram of an embodiment 10 of the reconfigurable computing platform is shown in FIG. 1. The reconfigurable computing platform includes three processing reconfigurable computing devices or FPGAs 12A-C surrounded by MiniPODs 14 that convert optical signals to electrical signals. The MiniPODs 14 are located adjacent to the FPGAs 12A-C to minimize the need for long traces. A reconfigurable computing device or system-on-chip (SoC) 16 is positioned in the lower right corner of the board. The components are interconnected via high-speed buses 18. The three large FPGAs 12A-C and the SoC 16 are interfaced to optical fibers using the MiniPODs 14. The FPGAs 12A-C and the SoC 16 are interconnected using high-speed internal links 18. A layout diagram of another embodiment 20 of the reconfigurable computing platform, which incorporates additional MiniPODs 14 and memory (DDR4 DIMM 11), is shown in FIG. 2. The development of the high-data throughput reconfigurable computing platform 10, 20, or global feature extractor (gFEX), was motivated by the need of the ATLAS (a toroidal large hadron collider apparatus) detector and experiment at CERN (the European Organization for Nuclear Research) in Switzerland to perform pattern recognition of data from a calorimeter system in five large hadron collider (LHC) clock cycles, or 125 ns. The ATLAS experiment requires multiple algorithms to be run in parallel to evaluate whether there is an event that may be interesting for further analysis. These requirements led to the development of the high-data throughput, high-performance reconfigurable computing platform. Features of the reconfigurable computing platform include a combination of high-performance reconfigurable computing aligned with high-data throughput for data input and output. Thus, the reconfigurable computing platform provides an elegant and highly powerful solution for use in in-vehicle processing and autonomous systems; aircraft; industrial machinery; medical devices and monitoring; test and simulation; pattern recognition and correlation analysis, including energy grid monitoring and homeland security; cryptography, including data encryption and decryption; and artificial intelligence, including neural networks, natural language processing, and deep learning. The detector generates large amounts of raw data, about 25 megabytes per event multiplied by 40 million beam crossings per second in the center of the detector. This produces a total of 1 petabyte of raw data per second. A trigger system uses this information to identify, in real time, the most interesting events to retain for detailed analysis. There are three trigger levels. The first is based in electronics on the detector, while the remaining two operate primarily on a large computer cluster near the detector. The first-level trigger selects about 100,000 events per second. After the third-level trigger has been applied, a few hundred events remain to be stored for further analysis. This amount of data requires over 100 megabytes of disk space per second, which is at least a petabyte per year. Earlier particle detector read-out and event detection systems were based on parallel shared buses, such as VMEbus or FASTBUS.
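The rate figures above can be checked with simple arithmetic. The short Python sketch below is illustrative only; it merely reproduces the per-event size, crossing rate, and storage numbers quoted in this description.

```python
# Illustrative arithmetic only: reproduces the raw-data-rate figures quoted
# above from the stated per-event size and bunch-crossing rate.

MB = 1e6                       # bytes, using decimal (SI) prefixes throughout
event_size = 25 * MB           # ~25 megabytes of raw data per event
crossing_rate = 40e6           # 40 million beam crossings per second

raw_rate = event_size * crossing_rate                  # bytes per second
print(f"raw data rate: {raw_rate / 1e15:.0f} PB/s")    # -> 1 PB/s

# The first-level trigger keeps ~100,000 events/s; after the third level,
# a few hundred events/s remain, at >100 MB/s to disk.
disk_rate = 100 * MB                                   # bytes per second
print(f"disk usage per year: {disk_rate * 86400 * 365 / 1e15:.1f} PB")  # ~3.2 PB
```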
However, these parallel shared bus architectures cannot satisfy the data requirements of the experiments. Offline event reconstruction is performed on all permanently stored events, turning the pattern of signals from the detector into physical objects, such as jets, photons, and leptons. A market survey was conducted prior to the start of development, which did not uncover any conventional platforms that could match the performance of the high-data throughput reconfigurable computing platform 10, 20. The reconfigurable computing platform is designed to find unforeseen events in massive amounts of data in real time. The reconfigurable computing platform 10, 20 uses the flexibility of FPGAs 12A-12C to achieve high performance computing. Reconfigurable computers are distinguishable from traditional computing platforms by their ability to route data and control flow during processing. The name reconfigurable computing stems from the ability to adapt a hardware configuration during processing by loading new tasks on a programmable fabric. The reconfigurable computer is primarily used for parallel computing, in which many tasks are processed concurrently for the same data set. The reconfigurable computer is highly effective in pattern recognition, artificial intelligence, neural networks, cryptography, signal processing, video processing, and general parallel computing. The reconfigurable computing platform 10, 20 was developed to serve as a fast, real-time computing platform to select events of interest for each collision. The reconfigurable computing platform 10, 20 receives low-resolution data from calorimeters on approximately 300 optical fibers at a rate of 300 terabits per second (Tb/s) and outputs the results of calculations on approximately 100 optical fibers at a rate of 100 terabits per second. The total processing time allotted is 125 ns per collision, which occurs at 40 MHz. During this time, the reconfigurable computing platform 10, 20 concurrently executes at least five different algorithms. To achieve this performance, the reconfigurable computing platform 10, 20 is implemented using the three large FPGAs 12A-C, which are commercially available from Xilinx Corporation as the UltraScale FPGA, and one system-on-chip (SoC) 16, which is also commercially available from Xilinx Corporation as the ZYNQ. The architecture of the reconfigurable computing platform 10, 20 is such that optical signals are converted to and from electrical signals as physically near as possible to the corresponding FPGA 12A-C, 16 to substantially eliminate signal distortion, reflection, and cross-talk in high-speed electrical signals. Accordingly, the distance between an electro-optical converter or transceiver (MiniPOD 14) and the corresponding FPGA 12A-C, to which that electro-optical converter is connected, is configured to be no longer than approximately 6 inches, and preferably less than approximately 4 inches. The optical signals are preferably transferred to and from the optical-electrical converters using PRIZM fiber, which is available from Moog Inc. The optical signals are routed to and from the electro-optical converters using one or more traces on one or more layers of the multilayer board. The electrical traces between the electro-optical converter and the corresponding FPGA 12A-C, to which that electro-optical converter is connected, are maintained to be no longer than approximately 5 inches, and preferably less than approximately 3 inches. In addition, each of the FPGAs is interconnected via high-speed links.
These features enable synchronous operation of the FPGA-based processors in the reconfigurable computing platform 10, 20. In addition, two or more reconfigurable computing platforms can be coupled together by using the input/output optical signals associated with the reconfigurable computing platforms to exchange data, address, and control information between boards in any configuration and/or topology including, but not limited to, a serial configuration, parallel configuration, star configuration, and/or master/slave configuration. On a multiple-FPGA board, clock synchronization is vital to processor operation. Data transfer and synchronization of data processing and output are controlled by a common system clock signal. On the reconfigurable computing platform 10, 20, the system clock signal is synchronized to an external clock signal that enables synchronous board-to-board operations. Thus, the reconfigurable computing platform can be synchronized with one or more external boards. The on-board SoC 16 is used to distribute the system clock signal to the processor FPGAs 12A-C, which enables synchronization and clock distribution flexibility that has not been utilized in conventional designs. To perform this synchronization, the on-board SoC 16 receives an optical signal from the external link and recovers the external clock signal from the optical signal. The external clock signal recovery is performed by analyzing a data pattern in the optical signal and adjusting a phase of the system clock in accordance with the data pattern to maintain the desired phase of the system clock, and thus synchronization. The recovered clock signal is then used as the common system clock signal for the remaining FPGAs 12A-C on the reconfigurable computing platform 10, 20. The reconfigurable computing platform 10, 20 is implemented using an industry advanced telecommunications computing architecture (ATCA) format. The reconfigurable computing platform 10, 20 includes three UltraScale+ FPGAs 12A-C (which are commercially available from Xilinx Corporation as part no. XCVU9P-2FLGC2104E) and the ZYNQ UltraScale+ SoC 16 (which is commercially available from Xilinx Corporation as part no. XCZU19EG-2FFVD1760E). The optical-to-electrical conversion is implemented using thirty-five (35) MiniPODs 14 (which are commercially available from Foxconn Interconnect Technology, Inc.). A total of 312 input fibers at 12.8 Gb/s and 108 output fibers at 12.8 Gb/s are used to interface to incoming and outgoing signals. The UltraScale+ FPGAs 12A-C include eight (8) 25.6 Gb/s links and twelve (12) 12.8 Gb/s links to the ZYNQ UltraScale+ SoC 16. Parallel data buses link pairs of FPGAs 12A-C, 16 running at 1.12 Gb/s using a 560 MHz clock in double data rate (DDR) mode. The ZYNQ UltraScale+ 16 is used to control and configure the three UltraScale+ FPGAs 12A-C, monitor board health, and interface to the gigabit Ethernet and universal asynchronous receiver transmitter (UART). A variety of sensors including, but not limited to, temperature sensors, voltage sensors, current sensors, air flow sensors, power sensors, and the like, are implemented using discrete components and/or the FPGAs 12A-C on the reconfigurable computing platform. These sensors are accessed by one or more of the FPGAs using an I2C bus, which then performs programmable operations in response to the measured value obtained from one or more of the sensors.
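As a rough illustration of the sensor access just described, the following Python sketch polls a temperature sensor over I2C and reacts when a limit is exceeded. The I2C address, register number, LSB scale factor, and the use of the smbus2 library are assumptions for illustration, not details taken from this board; the 85 C limit matches the operating limit cited later in this description.

```python
# A minimal sketch of a programmable sensor-response loop over I2C.
# All device-specific values below are hypothetical.
from smbus2 import SMBus
import time

TEMP_SENSOR_ADDR = 0x48    # hypothetical I2C address of a temperature sensor
TEMP_REG = 0x00            # hypothetical register holding the temperature
SHUTDOWN_LIMIT_C = 85.0    # operating limit cited later in this description

def read_temp_c(bus: SMBus) -> float:
    """Read one temperature sample over I2C (device-specific scaling assumed)."""
    raw = bus.read_word_data(TEMP_SENSOR_ADDR, TEMP_REG)
    return raw * 0.0625    # example LSB weight; depends on the actual part

with SMBus(1) as bus:      # bus number is platform-dependent
    while True:
        t = read_temp_c(bus)
        if t > SHUTDOWN_LIMIT_C:
            # "programmable operation in response to the measured value":
            # here we just log; a real design might throttle or power down.
            print(f"over-temperature: {t:.1f} C")
        time.sleep(1.0)
```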
The reconfigurable computing platform 10, 20 also includes a 16 GB DDR4 dual in-line memory module (DIMM) 11 attached to the ZYNQ UltraScale+ that is used to buffer the processing data. Total power consumption of the reconfigurable computing platform 10, 20 with all FPGAs 12A-C, 16 running at 12.8 Gb/s is typically 300 W, which is substantially less than the 400 W limitation in the ATCA standard. The board is implemented as a 30-layer printed circuit board, including twelve (12) signal layers, fourteen (14) ground layers, and four (4) power layers. The insulating material for the board is Megtron-6. The routing of signals is carefully laid out to maximize signal integrity and reduce signal cross-talk. For the differential pairs, except those associated with the DDR4 DIMM 11 related signals, the impedance is controlled to 100 Ohms; the differential pairs associated with the DDR4 DIMM related signals are configured to 86 Ohms and 66 Ohms. The single-ended lines, except those associated with the DDR4 DIMM, are configured to be 50 Ohms, and the single-ended lines associated with the DDR4 DIMM are configured to be 39 Ohms. These impedances are used to accommodate design guidelines associated with various components, such as the DDR4 DIMM, used in the reconfigurable computing platform. The impedances are established by executing computer-aided design simulations of the board layout and wiring during fabrication of the board, which are, for example, executed on systems available from Cadence Corporation, and tested to ensure that the actual impedances following fabrication of the board meet the design guidelines. For the parallel data buses, a length constraint and the 5-W rule are applied in each data group (24 low-voltage differential signaling (LVDS) pairs), which makes it possible to operate at 1.12 Gb/s. The 5-W rule minimizes crosstalk between high-speed parallel traces by requiring that the space separating the parallel traces be wider than five (5) times the height of either of the parallel traces when measured from the top of that trace to a reference plane, which is the plane on which that trace is disposed. For the 12.8 Gb/s and 25.6 Gb/s traces, back-drill technology is used to minimize the stub and guarantee signal integrity. The back-drilling technology is used to remove an unused portion, or stub, of a copper barrel from a thru-hole in a printed circuit board. When a high-speed signal travels between printed circuit board (PCB) layers through a copper barrel, the high-speed signal becomes distorted. If the signal layer usage results in a stub being present, and the stub is long, then that distortion becomes significant. These stubs can be removed by re-drilling those holes, after the fabrication is complete, with a slightly larger drill. The holes are back drilled to a controlled depth, close to, but not touching, the last layer used by the via. Interconnection between FPGA lines requires the use of capacitors to decouple the DC voltage. On low-density boards, the capacitors are mounted between the FPGAs on the same side on which the FPGAs are placed or disposed. For high-density boards, this is not possible given the lack of physical space. A method that allows positioning the capacitor on the rear side of a multilayer printed circuit board has been developed to alleviate this issue.
The main difficulty in positioning the capacitor on the rear side of a multilayer printed circuit board is to preserve the line impedance for the high-speed signals while they are routed through multiple layers. However, line impedance can be maintained by using the back-drilling technique to route signals vertically through the layers followed by horizontal traces in signal layers. This back-drilling technique is implemented on the reconfigurable computing platform 10, 20 to interconnect the processor FPGAs 12A-C, as well as to connect the SoC 16 to the FPGAs 12A-C. The back-drilling technique is used to remove stubs created in through-board vias during fabrication of the board. These stubs, if left on the board, would add unwanted reflections to signals as the signals traversed the vias to the end of the stubs and back to the intended destination(s) and source(s) of the signals. As shown in FIGS. 1 and 2, the optical-to-electrical converters 14 are positioned as close as possible to the corresponding FPGA(s), with which the optical-to-electrical converter is coupled, to minimize signal distortion. The reconfigurable computing platform 10, 20 includes four (4) primary 1-ounce power planes that distribute a high-current electrical voltage to the FPGAs while minimizing voltage drop. Power is distributed to the FPGAs by first distributing a common higher voltage, such as 48 V, around the periphery of the printed circuit board, followed by converting, and optionally regulating, this common higher voltage to one or more specific required voltage values, such as 0.95 V, 1.0 V, 1.2 V, 1.8 V, 2.5 V, 3.3 V, and/or 12 V, as physically near as possible (for example, less than approximately 4 inches) to the intended destination of the required voltage value, such as an FPGA 12A-C, 16 and/or other component with specific and/or substantial power requirements. This technique differs from conventional techniques utilizing single-point voltage generation that lead to substantial losses in power due to ohmic resistance in printed circuit lines. This technique also satisfies high-current and high-power distribution requirements without significant losses, which is well-suited for applications requiring multiple high-power processors on a single printed circuit board. Power planes are used for the power rails and are, for example, approximately 6″×6″ in dimension with 1 oz. thickness copper when used to provide the 0.95 V, 1.0 V, and 1.2 V power rails to the FPGAs 12A-C. The 48 V power plane is connected to the 12 V DC/DC converter or regulator. The 12 V power provided by the 12 V DC/DC converter or regulator is then coupled to the 12 V power plane, which provides power to the remaining regulators. In addition, a remote sense function, which is a key feature that was specifically selected for implementation in the reconfigurable computing platform, is used to compensate for the voltage drop caused by the high-power dissipation of the reconfigurable computing platform 10, 20. The reconfigurable computing platform 10, 20 is one of several hardware modules designed to maintain a trigger acceptance rate with an increasing large hadron collider (LHC) luminosity and increasing pile-up conditions. The reconfigurable computing platform 10, 20 is used to identify patterns of energy associated with a hadronic decay of high momentum Higgs, W and Z bosons, top quarks, and exotic particles in real time at the 40 MHz LHC bunch crossing rate.
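The benefit of the distributed power conversion described above can be seen with a back-of-the-envelope I²R calculation. In the sketch below, the load power and trace resistance are invented round numbers, not measurements from this board; only the qualitative conclusion matters.

```python
# Back-of-the-envelope comparison, illustrative only, of distributing 48 V and
# converting near each load versus distributing a low voltage from one point.

P_load = 75.0     # W delivered to one FPGA (hypothetical)
R_trace = 0.005   # ohms of copper between converter and load (hypothetical)

for v in (48.0, 1.0):
    i = P_load / v             # current drawn at this distribution voltage
    loss = i**2 * R_trace      # ohmic (I^2 R) loss in the trace
    print(f"{v:>4.0f} V rail: {i:6.1f} A, trace loss {loss:8.3f} W")

# 48 V rail:    1.6 A, trace loss ~0.012 W  -- negligible
#  1 V rail:   75.0 A, trace loss ~28 W     -- why low voltages are generated
#                                              close to the load
```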
The reconfigurable computing platform 10, 20 also receives coarse-granularity information, which is represented by Δη×Δφ=0.2×0.2 gTowers, from calorimeters on 276 optical fibers. The gTower is a unit of area associated with the detector, and Δη and Δφ represent orthogonal directions in a two-dimensional area. The reconfigurable computing platform 10, 20 identifies large-radius jets, such as Lorentz-boosted objects, using wide-area jet algorithms refined by additional information. The high-pT bosons and fermions are a key component. The trigger system is designed for narrow jets with limited acceptance for large objects. The acceptance for large-radius jets is substantially enhanced by the inclusion of the reconfigurable computing platform 10, 20. The architecture of the reconfigurable computing platform 10, 20 permits event-by-event local pileup suppression for large-R objects using baseline subtraction techniques. As shown in FIG. 3, a calorimeter system 30 includes three major subsystems: a cluster processor subsystem (CP) 32 that includes cluster processor modules (CPMs) and common merger extended modules (CMXs); a jet/energy processor subsystem (JEP) 36 that includes jet/energy modules (JEMs) and CMXs 38; and a pre-processor subsystem 40 with pre-processor modules (PPMs). Three feature identification systems include an electron feature extractor (eFEX) 42, a jet feature extractor (jFEX) 44, and the gFEX 46. The eFEX 42 and jFEX 44 provide functions similar to those of the CPMs and JEMs but benefit from a much finer granularity. Each system includes multiple modules that operate on limited regions of the calorimeter. In contrast, the gFEX 46 accesses the entire calorimeter data available in a single module, and thus enables the use of full-scan algorithms. With these benefits, the gFEX 46 maximizes flexibility of the trigger, avoids data duplication, and computes global quantities without regard to detector boundaries. The electromagnetic calorimeter provides both analog signals for the CP 32 and JEP 36 and digitized data for the FEXs 42, 44, 46. The hadronic calorimeter sends analog signals that are digitized in the pre-processor 40 and then transmitted to the FEXs 42, 44, 46 through an optical patch-panel. After adding the gFEX 46, the acceptance of two or more subjets is recovered, and the resolution is nearly the same as that of one subjet. The LHC is the first system that provides high enough energy to produce large numbers of boosted top quarks. The boosted top quark is a particle that can be observed and monitored and can be selected using an algorithm implemented on the gFEX 46 with substantially greater efficiency than that exhibited in conventional solutions. The decay products of these top quarks are confined to a cone in the top quark flight direction and can be clustered into a single jet. Top quark reconstruction then amounts to analyzing the structure of the jet and looking for subjets that are kinematically compatible with top quark decay. The gFEX receives coarse-granularity (0.2×0.2 gTower) information from the calorimeters on 276 optical fibers. Large FPGAs for data processing, a combined FPGA and SoC running an embedded system for control and monitoring, and several Avago MiniPODs for data inputs and outputs are utilized on the gFEX 46. One feature of the gFEX 46 is that it receives data from the entire calorimeter, enabling the identification of large-radius jets and the calculation of whole-event observables.
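To make the gTower bookkeeping concrete, the following sketch bins (η, φ) coordinates into 0.2×0.2 towers and accumulates transverse energy per tower. The dictionary layout and the omission of φ wrap-around are simplifications for illustration, not the gFEX firmware's actual data format.

```python
# Illustrative gTower binning: map (eta, phi) deposits onto a 0.2 x 0.2 grid.
# Phi wrap-around at 2*pi is omitted for brevity.
import math

D_ETA = D_PHI = 0.2   # gTower granularity quoted above

def gtower_index(eta: float, phi: float) -> tuple[int, int]:
    """Map an (eta, phi) coordinate to integer gTower indices."""
    return (math.floor(eta / D_ETA), math.floor(phi / D_PHI))

towers: dict[tuple[int, int], float] = {}   # (ieta, iphi) -> summed E_T

def fill(eta: float, phi: float, et: float) -> None:
    """Accumulate a transverse-energy deposit into its gTower."""
    key = gtower_index(eta, phi)
    towers[key] = towers.get(key, 0.0) + et

fill(0.13, 1.05, 12.5)   # example deposit
fill(0.17, 1.11, 3.2)    # lands in the same 0.2 x 0.2 tower
print(towers)            # {(0, 5): 15.7}
```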
The FPGAs 12A-C shown in FIGS. 1-2 each have 2π azimuthal (φ) coverage for a slice in pseudorapidity (η) and execute all feature identification algorithms. The FPGAs 12A-C, 16 communicate with each other via low-latency GPIO links, while input and output to the MiniPODs 14 are linked via multi-gigabit transceivers (MGTs). On-board electrical MGTs provide links between the FPGAs 12A-C and the SoC 16 for data transmission and control. The reconfigurable computing platform or gFEX 46 is placed in an ATCA shelf so that the board can occupy two slots if needed: one for the board and one for cooling with, for example, a large heat sink, fiber routing, and the like. Cooling of the high-power processor FPGAs 12A-C on a single printed circuit board is a challenge, especially when the FPGAs 12A-C are surrounded by high-power optical-to-electrical converters 14. A new component placement and distribution technique is implemented in the reconfigurable computing platform 10, 20 that enables optimal forced air flow circulation to keep each component at low operational temperatures. The components are placed on the printed circuit board of the reconfigurable computing platform 10, 20 based on their height differences to define a geometry such that air flow channels are formed across the board. These channels ensure a continuous unobstructed flow of air across components that enables operating temperatures of these components to remain below 85° C., which ensures component lifetimes greater than 10 years at continuous operation. A thermal map of the reconfigurable computing platform operating at about 300 W is provided in FIG. 20, which shows this feature. Specifically, air flow channels 164, 166, 168, 170, 172, 174, 176, 178 are created as a result of placing components on the multilayer board based on their height differences. In addition, each FPGA 12A-C, 16 on the reconfigurable computing platform 10, 20 is coupled to a combination of fan and heat sink, and thus there are four fans on the reconfigurable computing platform 10, 20 to provide sufficient cooling capacity. A functional block diagram of the reconfigurable computing platform 10 shown in FIG. 1 is shown in FIG. 4, which includes the four FPGAs 12A-C, 16 implemented on the board. Three Virtex UltraScale FPGAs are used as processor FPGAs 12A-C, and one ZYNQ FPGA 16 is used for TTC clock recovery and distribution as well as control and monitoring. The processor FPGAs 12A-C process data from the electromagnetic and hadronic calorimeters via fiber optical links and on-board MiniPOD 14 receivers (shown in FIG. 1). After processing, the real-time trigger data from each processor FPGA 12A-C is sent through one 12-channel MiniPOD 14 transmitter. The processor FPGA-1 12A and FPGA-2 12B have similar functionalities in that both receive 96 optical inputs. The input data is processed by the feature identification algorithms implemented in the FPGA, and the trigger information is sent to each processor FPGA 12A-C directly. The processor FPGA-3 12C receives 84 optical links from the calorimeter, which meets the design requirement of 276 total input optical links. The FPGA-3 12C also functions as an aggregator that receives data from the other two processor FPGAs 12A-B, and then sends trigger data to a front-end link exchange (FELIX). The ZYNQ FPGA 16 recovers the 40 MHz timing, trigger, and control (TTC) clock through the FELIX link. The recovered TTC clock is the source clock of the high-performance clock generator device (that is commercially available from Silicon Laboratories Inc. as part no.
SI5345) with jitter-cleaning capability, which generates the reference clocks with the required frequencies for the MGT links. The jitter cleaning function of the SI5345 guarantees a link operational rate above 10 Gb/s. The jitter-cleaning capability is implemented using a crystal oscillator or an external reference clock source connected to the XAXB pins of the SI5345, which is used as a jitter reference by a low loop bandwidth (BW), and thus jitter attenuation, feature. The FELIX 64 provides TTC clock information to the reconfigurable computing platform 10 using a GBT mode link and receives data from the reconfigurable computing platform 10 using FULL mode links. The ZYNQ 16 on the reconfigurable computing platform 10 recovers the TTC clock from the GBT link and sends the recovered TTC clock to the jitter-cleaning clock generator SI5345 to improve clock quality, which generates the reference clock for the FULL mode links to the FELIX 64. With this configuration, both the GBT mode link at 4.8 Gb/s and the FULL mode link at 9.6 Gb/s are successfully established between the FELIX and the reconfigurable computing platform 10. In the embodiment shown in FIGS. 1 and 4, the reconfigurable computing platform 10 is a 26-layer board with 28 MiniPODs. The reconfigurable computing platform 10 is designed and manufactured with the low-loss material Megtron-6. Back-drilling technology is adopted in the fabrication to minimize the influence of stubs on high-speed link performance. Compared to blind via technology, back drilling is more cost effective and results in greater signal fidelity. The processor FPGA-1 12A and FPGA-2 12B both receive 96 optical inputs, the FPGA-3 12C receives 84 optical links from the calorimeter, and the ZYNQ 16 is used to control and monitor the board. A functional block diagram of the architecture of the reconfigurable computing platform 20 shown in FIG. 2 is shown in FIG. 5. As compared with the reconfigurable computing platform 10 shown in FIG. 4, the reconfigurable computing platform 20 uses a Virtex UltraScale+ for the three processor FPGAs 82A-C and replaces the ZYNQ 66 with a ZYNQ UltraScale+ FPGA 86. To take advantage of the resources of the ZYNQ UltraScale+ 86, the interfaces of the processor FPGA-3 82C are moved to the ZYNQ UltraScale+ 86. With the Virtex UltraScale+ FPGAs 82A-C, each of the transceivers is a GTY, and there are significantly more 25.6 Gb/s links on-board. All three processor FPGAs 82A-C send trigger data to the ZYNQ UltraScale+ 86 via eight 25.6 Gb/s GTY links and twelve 12.8 Gb/s GTH links. The ZYNQ UltraScale+ 86 transfers this trigger data to the FELIX 84 through 12 channels of MGT links. After removing the FELIX connection, the processor FPGA-3 82C receives 100 optical links from the calorimeter and transmits real-time data via 24 optical links to L1Topo 88, as do the remaining two processor FPGAs 82A-B. With increased optical inputs and outputs, the total number of MiniPODs is increased from 28 to 35. These improvements provide better compatibility for the reconfigurable computing platform 20 to be used in the high-luminosity LHC (HL-LHC), which includes as many as 24 output fiber links. With more MiniPODs 14 and the incorporation of the ZYNQ UltraScale+ FPGA 86, the board stack-up is increased from 26 to 30 layers. The same PCB material and back-drilling technologies are used as discussed above in relation to the reconfigurable computing platform 10. The fully assembled reconfigurable computing platform 20 is shown in FIG. 6. All four FPGAs 82A-C, 86 are configurable via JTAG and QSPI flash.
The IBERT test was performed to verify that all MGT links were stable at 12.8 Gb/s without any errors being detected. The twenty-four (24) on-board electrical GTY links from the three processor FPGAs 82A-C to the ZYNQ UltraScale+ 86 are stable at 25.6 Gb/s. An eye diagram of the optical link at 12.8 Gb/s is shown in FIG. 7 for the optical link from a processor UltraScale+ FPGA GTY transmitter to a GTY receiver passing through MiniPODs and fibers. The open area in this eye diagram is 10,048 at 12.8 Gb/s. Eye diagrams for the 12.8 Gb/s and 25.6 Gb/s on-board 8-inch electrical links are shown in FIGS. 8 and 9, respectively. FIG. 8 shows an eye diagram of the electrical link from a processor UltraScale+ FPGA GTY transmitter to a ZYNQ UltraScale+ GTY receiver. The open area in this eye diagram is 13,922 at 12.8 Gb/s. FIG. 9 shows an eye diagram of the on-board electrical link from a processor UltraScale+ FPGA GTY transmitter to a ZYNQ UltraScale+ GTY receiver. The open area in this eye diagram is 8,576 at 25.6 Gb/s. In telecommunications, an eye pattern, also known as an eye diagram, is an oscilloscope display in which a digital signal from a receiver is repetitively sampled and applied to the vertical input, while the data rate is used to trigger the horizontal sweep. It is so called because, for several types of coding, the pattern looks like a series of eyes between a pair of rails. It is a tool for the evaluation of the combined effects of channel noise and inter-symbol interference on the performance of a baseband pulse-transmission system. It is the synchronized superposition of all possible realizations of the signal of interest viewed within a particular signaling interval. Several system performance measures can be derived by analyzing the eye diagram. If the signals are too long, too short, poorly synchronized with the system clock, too high, too low, too noisy, or too slow to change, or have too much undershoot or overshoot, this can be observed from the eye diagram. An open eye pattern corresponds to minimal signal distortion. Distortion of the signal waveform due to inter-symbol interference and noise appears as closure of the eye pattern. A top view of another embodiment of the reconfigurable computing platform 100 includes a power input source of 48 V that is converted to 12 V by one DC-DC quarter-brick module. Thirteen (13) LTM4630As with 26 A current capability are used to step down the 12 V to 0.95 V, 1.0 V, 1.2 V, 1.8 V, 2.5 V, and 3.3 V. To meet the large current requirements of the Xilinx FPGAs 102A-C, each Virtex UltraScale FPGA is powered using three (3) LTM4630A voltage regulators (which are available from Linear Technology Corporation). To protect and manage the power sequence of the board, two power monitoring and management devices (which are commercially available from Analog Devices, Inc. as part no. ADM1066) are used. The ADM1066 is programmable through the I2C bus, and thus the power sequence can be defined based on over-voltage and under-voltage requirements. FIG. 21 shows the power sequence of the reconfigurable computing platform 10, 20. The various regulators 180 are shown along the left-hand side of the diagram connected to a series of arrows 182 that terminate in one of five columns marked first 184, second 186, third 188, fourth 190, and fifth 192. The column 184, 186, 188, 190, 192 in which a particular arrow terminates indicates the relative time at which power is applied to the regulator 180 corresponding to that arrow in the power sequence.
For example, regulator 180A is connected to arrow 182A, which terminates in column 184, indicating that power will be applied to the regulator 180A first in the power sequence. There are nine groups of parallel data buses on the reconfigurable computing platform 100. Six groups of parallel data buses are used to communicate between each pair of Virtex FPGAs 102A-C, and the remaining three groups of parallel data buses are used to communicate between the ZYNQ 104 and each of the Virtex FPGAs 102A-C. For each of the nine groups of parallel data buses, the data rate is 1.12 Gb/s for each data line. For the multi-gigabit transceiver (MGT) design, three types of MGTs are used on the reconfigurable computing platform 100: GTX on the ZYNQ 104, and GTH and GTY on the Virtex FPGAs 102A-C. For the MGT connections, 280 links are connected to the MiniPOD receivers 14 and 40 links are connected to the MiniPOD transmitters 14. In addition, there are 6 GTY on-board connections between processor FPGAs 102A and 102C, and FPGAs 102A and 102B, respectively, which can operate at up to 25.6 Gb/s. The ZYNQ 104 is used to recover the TTC clock and control the reconfigurable computing platform 100, which includes monitoring, configuration, facilitating remote upgrades, and the like. The Gigabit Ethernet, QSPI interface, 4 Gb DDR3 memories, I2C interface, UART, and SD card interface are implemented with the Xilinx processing system (PS). With the IDELAYE3, which is an IP core provided by Xilinx, the delay in each of the data lines in the UltraScale FPGA is adjustable. The adjustment has 511 steps, with each step representing approximately 9.8 ps of delay. The parallel data buses operate at up to 560 MHz or 1.12 Gb/s. The optical and electrical links are stable at 12.8 Gb/s, and the GTY electrical links are stable at up to 25.6 Gb/s, without errors in response to performing the IBERT test to 1E-15. FIG. 11 and FIG. 12 show eye diagrams at 12.8 Gb/s and 25.6 Gb/s for the GTY, respectively. A 40 MHz TTC clock is provided through the GBT link at 4.8 Gb/s, and data from the reconfigurable computing platform 100 is provided at 9.6 Gb/s. The reconfigurable computing platform 100 recovers the 40 MHz TTC clock from the GBT link and uses the on-board jitter-cleaning device (which is commercially available from Silicon Laboratories Inc. as part no. Si5345) to improve the clock quality. Using the recovered TTC clock, the GTH operates at link speeds of 12.8 Gb/s and 25.6 Gb/s. The corresponding eye diagrams are shown in FIG. 13 and FIG. 14 for 12.8 Gb/s and 25.6 Gb/s, respectively. A functional block diagram of the architecture for the reconfigurable computing platform 100 shown in FIG. 10 is shown in FIG. 15 and includes a single module with FPGAs 102A-C for data processing and a combined FPGA and CPU system-on-chip (hybrid FPGA) 104 for control and monitoring. A feature of the reconfigurable computing platform 100 is that it receives data from the entire calorimeter, enabling the identification of large-radius jets and the calculation of whole-event observables. Each processor FPGA 102A-C provides 2π azimuthal (φ) coverage for a slice in pseudorapidity (η) and executes all feature identification algorithms. The processor FPGAs 102A-C communicate with each other via low-latency GPIO data buses, while the input and output interface is provided by multi-gigabit transceivers (MGTs). The reconfigurable computing platform 100 is a customized ATCA module based on the PICMG® 3.0 Revision 3.0 specification.
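The programmable delay numbers quoted above imply a total adjustable range of roughly 5 ns, which spans several bit periods at the 1.12 Gb/s bus rate. The following lines, illustrative arithmetic only, make that explicit.

```python
# Quick arithmetic on the IDELAYE3 delay taps and the parallel-bus bit period
# described above; figures are taken directly from the text.

step_ps = 9.8         # approximate delay per tap
steps = 511           # number of taps
bit_rate = 1.12e9     # parallel bus data rate, b/s (560 MHz clock, DDR)

total_delay_ns = steps * step_ps / 1000
bit_period_ns = 1e9 / bit_rate

print(f"total adjustable delay: {total_delay_ns:.2f} ns")    # ~5.01 ns
print(f"bit period at 1.12 Gb/s: {bit_period_ns:.3f} ns")    # ~0.893 ns
# The adjustable range spans several bit periods, which is what allows each
# data line to be aligned individually despite trace-length differences.
```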
The reconfigurable computing platform 100 is designed to verify the functionalities of selected technical solutions, which include the power-on sequence, power rail monitoring, clock distribution, MGT link speed, and high-speed parallel GPIOs. Another embodiment of the reconfigurable computing platform 150 is shown in FIG. 16, in which one hybrid FPGA (ZYNQ) 154 and one processor FPGA 152 are included in the reconfigurable computing platform 150, as well as MiniPODs 14, MicroPODs, power modules, and high-speed parallel GPIOs. The reconfigurable computing platform 150 uses a clock recovered from a FELIX link, and the FELIX receives clock information from the timing, trigger, and control (TTC) source. Since the recovered clock must be improved for high-speed links, especially for links running at speeds above 10 Gb/s, a clock generator is implemented on the reconfigurable computing platform 150. The input clock of the clock generator is the recovered clock in the reconfigurable computing platform 150, and the frequency of the output clocks is 40 MHz. The phase noise of the clock generator chips is shown in FIG. 17. As the test results show, the clock generator Si5345 performs jitter cleaning to improve the recovered clock quality. The reconfigurable computing platform 150 is a 26-layer board that includes 26 MiniPODs 14 mounted thereon. Back-drilling technology is adopted in the fabrication of the reconfigurable computing platform 150 to reduce the influence of stubs on high-speed link performance. The reconfigurable computing platform 150 receives shaped analog pulses from the electromagnetic and hadronic calorimeters, digitizes and synchronizes these analog pulses, identifies the bunch collision from which each pulse originates, scales the digital values to yield transverse energy (TE), and prepares and transmits the data to downstream elements. The electromagnetic calorimeter provides both analog signals (for the CP and JEP) and digitized data (for the gFEXes). The hadronic calorimeter sends analog signals, which are digitized on the reconfigurable computing platform 150 and transmitted optically to the FEXes through an optical fiber. The eFEX and jFEX operate in parallel with the CP and JEP. The reconfigurable computing platform 150 receives data from the electromagnetic and hadronic calorimeters using optical fibers. For most of the detectors, the so-called gTowers correspond to an area of Δη×Δφ=0.2×0.2. There are 276 MGT signals from the calorimeters, which are converted from optical signals on the reconfigurable computing platform 150. The control and clock signals are inputs, and the combined data is transmitted using eight MGTs. Real-time data is provided to the L1 topological trigger (L1Topo) by the three processor FPGAs with 12 MGTs. The data received by processor FPGA 152 is sent to FPGA 154 and then combined with the data from processor FPGA 152. Core trigger algorithms are implemented in the firmware of the processor FPGA 152. The input data, after deserialization, is organized into calibrated gTowers in a gTower-builder step. A seeded simple-cone jet algorithm is used for large-area, non-iterative jet finding. Seeds are defined by gTowers over a configurable ET threshold. An illustration of the seeds identified in an event is shown in FIG. 18, in which a seeding step for identifying large-R jets by selecting towers over a threshold ET value is shown on the left, and summing the energy around the seeds within ΔR≤1.0 is shown on the right of the arrow 180. The gTower ET in a circular region surrounding the seeds is summed.
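The seeded simple-cone step just described can be summarized in a few lines. The sketch below is a simplified illustration with invented numbers: it selects seeds over an ET threshold and sums tower ET within ΔR≤1.0, but omits the cross-FPGA partial sums discussed next and the event-by-event pileup subtraction described later.

```python
# Simplified seeded simple-cone jet finding over a list of gTowers.
# Towers are (eta, phi, et) tuples; thresholds and inputs are illustrative.
import math

def delta_r(eta1, phi1, eta2, phi2):
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi  # wrap phi difference
    return math.hypot(eta1 - eta2, dphi)

def find_jets(towers, seed_threshold=5.0, radius=1.0):
    """Return (seed position, summed E_T) for each tower over the threshold."""
    seeds = [t for t in towers if t[2] > seed_threshold]
    jets = []
    for s_eta, s_phi, _ in seeds:
        et = sum(et for eta, phi, et in towers
                 if delta_r(eta, phi, s_eta, s_phi) <= radius)
        jets.append(((s_eta, s_phi), et))
    return jets   # jets may overlap, as the text notes

towers = [(0.1, 1.0, 9.0), (0.3, 1.2, 4.0), (2.0, 3.0, 1.0)]
print(find_jets(towers))  # one seed at (0.1, 1.0); the cone sums the nearby tower
```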
Portions of the jet area may extend into an η region on a neighboring processor FPGA. Part of the energy summation, therefore, takes place on that FPGA, necessitating the transfer of seed information with low-latency parallel GPIOs. These partial sums are then sent to the original FPGA and included in the final ET of the large-R jets, as shown in FIG. 19, which shows the final large-R jets. Each jet is stored on the processor FPGA that produced the seed, which is indicated by the reference designations corresponding to that FPGA as shown in FIGS. 15, 18, 19. The jets are allowed to overlap, which enhances the efficiency for events with complex topologies, in which multiple energy depositions are close together, as is typically found in events containing boosted objects. The architecture of the reconfigurable computing platform 150 permits event-by-event local pileup suppression for large-R objects using baseline subtraction techniques. Pileup subtraction is performed using the energy density ρ measured on the gTowers within each processing region and is calculated on an event-by-event basis. The baseline energy subtracted from each jet is determined by the product of the area of each jet and the energy density from the associated region. In the past, this baseline subtraction used an average value for all events, but it is now calculated for each event in accordance with the disclosed embodiments. A CACTUS/IPbus interface is provided for high-level control of the reconfigurable computing platform 150, which allows algorithmic parameters to be set, modes of operation to be controlled, and spy memories to be read. The IPbus protocol is implemented in the hybrid FPGA 154, including the standard firmware modified to run on the FPGA 154 and the software suite from CACTUS for a Linux instance running on an ARM processor. The hybrid FPGA 154 implements an intelligent platform management controller (IPMC) to monitor the voltage and current of the power rails on the reconfigurable computing platform. The hybrid FPGA 154 also monitors the temperature of the FPGA 152 via embedded sensors, and of any areas of dense logic via discrete sensors. This data is transmitted to an external monitoring system by the hybrid FPGA 154. If any board temperature exceeds a programmable threshold, the IPMC powers down the board payload, which includes components not on the management power supply. The thresholds at which this function is activated are set above the levels at which the detector control system (DCS) powers down the module. Thus, this mechanism activates if the DCS fails, which may occur, for example, if there is a sudden or rapid rise in temperature to which the DCS cannot respond in sufficient time. Two negative 48 V inputs are ORed and inverted to one +48 V by an ATCA board power input module (PIM 400). The +48 V power is stepped down to 12 V by a DC-DC converter, and the remaining power rails, such as 1.0 V, 1.2 V, 1.8 V, 2.5 V, and 3.3 V, are generated from 12 V with different DC-DC power modules. There are two types of optical transceivers (MiniPODs and MicroPODs) and two different MGTs (GTX and GTH) on the reconfigurable computing platform 150 board. Thus, each MGT is connected to two types of optical transceivers. Moreover, the GTH to GTH loopback, GTX to GTX loopback, GTH to GTX loopback, and GTX to GTH loopback are also included on the reconfigurable computing platform 150 board. High-speed parallel GPIOs are used to transfer data between the FPGAs 152, 154. The GPIOs operate at 480 Mb/s with a 50-bit width.
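For context on the GPIO timing margins reported in the next paragraph, the following illustrative arithmetic converts the quoted percentages of a half cycle at 480 MHz into absolute windows, assuming double-data-rate signaling (one bit per half cycle) as elsewhere on the board.

```python
# Illustrative arithmetic: absolute size of the GPIO stable windows quoted
# below, expressed as fractions of a half cycle of the 480 MHz bus clock.
clock_hz = 480e6
half_cycle_ns = 1e9 / clock_hz / 2    # ~1.042 ns; one bit period under DDR

for fraction in (0.75, 0.67):
    print(f"{fraction:.0%} of a half cycle = {fraction * half_cycle_ns:.2f} ns")
# 75% of a half cycle = 0.78 ns
# 67% of a half cycle = 0.70 ns
```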
Three different 50-bit GPIOs are used. The first is from processor FPGA high-performance (HP) banks to HP banks with an LVDS differential interface, the second is from processor FPGA HP banks to ZYNQ HP banks with an LVDS differential interface, and the third is from processor FPGA HP banks to HP banks with a single-ended HSTL interface. When the 80-channel GTHs of the processor FPGA 152 and the 16-channel GTXs of the ZYNQ 154 are turned on, the links are stable at 12.8 Gb/s with no error bit detected and a bit error rate less than 10^-15. The GTH provides better performance than the GTX, and the MiniPODs 14 perform approximately the same as the MicroPODs. The data buses are stable at 960 Mb/s. The stable range for the processor FPGA HP banks to processor FPGA HP banks LVDS and HSTL interfaces is approximately 0.78 ns, which is 75% of a half cycle at 480 MHz. For the processor FPGA HP banks to ZYNQ HP banks LVDS interface, the stable range is about 0.70 ns, which is 67% of a half cycle at 480 MHz. The initial motivation for developing the reconfigurable computing platform was provided by the ATLAS experiment, which is one of seven (7) particle detector experiments constructed at the large hadron collider (LHC) at CERN (European Organization for Nuclear Research). The experiment is designed to take advantage of the unprecedented energy available at the LHC and observe phenomena related to highly massive particles that were not observable using earlier lower-energy accelerators. ATLAS is designed to search for evidence of particle physics theories beyond the standard model. One or more embodiments disclosed herein, or a portion thereof, may make use of software running on a computer or workstation. By way of example only and without limitation, FIG. 22 is a block diagram of an embodiment of a machine in the form of a computing system 900, within which is a set of instructions 902 that, when executed, cause the machine to perform any one or more of the methodologies according to embodiments of the invention. In one or more embodiments, the machine operates as a standalone device; in one or more other embodiments, the machine is connected (e.g., via a network 922) to other machines. In a networked implementation, the machine operates in the capacity of a server or a client user machine in a server-client user network environment. Exemplary implementations of the machine as contemplated by embodiments of the invention include, but are not limited to, a server computer, client user computer, personal computer (PC), tablet PC, personal digital assistant (PDA), cellular telephone, mobile device, palmtop computer, laptop computer, desktop computer, communication device, personal trusted device, web appliance, network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. The computing system 900 includes a processing device(s) 904 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), program memory device(s) 906, and data memory device(s) 908, which communicate with each other via a bus 910. The computing system 900 further includes display device(s) 912 (e.g., liquid crystal display (LCD), flat panel, solid state display, or cathode ray tube (CRT)).
The computing system 900 includes input device(s) 914 (e.g., a keyboard), cursor control device(s) 916 (e.g., a mouse), disk drive unit(s) 918, signal generation device(s) 920 (e.g., a speaker or remote control), and network interface device(s) 924, operatively coupled together, and/or with other functional blocks, via bus 910. The disk drive unit(s) 918 includes machine-readable medium(s) 926, on which is stored one or more sets of instructions 902 (e.g., software) embodying any one or more of the methodologies or functions herein, including those methods illustrated herein. The instructions 902 may also reside, completely or at least partially, within the program memory device(s) 906, the data memory device(s) 908, and/or the processing device(s) 904 during execution thereof by the computing system 900. The program memory device(s) 906 and the processing device(s) 904 also constitute machine-readable media. Dedicated hardware implementations, such as but not limited to ASICs, programmable logic arrays, and other hardware devices, can likewise be constructed to implement methods described herein. Applications that include the apparatus and systems of various embodiments broadly comprise a variety of electronic and computer systems. Some embodiments implement functions in two or more specific interconnected hardware modules or devices with related control and data signals communicated between and through the modules, or as portions of an ASIC. Thus, the example system is applicable to software, firmware, and/or hardware implementations. The term “processing device” as used herein is intended to include any processor, such as, for example, one that includes a CPU (central processing unit) and/or other forms of processing circuitry. Further, the term “processing device” may refer to more than one individual processor. The term “memory” is intended to include memory associated with a processor or CPU, such as, for example, RAM (random access memory), ROM (read only memory), a fixed memory device (for example, hard drive), a removable memory device (for example, diskette), a flash memory, and the like. In addition, the display device(s) 912, input device(s) 914, cursor control device(s) 916, signal generation device(s) 920, etc., can be collectively referred to as an “input/output interface,” and are intended to include one or more mechanisms for inputting data to the processing device(s) 904, and one or more mechanisms for providing results associated with the processing device(s). Input/output or I/O devices (including, but not limited to, keyboards (e.g., alpha-numeric input device(s) 914), display device(s) 912, and the like) can be coupled to the system either directly (such as via bus 910) or through intervening input/output controllers (omitted for clarity). In an integrated circuit implementation of one or more embodiments of the invention, multiple identical dies are typically fabricated in a repeated pattern on a surface of a semiconductor wafer. Each such die may include a device described herein and may include other structures and/or circuits. The individual dies are cut or diced from the wafer, then packaged as integrated circuits. One skilled in the art would know how to dice wafers and package die to produce integrated circuits. Any of the exemplary circuits or methods illustrated in the accompanying figures, or portions thereof, may be part of an integrated circuit. Integrated circuits so manufactured are considered part of this invention.
In accordance with various embodiments, the methods, functions, or logic described herein may be implemented as one or more software programs running on a computer processor. Dedicated hardware implementations including, but not limited to, application specific integrated circuits, programmable logic arrays, and other hardware devices can likewise be constructed to implement the methods described herein. Further, alternative software implementations including, but not limited to, distributed processing or component/object distributed processing, parallel processing, or virtual machine processing can also be constructed to implement the methods, functions, or logic described herein. The embodiment contemplates a machine-readable medium or computer-readable medium including instructions 902, or that which receives and executes instructions 902 from a propagated signal, so that a device connected to a network environment 922 can send or receive voice, video, or data, and communicate over the network 922 using the instructions 902. The instructions 902 are further transmitted or received over the network 922 via the network interface device(s) 924. The machine-readable medium also contains a data structure for storing data useful in providing a functional relationship between the data and a machine or computer in an illustrative embodiment of the systems and methods herein. While the machine-readable medium 926 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the embodiment. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to: solid-state memory (e.g., solid-state drive (SSD), flash memory, etc.); read-only memory (ROM), or other non-volatile memory; random access memory (RAM), or other re-writable (volatile) memory; and magneto-optical or optical media, such as a disk or tape. A digital file attachment to e-mail or other self-contained information archive or set of archives is considered a distribution medium equivalent to a tangible storage medium. Accordingly, the embodiment is considered to include any one or more of a tangible machine-readable medium or a tangible distribution medium, as listed herein and including art-recognized equivalents and successor media, in which the software implementations herein are stored. It should also be noted that software, which implements the methods, functions, and/or logic herein, is optionally stored on a tangible storage medium, such as: a magnetic medium, such as a disk or tape; a magneto-optical or optical medium, such as a disk; or a solid state medium, such as a memory card or other package that houses one or more read-only (non-volatile) memories, random access memories, or other re-writable (volatile) memories. A digital file attachment to e-mail or other self-contained information archive or set of archives is considered a distribution medium equivalent to a tangible storage medium.
Accordingly, the disclosure is considered to include a tangible storage medium or distribution medium as listed herein and other equivalents and successor media, in which the software implementations herein are stored. Although the specification describes components and functions implemented in the embodiments with reference to particular standards and protocols, the embodiments are not limited to such standards and protocols. The illustrations of embodiments described herein are intended to provide a general understanding of the structure of various embodiments, and they are not intended to serve as a complete description of all the elements and features of apparatus and systems that might make use of the structures described herein. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. Figures are also merely representational and are not drawn to scale. Certain proportions thereof are exaggerated, while others are decreased. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. Such embodiments are referred to herein, individually and/or collectively, by the term “embodiment” merely for convenience and without intending to voluntarily limit the scope of this application to any single embodiment or inventive concept if more than one is in fact shown. Thus, although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. In the foregoing description of the embodiments, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting that the claimed embodiments have more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single embodiment. Thus, the following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate example embodiment. The abstract is provided to comply with 37 C.F.R. § 1.72(b), which requires an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single embodiment.
Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as separately claimed subject matter. Although specific example embodiments have been described, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader scope of the inventive subject matter described herein. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof show, by way of illustration and without limitation, specific embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings herein. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. Given the teachings provided herein, one of ordinary skill in the art will be able to contemplate other implementations and applications of the techniques of the disclosed embodiments. Although illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that these embodiments are not limited to the disclosed arrangements, and that various other changes and modifications may be made therein by one skilled in the art without departing from the scope of the appended claims.
11860816
The appendix to the specification contains additional illustrations of the methods and systems disclosed herein. DETAILED DESCRIPTION The present disclosure provides methods and systems (e.g., computer systems and software) for archiving and retrieving data. It should be understood that the teachings herein can be software- and/or hardware-implemented, and that they may be executed on a single CPU, which may have one or more threads, or distributed across multiple CPUs, each of which may have one or more threads, in a parallel processing environment. For purposes of illustration, several exemplary embodiments will be described in detail herein in the context of archiving and retrieving various corporate records. It is contemplated, however, that the methods and systems described herein can be utilized in other contexts. FIG.1is a flowchart100of representative steps that can be carried out to archive data according to aspects of the instant disclosure. In block102, a plurality of data items are received. This is illustrated in the schematic representation200of the archiving of data (e.g., corporate records202) inFIG.2. In block104, the data items (e.g., corporate records202) are separated into a plurality of data streams204based upon a plurality of corresponding criteria. In embodiments of the disclosure, there can be a one-to-one correspondence between data streams204and criteria (that is, each data stream has a single corresponding defining criterion). The criteria can correspond to a characteristic or category of the data items received. For example, corporate records202can be separated into a first data stream that contains employee personnel records, a second data stream that contains customer invoices, a third data stream that contains employee emails, a fourth data stream that contains corporate contracts, and so forth. Each of the plurality of data streams204will therefore contain data items of a common type (e.g., all data items in a given data stream will be employee emails). Further processing of only a single data stream (e.g., stream N) containing employee emails will be described herein for the sake of illustration, though it should be understood that analogous steps can be carried out with respect to the remaining data streams (e.g., streams 1, 2, 3, . . . N-1). In block106, a plurality of data item properties are defined for the data stream. The data item properties correspond to and describe aspects of a particular data item within the data stream, such as corporate record206shown inFIG.2. It is desirable for the data item properties to include at least a data item date property and a data item custodian property. Additional data item properties can be user defined. For example, for the illustrative data stream of employee emails, the data item properties can include, without limitation: CUSTODIAN; DATE; SENDER; RECIPIENTS; SUBJECT; and the like. In block108, a record manifest208is created for the data stream. The record manifest includes metadata for the data items within the data stream corresponding to the data item properties for the data stream. This information can either be manually entered or extracted directly from the data item. For example, consider an email collected from John Smith's mailbox that was sent from Jane Doe to John Smith on Jan. 1, 2016, with the subject “Today's Meeting Agenda.” For this data item, the custodian (e.g., John Smith), the date (e.g., Jan. 
1, 2016), the sender (e.g., Jane Doe), the recipient(s) (e.g., John Smith), and the subject (“Today's Meeting Agenda”) can be extracted from the email and added to record manifest208. Those of ordinary skill in the art will appreciate that, after repeating this process for all the data items within the data stream, the record manifest will, in effect, become a database for the data stream, with each entry in the database corresponding to an individual data item within the data stream. In block110, the data items206from the data stream and the record manifest208are stored in storage210. Storage210can be any suitable storage medium, including, without limitation, a storage area network (SAN), a network attached storage (NAS) device, cloud storage (e.g., Microsoft Azure, Amazon S3), a private cloud storage, a hybrid cloud storage, or the like. Advantageously, the data within storage210is merely at rest, and is not indexed except on demand, and then only to the extent necessary to satisfy a user request, as will now be described with reference to the flowchart300of exemplary steps shown inFIG.3. In block302, a search query including a plurality of search criteria is received. The search criteria desirably include both a data item date criterion and a data item custodian criterion. The search criteria can also include an identification of a particular data stream to search. For example, if a user wishes to search only employee emails prior to Jan. 1, 2016 collected from John Smith's mailbox, the search criteria can be structured to specify the employee email data stream, a data item custodian criterion of “CUSTODIAN=John Smith,” and a data item date criterion of “DATE<Jan. 1, 2016.” (Those of ordinary skill in the art will appreciate that the precise syntax of the search query may differ from the exemplary syntax shown here.) In block304, the stored data stream, which is otherwise at rest, is indexed. By waiting until the search query is received to index the data stream, computing resources, and thus financial resources, are conserved. Further computing and financial resources can be conserved by limiting the extent to which the data stream is indexed in block304. According to aspects of the disclosure, only data items having data item properties that match the data item custodian and data item date criteria are indexed, leaving the remaining data at rest. In the example above, therefore, only John Smith's emails from before Jan. 1, 2016 would be indexed; emails from other custodians or from other date ranges would be ignored. In additional aspects of the disclosure, data items are only indexed with respect to data item properties corresponding to the search criteria. For example, if the user is interested only in emails collected from John Smith's mailbox that were sent from Jane Doe, there would be no need to index the “SUBJECT” data item properties; only the “SENDER” data item property would be relevant to the search. Further computing resource and financial savings can be realized by indexing from record manifest208created in block108, rather than from the raw data items206themselves. In block306, the search criteria are applied to the indexed data stream. Search results are returned in block308. The description above illustrates a search of a single data stream. It is contemplated, however, that searches can be executed on multiple data streams in parallel (rather than in series, as in extant search methodologies). 
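Purely as an illustration of the flow of blocks 302-308, the following sketch models the record manifest of block 108 as a list of dictionaries and performs the on-demand, criteria-limited indexing described above. All names here (manifest, search_archive, and so on) are hypothetical and are not part of the disclosure; this is a sketch of one possible implementation, not the claimed method.

    from datetime import date

    # Hypothetical in-memory record manifest (block 108): one metadata
    # entry per data item in the employee-email data stream.
    manifest = [
        {"id": 101, "custodian": "John Smith", "date": date(2015, 12, 1),
         "sender": "Jane Doe", "subject": "Today's Meeting Agenda"},
        {"id": 102, "custodian": "John Smith", "date": date(2016, 3, 9),
         "sender": "Bob Roe", "subject": "Q1 Numbers"},
        {"id": 103, "custodian": "Mary Major", "date": date(2015, 6, 2),
         "sender": "Jane Doe", "subject": "Travel"},
    ]

    def search_archive(manifest, custodian, before, extra=None):
        # Block 304: index on demand, and only the entries matching the
        # custodian and date criteria; the rest of the stream stays at rest.
        indexed = [e for e in manifest
                   if e["custodian"] == custodian and e["date"] < before]
        # Block 306: apply any remaining criteria, touching only the
        # data item properties the search actually needs.
        for prop, value in (extra or {}).items():
            indexed = [e for e in indexed if e.get(prop) == value]
        return indexed  # block 308: search results

    # John Smith's emails from before Jan. 1, 2016 that Jane Doe sent.
    hits = search_archive(manifest, "John Smith", date(2016, 1, 1),
                          extra={"sender": "Jane Doe"})
    print([e["id"] for e in hits])  # -> [101]

Because each data stream has its own manifest, the same routine could simply be applied to several manifests concurrently, which is the parallel multi-stream case discussed next.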
In embodiments, therefore, the search criteria received as part of the search query in block302can correspond to data item properties that are common to multiple data streams stored in storage210. Blocks304,306, and308can then be applied in parallel to the multiple data streams. In another embodiment, the instant disclosure provides a method of restoring a data item, such as an email, a contact, a calendar entry, or the like, from an archive. It is known to archive such data items by creating copies thereof in the archival storage location and then removing the majority of the contents of the data item (e.g., the body of an email message and any attachments) from the primary storage location in order to reduce storage consumption. The portion of the data item that remains in primary storage is known as a “stub,” and contains certain information regarding the data item (e.g., SENDER, RECIPIENT, SUBJECT, and the like), as well as a pointer to the copy of the original message in archival storage. It is possible to modify the stub. For example, a user may wish to assign or remove a flag or assign, change, or remove a category of the data item represented by the stub. Yet, because these changes to the stub are not reflected in the corresponding data item in archival storage, extant methods of restoring data items from an archive can lose these changes. In particular, extant methods of restoring data items from an archive typically delete the stub when importing the corresponding data item from the archive, thereby creating a data item that appears to be identical to the data item when it was archived, and that does not exhibit any post-archive changes made in the stub. This data loss can be disadvantageous. FIG.4depicts a flowchart400of representative steps that allow data items to be restored from an archive without loss of data present in a stub by treating the stub and the corresponding archived data item as complementary parts of a whole. The stub of the data item to be restored is identified in block402. In block404, the corresponding data item is retrieved from the archive. In block406, the stub and the retrieved data item are combined in a manner that preserves any changes to the stub. In particular, only data contained in the retrieved data item that are not already present in the stub (e.g., the body of an email message and any attachments) are copied into the stub. In optional block408, the restored data item can be transferred to a new live (rather than archival) storage location. The data item can then be deleted in full from the original live (rather than archival) storage location. Although several embodiments of this invention have been described above with a certain degree of particularity, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the spirit or scope of this invention. All directional references (e.g., upper, lower, upward, downward, left, right, leftward, rightward, top, bottom, above, below, vertical, horizontal, clockwise, and counterclockwise) are only used for identification purposes to aid the reader's understanding of the present invention, and do not create limitations, particularly as to the position, orientation, or use of the invention. Joinder references (e.g., attached, coupled, connected, and the like) are to be construed broadly and may include intermediate members between a connection of elements and relative movement between elements. 
As such, joinder references do not necessarily imply that two elements are directly connected and in fixed relation to each other. It is intended that all matter contained in the above description or shown in the accompanying drawings shall be interpreted as illustrative only and not limiting. Changes in detail or structure may be made without departing from the spirit of the invention as defined in the appended claims.
11860817
DETAILED DESCRIPTION The description that follows includes systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative embodiments of the present disclosure. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of example embodiments. It will be evident, however, to one skilled in the art that the present inventive subject matter may be practiced without these specific details. It will be appreciated that some of the examples disclosed herein are described in the context of virtual machines that are backed up by using base and incremental snapshots, for example. This should not necessarily be regarded as limiting the disclosure. The systems and methods described herein apply not only to virtual machines of all types that run a file system (for example), but also to network-attached storage (NAS) devices, physical machines (for example, Linux servers), and databases. Various embodiments described herein relate to online data format conversion and, in particular, to online data format conversion during file transfer to a remote location. Some examples herein may include “on the fly” upload capability. As mentioned above, challenging issues can arise when data is stored in a variety of file formats across computer systems. Each file format may have its own use case that can cater to some specific need and/or possibly allow better read/write performance in certain scenarios. It can be challenging and, at times, unavoidable to have to switch from one file format to another during a file transfer, either to take advantage of a certain format or in view of some other limitation. One such limitation is archiving a file in one format, say format F1, residing in a computer system, say S1, to another computer system, say S2, which only understands file format F2. A naive way of transferring the file in format F1 from S1 to S2 in format F2 could include the following: convert the file locally on S1 from F1 to F2 format, and copy the new local file in F2 format to S2. Similar steps may be encountered during archival of data to a cloud location, for example. A data management system (or backup service) may ingest customer data in a cluster in a write-optimized journaled file format. After some duration, the data can then be archived to cloud locations (such as Amazon S3, Azure, and others, for example) for retaining the customer data for a longer duration. Sometimes, these archival locations only support storing data in a read-optimized patch file format. The naive approach mentioned above for archiving (or transferring) a file that resides on a cluster in a journaled file format to a cloud archival location in a patch file format can suffer from certain limitations, as follows. Conversion of data in a journaled file format to a patch file format locally on a cluster requires reading data from the former and writing data to the latter format. This results in increased consumption of input/output (I/O) resources locally. The overall time for transferring the file to an archival location is the sum of two durations: the local conversion process time plus the time to copy the file to the archival location. The conversion step therefore adds inherent extra time to the end-to-end transfer process. In some present examples, an efficient process is disclosed for transferring a file in a different format to an archival location. 
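For contrast, here is a minimal sketch of the naive approach just described, under the assumption of plain local files; convert_journal_to_patch and upload are hypothetical stand-ins, not functions of any disclosed system.

    import shutil
    import time

    def convert_journal_to_patch(journal_path, patch_path):
        # Stand-in for a real re-encoding from the journaled format (F1)
        # to the patch format (F2): every block is read and rewritten
        # locally, consuming cluster I/O.
        with open(journal_path, "rb") as src, open(patch_path, "wb") as dst:
            shutil.copyfileobj(src, dst)

    def upload(local_path, archival_url):
        # Stand-in for the copy of the converted file to S2 (the
        # archival location).
        print("uploading", local_path, "to", archival_url)

    def naive_archive(journal_path, archival_url):
        start = time.monotonic()
        patch_path = journal_path + ".patch"
        convert_journal_to_patch(journal_path, patch_path)  # extra local step
        upload(patch_path, archival_url)
        # Total time = local conversion time + copy time, serialized.
        return time.monotonic() - start

The two serialized steps are precisely the limitations itemized above: extra local I/O for the conversion, and extra end-to-end time before the data reaches the archival location.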
Some examples seek to address the limitations discussed above. Some examples simulate local conversion of a file into a different format, building a profile of the eventual patch file without actually reading or writing the data blocks of the file. This profiled data contains information about all the pieces of the eventual patch file and can be used to transfer the data in patch file format to an archival location without a direct or explicit need to convert the journaled file to a patch file format. Reference will now be made in detail to embodiments of the present disclosure, examples of which are illustrated in the appended drawings. The present disclosure may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. In some embodiments, data and/or metadata of a distributed file system is stored in a consistent, snapshottable distributed database. In some examples, the term “snapshottable” in relation to a distributed database means the distributed database is capable of being captured or backed up in one or more snapshots. In some examples, the term “snappable” in relation to an object, such as a file, means the object (e.g., the file) is capable of being captured or backed up in one or more snapshots. Each snapshot of a distributed file system is stored in one or more files in the distributed file system. FIG.1Ais a block diagram illustrating one embodiment of a networked computing environment100in which some embodiments may be practiced. As depicted, the networked computing environment100includes a data center150, a storage appliance140, and a computing device154in communication with each other via one or more networks180. The networked computing environment100may include a plurality of computing devices interconnected through one or more networks180. The one or more networks180may allow computing devices and/or storage devices to connect to and communicate with other computing devices and/or other storage devices. In some cases, the networked computing environment may include other computing devices and/or other storage devices not shown. The other computing devices may include, for example, a mobile computing device, a non-mobile computing device, a server, a workstation, a laptop computer, a tablet computer, a desktop computer, or an information processing system. The other storage devices may include, for example, a storage area network (SAN) storage device, a NAS, a hard disk drive (HDD), a solid-state drive (SSD), or a data storage system. The data center150may include one or more servers, such as server160, in communication with one or more storage devices, such as storage device156. The one or more servers may also be in communication with one or more storage appliances, such as storage appliance170. The server160, storage device156, and storage appliance170may be in communication with each other via a networking fabric connecting servers and data storage units within the data center to each other. The storage appliance170may include a data management system for backing up virtual machines and/or files within a virtualized infrastructure. The server160may be used to create and manage one or more virtual machines associated with a virtualized infrastructure. The one or more virtual machines may run various applications, such as a cloud-based service, a database application, or a web server. 
The storage device156may include one or more hardware storage devices for storing data, such as an HDD, a magnetic tape drive, an SSD, a SAN storage device, or a NAS device. In some cases, a data center, such as data center150, may include thousands of servers and/or data storage devices in communication with each other. The data storage devices may comprise a tiered data storage infrastructure (or a portion of a tiered data storage infrastructure). The tiered data storage infrastructure may allow for the movement of data across different tiers of a data storage infrastructure between higher-cost, higher-performance storage devices (e.g., SSDs and HDDs) and relatively lower-cost, lower-performance storage devices (e.g., magnetic tape drives). The one or more networks180may include a secure network such as an enterprise private network, an unsecure network such as a wireless open network, a local area network (LAN), a wide area network (WAN), and the Internet. The one or more networks180may include a cellular network, a mobile network, a wireless network, or a wired network. Each network of the one or more networks180may include hubs, bridges, routers, switches, and wired transmission media such as a direct-wired connection. The one or more networks180may include an extranet or other private network for securely sharing information or providing controlled access to applications or files. A server, such as server160, may allow a client to download information or files (e.g., executable, text, application, audio, image, or video files) from the server or perform a search query related to particular information stored on the server. In some cases, a server may act as an application server or a file server. In general, a server may refer to a hardware device that acts as the host in a client-server relationship or a software process that shares a resource with or performs work for one or more clients. One embodiment of server160includes a network interface165, processor166, memory167, disk168, and virtualization manager169all in communication with each other. Network interface165allows server160to connect to one or more networks180. Network interface165may include a wireless network interface and/or a wired network interface. Processor166allows server160to execute computer readable instructions stored in memory167. Processor166may include one or more processing units or processing devices, such as one or more CPUs and/or one or more GPUs. Memory167may comprise one or more types of memory (e.g., RAM, SRAM, DRAM, ROM, EEPROM, Flash, etc.). Disk168may include a hard disk drive and/or a solid-state drive. Memory167and disk168may comprise hardware storage devices. The virtualization manager169may manage a virtualized infrastructure and perform management operations associated with the virtualized infrastructure. The virtualization manager169may manage the provisioning of virtual machines running within the virtualized infrastructure and provide an interface to computing devices interacting with the virtualized infrastructure. In one example, the virtualization manager169may set a virtual machine into a frozen state in response to a snapshot request made via an application programming interface (API) by a storage appliance, such as storage appliance170. Setting the virtual machine into a frozen state may allow a point in time snapshot of the virtual machine to be stored or transferred. 
In one example, updates made to a virtual machine that has been set into a frozen state may be written to a separate file (e.g., an update file) while the virtual disk file associated with the state of the virtual disk at the point in time is frozen. The virtual disk file may be set into a read-only state to prevent modifications to the virtual disk file while the virtual machine is in the frozen state. The virtualization manager169may then transfer data associated with the virtual machine (e.g., an image of the virtual machine or a portion of the image of the virtual machine) to a storage appliance in response to a request made by the storage appliance. After the data associated with the point in time snapshot of the virtual machine has been transferred to the storage appliance, the virtual machine may be released from the frozen state (i.e., unfrozen) and the updates made to the virtual machine and stored in the separate file may be merged into the virtual disk file. The virtualization manager169may perform various virtual machine related tasks, such as cloning virtual machines, creating new virtual machines, monitoring the state of virtual machines, moving virtual machines between physical hosts for load balancing purposes, and facilitating backups of virtual machines. One embodiment of storage appliance170includes a network interface175, processor176, memory177, and disk178all in communication with each other. Network interface175allows storage appliance170to connect to one or more networks180. Network interface175may include a wireless network interface and/or a wired network interface. Processor176allows storage appliance170to execute computer readable instructions stored in memory177. Processor176may include one or more processing units, such as one or more CPUs and/or one or more GPUs. Memory177may comprise one or more types of memory (e.g., RAM, SRAM, DRAM, ROM, EEPROM, NOR Flash, NAND Flash, etc.). Disk178may include a hard disk drive and/or a solid-state drive. Memory177and disk178may comprise hardware storage devices. In one embodiment, the storage appliance170may include four machines. Each of the four machines may include a multi-core CPU, 64 GB of RAM, a 400 GB SSD, three 4 TB HDDs, and a network interface controller. In this case, the four machines may be in communication with the one or more networks180via the four network interface controllers. The four machines may comprise four nodes of a server cluster. The server cluster may comprise a set of physical machines that are connected together via a network. The server cluster may be used for storing data associated with a plurality of virtual machines, such as backup data associated with different point in time versions of 1000 virtual machines. The networked computing environment100may provide a cloud computing environment for one or more computing devices. Cloud computing may refer to Internet-based computing, wherein shared resources, software, and/or information may be provided to one or more computing devices on-demand via the Internet. The networked computing environment100may comprise a cloud computing environment providing Software-as-a-Service (SaaS) or Infrastructure-as-a-Service (IaaS) services. SaaS may refer to a software distribution model in which applications are hosted by a service provider and made available to end users over the Internet. 
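Returning to the frozen-state mechanism described earlier (the read-only virtual disk file plus a separate update file), the following toy model is one way to picture it; it is a speculative sketch, not the disclosed implementation, and the class and method names are invented.

    class VirtualDisk:
        # Toy model: while frozen, the virtual disk file (base) is
        # read-only and new writes accumulate in a separate update file.
        def __init__(self):
            self.base = {}      # block number -> data (virtual disk file)
            self.updates = {}   # separate update file used while frozen
            self.frozen = False

        def freeze(self):
            self.frozen = True  # base becomes read-only

        def write(self, block, data):
            (self.updates if self.frozen else self.base)[block] = data

        def read(self, block):
            # While frozen, updates shadow the read-only base image.
            return self.updates.get(block, self.base.get(block))

        def snapshot(self):
            # The frozen base is what is transferred to the appliance.
            assert self.frozen
            return dict(self.base)

        def release(self):
            # Unfreeze: merge the update file back into the disk file.
            self.base.update(self.updates)
            self.updates.clear()
            self.frozen = False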
In one embodiment, the networked computing environment100may include a virtualized infrastructure that provides software, data processing, and/or data storage services to end users accessing the services via the networked computing environment. In one example, networked computing environment100may provide cloud-based work productivity or business-related applications to a computing device, such as computing device154. The storage appliance140may comprise a cloud-based data management system for backing up virtual machines and/or files within a virtualized infrastructure, such as virtual machines running on server160or files stored on server160. In some cases, networked computing environment100may provide remote access to secure applications and files stored within data center150from a remote computing device, such as computing device154. The data center150may use an access control application to manage remote access to protected resources, such as protected applications, databases, or files located within the data center. To facilitate remote access to secure applications and files, a secure network connection may be established using a virtual private network (VPN). A VPN connection may allow a remote computing device, such as computing device154, to securely access data from a private network (e.g., from a company file server or mail server) using an unsecure public network or the Internet. The VPN connection may require client-side software (e.g., running on the remote computing device) to establish and maintain the VPN connection. The VPN client software may provide data encryption and encapsulation prior to the transmission of secure private network traffic through the Internet. In some embodiments, the storage appliance170may manage the extraction and storage of virtual machine snapshots associated with different point in time versions of one or more virtual machines running within the data center150. A snapshot of a virtual machine may correspond with a state of the virtual machine at a particular point in time. In response to a restore command from the server160, the storage appliance170may restore a point in time version of a virtual machine or restore point in time versions of one or more files located on the virtual machine and transmit the restored data to the server160. In response to a mount command from the server160, the storage appliance170may allow a point in time version of a virtual machine to be mounted and allow the server160to read and/or modify data associated with the point in time version of the virtual machine. To improve storage density, the storage appliance170may deduplicate and compress data associated with different versions of a virtual machine and/or deduplicate and compress data associated with different virtual machines. To improve system performance, the storage appliance170may first store virtual machine snapshots received from a virtualized environment in a cache, such as a flash-based cache. The cache may also store popular data or frequently accessed data (e.g., based on a history of virtual machine restorations, incremental files associated with commonly restored virtual machine versions) and current day incremental files or incremental files corresponding with snapshots captured within the past 24 hours, for example. An incremental file may comprise a forward incremental file or a reverse incremental file. 
A forward incremental file may include a set of data representing changes that have occurred since an earlier point in time snapshot of a virtual machine. To generate a snapshot of the virtual machine corresponding with a forward incremental file, the forward incremental file may be combined with an earlier point in time snapshot of the virtual machine (e.g., the forward incremental file may be combined with the last full image of the virtual machine that was captured before the forward incremental file was captured and any other forward incremental files that were captured subsequent to the last full image and prior to the forward incremental file). A reverse incremental file may include a set of data representing changes from a later point in time snapshot of a virtual machine. To generate a snapshot of the virtual machine corresponding with a reverse incremental file, the reverse incremental file may be combined with a later point in time snapshot of the virtual machine (e.g., the reverse incremental file may be combined with the most recent snapshot of the virtual machine and any other reverse incremental files that were captured prior to the most recent snapshot and subsequent to the reverse incremental file). The storage appliance170may provide a user interface (e.g., a web-based interface or a graphical user interface (GUI)) that displays virtual machine backup information such as identifications of the virtual machines protected and the historical versions or time machine views for each of the virtual machines protected. A time machine view of a virtual machine may include snapshots of the virtual machine over a plurality of points in time. Each snapshot may comprise the state of the virtual machine at a particular point in time. Each snapshot may correspond with a different version of the virtual machine (e.g., Version 1 of a virtual machine may correspond with the state of the virtual machine at a first point in time and Version 2 of the virtual machine may correspond with the state of the virtual machine at a second point in time subsequent to the first point in time). The user interface may enable an end user of the storage appliance170(e.g., a system administrator or a virtualization administrator) to select a particular version of a virtual machine to be restored or mounted. When a particular version of a virtual machine has been mounted, the particular version may be accessed by a client (e.g., a virtual machine, a physical machine, or a computing device) as if the particular version was local to the client. A mounted version of a virtual machine may correspond with a mount point directory (e.g., /snapshots/VM5/Version23). In one example, the storage appliance170may run an NFS server and make the particular version (or a copy of the particular version) of the virtual machine accessible for reading and/or writing. The end user of the storage appliance170may then select the particular version to be mounted and run an application (e.g., a data analytics application) using the mounted version of the virtual machine. In another example, the particular version may be mounted as an iSCSI target. In some embodiments, the management system190provides management of one or more clusters of nodes as described herein, such as management of one or more policies with respect to the one or more clusters of nodes. The management system190can serve as a cluster manager for one or more clusters of nodes (e.g., present in the networked computing environment100). 
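To ground the forward and reverse incremental model described above, the short sketch below treats a snapshot as a mapping from block numbers to contents; apply_forward and apply_reverse are hypothetical names for the two reconstruction directions.

    def apply_forward(full_image, forward_incrementals):
        # Full image plus forward incrementals (oldest first) yields a
        # later point-in-time version.
        version = dict(full_image)
        for inc in forward_incrementals:
            version.update(inc)  # changes made since the prior point
        return version

    def apply_reverse(latest_snapshot, reverse_incrementals):
        # Latest snapshot plus reverse incrementals (newest first)
        # walks back to an earlier point-in-time version.
        version = dict(latest_snapshot)
        for inc in reverse_incrementals:
            version.update(inc)
        return version

    base = {0: "a", 1: "b"}                  # last full image
    fwd = [{1: "b2"}, {2: "c"}]              # two forward incrementals
    print(apply_forward(base, fwd))          # {0: 'a', 1: 'b2', 2: 'c'}

    latest = {0: "a", 1: "b2", 2: "c"}
    rev = [{2: None}, {1: "b"}]              # None marks a block absent earlier
    print(apply_reverse(latest, rev))        # {0: 'a', 1: 'b', 2: None}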
According to various embodiments, the management system190provides central management of policies (e.g., SLAs), remotely managing and synchronizing policy definitions with clusters of nodes. For some embodiments, the management system190facilitates automatic setup of secure communications channels between clusters of nodes to facilitate replication of data. Additionally, for some embodiments, the management system190manages archival settings for one or more clusters of nodes with respect to cloud-based data storage resources provided by one or more cloud service providers. FIG.1Bis a block diagram illustrating one embodiment of server160inFIG.1A. The server160may comprise one server out of a plurality of servers that are networked together within a data center. In one example, the plurality of servers may be positioned within one or more server racks within the data center. As depicted, the server160includes hardware-level components and software-level components. The hardware-level components include one or more processors182, one or more memories184, and one or more disks185. The software-level components include a hypervisor186, a virtualized infrastructure manager199, and one or more virtual machines, such as virtual machine198. The hypervisor186may comprise a native hypervisor or a hosted hypervisor. The hypervisor186may provide a virtual operating platform for running one or more virtual machines, such as virtual machine198. Virtual machine198includes a plurality of virtual hardware devices including a virtual processor192, a virtual memory194, and a virtual disk195. The virtual disk195may comprise a file stored within the one or more disks185. In one example, a virtual machine may include a plurality of virtual disks, with each virtual disk of the plurality of virtual disks associated with a different file stored on the one or more disks185. Virtual machine198may include a guest operating system196that runs one or more applications, such as application197. The virtualized infrastructure manager199, which may correspond with the virtualization manager169inFIG.1A, may run on a virtual machine or natively on the server160. The virtualized infrastructure manager199may provide a centralized platform for managing a virtualized infrastructure that includes a plurality of virtual machines. The virtualized infrastructure manager199may manage the provisioning of virtual machines running within the virtualized infrastructure and provide an interface to computing devices interacting with the virtualized infrastructure. The virtualized infrastructure manager199may perform various virtualized infrastructure related tasks, such as cloning virtual machines, creating new virtual machines, monitoring the state of virtual machines, and facilitating backups of virtual machines. In one embodiment, the server160may use the virtualized infrastructure manager199to facilitate backups for a plurality of virtual machines (e.g., eight different virtual machines) running on the server160. Each virtual machine running on the server160may run its own guest operating system and its own set of applications. Each virtual machine running on the server160may store its own set of files using one or more virtual disks associated with the virtual machine (e.g., each virtual machine may include two virtual disks that are used for storing data associated with the virtual machine). 
In one embodiment, a data management application running on a storage appliance, such as storage appliance140inFIG.1Aor storage appliance170inFIG.1A, may request a snapshot of a virtual machine running on server160. The snapshot of the virtual machine may be stored as one or more files, with each file associated with a virtual disk of the virtual machine. A snapshot of a virtual machine may correspond with a state of the virtual machine at a particular point in time. The particular point in time may be associated with a time stamp. In one example, a first snapshot of a virtual machine may correspond with a first state of the virtual machine (including the state of applications and files stored on the virtual machine) at a first point in time (e.g., 5:30 p.m. on Jun. 29, 2024) and a second snapshot of the virtual machine may correspond with a second state of the virtual machine at a second point in time subsequent to the first point in time (e.g., 5:30 p.m. on Jun. 30, 2024). In response to a request for a snapshot of a virtual machine at a particular point in time, the virtualized infrastructure manager199may set the virtual machine into a frozen state or store a copy of the virtual machine at the particular point in time. The virtualized infrastructure manager199may then transfer data associated with the virtual machine (e.g., an image of the virtual machine or a portion of the image of the virtual machine) to the storage appliance. The data associated with the virtual machine may include a set of files including a virtual disk file storing contents of a virtual disk of the virtual machine at the particular point in time and a virtual machine configuration file storing configuration settings for the virtual machine at the particular point in time. The contents of the virtual disk file may include the operating system used by the virtual machine, local applications stored on the virtual disk, and user files (e.g., images and word processing documents). In some cases, the virtualized infrastructure manager199may transfer a full image of the virtual machine to the storage appliance or a plurality of data blocks corresponding with the full image (e.g., to enable a full image-level backup of the virtual machine to be stored on the storage appliance). In other cases, the virtualized infrastructure manager199may transfer a portion of an image of the virtual machine associated with data that has changed since an earlier point in time prior to the particular point in time or since a last snapshot of the virtual machine was taken. In one example, the virtualized infrastructure manager199may transfer only data associated with virtual blocks stored on a virtual disk of the virtual machine that have changed since the last snapshot of the virtual machine was taken. In one embodiment, the data management application may specify a first point in time and a second point in time and the virtualized infrastructure manager199may output one or more virtual data blocks associated with the virtual machine that have been modified between the first point in time and the second point in time. In some embodiments, the server160or the hypervisor186may communicate with a storage appliance, such as storage appliance140inFIG.1Aor storage appliance170inFIG.1A, using a distributed file system protocol such as NFS. The distributed file system protocol may allow the server160or the hypervisor186to access, read, write, or modify files stored on the storage appliance as if the files were locally stored on the server. 
The distributed file system protocol may allow the server160or the hypervisor186to mount a directory or a portion of a file system located within the storage appliance. FIG.1Cis a block diagram illustrating one embodiment of storage appliance170inFIG.1A. The storage appliance may include a plurality of physical machines that may be grouped together and presented as a single computing system. Each physical machine of the plurality of physical machines may comprise a node in a cluster (e.g., a failover cluster). In one example, the storage appliance may be positioned within a server rack within a data center. As depicted, the storage appliance170includes hardware-level components and software-level components. The hardware-level components include one or more physical machines, such as physical machine120and physical machine130. The physical machine120includes a network interface121, processor122, memory123, and disk124all in communication with each other. Processor122allows physical machine120to execute computer readable instructions stored in memory123to perform processes described herein. Disk124may include an HDD and/or an SSD. The physical machine130includes a network interface131, processor132, memory133, and disk134all in communication with each other. Processor132allows physical machine130to execute computer readable instructions stored in memory133to perform processes described herein. Disk134may include an HDD and/or an SSD. In some cases, disk134may include a flash-based SSD or a hybrid HDD/SSD drive. In one embodiment, the storage appliance170may include a plurality of physical machines arranged in a cluster (e.g., eight machines in a cluster). Each of the plurality of physical machines may include a plurality of multi-core CPUs, 128 GB of RAM, a 500 GB SSD, four 4 TB HDDs, and a network interface controller. In some embodiments, the plurality of physical machines may be used to implement a cluster-based network file server. The cluster-based network file server may neither require nor use a front-end load balancer. One issue with using a front-end load balancer to host the IP address for the cluster-based network file server and to forward requests to the nodes of the cluster-based network file server is that the front-end load balancer comprises a single point of failure for the cluster-based network file server. In some cases, the file system protocol used by a server, such as server160inFIG.1A, or a hypervisor, such as hypervisor186inFIG.1B, to communicate with the storage appliance170may not provide a failover mechanism (e.g., NFS Version 3). In the case that no failover mechanism is provided on the client-side, the hypervisor may not be able to connect to a new node within a cluster in the event that the node connected to the hypervisor fails. In some embodiments, each node in a cluster may be connected to each other via a network and may be associated with one or more IP addresses (e.g., two different IP addresses may be assigned to each node). In one example, each node in the cluster may be assigned a permanent IP address and a floating IP address and may be accessed using either the permanent IP address or the floating IP address. In this case, a hypervisor, such as hypervisor186inFIG.1B, may be configured with a first floating IP address associated with a first node in the cluster. The hypervisor may connect to the cluster using the first floating IP address. In one example, the hypervisor may communicate with the cluster using the NFS Version 3 protocol. 
Each node in the cluster may run a Virtual Router Redundancy Protocol (VRRP) daemon. A daemon may comprise a background process. Each VRRP daemon may include a list of all floating IP addresses available within the cluster. In the event that the first node associated with the first floating IP address fails, one of the VRRP daemons may automatically assume or pick up the first floating IP address if no other VRRP daemon has already assumed the first floating IP address. Therefore, if the first node in the cluster fails or otherwise goes down, then one of the remaining VRRP daemons running on the other nodes in the cluster may assume the first floating IP address that is used by the hypervisor for communicating with the cluster. In order to determine which of the other nodes in the cluster will assume the first floating IP address, a VRRP priority may be established. In one example, given a number (N) of nodes in a cluster from node (0) to node (N−1), for a floating IP address (i), the VRRP priority of node (j) may be (j-i) modulo N. In another example, given a number (N) of nodes in a cluster from node (0) to node (N−1), for a floating IP address (i), the VRRP priority of node (j) may be (i-j) modulo N. In these cases, node (j) will assume floating IP address (i) only if its VRRP priority is higher than that of any other node in the cluster that is alive and announcing itself on the network. Thus, if a node fails, then there may be a clear priority ordering for determining which other node in the cluster will take over the failed node's floating IP address. In some cases, a cluster may include a plurality of nodes and each node of the plurality of nodes may be assigned a different floating IP address. In this case, a first hypervisor may be configured with a first floating IP address associated with a first node in the cluster, a second hypervisor may be configured with a second floating IP address associated with a second node in the cluster, and a third hypervisor may be configured with a third floating IP address associated with a third node in the cluster. As depicted inFIG.1C, the software-level components of the storage appliance170may include data management system102, a virtualization interface104, a distributed job scheduler108, a distributed metadata store110, a distributed file system112, and one or more virtual machine search indexes, such as virtual machine search index106. In one embodiment, the software-level components of the storage appliance170may be run using a dedicated hardware-based appliance. In another embodiment, the software-level components of the storage appliance170may be run from the cloud (e.g., the software-level components may be installed on a cloud service provider). In some cases, the data storage across a plurality of nodes in a cluster (e.g., the data storage available from the one or more physical machines) may be aggregated and made available over a single file system namespace (e.g., /snapshots/). A directory for each virtual machine protected using the storage appliance170may be created (e.g., the directory for Virtual Machine A may be /snapshots/VM_A). Snapshots and other data associated with a virtual machine may reside within the directory for the virtual machine. In one example, snapshots of a virtual machine may be stored in subdirectories of the directory (e.g., a first snapshot of Virtual Machine A may reside in /snapshots/VM_A/s1/ and a second snapshot of Virtual Machine A may reside in /snapshots/VM_A/s2/). 
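The priority rule quoted above lends itself to a very small sketch; assuming the (j-i) modulo N variant, the live node with the highest priority for a failed floating IP address takes it over (the function names are illustrative only).

    def vrrp_priority(j, i, n):
        # Priority of node j for floating IP address i in an n-node cluster.
        return (j - i) % n

    def takeover_node(i, n, alive):
        # The live node announcing the highest priority assumes IP i.
        return max(alive, key=lambda j: vrrp_priority(j, i, n))

    n = 4
    alive = [1, 2, 3]                     # node 0 has failed
    print(takeover_node(0, n, alive))     # -> 3, since (3 - 0) % 4 is highest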
The distributed file system112may present itself as a single file system, such that, as new physical machines or nodes are added to the storage appliance170, the cluster may automatically discover the additional nodes and automatically increase the available capacity of the file system for storing files and other data. Each file stored in the distributed file system112may be partitioned into one or more chunks. Each of the one or more chunks may be stored within the distributed file system112as a separate file. In some embodiments, the data management system102resides inside the distributed file system112. The data management system102may receive requests to capture snapshots of the entire distributed file system112on a periodic basis based on internal protocols or upon occurrence of user-triggered events. The files stored within the distributed file system112may be replicated or mirrored over a plurality of physical machines, thereby creating a load-balanced and fault-tolerant distributed file system. In one example, storage appliance170may include ten physical machines arranged as a failover cluster, and a first file corresponding with a snapshot of a virtual machine (e.g., /snapshots/VM_A/s1/s1.full) may be replicated and stored on three of the ten machines. The distributed metadata store110may include a distributed database management system that provides high availability without a single point of failure. In one embodiment, the distributed metadata store110may comprise a database, such as a distributed document-oriented database. The distributed metadata store110may be used as a distributed key-value storage system. In one example, the distributed metadata store110may comprise a distributed NoSQL key-value store database. In some cases, the distributed metadata store110may include a partitioned row store, in which rows are organized into tables or other collections of related data held within a structured format within the key-value store database. A table (or a set of tables) may be used to store metadata information associated with one or more files stored within the distributed file system112. The metadata information may include the name of a file, a size of the file, file permissions associated with the file, when the file was last modified, and file mapping information associated with an identification of the location of the file stored within a cluster of physical machines. In one embodiment, a new file corresponding with a snapshot of a virtual machine may be stored within the distributed file system112and metadata associated with the new file may be stored within the distributed metadata store110. The distributed metadata store110may also be used to store a backup schedule for the virtual machine and a list of snapshots for the virtual machine that are stored using the storage appliance170. In some cases, the distributed metadata store110may be used to manage one or more versions of a virtual machine. Each version of the virtual machine may correspond with a full image snapshot of the virtual machine stored within the distributed file system112or an incremental snapshot of the virtual machine (e.g., a forward incremental or reverse incremental) stored within the distributed file system112. In one embodiment, the one or more versions of the virtual machine may correspond with a plurality of files. The plurality of files may include a single full image snapshot of the virtual machine and one or more incrementals derived from the single full image snapshot. 
The single full image snapshot of the virtual machine may be stored using a first storage device of a first type (e.g., an HDD) and the one or more incrementals derived from the single full image snapshot may be stored using a second storage device of a second type (e.g., an SSD). In this case, only a single full image needs to be stored and each version of the virtual machine may be generated from the single full image or the single full image combined with a subset of the one or more incrementals. Furthermore, each version of the virtual machine may be generated by performing a sequential read from the first storage device (e.g., reading a single file from an HDD) to acquire the full image and, in parallel, performing one or more reads from the second storage device (e.g., performing fast random reads from an SSD) to acquire the one or more incrementals. The distributed job scheduler108may be used for scheduling backup jobs that acquire and store virtual machine snapshots for one or more virtual machines over time. The distributed job scheduler108may follow a backup schedule to back up an entire image of a virtual machine at a particular point in time or one or more virtual disks associated with the virtual machine at the particular point in time. In one example, the backup schedule may specify that the virtual machine be backed up at a snapshot capture frequency, such as every two hours or every 24 hours. Each backup job may be associated with one or more tasks to be performed in a sequence. Each of the one or more tasks associated with a job may be run on a particular node within a cluster. In some cases, the distributed job scheduler108may schedule a specific job to be run on a particular node based on data stored on the particular node. For example, the distributed job scheduler108may schedule a virtual machine snapshot job to be run on a node in a cluster that is used to store snapshots of the virtual machine in order to reduce network congestion. The distributed job scheduler108may comprise a distributed fault-tolerant job scheduler, in which jobs affected by node failures are recovered and rescheduled to be run on available nodes. In one embodiment, the distributed job scheduler108may be fully decentralized and implemented without the existence of a master node. The distributed job scheduler108may run job scheduling processes on each node in a cluster or on a plurality of nodes in the cluster. In one example, the distributed job scheduler108may run a first set of job scheduling processes on a first node in the cluster, a second set of job scheduling processes on a second node in the cluster, and a third set of job scheduling processes on a third node in the cluster. The first set of job scheduling processes, the second set of job scheduling processes, and the third set of job scheduling processes may store information regarding jobs, schedules, and the states of jobs using a metadata store, such as distributed metadata store110. In the event that the first node running the first set of job scheduling processes fails (e.g., due to a network failure or a physical machine failure), the states of the jobs managed by the first set of job scheduling processes may fail to be updated within a threshold period of time (e.g., a job may fail to be completed within 30 seconds or within 3 minutes from being started). 
In response to detecting jobs that have failed to be updated within the threshold period of time, the distributed job scheduler108may undo and restart the failed jobs on available nodes within the cluster. The job scheduling processes running on at least a plurality of nodes in a cluster (e.g., on each available node in the cluster) may manage the scheduling and execution of a plurality of jobs. The job scheduling processes may include run processes for running jobs, cleanup processes for cleaning up failed tasks, and rollback processes for rolling back or undoing any actions or tasks performed by failed jobs. In one embodiment, the job scheduling processes may detect that a particular task for a particular job has failed and, in response, may perform a cleanup process to clean up or remove the effects of the particular task and then perform a rollback process that processes one or more completed tasks for the particular job in reverse order to undo the effects of the one or more completed tasks. Once the particular job with the failed task has been undone, the job scheduling processes may restart the particular job on an available node in the cluster. The distributed job scheduler108may manage a job in which a series of tasks associated with the job are to be performed atomically (i.e., partial execution of the series of tasks is not permitted). If the series of tasks cannot be completely executed, or any failure occurs in one of the series of tasks during execution (e.g., a hard disk associated with a physical machine fails or a network connection to the physical machine fails), then the state of a data management system may be returned to a state as if none of the series of tasks were ever performed. The series of tasks may correspond with an ordering of tasks, and the distributed job scheduler108may ensure that each task of the series of tasks is executed based on the ordering of tasks. Tasks that do not have dependencies with each other may be executed in parallel. In some cases, the distributed job scheduler108may schedule each task of a series of tasks to be performed on a specific node in a cluster. In other cases, the distributed job scheduler108may schedule a first task of the series of tasks to be performed on a first node in a cluster and a second task of the series of tasks to be performed on a second node in the cluster. In these cases, the first task may have to operate on a first set of data (e.g., a first file stored in a file system) stored on the first node and the second task may have to operate on a second set of data (e.g., metadata related to the first file that is stored in a database) stored on the second node. In some embodiments, one or more tasks associated with a job may have an affinity to a specific node in a cluster. In one example, if the one or more tasks require access to a database that has been replicated on three nodes in a cluster, then the one or more tasks may be executed on one of the three nodes. In another example, if the one or more tasks require access to multiple chunks of data associated with a virtual disk that has been replicated over four nodes in a cluster, then the one or more tasks may be executed on one of the four nodes. Thus, the distributed job scheduler108may assign one or more tasks associated with a job to be executed on a particular node in a cluster based on the location of data required to be accessed by the one or more tasks. 
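A compact sketch of the cleanup-and-rollback behavior described above, in which each task carries its own undo and cleanup actions, might look like the following; the Task structure is invented for illustration and is not the disclosed scheduler.

    class Task:
        def __init__(self, run, undo, cleanup=lambda: None):
            self.run, self.undo, self.cleanup = run, undo, cleanup

    def run_job(tasks):
        # Run tasks in order; on a failure, clean up the failed task and
        # roll back completed tasks in reverse order, so the job either
        # fully completes or leaves no effects (atomicity).
        completed = []
        for task in tasks:
            try:
                task.run()
                completed.append(task)
            except Exception:
                task.cleanup()                    # remove partial effects
                for done in reversed(completed):  # rollback, newest first
                    done.undo()
                raise  # the job can then be restarted on an available node

A scheduler built this way can restart the whole job on another node after the rollback, mirroring the undo-and-restart behavior described above.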
In one embodiment, the distributed job scheduler108may manage a first job associated with capturing and storing a snapshot of a virtual machine periodically (e.g., every 30 minutes). The first job may include one or more tasks, such as communicating with a virtualized infrastructure manager, such as the virtualized infrastructure manager199inFIG.1B, to create a frozen copy of the virtual machine and to transfer one or more chunks (or one or more files) associated with the frozen copy to a storage appliance, such as storage appliance170inFIG.1A. The one or more tasks may also include generating metadata for the one or more chunks, storing the metadata using the distributed metadata store110, storing the one or more chunks within the distributed file system112, and communicating to the virtualized infrastructure manager that the frozen copy of the virtual machine may be unfrozen or released from the frozen state. The metadata for a first chunk of the one or more chunks may include information specifying a version of the virtual machine associated with the frozen copy, a time associated with the version (e.g., the snapshot of the virtual machine was taken at 5:30 p.m. on Jun. 29, 2024), and a file path to where the first chunk is stored within the distributed file system112(e.g., the first chunk is located at /snapshots/VM_B/s1/s1.chunk1). The one or more tasks may also include deduplication, compression (e.g., using a lossless data compression algorithm such as LZ4 or LZ77), decompression, encryption (e.g., using a symmetric key algorithm such as Triple DES or AES-256), and decryption related tasks. The virtualization interface104may provide an interface for communicating with a virtualized infrastructure manager managing a virtualization infrastructure, such as virtualized infrastructure manager199inFIG.1B, and requesting data associated with virtual machine snapshots from the virtualization infrastructure. The virtualization interface104may communicate with the virtualized infrastructure manager using an API for accessing the virtualized infrastructure manager (e.g., to communicate a request for a snapshot of a virtual machine). In this case, storage appliance170may request and receive data from a virtualized infrastructure without requiring agent software to be installed or running on virtual machines within the virtualized infrastructure. The virtualization interface104may request data associated with virtual blocks stored on a virtual disk of the virtual machine that have changed since a last snapshot of the virtual machine was taken or since a specified prior point in time. Therefore, in some cases, if a snapshot of a virtual machine is the first snapshot taken of the virtual machine, then a full image of the virtual machine may be transferred to the storage appliance. However, if the snapshot of the virtual machine is not the first snapshot taken of the virtual machine, then only the data blocks of the virtual machine that have changed since a prior snapshot was taken may be transferred to the storage appliance. The virtual machine search index106may include a list of files that have been stored using a virtual machine and a version history for each of the files in the list. 
Each version of a file may be mapped to the earliest point in time snapshot of the virtual machine that includes the version of the file or to a snapshot of the virtual machine that includes the version of the file (e.g., the latest point in time snapshot of the virtual machine that includes the version of the file). In one example, the virtual machine search index106may be used to identify a version of the virtual machine that includes a particular version of a file (e.g., a particular version of a database, a spreadsheet, or a word processing document). In some cases, each of the virtual machines that are backed up or protected using storage appliance170may have a corresponding virtual machine search index. In one embodiment, as each snapshot of a virtual machine is ingested, each virtual disk associated with the virtual machine is parsed in order to identify a file system type associated with the virtual disk and to extract metadata (e.g., file system metadata) for each file stored on the virtual disk. The metadata may include information for locating and retrieving each file from the virtual disk. The metadata may also include a name of a file, the size of the file, the last time at which the file was modified, and a content checksum for the file. Each file that has been added, deleted, or modified since a previous snapshot was captured may be determined using the metadata (e.g., by comparing the time at which a file was last modified with a time associated with the previous snapshot). Thus, for every file that has existed within any of the snapshots of the virtual machine, a virtual machine search index may be used to identify when the file was first created (e.g., corresponding with a first version of the file) and at what times the file was modified (e.g., corresponding with subsequent versions of the file). Each version of the file may be mapped to a particular version of the virtual machine that stores that version of the file. In some cases, if a virtual machine includes a plurality of virtual disks, then a virtual machine search index may be generated for each virtual disk of the plurality of virtual disks. For example, a first virtual machine search index may catalog and map files located on a first virtual disk of the plurality of virtual disks and a second virtual machine search index may catalog and map files located on a second virtual disk of the plurality of virtual disks. In this case, a global file catalog or a global virtual machine search index for the virtual machine may include the first virtual machine search index and the second virtual machine search index. A global file catalog may be stored for each virtual machine backed up by a storage appliance within a file system, such as distributed file system112inFIG.1C. The data management system102may comprise an application running on the storage appliance that manages and stores one or more snapshots of a virtual machine. In one example, the data management system102may comprise a highest level layer in an integrated software stack running on the storage appliance. The integrated software stack may include the data management system102, the virtualization interface104, the distributed job scheduler108, the distributed metadata store110, and the distributed file system112. In some cases, the integrated software stack may run on other computing devices, such as a server or computing device154inFIG.1A.
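A minimal sketch of the per-virtual-machine search index described above is given below. The dictionary structure and field names are illustrative assumptions, not the actual on-disk format of the virtual machine search index106.

vm_search_index = {
    "/docs/report.doc": [
        {"version": 1, "snapshot": "VM_A/s1", "modified": "2024-06-01"},
        {"version": 2, "snapshot": "VM_A/s3", "modified": "2024-06-15"},
    ],
}

def snapshot_for_version(index, path, version):
    # Return the snapshot mapped to the given version of the file.
    for entry in index.get(path, []):
        if entry["version"] == version:
            return entry["snapshot"]
    return None

print(snapshot_for_version(vm_search_index, "/docs/report.doc", 2))
# VM_A/s3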
The data management system102may use the virtualization interface104, the distributed job scheduler108, the distributed metadata store110, and the distributed file system112to manage and store one or more snapshots of a virtual machine, and/or manage operations in online data format conversion during file transfer to a remote location, for example. More specific operations in example data format conversion techniques are discussed further below. Each snapshot of the virtual machine may correspond with a point in time version of the virtual machine. The data management system102may generate and manage a list of versions for the virtual machine. Each version of the virtual machine may map to or reference one or more chunks and/or one or more files stored within the distributed file system112. Combined together, the one or more chunks and/or the one or more files stored within the distributed file system112may comprise a full image of the version of the virtual machine. FIG.2is a block diagram illustrating an example cluster200of a distributed decentralized database, according to some example embodiments. As illustrated, the example cluster200includes five nodes, nodes1-5. In some example embodiments, each of the five nodes runs on a different machine, such as physical machine130inFIG.1C or virtual machine198inFIG.1B. The nodes in the example cluster200can include instances of peer nodes of a distributed database (e.g., cluster-based database, distributed decentralized database management system, a NoSQL database, Apache Cassandra, DataStax, MongoDB, CouchDB), according to some example embodiments. The distributed database system is distributed in that data is sharded or distributed across the example cluster200in shards or chunks and decentralized in that there is no central storage device and no single point of failure. The system operates under an assumption that multiple nodes may go down, come back up, become non-responsive, and so on. Sharding is the splitting up of the data horizontally and managing each shard separately on different nodes. For example, if the data managed by the example cluster200can be indexed using the 26 letters of the alphabet, node1can manage a first shard that handles records that start with A through E, node2can manage a second shard that handles records that start with F through J, and so on. In some example embodiments, data written to one of the nodes is replicated to one or more other nodes per a replication protocol of the example cluster200. For example, data written to node1can be replicated to nodes2and3. If node1prematurely terminates, node2and/or3can be used to provide the replicated data. In some example embodiments, each node of example cluster200frequently exchanges state information about itself and other nodes across the example cluster200using gossip protocol. Gossip protocol is a peer-to-peer communication protocol in which each node randomly shares (e.g., communicates, requests, transmits) location and state information about the other nodes in a given cluster. Writing: For a given node, a sequentially written commit log captures the write activity to ensure data durability. The data is then written to an in-memory structure (e.g., a memtable, write-back cache). Each time the in-memory structure is full, the data is written to disk in a Sorted String Table data file. In some example embodiments, writes are automatically partitioned and replicated throughout the example cluster200.
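The alphabetic sharding scheme described above may be sketched as follows; the five-letter ranges extend the example given for nodes1and2, and the helper name is an assumption made for illustration.

SHARDS = {  # node id -> (first letter, last letter) of its shard
    1: ("A", "E"), 2: ("F", "J"), 3: ("K", "O"),
    4: ("P", "T"), 5: ("U", "Z"),
}

def node_for_record(key):
    first = key[0].upper()
    for node, (lo, hi) in SHARDS.items():
        if lo <= first <= hi:
            return node
    raise ValueError("record keys are assumed to start with a letter")

print(node_for_record("Falcon"))  # 2
print(node_for_record("Quark"))   # 4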
Reading: Any node of example cluster200can receive a read request (e.g., query) from an external client. If the node that receives the read request manages the data requested, the node provides the requested data. If the node does not manage the data, the node determines which node manages the requested data. The node that received the read request then acts as a proxy between the requesting entity and the node that manages the data (e.g., the node that manages the data sends the data to the proxy node, which then provides the data to an external entity that generated the request). The distributed decentralized database system is decentralized in that there is no single point of failure due to the nodes being symmetrical and seamlessly replaceable. For example, whereas conventional distributed data implementations have nodes with different functions (e.g., master/slave nodes, asymmetrical database nodes, federated databases), the nodes of example cluster200are configured to function the same way (e.g., as symmetrical peer database nodes that communicate via gossip protocol, such as Cassandra nodes) with no single point of failure. If one of the nodes in example cluster200terminates prematurely (“goes down”), another node can rapidly take the place of the terminated node without disrupting service. The example cluster200can be a container for a keyspace, which is a container for data in the distributed decentralized database system (e.g., whereas a database is a container for containers in conventional relational databases, the Cassandra keyspace is a container for a Cassandra database system). In some examples, a data management system (for example data management system102above) can take a backup of a user's data. The user data is ingested and may be stored in a journaled file format suitable for high write performance, enabling the taking of a backup in a short span of time, for example. An example journaled file format is shown inFIG.3. With reference to that view, a journaled file format302is a sparse representation of the logical space304of data in a file where logical holes306(as shown inFIG.3) are not written. As the data is written to the file at308at different logical offsets (for example offset310for data block3and offset312for data block4) these data blocks are simply appended to a data file314. Information such as the logical offset (e.g.,310or312) in the journaled file, and/or a physical offset in a data file314, and/or a size318of a data block (e.g. data block1,2,3, and/or4) and so forth is stored in a separate index in memory (sorted by logical offset of blocks). Once all the data is written to the journaled file, the index is written down to a separate index file316associated with this journaled file. In some examples, overwrites may occur in the logical space304of the file, but in this instance, some examples do not amend or discard parts of the data blocks in the data file314and only modify information in the index associated with the overwritten blocks. This can enable writing the actual data blocks sequentially in the data file resulting in higher write performance. In some examples, the index file316size is very small in comparison to the data file314. In some examples, the physical data block size is on the order of tens of KBs and the information corresponding to it in the index file is no more than 50 bytes. Some present examples include a patch file format.
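Before turning to the patch file format, the journaled layout just described may be sketched as follows. This is a minimal in-memory Python illustration; the class and field names are assumptions, and a real implementation would persist the data and index to separate files.

class JournaledFile:
    def __init__(self):
        self.data = bytearray()  # stand-in for the on-disk data file
        self.index = {}          # logical offset -> (physical offset, size)

    def write(self, logical_offset, block):
        physical_offset = len(self.data)
        self.data.extend(block)  # data blocks are appended sequentially
        self.index[logical_offset] = (physical_offset, len(block))

    def flush_index(self):
        # The index is written out sorted by logical offset of blocks.
        return sorted(self.index.items())

jf = JournaledFile()
jf.write(0, b"block1")
jf.write(65536, b"block3")   # logical holes are simply never written
jf.write(0, b"block1-v2")    # overwrite: only the index entry changes
print(jf.flush_index())      # [(0, (12, 9)), (65536, (6, 6))]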
In some instances, users may wish to configure a data backup system (or data management system) to archive snapshotted data to a desired cloud or offsite location. It can be desirable to store snapshotted data in patch file format because it is well suited for situations where file data is immutable (it does not change over time) and high read performance is needed. Further, a patch file format is more space efficient than a journaled file format because data which has been overwritten is not stored. FIG.4shows a schematic patch file format402, according to some examples. In the illustrated example, actual data is sparsely distributed over the logical space404of the patch file406. Logical holes408exist in the file where data is not written to the file. In a physical patch file414, each part410of the data is written in one or more blocks412of fixed size (for example 64 KB) in the physical patch file414. One or more batches of data blocks412(for example the first batch of four data blocks, as shown) are accompanied or prefaced by an index block416. The index block416stores information regarding mapping of the logical offset and size of the data blocks, to a physical offset and size in the physical patch file. The first index block416stores this information for a series of data blocks including the four data blocks just discussed, made up by, in this instance, three blocks from the first data part410, and one block from the second data part410. A second index block418may store corresponding information about the next batch (or series) of data blocks, and so on. The index blocks can be placed either before or after a batch of data blocks depending on a given implementation. Backed up data is generally read in a sequential fashion when restoring it. As such, this format can be very suitable for reading data with high throughput from disks as the physical data blocks are stored in a sorted order of logical offsets. This is in contrast to the journaled file format where the actual physical data blocks can be randomly spread in the data file. In some examples, the size of the index blocks416or418is insignificant in comparison to the overall physical size of the file406, since approximately 1 GB of data can be indexed by an index block of size 200 KB. In some examples, a method of uploading a snapshot using data converted from a journaled file to a patch file format to an archival location may include performing a local journaled file to patch file conversion in a local node cluster and then copying the converted file to the remote archival location. This process may invoke three I/O operations for every data block in the local cluster, namely reading from a journaled file during conversion, writing to a patch file during conversion, and reading from the converted patch file during copying. These operations may be cumbersome or inconvenient in some examples. The present disclosure provides examples that enable “on the fly” archival upload from a journaled file format to a patch file format. In some “on the fly” examples, an entire conversion process is simulated. A simulation is performed in a way that there is no reading or writing of the data blocks of the file that constitute a major part of the file, as discussed above. This approach can be used in a variety of use cases, such as transferring a file to another location in another format, in the following example scenarios: a file or snapshot archival to a cloud location, or a file or snapshot replication to another remote cluster.
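Returning to the layout described above, the placement of data blocks in sorted logical order within fixed-size physical blocks may be sketched as follows. The 64 KB block size matches the example given; the batch size and function name are illustrative assumptions.

BLOCK_SIZE = 64 * 1024  # fixed physical block size, e.g. 64 KB
BATCH = 4               # data blocks covered by one index block

def build_patch_layout(extents):
    # extents: list of (logical_offset, size), assumed non-overlapping.
    layout, physical = [], 0
    for logical, size in sorted(extents):
        layout.append({"logical": logical, "physical": physical,
                       "size": size})
        # Each part occupies one or more fixed-size physical blocks.
        physical += -(-size // BLOCK_SIZE) * BLOCK_SIZE
    # Group the mapping entries into per-batch index blocks.
    index_blocks = [layout[i:i + BATCH]
                    for i in range(0, len(layout), BATCH)]
    return layout, index_blocks

layout, index_blocks = build_patch_layout([(131072, 70000), (0, 4096)])
print(layout[0])  # the extent at logical offset 0 comes first physically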
Some examples use a virtual patch file locally for jobs or processes which can only work on patch files and not on journal files, such as operations performed in a specific format F2, which is different from an original format F1. Example use cases include computing statistics for a patch file without, in fact, converting data residing in a different file format to a patch file. Example statistics can be used to monitor or track metrics for a snapshot in reports or other analysis, or to inform decisions for triggering other relevant future processes or jobs. In some examples, an “actual” (non-simulated) conversion includes operations such as reading data at a certain offset from a journaled file. This may involve first inspecting the index and then actually reading the data blocks residing at an appropriate location. Each data block is then passed on to a patch file constructor (whose purpose is to create the patch file) and the constructor places the data block at a certain location in the final file (along with writing some metadata in the index blocks of the file). A “simulated” approach, according to some present examples, involves simulating an actual process by passing only the attributes of the data blocks, without reading the data, to the patch file constructor. The patch file constructor places a fake or virtual data block according to the attributes of that block at some physical location inside the patch file. Actual data is not written to the file. These attributes are captured and stored in a file, referred to as a patch file image. Example operations in an example simulation502are now described with reference toFIG.5. A file506has a logical space504. As shown by the flows in the figure, for a particular data block in the file506, information about its size, its physical offset in a patch file, and physical offset in a journaled file (along with journaled file path) is collected and stored in a separate file called a patch file image508. Specific operations may include: in operation1, scanning an index file to access or read, in operation2, attribute information including offset and data size of a given data block, as shown. In operation3, these attributes (only) are used to implement construction of a patch file, as shown. Operation3may be repeated in successive steps for further data blocks as the patch file image construction is completed. The sequence of a collection of the data block attributes may be based on or driven by an index order in the index file, for example as shown. In operation4, the attributes, or simulation information, are collected for all the relevant data blocks of the constructed patch file and written to the patch file image508. The data of the index blocks510generated in the patch file construction (operation3) is copied “as is” in the patch file image508along with storing its physical offset in the patch file. In some examples, the resulting patch file image is very small in size in comparison to the actual patch file as it does not store the data blocks themselves. This allows some examples to store the patch file image in flash memory instead of a conventional disk for faster reads and writes. In some examples, the size of a patch file image may be 520 KB for a patch file having a size of 1.28 GB. With all the above simulation information, the patch file image thus formed contains information identifying exactly where all the pieces of the patch file reside.
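The simulation just described may be sketched end to end as follows. The index structure and names are illustrative assumptions, the physical placement is simplified, and the comments are keyed to operations1through4above.

def simulate_conversion(journal_index, journal_path):
    # journal_index: {logical offset: (journal physical offset, size)}.
    image, patch_physical = [], 0
    # Operations 1 and 2: scan the index file in sorted logical order and
    # read only the attributes of each block, never the block data itself.
    for logical, (journal_physical, size) in sorted(journal_index.items()):
        # Operation 3: place a virtual block at the next patch file slot.
        image.append({"logical": logical,
                      "patch_physical": patch_physical,
                      "size": size,
                      "journal_path": journal_path,
                      "journal_physical": journal_physical})
        patch_physical += size
    # Operation 4: the collected attributes form the patch file image.
    return image

index = {0: (6, 4096), 131072: (0, 6)}
print(simulate_conversion(index, "/data/vm_b.journal")[0])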
With reference toFIG.6, if a read request602is made for a block at offset, say 1 MB, of size 4 KB, then that read request can be re-routed using the information in a patch file image612. Here, two re-routings may be invoked and occur at604and606, for example, to respective physical locations608and610in a journaled data file from where the actual data can be read. In some examples, this re-routing management can be performed by a data management system, such as the data management system102, as described above. In some examples, the actual re-routing is performed inside a file system, such as a distributed file system112described above. The file system may reside on one or more node clusters, for example as described with reference toFIG.2. A notional patch file614which might ordinarily be the subject of the read request602(instead of the patch file image) is shown for purposes of discussion only. In some examples, it is not present or involved in a re-routing operation. A virtual patch file can be exposed using the patch file image that was formed as a result of the simulation of the conversion process. A user of this virtual patch file need not know and will be unaware of the internal content thereof and will find no difference between an actual patch file and this virtual one. A read request for this virtual patch file at any arbitrary location will return the same data as if it were an actual patch file. In some aspects, a virtual file may be considered as nothing but a wrapper layer in code that allows the writing of custom logic for read/write requests for a file which is being exposed via a file system, so that a different view thereof can be presented to a user. In some examples, the journaled and patch file formats described herein may be more or less rich in the information they store. Some file formats have the ability to store duplicate blocks of data in the form of references to original ones. For example, for data blocks already existing in a file system, the same (duplicate) data blocks are not stored again. Only pointers to them are stored. These pointers or references may reside in the relevant index file in the case of journaled files and index blocks in the case of patch files. During the conversion processes discussed above, these references or pointers (or for that matter any other extra file metadata) can be copied over to the index blocks of the patch file. A patch file image will still be able to store this information as it copies the index block of the patch file in it. The methods described herein can, in some examples, provide expedited processes for local conversion of a file in different formats, and thus save a significant amount of time in the end-to-end process of archival upload of snapshotted data. This may be important in situations where large backed up files need to be archived on a periodic basis according to an SLA policy. Low overall archival time may help to support aggressive SLA policies. In some examples, the number of I/Os for each data block on a local cluster is reduced by approximately 60-70%, for example by approximately 66%, by virtue of only needing to read each data block once. Although some I/O processing occurs while scanning data and constructing the intermediate file for a patch file image, it is not significant in some examples as it only encompasses the metadata part of the file, which is very small as mentioned earlier.
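The re-routing of a read request through the patch file image may be sketched as follows, reusing the illustrative image entries produced by the previous sketch; the names are assumptions made for illustration.

def reroute_read(image, offset, size):
    # Map a read on the virtual patch file to journaled-file reads.
    reads = []
    for entry in image:
        start = entry["patch_physical"]
        end = start + entry["size"]
        lo, hi = max(offset, start), min(offset + size, end)
        if lo < hi:  # the request overlaps this virtual block
            reads.append((entry["journal_path"],
                          entry["journal_physical"] + (lo - start),
                          hi - lo))
    return reads

image = [{"patch_physical": 0, "size": 4096,
          "journal_path": "/data/vm_b.journal", "journal_physical": 6}]
print(reroute_read(image, 1024, 512))
# [('/data/vm_b.journal', 1030, 512)]

As noted above, because the simulation and re-routing touch only block attributes, each data block itself is read only once, which is the source of the reduction in I/O processing.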
This reduction in I/O processing can be highly beneficial for cluster management as there may be many jobs or processes simultaneously contending for the limited availability of I/O resources. Some examples disclosed herein include methods.FIG.7is a flow chart depicting operations in an example method700of online data conversion. The example method700may include: at operation702, identifying a snappable file in a distributed file system; at operation704, identifying a first data block in the snappable file, the first data block including data and attribute data; at operation706, scanning an index file to access the attribute data of the first data block; at operation708, initiating construction of a patch file based on the accessed attribute data; at operation710, repeating the scanning of the index file to access attribute data of at least a further second data block, the second data block including data and attribute data; at operation712, completing construction of the patch file based on the accessed attribute data of the first and second data blocks; at operation714, generating conversion simulation information by collecting attribute data for all the data blocks of the constructed patch file; and, at operation716, writing the simulation information to a patch file image. In some examples, the attribute data of the first and second data blocks includes at least logical space offset and data size information. In some examples, scanning the index file to access attribute data of the first data block is performed without reading the data of the first data block. In some examples, the patch file is constructed without writing the data from the first or second data block to the patch file. In some examples, the method700further comprises receiving a request to transfer data of the snappable file to a remote location, the transfer involving or necessitating a conversion of data from a first data format to a second data format; and effecting a data format conversion for the transfer using only the simulation information. In some examples, the method700further comprises receiving a read request for data in the first or second data block; and re-routing the read request to corresponding data in a journaled file using information contained in the patch file image. In some examples, a tangible or non-transitory machine-readable medium includes instructions which, when read by a machine, cause the machine to perform one or more operations as summarized above or as described elsewhere herein. FIG.8is a block diagram800illustrating an example software architecture806that can be used to implement various embodiments described herein.FIG.8is merely a non-limiting example of a software architecture806, and it will be appreciated that many other architectures can be implemented to facilitate the functionality described herein. In various embodiments, the software is implemented by a hardware layer852, which includes a processor854operating on instructions804, a memory856storing instructions804, and other hardware858. For some embodiments, the hardware layer852is implemented using a machine900ofFIG.9that includes processors910, memory930, and I/O components950. In this example architecture806, the software can be conceptualized as a stack of layers where each layer may provide a particular functionality. For example, the software includes layers such as an operating system802, libraries820, frameworks818, and applications816.
Operationally, the applications816invoke API calls808through the software stack and receive messages812in response to the API calls808, consistent with some embodiments. In various implementations, the operating system802manages hardware resources and provides common services. The operating system802includes, for example, a kernel822, services824, and drivers826. The kernel822acts as an abstraction layer between the hardware and the other software layers, consistent with some embodiments. For example, the kernel822provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services824can provide other common services for the other software layers. The drivers826are responsible for controlling or interfacing with the underlying hardware, according to some embodiments. For instance, the drivers826can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth. In some embodiments, the libraries820provide a low-level common infrastructure utilized by the applications816. The libraries820can include system libraries844(e.g., C standard library) that can provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries820can include API libraries846such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in two dimensions (2D) and three dimensions (3D) in a graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries820can also include a wide variety of other libraries848to provide many other APIs to the applications816. The frameworks818provide a high-level common infrastructure that can be utilized by the applications816, according to some embodiments. For example, the frameworks818provide various GUI functions, high-level resource management, high-level location services, and so forth. The frameworks818can provide a broad spectrum of other APIs that can be utilized by the applications816, some of which may be specific to a particular operating system or platform. In some embodiments, the applications816include a built-in application838and a broad assortment of other applications such as a third-party application840. According to some embodiments, the applications816are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications816, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). 
In a specific example, the third-party application840(e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application840can invoke the API calls808provided by the operating system802to facilitate functionality described herein. FIG.9illustrates a diagrammatic representation of an example machine900in the form of a computer system within which a set of instructions may be executed for causing the machine to perform any one or more of the methodologies of various embodiments described herein. Specifically,FIG.9shows a diagrammatic representation of the machine900in the example form of a computer system, within which instructions916(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine900to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions916may cause the machine900to execute the method700ofFIG.7. Additionally, or alternatively, the instructions916may implement operations of other methods described herein. The instructions916transform the general, non-programmed machine900into a particular machine900programmed to carry out the described and illustrated functions in the manner described. In alternative embodiments, the machine900operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine900may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine900may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions916, sequentially or otherwise, that specify actions to be taken by the machine900. Further, while only a single machine900is illustrated, the term “machine” shall also be taken to include a collection of machines900that individually or jointly execute the instructions916to perform any one or more of the methodologies discussed herein. The machine900may include processors910, memory930, and I/O components950, which may be configured to communicate with each other such as via a bus902. In some embodiments, the processors910(e.g., a CPU, a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a GPU, a Digital Signal Processor (DSP), an ASIC, a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor912and a processor914that may execute the instructions916. The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. 
AlthoughFIG.9shows multiple processors910, the machine900may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The memory930may include a main memory932, a static memory934, and a storage unit936, all accessible to the processors910such as via the bus902. The main memory932, the static memory934, and storage unit936store the instructions916embodying any one or more of the methodologies or functions described herein. The instructions916may also reside, completely or partially, within the main memory932, within the static memory934, within the storage unit936, within at least one of the processors910(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine900. The storage unit936can comprise a machine readable medium938for storing the instructions916. The I/O components950may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components950that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components950may include many other components that are not shown inFIG.9. The I/O components950are grouped according to functionality merely for simplifying the following discussion and the grouping is in no way limiting. In various embodiments, the I/O components950may include output components952and input components954. The output components952may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components954may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further embodiments, the I/O components950may include biometric components956, motion components958, environmental components960, or position components962, among a wide array of other components. For example, the biometric components956may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like.
The motion components958may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environmental components960may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components962may include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components950may include communication components964operable to couple the machine900to a network980or devices970via a coupling982and a coupling972, respectively. For example, the communication components964may include a network interface component or another suitable device to interface with the network980. In further examples, the communication components964may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices970may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). Moreover, the communication components964may detect identifiers or include components operable to detect identifiers. For example, the communication components964may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components964, such as location via IP geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. The various memories (i.e.,930,932,934, and/or memory of the processor(s)910) and/or storage unit936may store one or more sets of instructions and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions916), when executed by processor(s)910, cause various operations to implement the disclosed embodiments.
As used herein, the terms “machine-storage medium,” “device-storage medium,” and “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms refer to a single or multiple storage devices and/or media (e.g., a centralized or distributed database, and/or associated caches and servers) that store executable instructions and/or data. The terms shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media and/or device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), EEPROM, FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium” discussed below. In various embodiments, one or more portions of the network980may be an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, the Internet, a portion of the Internet, a portion of the PSTN, a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, the network980or a portion of the network980may include a wireless or cellular network, and the coupling982may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling982may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long range protocols, or other data transfer technology. The instructions916may be transmitted or received over the network980using a transmission medium via a network interface device (e.g., a network interface component included in the communication components964) and utilizing any one of a number of well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions916may be transmitted or received using a transmission medium via the coupling972(e.g., a peer-to-peer coupling) to the devices970. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure. The terms “transmission medium” and “signal medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions916for execution by the machine900, and includes digital or analog communications signals or other intangible media to facilitate communication of such software. 
Hence, the terms “transmission medium” and “signal medium” shall be taken to include any form of modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The terms “machine-readable medium,” “computer-readable medium,” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. The terms are defined to include both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. Other embodiments can comprise corresponding systems, apparatus, and computer programs recorded on one or more machine readable media, each configured to perform the operations of the methods. The disclosed technology may be described in the context of computer-executable instructions, such as software or program modules, being executed by a computer or processor. The computer-executable instructions may comprise portions of computer program code, routines, programs, objects, software components, data structures, or other types of computer-related structures that may be used to perform processes using a computer. In some cases, hardware or combinations of hardware and software may be substituted for software or used in place of software. Computer program code used for implementing various operations or aspects of the disclosed technology may be developed using one or more programming languages, including an object-oriented programming language such as Java or C++, a procedural programming language such as the “C” programming language or Visual Basic, or a dynamic programming language such as Python or JavaScript. In some cases, computer program code or machine-level instructions derived from the computer program code may execute entirely on an end user's computer, partly on an end user's computer, partly on an end user's computer and partly on a remote computer, or entirely on a remote computer or server. For purposes of this document, it should be noted that the dimensions of the various features depicted in the Figures may not necessarily be drawn to scale. For purposes of this document, reference in the specification to “an embodiment,” “one embodiment,” “some embodiments,” or “another embodiment” may be used to describe different embodiments and do not necessarily refer to the same embodiment. For purposes of this document, a connection may be a direct connection or an indirect connection (e.g., via another part). In some cases, when an element is referred to as being connected or coupled to another element, the element may be directly connected to the other element or indirectly connected to the other element via intervening elements. When an element is referred to as being directly connected to another element, then there are no intervening elements between the element and the other element. For purposes of this document, the term “based on” may be read as “based at least in part on.” For purposes of this document, without additional context, use of numerical terms such as a “first” object, a “second” object, and a “third” object may not imply an ordering of objects, but may instead be used for identification purposes to identify different objects. For purposes of this document, the term “set” of objects may refer to a “set” of one or more of the objects.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
The foregoing and other features of the present disclosure will become apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure and are, therefore, not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through use of the accompanying drawings. DETAILED DESCRIPTION In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are explicitly contemplated and made part of this disclosure. The present disclosure is generally directed to a virtual computing system having a plurality of clusters, with each of the plurality of clusters having a plurality of nodes. Each of the plurality of nodes includes one or more virtual machines and other entities managed by an instance of a monitor such as a hypervisor. These and other components may be part of a datacenter, which may be managed by a user (e.g., an administrator or other authorized personnel). A distributed storage system, for providing storage and protection capabilities, is associated with the virtual computing system. The virtual computing system may be configured for providing database management services. For example, at least some of the one or more virtual machines within the virtual computing system may be configured as database virtual machines for storing one or more databases. These databases may be managed by a database system. The database system may provide a plurality of database services. For example, in some embodiments, the database system may provide database provisioning services and copy data management services. Database provisioning services involve creating and/or associating databases with the database system for management and use. Creating a new database and associating the database with the database system may be a complex and long, drawn-out process. A user desiring to create a new database with a provider of the database system may make a new database creation request with the database provider. The user request may pass through multiple entities (e.g., people, teams, etc.) of the database provider before a database satisfying the user request may be created. For example, the user may be required to work with a first entity of the database provider to specify the configuration (e.g., database engine type, number of storage disks needed, etc.) of the database that is desired. Upon receiving the database configuration, another entity of the database provider may configure a database virtual machine for hosting the database, while yet another entity may configure the networking settings to facilitate access to the database upon creation.
Yet another entity of the database provider may configure database protection services to back up and protect the database. All of these tasks may take a few to several days. Thus, creating the database is a time-intensive process and inconvenient for the user. The user may not have the time or desire to wait for the multiple days to create the database. Further, creating the database using the above procedure requires the user to rely on the other entities. If these other entities become unavailable, the user may have no choice but to wait for those entities to become operational again. Additionally, the user may not be fully privy to or even understand the various configurational details of the desired database that the user may be asked to provide to the other entities for creating the database. The present disclosure provides technical solutions to the above problems. Specifically, the database system of the present disclosure greatly simplifies the database provisioning service. The database system of the present disclosure allows the user to quickly and conveniently create a new database and associate the database with the database system without the need for contacting and working with multiple entities. The entire process of creating and associating the database with the database system may be completed by the user within a span of a few minutes instead of the multiple days mentioned above. The database system of the present disclosure provides a user-friendly, intuitive user interface that solicits information from and conveniently walks the user through the various steps for creating a new database within minutes. The database system may include a catalog of standardized configurations, which the user may select from the user interface for creating the database. The user may modify the standardized configurations or create custom configurations to suit their needs. By virtue of providing standardized configurations, the present disclosure simplifies the database creation process for the user. The user interface also hides the complexity of creating the database from the user. For example, the user need not worry about creating, partitioning, or associating storage space (e.g., storage disk space) with the database that is being created. The user may simply specify a size of the database that is desired in the user interface and the database system automatically translates that size into storage space. Thus, based upon the needs of the user, the user is able to specifically tailor the database during creation and create the database easily and quickly using the user interface. The database system also provides the ability to register an existing database with the database system. Such existing databases may have been created outside of the database system. Users having existing databases may desire to associate their databases with the database system for management. Similar to creating a new database in the database system, registering an existing database with the database system is easy, convenient, and may be completed within a span of a few minutes via the user interface. As with the creation of a new database, the user interface walks the user through the registration process, provides standardized configurations for the user to select from, and offers the ability to modify the standardized configurations and to create new configurations.
Upon registering the database with the database system, the database may take advantage of other database management services offered by the database system. Copy data management services involve protecting a database. Protecting a database means replicating a state of the database for creating a fully functional copy of the database. Replicating the state of the database may involve creating fully functional clones (e.g., back-ups) of the database. Since the clones are fully functional copies of the original or source database, a user may perform operations on the cloned copy that would otherwise be performed on the original database. For example, the user may perform reporting, auditing, testing, data analysis, etc. on the cloned copy of the original database. A cloned database may be created by periodically capturing snapshots of the source database. A snapshot stores the state of the source database at the point in time at which the snapshot is captured. The snapshot is thus a point in time image of the database. The snapshot may include a complete encapsulation of the virtual machine on which the database is created, including the configuration data of the virtual machine, the data stored within the database, and any metadata associated with the virtual machine. Any of a variety of snapshotting techniques may be used. For example, in some embodiments, copy-on-write, redirect-on-write, near-sync, or other snapshotting methods may be used to capture snapshots. From the snapshot, the source database may be recreated to the state at which the snapshot was captured. However, the number of snapshots that are captured in a given day may be limited. Specifically, because capturing a snapshot requires quiescing (e.g., pausing) the source database and entering a safe mode in which user operations are halted, it is desirable to take only a minimum number of snapshots in a day. Thus, the choices of state that may be recreated from a snapshot may be limited. If a state is desired that falls between the capture of two snapshots, the user is generally out of luck. Thus, the desire to limit the number of snapshots in a day results in a significant technical problem: changes made to a database since the last snapshot capture or between two snapshot captures may be lost. The present disclosure provides technical solutions to this problem. Specifically, the present disclosure automatically creates an instance of a database protection system for each database (e.g., source database) that is created within or registered with the database system. The database protection system instance may be configured to protect the source database by automatically capturing snapshots of the source database. Additionally, to avoid losing changes in state between two snapshot captures or since the last snapshot capture, the database system may capture transactional logs. A transactional log may be a text, image, disk, or other type of file that records every transaction or change that occurs on the source database since a last snapshot capture. Thus, by using the snapshots or a combination of snapshots and transactional logs, any state of the source database down to the last second (or even fractions of seconds or other time granularities) may be recreated. Specifically, states of the source database that fall between the capture of two snapshots may be recreated by using a combination of snapshots and transactional logs.
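By way of a non-limiting illustration, recreating a state that falls between two snapshot captures may be sketched as follows: the latest snapshot at or before the target time is selected, and the transactional log entries captured after that snapshot are replayed up to the target. The timestamps and structures are illustrative assumptions.

def recreate_state(snapshots, logs, target_time):
    # snapshots: list of (time, state dict); logs: list of (time, key, value).
    base_time, base_state = max(
        (s for s in snapshots if s[0] <= target_time),
        key=lambda s: s[0])
    state = dict(base_state)
    for t, key, value in sorted(logs):
        if base_time < t <= target_time:
            state[key] = value  # replay each change since the snapshot
    return state

snaps = [(100, {"x": 1}), (200, {"x": 3})]
logs = [(150, "x", 2), (230, "y", 9)]
print(recreate_state(snaps, logs, 240))  # {'x': 3, 'y': 9}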
The frequency of capturing transactional logs may be higher than the frequency of capturing snapshots in a day. For example, in some embodiments, by default, a transactional log may be captured every 30 minutes. In other embodiments, the user may define the frequency of capturing transactional logs. Further, since the source database is not quiesced (paused) for capturing the transactional log, user operations may continue while the transactional logs are being captured. Further, since the transactional logs only capture the changes in the database since the last snapshot capture, the transactional logs do not consume a lot of space. Thus, clones of the source database can be created to a point in time by using a combination of transactional logs and snapshots (e.g., between two snapshot captures), or based upon available snapshots (e.g., at the point of snapshot capture). Further, the frequency with which the snapshots and transactional logs are captured by the database system may depend upon the level of protection desired by the user. The database system may solicit a protection schedule and definition of a Service Level Agreement (“SLA”) from the user. For convenience, the database system may include built-in defaults of the protection schedule and SLA levels that the user may select from. The user may modify the defaults or define new parameters for the protection schedule and SLA. Thus, the level of protection accorded to each database associated with the database system may be individually tailored based upon the requirements of the user. The protection schedule may allow the user to define the frequency of snapshots and transactional logs to be captured each day, and the time-period for capturing daily, weekly, monthly, and/or quarterly snapshots based upon the SLA. Thus, the present disclosure provides an easy, convenient, cost effective, and user-friendly mechanism for creating and registering databases, as well as effectively protecting those databases. Referring now toFIG.1, a cluster100of a virtual computing system is shown, in accordance with some embodiments of the present disclosure. The cluster100includes a plurality of nodes, such as a first node105, a second node110, and a third node115. Each of the first node105, the second node110, and the third node115may also be referred to as a “host” or “host machine.” The first node105includes database virtual machines (“database VMs”)120A and120B (collectively referred to herein as “database VMs120”), a hypervisor125configured to create and run the database VMs, and a controller/service VM130configured to manage, route, and otherwise handle workflow requests between the various nodes of the cluster100. Similarly, the second node110includes database VMs135A and135B (collectively referred to herein as “database VMs135”), a hypervisor140, and a controller/service VM145, and the third node115includes database VMs150A and150B (collectively referred to herein as “database VMs150”), a hypervisor155, and a controller/service VM160. The controller/service VM130, the controller/service VM145, and the controller/service VM160are all connected to a network165to facilitate communication between the first node105, the second node110, and the third node115. Although not shown, in some embodiments, the hypervisor125, the hypervisor140, and the hypervisor155may also be connected to the network165.
Further, although not shown, one or more of the first node105, the second node110, and the third node115may include one or more containers managed by a monitor (e.g., container engine). The cluster100also includes and/or is associated with a storage pool170(also referred to herein as storage sub-system). The storage pool170may include network-attached storage175and direct-attached storage180A,180B, and180C. The network-attached storage175is accessible via the network165and, in some embodiments, may include cloud storage185, as well as a networked storage190. In contrast to the network-attached storage175, which is accessible via the network165, the direct-attached storage180A,180B, and180C includes storage components that are provided internally within each of the first node105, the second node110, and the third node115, respectively, such that each of the first, second, and third nodes may access its respective direct-attached storage without having to access the network165. It is to be understood that only certain components of the cluster100are shown inFIG.1. Nevertheless, several other components that are needed or desired in the cluster100to perform the functions described herein are contemplated and considered within the scope of the present disclosure. Although three of the plurality of nodes (e.g., the first node105, the second node110, and the third node115) are shown in the cluster100, in other embodiments, more or fewer than three nodes may be provided within the cluster. Likewise, although only two database VMs (e.g., the database VMs120, the database VMs135, the database VMs150) are shown on each of the first node105, the second node110, and the third node115, in other embodiments, the number of the database VMs on each of the first, second, and third nodes may vary to include other numbers of database VMs. Further, the first node105, the second node110, and the third node115may have the same number of database VMs (e.g., the database VMs120, the database VMs135, the database VMs150) or different numbers of database VMs.

In some embodiments, each of the first node105, the second node110, and the third node115may be a hardware device, such as a server. For example, in some embodiments, one or more of the first node105, the second node110, and the third node115may be an NX-1000 server, NX-3000 server, NX-6000 server, NX-8000 server, etc. provided by Nutanix, Inc. or server computers from Dell, Inc., Lenovo Group Ltd. or Lenovo PC International, Cisco Systems, Inc., etc. In other embodiments, one or more of the first node105, the second node110, or the third node115may be another type of hardware device, such as a personal computer, an input/output or peripheral unit such as a printer, or any type of device that is suitable for use as a node within the cluster100. In some embodiments, the cluster100may be part of a data center. Further, one or more of the first node105, the second node110, and the third node115may be organized in a variety of network topologies. Each of the first node105, the second node110, and the third node115may also be configured to communicate and share resources with each other via the network165. For example, in some embodiments, the first node105, the second node110, and the third node115may communicate and share resources with each other via the controller/service VM130, the controller/service VM145, and the controller/service VM160, and/or the hypervisor125, the hypervisor140, and the hypervisor155.
Also, although not shown, one or more of the first node105, the second node110, and the third node115may include one or more processing units configured to execute instructions. The instructions may be carried out by a special purpose computer, logic circuits, or hardware circuits of the first node105, the second node110, and the third node115. The processing units may be implemented in hardware, firmware, software, or any combination thereof. The term “execution” refers, for example, to the process of running an application or carrying out the operation called for by an instruction. The instructions may be written using one or more programming languages, scripting languages, assembly languages, etc. The processing units, thus, execute an instruction, meaning that they perform the operations called for by that instruction. The processing units may be operably coupled to the storage pool170, as well as with other elements of the first node105, the second node110, and the third node115to receive, send, and process information, and to control the operations of the underlying first, second, or third node. The processing units may retrieve a set of instructions from the storage pool170, such as, from a permanent memory device like a read only memory (“ROM”) device and copy the instructions in an executable form to a temporary memory device that is generally some form of random access memory (“RAM”). The ROM and RAM may both be part of the storage pool170, or in some embodiments, may be separately provisioned from the storage pool. In some embodiments, the processing units may execute instructions without first copying the instructions to the RAM. Further, the processing units may include a single stand-alone processing unit, or a plurality of processing units that use the same or different processing technology.

With respect to the storage pool170and particularly with respect to the direct-attached storage180A,180B, and180C, each may include a variety of types of memory devices that are suitable for a virtual computing system. For example, in some embodiments, one or more of the direct-attached storage180A,180B, and180C may include, but is not limited to, any type of RAM, ROM, flash memory, magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips, etc.), optical disks (e.g., compact disk (“CD”), digital versatile disk (“DVD”), etc.), smart cards, solid state devices, etc. Likewise, the network-attached storage175may include any of a variety of network accessible storage (e.g., the cloud storage185, the networked storage190, etc.) that is suitable for use within the cluster100and accessible via the network165. The storage pool170, including the network-attached storage175and the direct-attached storage180A,180B, and180C, together form a distributed storage system configured to be accessed by each of the first node105, the second node110, and the third node115via the network165, the controller/service VM130, the controller/service VM145, the controller/service VM160, and/or the hypervisor125, the hypervisor140, and the hypervisor155. In some embodiments, the various storage components in the storage pool170may be configured as virtual disks for access by the database VMs120, the database VMs135, and the database VMs150. Each of the database VMs120, the database VMs135, the database VMs150is a software-based implementation of a computing machine. The database VMs120, the database VMs135, the database VMs150emulate the functionality of a physical computer.
Specifically, the hardware resources, such as processing unit, memory, storage, etc., of the underlying computer (e.g., the first node105, the second node110, and the third node115) are virtualized or transformed by the respective hypervisor125, the hypervisor140, and the hypervisor155, into the underlying support for each of the database VMs120, the database VMs135, the database VMs150that may run its own operating system and applications on the underlying physical resources just like a real computer. By encapsulating an entire machine, including CPU, memory, operating system, storage devices, and network devices, the database VMs120, the database VMs135, the database VMs150are compatible with most standard operating systems (e.g., Windows, Linux, etc.), applications, and device drivers. Thus, each of the hypervisor125, the hypervisor140, and the hypervisor155is a virtual machine monitor that allows a single physical server computer (e.g., the first node105, the second node110, third node115) to run multiple instances of the database VMs120, the database VMs135, and the database VMs150with each VM sharing the resources of that one physical server computer, potentially across multiple environments. For example, each of the hypervisor125, the hypervisor140, and the hypervisor155may allocate memory and other resources to the underlying VMs (e.g., the database VMs120, the database VMs135, the database VM150A, and the database VM150B) from the storage pool170to perform one or more functions. By running the database VMs120, the database VMs135, and the database VMs150on each of the first node105, the second node110, and the third node115, respectively, multiple workloads and multiple operating systems may be run on a single underlying hardware computer (e.g., the first node, the second node, and the third node) to increase resource utilization and manage workflow.

When new database VMs are created (e.g., installed) on the first node105, the second node110, and the third node115, each of the new database VMs may be configured to be associated with certain hardware resources, software resources, storage resources, and other resources within the cluster100to allow those database VMs to operate as intended. The database VMs120, the database VMs135, the database VMs150, and any newly created instances of the database VMs may be controlled and managed by their respective instance of the controller/service VM130, the controller/service VM145, and the controller/service VM160. The controller/service VM130, the controller/service VM145, and the controller/service VM160are configured to communicate with each other via the network165to form a distributed system195. Each of the controller/service VM130, the controller/service VM145, and the controller/service VM160may be considered a local management system configured to manage various tasks and operations within the cluster100. For example, in some embodiments, the local management system may perform various management related tasks on the database VMs120, the database VMs135, and the database VMs150. The hypervisor125, the hypervisor140, and the hypervisor155of the first node105, the second node110, and the third node115, respectively, may be configured to run virtualization software, such as ESXi from VMWare, AHV from Nutanix, Inc., XenServer from Citrix Systems, Inc., etc.
The virtualization software on the hypervisor125, the hypervisor140, and the hypervisor155may be configured for running the database VMs120, the database VMs135, the database VM150A, and the database VM150B, respectively, and for managing the interactions between those VMs and the underlying hardware of the first node105, the second node110, and the third node115. Each of the controller/service VM130, the controller/service VM145, the controller/service VM160, the hypervisor125, the hypervisor140, and the hypervisor155may be configured as suitable for use within the cluster100. The network165may include any of a variety of wired or wireless network channels that may be suitable for use within the cluster100. For example, in some embodiments, the network165may include wired connections, such as an Ethernet connection, one or more twisted pair wires, coaxial cables, fiber optic cables, etc. In other embodiments, the network165may include wireless connections, such as microwaves, infrared waves, radio waves, spread spectrum technologies, satellites, etc. The network165may also be configured to communicate with another device using cellular networks, local area networks, wide area networks, the Internet, etc. In some embodiments, the network165may include a combination of wired and wireless communications. The network165may also include or be associated with network interfaces, switches, routers, network cards, and/or other hardware, software, and/or firmware components that may be needed or considered desirable to have in facilitating intercommunication within the cluster100. Referring still toFIG.1, in some embodiments, one of the first node105, the second node110, or the third node115may be configured as a leader node. The leader node may be configured to monitor and handle requests from other nodes in the cluster100. For example, a particular database VM (e.g., the database VMs120, the database VMs135, or the database VMs150) may direct an input/output request to the controller/service VM (e.g., the controller/service VM130, the controller/service VM145, or the controller/service VM160, respectively) on the underlying node (e.g., the first node105, the second node110, or the third node115, respectively). Upon receiving the input/output request, that controller/service VM may direct the input/output request to the controller/service VM (e.g., one of the controller/service VM130, the controller/service VM145, or the controller/service VM160) of the leader node. In some cases, the controller/service VM that receives the input/output request may itself be on the leader node, in which case, the controller/service VM does not transfer the request, but rather handles the request itself. The controller/service VM of the leader node may fulfil the input/output request (and/or request another component within/outside the cluster100to fulfil that request). Upon fulfilling the input/output request, the controller/service VM of the leader node may send a response back to the controller/service VM of the node from which the request was received, which in turn may pass the response to the database VM that initiated the request. In a similar manner, the leader node may also be configured to receive and handle requests (e.g., user requests) from outside of the cluster100. If the leader node fails, another leader node may be designated. 
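The forwarding behavior described above can be sketched as follows; the class name, method name, and string response are all hypothetical, and a real controller/service VM would route actual storage I/O rather than strings.

```python
class ControllerServiceVM:
    """Toy model of the leader-based input/output routing described above."""

    def __init__(self, node_name, is_leader=False):
        self.node_name = node_name
        self.is_leader = is_leader
        self.leader = self  # rebound after a leader node is designated

    def handle_io(self, request):
        # A non-leader controller/service VM forwards the request to the
        # leader; the leader fulfils it (or delegates fulfilment) and the
        # response travels back along the same path to the database VM.
        if not self.is_leader:
            return self.leader.handle_io(request)
        return f"{self.node_name} fulfilled: {request}"


leader = ControllerServiceVM("first node", is_leader=True)
follower = ControllerServiceVM("second node")
follower.leader = leader
print(follower.handle_io("read block 42"))  # routed through the leader
```

If the leader fails, re-binding the `leader` reference on each follower models the designation of a new leader node.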
Additionally, in some embodiments, although not shown, the cluster100may be associated with a central management system that is configured to manage and control the operation of multiple clusters in the virtual computing system. In some embodiments, the central management system may be configured to communicate with the local management systems on each of the controller/service VM130, the controller/service VM145, the controller/service VM160for controlling the various clusters. Again, it is to be understood that only certain components and features of the cluster100are shown and described herein. Nevertheless, other components and features that may be needed or desired to perform the functions described herein are contemplated and considered within the scope of the present disclosure. It is also to be understood that the configuration of the various components of the cluster100described above is only an example and is not intended to be limiting in any way. Rather, the configuration of those components may vary to perform the functions described herein.

Turning now toFIG.2, an example block diagram of a database system200is shown, in accordance with some embodiments of the present disclosure.FIG.2is discussed in conjunction withFIG.1. The database system200or portions thereof may be configured as utility software for creating and implementing database management services. The database system200is configured to facilitate creation/registration, querying, and/or administration of the databases associated therewith. Thus, the database system200includes a database engine205that is configured to receive input from and provide output to a user via a dashboard210. The database engine205is also associated with a database storage system215that is configured to store one or more databases under management of the database system200. In association with the dashboard210and the database storage system215, the database engine205is configured to implement one or more database management services of the database system200. For example, the database engine205is configured to provide database provisioning services to create new databases and register existing databases with the database system200using a database provisioning system220. The database engine205is also configured to protect databases created or registered by the database provisioning system220via a database protection system225. Although the database provisioning system220and the database protection system225are shown as separate components, in some embodiments, the database provisioning system and the database protection system may be combined and the combined component may perform the operations of the individual components. The database provisioning system220and the database protection system225are both discussed in greater detail below. The database system200may be installed on a database VM (e.g., the database VMs120, the database VMs135, the database VMs150ofFIG.1). The database system200may be installed via the controller/service VM (e.g., the controller/service VM130, the controller/service VM145, the controller/service VM160) of the node (e.g., the first node105, the second node110, and the third node115) on which the database system is to be installed. For example, an administrator desiring to install the database system200may download a copy-on-write image file (e.g., qcow or qcow2 image file) on the controller/service VM to define the content and structure of a disk volume to be associated with the database system200.
In some embodiments, instead of a copy-on-write image file, another type of disk image file, depending upon the type of underlying hypervisor, may be installed. Further, the administrator may create one or more new database VMs on which the database system200is to reside. As part of creating the database VMs, the administrator may allocate a particular number of virtual central processing units (vCPU) to each of the database VMs, define the number of cores that are desired in each vCPU, designate a specific amount of memory to each of the database VMs, and attach a database storage device (e.g., a virtual disk from the storage pool170) with each of the database VMs. In some embodiments, at least a portion of the database storage device attached to the database system200may form the database storage system215. The administrator may also create a new network interface (e.g., associate a virtual local area network (VLAN), assign an Internet Protocol (“IP”) address to access the database system200, etc.) with each of the database VMs. The administrator may perform additional and/or other actions to create the database VMs on which the database system200resides upon creation and installation of the disk image file. In some embodiments, the database VMs on which the database system200resides may all be located on a single node (e.g., one of the first node105, the second node110, and the third node115). In other embodiments, the database VMs on which the database system200resides may be spread across multiple nodes within a single cluster, or possibly amongst multiple clusters. When spread across multiple clusters, each of the associated multiple clusters may be configured to at least indirectly communicate with one another to facilitate operation of the database system200.

Upon installing the database system200, a user (e.g., the administrator or other user authorized to access the database system) may access the dashboard210. The dashboard210, thus, forms the front end of the database system200and the database engine205and the database storage system215form the backend of the database system. The database system200may be accessed via a computing device associated with the virtual computing system100. In other embodiments, instead of or in addition to being accessible via a particular computing device, the database system200may be hosted on a cloud service and may be accessed via the cloud. In some embodiments, the database system200may additionally or alternatively be configured as a mobile application suitable for access from a mobile computing device (e.g., a mobile phone). In some embodiments, the database system200and particularly the dashboard210may be accessed via an Application Programming Interface (“API”)230. To access the dashboard210via the API230, a user may use designated devices such as laptops, desktops, tablets, mobile devices, other handheld or portable devices, and/or other types of computing devices that are configured to access the API. These devices may be different from the computing device on which the database system200is installed. In some embodiments and when the dashboard210is configured for access via the API230, the user may access the dashboard via a web browser and upon entering a uniform resource locator (“URL”) for the API such as the IP address of the database system200or other web address. Using the API230and the dashboard210, the users may then send instructions to the database engine205and receive information back from the database engine.
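As one hedged illustration of such API access, the following uses Python's standard library to issue an HTTP request to a hypothetical endpoint; the URL path, payload shape, and operation name are assumptions, since the disclosure fixes only that the dashboard is reachable through an API over HTTP/HTTPS (discussed next).

```python
import json
from urllib import request

# Both the endpoint path and the payload are assumptions made for this
# example; the IP address stands in for the address of the database system.
API_BASE = "https://10.0.0.25/api/v1"

req = request.Request(
    url=f"{API_BASE}/databases",
    data=json.dumps({"operation": "list_sources"}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with request.urlopen(req) as response:   # HTTP/HTTPS request out ...
    print(json.loads(response.read()))   # ... HTTP/HTTPS response back
```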
In some embodiments, the API230may be a representational state transfer (“REST”) type of API. In other embodiments, the API230may be any other type of web or other type of API (e.g., ASP.NET) built using any of a variety of technologies, such as Java, .Net, etc., that is capable of accessing the database engine205and facilitating communication between the users and the database engine. In some embodiments, the API230may be configured to facilitate communication via a hypertext transfer protocol (“HTTP”) or hypertext transfer protocol secure (“HTTPS”) type request. The API230may receive an HTTP/HTTPS request and send an HTTP/HTTPS response back. In other embodiments, the API230may be configured to facilitate communication using other or additional types of communication protocols. In other embodiments, the database system200may be configured for access in other ways.

The dashboard210provides a user interface that facilitates human-computer interaction between the users and the database engine205. The dashboard210is configured to receive user inputs from the users via a graphical user interface (“GUI”) and transmit those user inputs to the database engine205. The dashboard210is also configured to receive outputs/information from the database engine205and present those outputs/information to the users via the GUI of the management system. The GUI may present a variety of graphical icons, windows, visual indicators, menus, visual widgets, and other indicia to facilitate user interaction. In other embodiments, the dashboard210may be configured as other types of user interfaces, including for example, text-based user interfaces and other man-machine interfaces. Thus, the dashboard210may be configured in a variety of ways. Further, the dashboard210may be configured to receive user inputs in a variety of ways. For example, the dashboard210may be configured to receive the user inputs using input technologies including, but not limited to, a keyboard, a stylus and/or touch screen, a mouse, a track ball, a keypad, a microphone, voice recognition, motion recognition, remote controllers, input ports, one or more buttons, dials, joysticks, etc. that allow an external source, such as the user, to enter information into the database system200. The dashboard210may also be configured to present outputs/information to the users in a variety of ways. For example, the dashboard210may be configured to present information to external systems such as users, memory, printers, speakers, etc. Therefore, although not shown, the dashboard210may be associated with a variety of hardware, software, firmware components, or combinations thereof. Generally speaking, the dashboard210may be associated with any type of hardware, software, and/or firmware component that enables the database engine205to perform the functions described herein. Thus, the dashboard receives a user request (e.g., an input) from the user and transmits that user request to the database engine205. In some embodiments, the user request may be to request a database management service. For example, in some embodiments, the user request may be to request a database provisioning service. In response to the user request for a database provisioning service, the database engine205may activate the database provisioning system220.
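A minimal sketch of that request routing is shown below; the class and method names are hypothetical and stand in for whatever interface the database engine actually exposes.

```python
class DatabaseEngine:
    """Toy dispatcher from dashboard requests to the engine's sub-systems."""

    def __init__(self, provisioning_system, protection_system):
        self._services = {
            "provision": provisioning_system,  # create/register databases
            "protect": protection_system,      # snapshots, logs, clones
        }

    def handle_user_request(self, service: str, **params):
        # The dashboard transmits the user input here; the engine activates
        # the matching sub-system and returns its output, which the
        # dashboard then presents to the user via the GUI.
        try:
            subsystem = self._services[service]
        except KeyError:
            raise ValueError(f"unknown database management service: {service}")
        return subsystem.run(**params)
```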
The database provisioning system220includes a database creation system235for creating new databases within the database system200and a database registration system240for registering databases that were previously created outside of the database system with the database system. Although the database creation system235and the database registration system240are shown as separate components, in some embodiments, those components may be combined together and the combined component may perform the functions of the individual components. The database creation system235and the database registration system240are discussed in greater detail inFIGS.3-5Ebelow. The database protection system225is configured to protect databases associated with the database system200. Thus, the database protection system225implements a copy data management service of the database system200. During creation or registration of a database, the database provisioning system220creates an instance of a database protection system225for protecting the associated database. Thus, upon the creation or registration of a database, that database may be protected by the associated instance of the database protection system225by capturing snapshots, transactional logs, and creating cloned databases. Each instance of the database protection system225may receive a variety of user defined constraints in accordance with which the associated database is protected. The database protection system225is discussed in greater detail inFIG.6below.

The database engine205, including the database provisioning system220and the database protection system225, may be configured as, and/or operate in association with, hardware, software, firmware, or a combination thereof. Specifically, the database engine205may include a processing unit245configured to execute instructions for implementing the database management services of the database system200. In some embodiments, each of the database provisioning system220and the database protection system225may have its own separate instance of the processing unit245. The processing unit245may be implemented in hardware, firmware, software, or any combination thereof. “Executing an instruction” means that the processing unit245performs the operations called for by that instruction. The processing unit245may retrieve a set of instructions from a memory for execution. For example, in some embodiments, the processing unit245may retrieve the instructions from a permanent memory device like a read only memory (ROM) device and copy the instructions in an executable form to a temporary memory device that is generally some form of random access memory (RAM). The ROM and RAM may both be part of the storage pool170and/or provisioned separately from the storage pool. In some embodiments, the processing unit245may be configured to execute instructions without first copying those instructions to the RAM. The processing unit245may be a special purpose computer, and include logic circuits, hardware circuits, etc. to carry out the instructions. The processing unit245may include a single stand-alone processing unit, or a plurality of processing units that use the same or different processing technology. The instructions may be written using one or more programming languages, scripting languages, assembly languages, etc. The database engine205may also include a memory250. The memory250may be provisioned from or be associated with the storage pool170. In some embodiments, the memory250may be separate from the storage pool170.
The memory250may be any of a variety of volatile and/or non-volatile memories that may be considered suitable for use with the database engine205. In some embodiments, the memory250may be configured to store the instructions that are used by the processing unit245. Further, although not shown, in some embodiments, the database provisioning system220and the database protection system225may each, additionally or alternatively, have their own dedicated memory. Further, the database engine205may be configured to handle a variety of types of database engines. For example, in some embodiments, the database engine205may be configured to manage PostgreSQL, Oracle, Microsoft SQL server, and MySQL database engines. In other embodiments, the database engine205may be configured to manage other or additional database engines. Each database that is created within or registered with the database system200may be of a particular “database engine type.” The database engine type may identify the type of database management system (e.g., Oracle, PostgreSQL, etc.) of a particular database. By virtue of creating or registering a database with a particular database engine type, that database is managed in accordance with the rules of that database engine type. Thus, the database engine205is configured to be operable with and manage databases associated with a variety of database engine types. It is to be understood that only some components of the database engine205are shown and discussed herein. In other embodiments, the database engine205may also include other components that are considered necessary or desirable in implementing the various database management services discussed herein. Similarly, the database provisioning system220and the database protection system225may have components that are considered necessary or desirable in implementing the various database management services discussed herein. Referring still toFIG.2, the database storage system215is configured to store one or more databases that are either created within the database system200or registered with the database system. The database storage system215may include a source database storage255and a target database storage260. The source database storage255is configured to store the original instances of the databases (also referred to herein as source databases) that are created within or registered with the database system200. The target database storage260is configured to store the clones of the source databases (also referred to herein as cloned databases). In some embodiments, the source database storage255and the target database storage260may be provisioned from the storage pool170and may include virtual disk storage that is associated with the database VMs (e.g., the database VMs120, the database VMs135, the database VMs150) on which the database system200, the source databases, and the cloned databases reside. For example, in some embodiments, the source database storage255may be associated with one or more database VMs (referred to herein as source database VMs) and the source databases stored within the source database storage may be stored within the virtual disks associated with the source database VMs. Similarly, in some embodiments, the target database storage260may be associated with one or more database VMs (referred to herein as target database VMs) and the databases stored within the target database storage may be stored within the virtual disks associated with the target database VMs. 
In some embodiments, each source database VM may be configured to store one or more source databases and each target database VM may be configured to store one or more target databases. In other embodiments, the source database storage255and the target database storage260may additionally or alternatively be provisioned from other types of storage associated with the database system200. Further, depending upon the size of a particular database and the size of the virtual disk associated with a particular source database VM, a source database may be stored in its entirety on a single source database VM or may span multiple source database VMs. Further, as the size of that source database increases, the source database may be moved to another source database VM, may be stored onto multiple source database VMs, and/or additional storage may be provisioned to the source database VMs to house the increased size of the source database. Similarly, depending upon the size of a cloned database and the size of the virtual disk associated with a particular target database VM, the cloned database may be stored on a single or multiple target database VMs. Further, as the size of the cloned database increases (e.g., by virtue of updating the cloned database to incorporate any changes in the source database), the cloned database may be moved to another target database VM of appropriate size, may be divided amongst multiple target database VMs, and/or additional virtual disk space may be provisioned to the target database VM. Thus, the database storage system215is structured with the flexibility to expand and adapt to accommodate databases of various sizes. The database storage system215also includes a database manager265. In some embodiments, each instance of the source database within the source database storage255may include an instance of the database manager265. In other embodiments, a single instance of the database manager265may manage multiple or all source databases. The database manager265is configured to work with the database protection system225to protect the source databases stored within the source database storage255. The database manager265is discussed in greater detail inFIG.6below. Although not shown, the database manager265may include a processing unit (e.g., similar to the processing unit245), a memory (e.g., similar to the memory250), and other hardware, software, and/or firmware components that are necessary or considered desirable for performing the functions described herein. Further, in some embodiments, each cloned database in the target database storage260may similarly be associated with a database manager for managing that cloned database.

Turning now toFIG.3, an example flow chart outlining operations of a process300is shown, in accordance with some embodiments of the present disclosure. The process300may include additional, fewer, or different operations, depending on the particular embodiment. The process300may be used to implement the database provisioning service. Thus, the process300may be used to create a new database or register an existing database. The process300is discussed in conjunction withFIGS.1and2and is implemented by the database provisioning system220of the database engine205in conjunction with the dashboard210.
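The overall flow of the process300can be summarized in a short sketch; the method names on the dashboard and provisioning-system objects are placeholders for illustration, with each call mapped to the operation it represents.

```python
def provision_database(dashboard, provisioning_system):
    """Each step corresponds to an operation of the process300 (FIG.3);
    all method names here are hypothetical placeholders."""
    intent = dashboard.get_create_or_register_choice()      # operation305
    engine_type = dashboard.select_engine_type()            # operation310
    vm_profiles = dashboard.collect_source_vm_profiles()    # operation315
    db_params = dashboard.collect_database_parameters()     # operation320
    sla, schedule = dashboard.collect_protection_inputs()   # operation325
    return provisioning_system.create_or_register(          # operation330
        intent, engine_type, vm_profiles, db_params, sla, schedule
    )
```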
Specifically, the database provisioning system220receives inputs from the user via the dashboard210and performs operations in response to those inputs for creating a new database or registering an existing database. Thus, the process300starts at operation305with the database provisioning system220receiving a user request via the dashboard210for either creating a new database or registering an existing database. Specifically, once the database system200is installed and the user is able to access the dashboard210, the dashboard may present an option to create a new database or register an existing database. If the user desires to create a new database, the user may select the database creation option from the dashboard210and activate the database creation system235of the database provisioning system220. If the user desires to register an existing database, the user may select the database registration option and activate the database registration system240of the database provisioning system220.

Upon activation, the database creation system235or the database registration system240may present one or more user interfaces to the user for soliciting parameters for creating a new database or registering an existing database, respectively. For example, at operation310, the activated one of the database creation system235or the database registration system240presents, via the dashboard210, a user interface for requesting the database engine type of the database to be created or registered. The dashboard210may present a selection of the database engine types that are supported by the database engine205. The user may select one of the various database engine types presented on the dashboard210. As noted above, the database engine type defines the database management system of the database being created or registered. For example, if the user desires to create a database with the database engine type Oracle, and if Oracle is presented as an option on the dashboard at the operation310, the user may select Oracle on the dashboard. As another example, if the user desires to register an existing database that has been configured with the database engine type Oracle, the user may select Oracle from the dashboard at the operation310. The database creation system235or the database registration system240receives the user's selection of the database engine type at the operation310. Additionally, the database creation system235or the database registration system240configures the remaining user interfaces that are presented to the user on the dashboard210based on the database engine type selected by the user at the operation310. For example, if the user selected Oracle as the database engine type at the operation310, the database creation system235or the database registration system240may configure the remaining database creation process or the database registration process in accordance with requirements for Oracle. Thus, at operation315, the database creation system235or the database registration system240presents one or more user interfaces to the user, via the dashboard210, for requesting a selection of parameters for defining the configuration for and creating a new source database VM on which the database being created or registered will ultimately reside.
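One way to picture the inputs gathered at this step is a single request structure naming the four profiles; the field names below are assumptions, as the disclosure names the profile types but not any particular data layout.

```python
from dataclasses import dataclass


@dataclass
class SourceDatabaseVMRequest:
    """Hypothetical bundle of the inputs solicited at the operation315."""
    software_profile: str            # database software + OS image rules
    network_profile: str             # VLAN/IP information for access
    compute_profile: str             # vCPUs, cores per vCPU, memory
    database_parameter_profile: str  # custom database parameters
    use_existing_vm: bool = False    # reuse a previously created VM instead


req = SourceDatabaseVMRequest(
    software_profile="postgres-default",
    network_profile="default-vlan",
    compute_profile="small",
    database_parameter_profile="defaults",
)
```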
For example, in some embodiments, the activated one of the database creation system235or the database registration system240may request parameters for defining a software profile, a network profile, a compute profile, and a database parameter profile to be associated with the new source database VM. In other embodiments, the database provisioning system220may request other or additional types of parameters from the user for creating the source database VM based upon the database engine type selected at the operation310. The user interface may present one or more standardized profiles for one or more of the software profile, network profile, compute profile, and database parameter profile. The user may select from the standardized profiles in some embodiments. In some embodiments, the database creation system235or the database registration system240may also allow the user to modify a standardized profile and/or create new profiles from scratch based upon the user's preferences. Each of the profiles is based upon the database engine type selected at the operation310. Thus, the standardized profiles that are presented to the user are in compliance with the database engine type. Similarly, the database creation system235or the database registration system240allows only those changes to the standardized profiles or creation of new profiles that comply with the database engine type.

The software profile defines the software and operating system parameters for the database engine type that is selected at the operation310. For example, if at the operation310, the database engine type is selected as PostgreSQL, the software profile may include one or more software and operating system image profiles associated with PostgreSQL. Each software profile may define the rules that are to be applied in managing the database being created or registered. In some embodiments, one or more sample software profiles may be available for the user to select. In other embodiments, the user may create their own custom software profile or modify an existing software profile to suit their needs. When creating their own custom software profile or modifying an existing software profile, in some embodiments, the user may be required to create/modify the software profile before starting the process300, while in other embodiments, the user may be able to create the custom software profile as part of the operation315. The network profile identifies the network location of the database being created or registered to facilitate access to the database after creation or registration. In some embodiments, the network profile may be the same profile that is created during installation of the database system200. In other embodiments, a different network profile may be used. Similar to the software profile, the database creation system235or the database registration system240may make a sample network profile available for the user to select. Alternatively, the user may create a new network profile or modify an existing network profile either before starting the process300or during the operation315. The compute profile defines the size/configuration of the source database VM. For example, the compute profile may define the number of vCPUs, number of cores per vCPU, and memory capacity to be associated with the source database VM. In other embodiments, the compute profile may define other or additional configurational parameters.
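As a concrete illustration of the compute profile in particular, the sketch below models standardized profiles, a user modification of one, and a profile created from scratch; the profile names and sizes are invented for the example.

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class ComputeProfile:
    vcpus: int            # number of virtual CPUs
    cores_per_vcpu: int   # cores per vCPU
    memory_gib: int       # memory capacity of the source database VM


# Standardized profiles the dashboard might offer for selection.
STANDARD_COMPUTE = {
    "small": ComputeProfile(vcpus=2, cores_per_vcpu=1, memory_gib=8),
    "large": ComputeProfile(vcpus=8, cores_per_vcpu=2, memory_gib=64),
}

# The user may select a standardized profile as-is, modify one ...
modified = replace(STANDARD_COMPUTE["small"], memory_gib=16)
# ... or create a new profile from scratch.
custom = ComputeProfile(vcpus=4, cores_per_vcpu=2, memory_gib=32)
```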
At the operation315, the database creation system235or the database registration system240may also request the database parameter profile from the user. The database parameter profile defines the custom parameters that are applied to the database being created or registered. Again, the database creation system235or the database registration system240may make sample compute profiles and/or sample database parameter profiles available for the user to select in some embodiments. Alternatively, the user may create custom compute and/or database parameter profiles or modify existing compute and/or database parameter profiles, either before starting the process300or during the operation315. In some embodiments, the database creation system235or the database registration system240may pre-select a default option for the user for one or more of the software profile, compute profile, network profile, and the database parameter profile. The database creation system235or the database registration system240may allow the user to change the default options by selecting another standardized option, modifying a standardized option, or creating a new profile. Thus, at the operation315, the database creation system235or the database registration system240receives selection of the various parameters for creating a new source database VM. In some embodiments, based upon the parameters received from the user, the database creation system235or the database registration system240may create a new source database VM at the operation315. In other embodiments, the database creation system235or the database registration system240may wait until other remaining parameters are received before creating the source database VM. In some embodiments, instead of creating a new source database VM, the database creation system235or the database registration system240may allow the user to use a previously created source database VM. Thus, at the operation315, the database creation system235or the database registration system240may first request the user to select between creating a new source database VM and using an existing (e.g., previously created) source database VM. Based on the user's selection, the database creation system235or the database registration system240may request the various profiles discussed above or request the user to identify the existing source database VM to use. In some embodiments, the database creation system235or the database registration system240may present a list of existing source database VMs created previously for the database engine type selected at the operation310and that have space available to receive the database being created or registered. The user may select one source database VM from the list. The database creation system235or the database registration system240may facilitate the user selection of an existing source database VM in other manners (e.g., by allowing the user to browse to a location, etc.).

Upon receiving selection of the various profiles for creating a new source database VM or receiving selection of an existing source database VM, at operation320, the database creation system235or the database registration system240presents one or more user interfaces, via the dashboard210, for requesting parameters (e.g., configurational details) for the database being created/registered.
For example, the database creation system235or the database registration system240may request a database name and a description of the database being created or registered to distinguish that database from other databases within the database system200. The database creation system235or the database registration system240may also request a database password to restrict access to the database to only authorized users, a database size to determine how much storage space is needed for storing that database, and/or any additional or other parameters that may be considered necessary or desirable in creating/registering the database. Further, the parameters that are requested may vary based upon whether a database is being created or whether an existing database is being registered. For example, if an existing database is being registered, the database registration system240may automatically determine the size of the database. In some embodiments, certain default values may be pre-selected for the user and the user may be allowed to change those values. Thus, at the operation320, the database creation system235or the database registration system240receives selection of parameters from the user, via the dashboard210, for either creating a new database or registering an existing database.

At operation325, the database creation system235or the database registration system240presents one or more user interfaces, via the dashboard210, to request selection of parameters for creating an instance of a database protection system (e.g., the database protection system225) for the database being created or registered by the process300. The instance of the database protection system is configured to protect the database being created or registered by the process300. To create the instance of the database protection system, the database creation system235or the database registration system240may request a name and description for the instance of the database protection system225, a level of a Service Level Agreement (“SLA”), and a protection schedule to define rules based on which the instance of the database protection system225operates. An SLA is an agreement between a service provider (e.g., the owner of the database system200) and the user (e.g., the owner of the database) that outlines, among other things, the protection scope of the database. The protection scope defines for how long data from the database being created or registered is retained. Thus, the protection scope defines the database retention policy. In some embodiments, the SLA may define various protection parameters such as continuous, daily, weekly, monthly, quarterly, or yearly protection parameters for determining the protection scope of the database being created/registered. In other embodiments, the SLA may define other or additional protection parameters. Each database for which an instance of the database protection system225is created may be protected by capturing snapshots and/or transactional logs. The number of snapshots and transactional logs to be captured on each day may be defined by the user in the protection schedule. As used herein, a “day” may be any 24-hour period (e.g., from midnight to midnight). In some embodiments, the protection schedule may define default values to define the frequency of capturing snapshots and transactional logs, which the user may modify.
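The protection schedule and its defaults can be sketched as a small structure plus a helper that enumerates one day's capture times; the field names are assumptions, while the default values (a transactional log every 30 minutes, the 11:00 AM daily snapshot, the Monday weekly snapshot, the 20th-of-the-month monthly snapshot, and January/April/July/October quarters) come from the examples discussed in this disclosure.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class ProtectionSchedule:
    snapshots_per_day: int = 1
    log_interval_minutes: int = 30   # default transactional-log frequency
    daily_time: str = "11:00"        # which snapshot is "the daily"
    weekly_day: str = "Monday"       # which day's snapshot is "the weekly"
    monthly_date: int = 20           # day-of-month for the monthly snapshot
    quarter_start_months: tuple = (1, 4, 7, 10)  # Jan, Apr, Jul, Oct


def capture_times(day_start: datetime, schedule: ProtectionSchedule):
    """All snapshot and log capture times within one 24-hour period."""
    snapshots = [
        day_start + timedelta(hours=24 * i / schedule.snapshots_per_day)
        for i in range(schedule.snapshots_per_day)
    ]
    logs = [
        day_start + timedelta(minutes=m)
        for m in range(0, 24 * 60, schedule.log_interval_minutes)
    ]
    return snapshots, logs


snaps, logs = capture_times(datetime(2020, 1, 1), ProtectionSchedule())
assert len(logs) == 48  # one transactional log every 30 minutes by default
```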
Thus, based upon the frequency of capturing snapshots and transactional logs defined in the protection schedule, the instance of the database protection system225may be configured to capture one or more snapshots and one or more transactional logs each day. Generally speaking, the number of transactional logs that are captured each day may be higher than the number of snapshots that are captured on that day. Since it is impractical and expensive to indefinitely store the captured snapshots and the transactional logs, the protection parameters in the SLA define the duration for how long those snapshots and transactional logs are stored. For example, the continuous protection parameter within the SLA defines the duration in days for which all captured snapshots and transactional logs are retained. For example, if the continuous protection parameter is defined as 30 days, the instance of the database protection system225is configured to retain all snapshots and transactional logs that are captured within the last 30 days. By retaining all snapshots and the transactional logs, the user may replicate any or substantially any state of the database (down to a second or even a fraction of a second). The SLA may also define a daily protection parameter, which defines the duration in days for which a daily snapshot is stored. For example, if the daily protection parameter is 90 days, the instance of the database protection system225is configured to store a daily snapshot for 90 days. The protection schedule may define the time of day to identify the snapshot that is designated as the daily snapshot. For example, if the user specifies that the snapshot captured at 11:00 AM every day is the daily snapshot and the SLA defines the daily protection parameter for 90 days, the instance of the database protection system225may be configured to store a daily snapshot that was captured at or closest to 11:00 AM and store the daily snapshot for 90 days. Similarly, the SLA may define weekly, monthly, and quarterly protection parameters. A weekly protection parameter in the SLA may define the duration in weeks for which a weekly snapshot is stored. The protection schedule may define the day of the week to identify which snapshot is designated as the weekly snapshot. For example, if the user defines in the protection schedule that the snapshot captured on Monday is to be designated as the weekly snapshot, and the weekly protection parameter in the SLA specifies a duration of 8 weeks, the instance of the database protection system225may store the snapshot captured every week on Monday for 8 weeks. If multiple snapshots are captured each day, the protection schedule may also define which snapshot captured on the designated day of the week (e.g., Monday) serves as the weekly snapshot. In some embodiments, the time defined in the protection schedule for capturing a daily snapshot may be used. For example, if the protection schedule defines that the snapshot captured at 11:00 AM is the daily snapshot, and the weekly snapshot is to be captured on Monday, the instance of the database protection system225may store the snapshot captured at or closest to 11:00 AM every Monday as the weekly snapshot. In other embodiments, another time period may be used. Likewise, a monthly protection parameter in the SLA may define a duration in months for which a monthly snapshot is to be stored. The user may specify the date within the protection schedule for identifying which snapshot corresponds to the monthly snapshot. 
For example, the user may specify storing the snapshot captured on the 20th of every month as the monthly snapshot in the protection schedule, and the monthly protection parameter may specify a duration of 12 months for storing the monthly snapshot. Thus, the instance of the database protection system225stores a monthly snapshot captured on the 20th of every month and stores that monthly snapshot for 12 months. A quarterly protection parameter in the SLA may define a duration in quarters for which a quarterly snapshot is to be stored and the user may specify in the protection schedule which months correspond to the various quarters. For example, the user may specify January, April, July, and October as the quarters and the quarterly protection parameter may specify storing the quarterly snapshots for 20 quarters. Thus, the instance of the database protection system225may designate a snapshot captured on the first day of January, April, July, and October (e.g., January 1, April 1, July 1, and October 1) as the quarterly snapshot and store the quarterly snapshot for 20 quarters. Thus, for each protection parameter that is defined in the SLA, a corresponding value may be requested from the user in the protection schedule to identify which snapshot corresponds to that protection parameter. It is to be understood that the various protection parameters and their respective schedules mentioned above are only examples and may vary from one embodiment to another as desired. Further, when the duration specified by a protection parameter expires, any snapshots or transactional logs that are expired (e.g., past their duration) may be deleted. As an example, if a snapshot is to be stored for 30 days, on the 31st day, that snapshot may be deleted. Thus, each snapshot and transactional log is managed based on the SLA and protection schedule independent from other snapshots and transactional logs. Additionally, to simplify user selection, in some embodiments, various levels of SLA may be pre-defined within the database provisioning system220. Each level of the SLA may have default values of the various protection parameters. For example, in some embodiments, the various levels of SLA may be GOLD, SILVER, and BRONZE, and the various protection parameters for these levels may be as follows:

Name      Continuous   Daily     Weekly     Monthly     Quarterly
GOLD      30 Days      90 Days   16 Weeks   12 Months   75 Quarters
SILVER    14 Days      60 Days   12 Weeks   12 Months   0 Quarters
BRONZE    7 Days       30 Days   8 Weeks    6 Months    0 Quarters

It is to be understood that the nomenclature of the GOLD, SILVER, BRONZE levels of the SLA is only an example and the levels may be given different names in other embodiments. Further, although three levels of the SLA are described herein, in other embodiments, greater or fewer than three SLA levels may be used. Additionally, the values of the protection parameters in each level of the SLA may vary from one embodiment to another. The database creation system235or the database registration system240may present the various pre-defined SLA levels to the user at the operation325to select from. In some embodiments, the database creation system235or the database registration system240may allow the user to modify the values of one or more protection parameters in the pre-defined SLA levels. For example, if the user desires to select the GOLD level, but would like continuous protection for 45 days instead of the default value of 30 days shown in the table above, the user may modify the continuous protection parameter of the GOLD level.
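The retention logic implied by these defaults can be sketched as follows, using the values from the table above; the dictionary layout and function name are illustrative only.

```python
from datetime import datetime, timedelta

# Default protection parameters from the table above, expressed as a
# lookup the database protection system could consult.
SLA_LEVELS = {
    "GOLD":   {"continuous_days": 30, "daily_days": 90, "weekly_weeks": 16,
               "monthly_months": 12, "quarterly_quarters": 75},
    "SILVER": {"continuous_days": 14, "daily_days": 60, "weekly_weeks": 12,
               "monthly_months": 12, "quarterly_quarters": 0},
    "BRONZE": {"continuous_days": 7, "daily_days": 30, "weekly_weeks": 8,
               "monthly_months": 6, "quarterly_quarters": 0},
}


def is_expired(captured_at: datetime, retention_days: int, now: datetime) -> bool:
    """A snapshot or log past its retention duration may be deleted."""
    return now - captured_at > timedelta(days=retention_days)


# A snapshot stored for 30 days is kept through day 30 and becomes
# eligible for deletion on the 31st day, as in the example above.
captured = datetime(2020, 1, 1, 11, 0)
assert not is_expired(captured, 30, now=datetime(2020, 1, 31, 11, 0))
assert is_expired(captured, 30, now=datetime(2020, 2, 1, 11, 0))
```

Because each snapshot and transactional log is checked against its own protection parameter, each is retained or deleted independently of the others, as described above.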
Thus, the pre-defined SLA levels provide the convenience and flexibility of tailoring the various protection parameters to suit the user's needs. Alternatively, the database creation system235or the database registration system240may allow the user to create a new SLA at the operation325. To create a new SLA, upon receiving input from the user at the operation325indicating creation of a new SLA, the database creation system235or the database registration system240may present one or more user interfaces to the user requesting certain information. For example, the database creation system235or the database registration system240may request an SLA name, description, and values for the continuous, daily, weekly, monthly, and quarterly protection parameters. The database creation system235or the database registration system240may request other or additional details as well. Upon receiving the various inputs from the user for creating the new SLA, the database creation system235or the database registration system240may create the new SLA and allow the user to select that SLA at the operation325. Therefore, at the operation325, the database creation system235or the database registration system240receives selection of the SLA and the protection schedule for creating an instance of the database protection system225for the database being created/registered.

At operation330, upon receiving the various user selections at the operations310-325, the database creation system235or the database registration system240creates a new database or registers the existing database with the database system200. To create/register the database, the database creation system235or the database registration system240initiates a series of operations. For example, the database creation system235or the database registration system240may create a source database VM (or designate an existing source database VM), convert the database size into a number of virtual disks that are needed to house the database, create a database profile having a database name, description, network information, etc., attach the software and parameters of the database engine type to the database, create an instance of the database protection system, associate the SLA and schedule with the database protection system, designate storage for storing snapshots and transactional logs, etc. Once the database is created/registered, database management services may be applied to the database. In some embodiments, the database creation system235or the database registration system240may request other or additional information for creating/registering the database. For example, the database creation system235or the database registration system240may request the cluster or clusters on which the user desires to create/register the database, the node or nodes of a cluster on which the user desires to create the source database VM, etc. Thus, the database system200provides an easy, convenient, and flexible mechanism to create a new database or register an existing database using a user-friendly and intuitive user interface. Instead of requiring multiple days to create/register a database, using the user interface of the present disclosure, the database may be created/registered within minutes. Once created/registered, additional database management services may be implemented on those databases.
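Picking up the example of extending GOLD's continuous protection to 45 days, a minimal sketch of such an override (with the dictionary layout assumed in the previous sketch) follows.

```python
GOLD_DEFAULTS = {
    "continuous_days": 30, "daily_days": 90, "weekly_weeks": 16,
    "monthly_months": 12, "quarterly_quarters": 75,
}

# Keep every GOLD default but extend continuous protection to 45 days;
# the result is stored as a new, selectable SLA.
custom_sla = {"name": "GOLD-45", **GOLD_DEFAULTS, "continuous_days": 45}
assert custom_sla["continuous_days"] == 45
assert custom_sla["daily_days"] == 90  # unchanged default
```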
Turning now toFIGS.4A-5E, example user interfaces for creating and registering a database are shown, in accordance with some embodiments of the present disclosure.FIGS.4A-4Gshow example user interfaces for creating a database, whileFIGS.5A-5Eshow example user interfaces for registering a database.FIGS.4A-5Eare discussed in conjunction withFIGS.1-3.

Referring toFIGS.4A-4G,FIG.4Ashows an example dashboard400. The dashboard400is similar to the dashboard210. The dashboard400becomes accessible to the user upon installing the database system200and allows the user to manage and monitor activities across multiple databases that are created/registered within the database system. In some embodiments, the user may be required to be authenticated before being able to access the dashboard400.

The dashboard400may include a toolbar405and a body410. The toolbar405is configured to switch between various control functions of the database system200. In some embodiments, the toolbar405includes a main menu415, an alerts menu420, and a user view menu425. The main menu415may be configured as a drop-down list that enables the user selected within the user view menu425to select and activate a control function. For example, when the control function “dashboard” is selected in the main menu415, the dashboard400may be configured to display a “home page”430in the body410of the dashboard. As the control function selected in the main menu415is changed, the page (or at least portions thereof) that is displayed in the body410of the dashboard400may change to reflect the control function that is activated. The alerts menu420allows the user to view and monitor any alerts that are occurring within the database system. An “alert” may be indicative of an error or issue in the database system that needs user attention. In some embodiments, alerts may be color coded to identify the criticality of the alert. The user view menu425determines which features are accessible to a particular user. For example, if the user selected in the user view menu425is an administrator, certain features that are generally not available to a non-administrator user may be activated. It is to be understood that the toolbar405is only an example and may vary from one embodiment to another.

The homepage430displayed within the body410provides a summary or a “quick-glance” view of all the databases (e.g., source databases) that are managed by the database system200. The homepage430may be divided into one or more components or cells. For example, the homepage430may include a database list cell435A that lists all of the source databases stored in the source database VMs of the database system200and provides a summary of each of those source databases (e.g., name of the source database, database engine type, size of the database, name of the instance of the database protection system, number of cloned databases created from the source database, etc.). A summary cell435B lists the total number of source databases created or registered on the source database VMs combined (e.g., the total number of databases listed in the database list cell435A), the total number of cloned databases created from those source databases, data usage values of the source and/or the cloned databases, etc. A clone cell435C provides history on the cloning of the databases (e.g., number of clones created in a given time period, whether the cloning was successful or not, etc.).
A version cell435D provides information on the cluster on which the source databases reside and the version of the software implemented by the database system. An alerts cell435E provides additional details on the various alerts being generated by the databases listed within the database list cell435A.

It is to be understood that the various components or cells of the homepage430shown inFIG.4Aare examples and features thereof may vary from one embodiment to another. For example, the number of components or cells that are shown on the homepage430may vary from one embodiment to another. Likewise, the details that are displayed within each component or cell may vary from one embodiment to another. The orientation, placement, size, and other design aspects of each component or cell may vary from one embodiment to another. In some embodiments, the configuration of the homepage430and/or the configuration of each component/cell may be defined by the user by using a settings menu option of the dashboard400.

By virtue of the dashboard400, the user may get a quick-glance summary of the various source databases that are managed by the database system200, as well as a quick summary of the configuration and specifics of each source database. Via the dashboard400, the user may also decide to perform various database management services on the source databases. For example, to initiate a database provisioning service, the user authorized to request the database provisioning service may select a “databases” option from the main menu415to display a database page440(shown inFIG.4B) in the body410of the dashboard. If the user selected in the user view menu425is not permitted to request the database provisioning service, the “databases” option may be inactivated or not shown in the main menu415.

The database page440may display a database menu445A from which the user may view additional information and/or perform additional operations. The database page440may also include a database detail component445B to display additional details corresponding to the option selected from the database menu445A. For example, as shown inFIG.4B, upon selecting the “Sources” option from the database menu445A, a list of the source databases associated with the database system200may be displayed within the database detail component445B. The user may select any one of the source databases to view additional details of that source database and/or perform permitted operations on that source database.

The database page440may also allow the user to create a new database or register an existing database. For example, the user may select a create button445C on the database page440to send a user request to the database engine205to start the process300for creating a new database. Likewise, the user may select a register button445D to initiate a registration process and send a user request to the database engine205.

Upon selecting the create button445C, the database engine205, and particularly the database creation system235of the database engine, may present a user interface450, shown inFIG.4C. The user interface450identifies the various steps of creating the new database and highlights the current step. For example, the user interface450has the “Engine” step highlighted. By virtue of identifying the various steps of creating the new database, the dashboard400keeps the user informed as to which step the database creation process is on and which steps are coming up next.
The user interface450may also present one or more database engine types that are supported by the database system200. The user may select one of the database engine types to highlight the selection. For example, the PostgreSQL database engine type is shown selected in the user interface450. The user may select the “Next” button to send the database engine type selection to the database creation system235, which, in response, presents a user interface455ofFIG.4D.

The user interface455identifies that the database creation process is at the “Server” step for creating a new source database VM or identifying an existing source database VM. The user interface455may, thus, allow a user to select an option455A to create a new source database VM or an option455B to use an existing source database VM. In the user interface455ofFIG.4D, the option455A for creating a new source database VM is shown selected. Thus, the user interface455requests user selection of one or more profiles for creating a new source database VM. For example, the user interface455may request a name455C for the source database VM, a software profile455D, a compute profile455E, a network profile455F, a description455G for the source database VM, and security options455H for the source database VM. The user interface455may also request a parameter profile for the source database VM. In other embodiments, the user interface455may request additional or other information for creating a new source database VM. Although not shown, if the user selects the option455B for using an existing source database VM, the user interface455may display options for allowing the user to identify an existing source database VM.

Upon selecting the “Next” button, the various user selections of the user interface455are sent to the database creation system235, which then presents a user interface460ofFIG.4Eto the user on the dashboard400. The user interface460identifies that the database creation process is at the “Database” step at which various parameters for creating a database on the source database VM are received from the user. The user interface460also requests the user to provide a name460A and description460B for the database being created, a password460C and460D to restrict access to the database, a size460E of the database to be created, a database parameter profile460F to be applied to the database and the source database VM, a listener port460G for the database engine type selected in the user interface450, and any other details (e.g., details460H) that may be needed or considered desirable to have in creating the database.

When the user is satisfied with the selections on the user interface460, the user may select a “Next” button to send the selections to the database creation system235. The database creation system235may then present a user interface465ofFIG.4F. The user interface465identifies that the database creation process is at the last step of creating the “Time Machine.” The user interface465is configured to request selection of the SLA and protection schedule from the user for creating an instance of the database protection system225. The user interface465, thus, requests a name465A and description for the instance of the database protection system225, an SLA level465C, and a protection schedule465D.
Within the protection schedule465D, the user interface465requests the user to provide a number of snapshots465E desired each day, a number of transactional logs465F desired each day, and time periods465G for identifying which snapshot to designate as the daily snapshot,465H for identifying which snapshot to designate as the weekly snapshot,465I for identifying which snapshot to designate as the monthly snapshot, and465J for identifying which snapshot to designate as the quarterly snapshot.

Upon providing the various parameters in the user interface465, the user may select a “Next” button to send the selections to the database creation system235and start the database creation. The database creation system235may display a status of the database creation in a user interface470ofFIG.4G. Upon creating the database, the newly created database may be displayed within the database list cell435A, within the database page440, and anywhere else where source databases are listed. The database creation system235may also update the various calculations and numbers in the dashboard400that provide statistics of the source databases (e.g., the summary cell435B).

It is to be understood that the configurations of the various user interfaces ofFIGS.4A-4Gmay vary from one embodiment to another. For example, the various selections may be displayed as drop-down lists, as radio buttons, or in other ways. Similarly, some fields may be pre-filled with default values and allowed to be changed by the user if desired. The placement of the various fields, and the size, orientation, and other design aspects of those fields, may be varied as well. Additionally, some fields may be optional, while other fields may be designated as mandatory to be filled in by the user. The dashboard400thus provides an easy mechanism for creating a new database in a simple, user-friendly, and intuitive user interface.

Turning now toFIGS.5A-5E, example user interfaces for registering an existing database are shown, in accordance with some embodiments of the present disclosure. Similar to the database creation process, the database registration process is initiated from a dashboard500. The dashboard500is similar to the dashboards210and400. The dashboard500includes a database page505, which is similar to the database page440. To register an existing database, the user may select a register button510and select “Next.” Upon selecting “Next,” the database registration system240may be activated, which starts the database registration process by displaying a user interface515ofFIG.5B. From the user interface515, the user may select the desired database engine type and select “Next.” Upon selecting “Next,” the user's selection of the database engine type is sent to the database registration system240, which then displays a user interface520ofFIG.5C. By way of the user interface520, the database registration system240requests the user to create a new source database VM or select an existing source database VM. Likewise, by way of user interface525ofFIG.5D, the database registration system240requests the user to provide various parameters of the existing database, and by way of user interface530, the database registration system requests the user to define the SLA and protection schedule. Upon receiving all of the parameters from the user, the database registration system240registers the database. Similar to the user interface470, the database registration system may display the registration status in a user interface.
Upon registration, the database becomes a source database that resides on a source database VM and on which other database management services may be applied.

Referring now toFIG.6, an example block diagram of a database system600is shown, in accordance with some embodiments of the present disclosure. The database system600is similar to the database system200. Therefore, the components of the database system600that are already discussed with respect to the database system200are not discussed again. The database system600includes a database engine605that is associated with a dashboard610via an API615. The database engine605is also associated with a database storage system620for storing one or more databases managed by the database system600. The database engine605is similar to the database engine205, the dashboard610is similar to the dashboard210, the API615is similar to the API230, and the database storage system620is similar to the database storage system215.

The database engine605includes a database protection system625that is configured to protect databases that are associated with the database system600. The database protection system625is similar to the database protection system225. Although not shown, the database engine605also includes a database provisioning system similar to the database provisioning system220. As discussed above, an instance of the database protection system625may be created for each source database when that source database is created or registered within the database system600. Thus, the database protection system625may include multiple instances of the database protection system, one for each source database. For example, the database protection system625may include database protection system instances630A-630N (collectively referred to herein as database protection system instances630). In other embodiments, each instance of the database protection system (e.g., the database protection system instances630A-630N) may be configured to protect more than one source database.

Each of the database protection system instances630A-630N may respectively include a clone management system635A-635N (collectively referred to herein as clone management systems635) and a snapshot/log capturing system640A-640N (collectively referred to herein as snapshot/log capturing systems640). Each of the database protection system instances630may be associated with a source database stored within a source database storage645. The source database storage645is similar to the source database storage255. Thus, for example, the database protection system instance630A may be associated with a source database650A stored within the source database storage645, the database protection system instance630B may be associated with a source database650B of the source database storage, the database protection system instance630N may be associated with a source database650N, and so on. Thus, the clone management system635A and the snapshot/log capturing system640A of the database protection system instance630A may be configured to protect the source database650A, the clone management system635B and the snapshot/log capturing system640B may be configured to protect the source database650B, the clone management system635N and the snapshot/log capturing system640N may be configured to protect the source database650N, and so on.
By virtue of having the database protection system instances630for each of the source databases650A-650N (collectively referred to herein as the source databases650), the protection of each of those databases may be customized and tailored to suit the user's needs. To protect the source databases650, the database protection system instances630may create a clone of those source databases. The clones of the source databases650(e.g., cloned databases) may be stored within a target database storage655. The target database storage655is similar to the target database storage260. For each source database (e.g., the source databases650) that is stored within the source database storage645, one or more clones of that source database may be created and stored within the target database storage655. For example, when a clone of the source database650A is created, a cloned database660A is created and stored within the target database storage655. Similarly, clones of the source databases650B and650N may be created as cloned databases660B and660N, respectively, and stored within the target database storage655. The cloned databases660A-660N are collectively referred to herein as the cloned databases660.

Although each of the source databases650in the source database storage645has been shown as having a corresponding instance of the cloned databases660in the target database storage655, it is to be understood that in some embodiments, clones of only some of the source databases stored in the source database storage may be made. The source databases650that have not been cloned may not have a cloned database within the target database storage655. Further, similar to the source databases650, which reside on a database VM (e.g., the source database VMs), the cloned databases660also reside on a database VM. The database VMs on which the cloned databases660reside are referred to herein as target database VMs. Each of the cloned databases660may reside entirely on one target database VM or may span across multiple target database VMs. In some embodiments, the source database VMs and the target database VMs may be created on the same node or different nodes of the same cluster or across multiple clusters. Thus, the database protection system instances630, and particularly the clone management systems635of the database protection system instances, create the cloned databases660from the source databases650stored within the source database storage645and store the cloned databases within the target database storage655.

The cloned databases660may be of a variety of types. As discussed above, each of the source databases650is created or registered on a source database VM. Thus, each of the cloned databases660may include a clone of the source database VM only (e.g., to create the target database VM) or may include the clone of the source database VM plus the database that resides on that source database VM. For example, the cloned database660A of the source database650A may include a clone of the source database VM on which the source database650A resides or a clone of that source database VM plus the database650A. When both the source database VM and the source database650A are cloned, the cloned database660A may include a target database VM created on the target database storage655with a similar or different configuration as the source database VM and the clone of the source database stored on the target database VM.
When only the source database VM is cloned, a target database VM is created for that source database VM and stored on the target database storage655. The target database VM may be used at a later point to store the clone of the source database that resides on the associated source database VM. Thus, the cloned databases660may include the source database VM only, the source database VM plus the source database, or the source database only (which is to be stored on a previously created target database VM).

The cloned databases660may be considered operationally the same as (or substantially similar to) the source databases650. Each of the cloned databases660may be refreshed/updated to incorporate any changes that may have occurred in the source databases650since the cloned databases were created. In some embodiments, the operations that are performed on the source databases650may be performed on the cloned databases660as well. Thus, in some embodiments, instead of using the source databases650, the cloned databases660may be used for performing operations (e.g., analyzing data).

The cloned databases660may be created from snapshots and transactional logs captured from the source databases650. The cloned databases660are generally created upon receiving a user request. The user may request to clone a particular one of the source databases650to a point in time or to a specific snapshot. For example, the user may request a cloned database of a particular one of the source databases650as that source database existed at 11:00 AM on a particular date. Alternatively, the user may specifically identify a snapshot and request a cloned database of the source databases650based on that snapshot.

Creating a cloned database (e.g., the cloned databases660) involves replicating a state of the source databases650. The “state” of the source databases650may include the configuration of the source database, the user data stored within the source database, metadata stored within the source database, and any other information associated with the source database. In other words, a cloned database may be an exact or substantially exact copy of the source database.

Thus, upon receiving a user request to create a cloned database (e.g., the cloned database660A) from a source database (e.g., the source database650A), the clone management system (e.g., the clone management system635A) associated with the source database may retrieve snapshots and transactional logs of the source database from a repository where the snapshots and transactional logs are stored. If the user request is to clone the source database to a point in time, the clone management system (e.g., the clone management system635A) may retrieve all snapshots and transactional logs captured of the source database up to that point in time and create a cloned database (e.g., the cloned database660A) from those snapshots and transactional logs. The cloned database (e.g., the cloned database660A) represents the state of the source database at the requested point in time. If the user request is to clone the source database based on a particular available snapshot, the clone management system (e.g., the clone management system635A) may retrieve that particular snapshot and create a cloned database (e.g., the cloned database660A) from that particular snapshot. The cloned database (e.g., the cloned database660A) represents the state of the source database (e.g., the source database650A) at the time the requested snapshot was captured.
Thus, the clone management systems635are configured to create the cloned databases660. The clone management systems635are also configured to refresh the cloned databases660, as well as manage/perform any operations performed on the cloned databases.

Referring now toFIGS.7A-7Fin conjunction withFIGS.6and4A, example user screenshots illustrating the cloning of a source database are shown, in accordance with some embodiments of the present disclosure. To clone a source database, the user may start from dashboard700ofFIG.7A. The dashboard700is similar to the dashboard400. To create a clone of a source database, the user may select the option “time machines” from main menu705. Upon selecting the option “time machines” from the main menu705, the user may select a particular source database to be cloned. Alternatively, in some embodiments, the user may first select the source database to be cloned and then select the option “time machines” from the main menu705. Upon selecting the source database to be cloned, the database protection system instance associated with the source database is activated.

Further, the database protection system instance displays, in a body710of the dashboard700, a summary section715. The summary section715may display one or more configurational features of the source database, such as the database engine type (e.g., type inFIG.7A), how long ago the source database was created (e.g., age), the last time the source database was updated (e.g., last update), the next period for capturing a transactional log (e.g., next log catch up), name of the source database (e.g., name), the number of clones previously made of the source database (e.g., clones), the SLA level of the source database (e.g., SLA), and the protection schedule (e.g., schedule). In other embodiments, other or additional details may be provided in the summary section715.

The database protection system instance may also display within the body710a menu section720that provides options of operations that the user may perform on the source database. For example, the user may select a clone option725A to create a new clone of the source database. The user may elect to manually capture transactional logs or snapshots by interacting with log catch up option725B and snapshot option725C, respectively. Similarly, the user may perform other operations by selecting another one of options725D. Depending upon the options that the user is authorized to make, some of the options725A-725D may be inactivated. Further, although specific options are shown inFIG.7A, the number and types of options may vary from one embodiment to another.

The database protection system instance may also display a calendar730visually representing the SLA associated with the source database. For example, the calendar730may include a color-coded legend to represent the duration for the continuous, daily, weekly, monthly, and quarterly protection parameters for a selected number of months. For example, the calendar730ofFIG.7Ashows five months (May-September) with dates highlighted based upon the protection parameter that applies to that date. For example, by looking at the calendar730, the user may quickly determine that August 20-September 19 fall under the continuous protection parameter, July 21-August 19 fall under the daily protection parameter, and July 5, 12, and 19 are days when the weekly protection parameter applies.
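A minimal sketch of how a date might be classified by the protection parameter that applies to it, in the spirit of the color coding of the calendar730, is shown below. The tier boundaries (7-day weeks, approximately 30-day months and 91-day quarters) and the evaluation order are simplifying assumptions, not the patented computation:

```python
from datetime import date, timedelta
from types import SimpleNamespace

def protection_tier(day: date, today: date, sla) -> str:
    """Classify a calendar date by the protection parameter that applies.
    `sla` is assumed to expose the duration fields of a protection SLA;
    tiers are checked from most to least granular."""
    age = (today - day).days
    if age < 0:
        return "future"
    if age <= sla.continuous_days:
        return "continuous"  # any point in time on this date is recoverable
    if age <= sla.continuous_days + sla.daily_days:
        return "daily"       # one retained snapshot per day
    if age <= sla.continuous_days + 7 * sla.weekly_weeks:
        return "weekly"
    if age <= sla.continuous_days + 30 * sla.monthly_months:
        return "monthly"
    if age <= sla.continuous_days + 91 * sla.quarterly_quarters:
        return "quarterly"
    return "expired"

# Example: under a 30-day continuous window, a date 10 days back is
# still continuously protected.
gold = SimpleNamespace(continuous_days=30, daily_days=90, weekly_weeks=16,
                       monthly_months=12, quarterly_quarters=75)
today = date(2020, 9, 19)
assert protection_tier(today - timedelta(days=10), today, gold) == "continuous"
```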
The calendar730may also show the dates when manual snapshots and/or transactional logs were captured of the source database (e.g., on June 7, July 13, August 19). Further, upon selecting a particular date, the user may view additional details of available snapshots and transactional logs for that date from which a clone may be created. For example, inFIG.7A, August 21 is shown as selected. Thus, in a display portion735, a time scale shows the available snapshots/transactional logs. Since August 21 falls under the continuous protection parameter, the user may select any time on the time scale to create a clone of the source database. If, for example, July 25 is selected by the user, the display portion735may highlight the time on the time scale at which the daily snapshot was captured (as identified from the protection schedule) and which the user may be able to select to create a clone from. Thus, the body710of the dashboard700provides a user-friendly, visual, and intuitive interface for easily and quickly determining how the source database is protected and the level of protection that is available to that source database on any given day.

To clone the source database, the user may select the clone option725A. Upon selecting the clone option725A, the database protection system instance, and particularly the clone management system of the source database, initiates the cloning process. The cloning process may include three distinct operations: a time operation to identify whether the user desires to create the clone based on an available snapshot or to a point in time, a server operation to either create a new target database VM to house the clone of the source database or select a previously created target database VM, and a database operation to create the clone of the source database on the target database VM.

Selecting the clone option725A triggers the time operation by displaying a user interface740shown inFIG.7B. The user interface740solicits a clone name740A, a date from a calendar740B from which the clone is desired, and one of a point in time option740C or snapshot option740D. If the user desires to clone the source database to a point in time, the user may select the point in time option740C, and if the user desires to create the clone from an available snapshot, the user may select the snapshot option740D. Upon providing the information requested in the user interface740, the user may interact with (e.g., click on) a next button740E, which opens user interface745ofFIG.7C.

The user interface745is shown for a point in time selection. Thus, the user interface745may solicit the exact time from which the clone is to be created. The user interface745may display the times that are available for the user to select for the point in time option. Specifically, the user interface745may display a time scale745A, which may include an activated slot from which the user may pick a time and an inactivated slot that is unavailable to the user. For example, the time scale745A shows an activated slot745B and an inactivated slot745C. The user may pick (e.g., by interacting with the time scale745A or by entering the time in box745D) a time from the activated slot745B for creating the clone. For example, if the user selects 5:00 from the activated slot745B, the corresponding clone management system is configured to create a clone based on the state of the source database at 5:00. It is to be understood that the activated slot745B corresponds to the protection parameter of the SLA and the protection schedule.
For example, if the date that the user selected on the calendar740B falls under the daily protection parameter and the protection schedule indicates that the daily snapshot is captured at 11:00 AM, in some embodiments, the user may be able to select only 11:00 AM for the clone. In other embodiments, the user may still be allowed to select times other than 11:00 AM as well. However, since only a daily snapshot is available for that date, the clone may be based on the daily snapshot of 11:00 AM. If the user selects the snapshot option740D inFIG.7B, the user interface745may look somewhat different and may present options to allow the user to select one or more available snapshots to create a clone from.

Upon selecting the point in time or the available snapshot, the user may interact with a next button745E to display user interface750ofFIG.7D. The user interface750allows the user to either create a new target database VM750A on the target database storage (e.g., the target database storage655) or select an existing target database VM750B. To create a new target database VM, the database protection system instance may solicit the target database VM name, one or more profiles (e.g., software profile, compute profile, network profile, database parameter profile), and any other desired or required information from the user. In some embodiments, one or more profiles may be the same as those of the source database VM, while in other embodiments, the one or more profiles may vary from those of the source database VM. If the user selects the existing target database VM750B, the user interface750may display a list of previously created target database VMs for the source database. In some embodiments, multiple target database VMs may be created for a particular source database.

Upon providing the target database VM information, the user may interact with a next button750C to display a user interface755ofFIG.7E. In the user interface755, the user may specify details of the cloned database. For example, the database protection system instance may solicit a name755A of the cloned database, a password755B if desired, and any other information that is desired or needed. Upon interacting with a clone button755C, the database protection system instance creates the clone of the source database based upon the inputs received from the user inFIGS.7A-7E. Interacting with the clone button755C may display a user interface760ofFIG.7F, which shows the status of the clone.

The database protection system instance may retrieve the snapshot(s) and/or the transactional logs and create the clone therefrom. For the point in time option, the database protection system instance may use both snapshots and transactional logs to create the cloned database, while for the available snapshot option, the database protection system instance may use only the available snapshot. The database protection system instance may also create a new target database VM (if a new target database VM is selected by the user). Once the cloned database is created, the cloned database may be displayed within the dashboard700. Thus, the dashboard700provides an easy and convenient mechanism to clone source databases.

Turning now toFIG.8and referring toFIG.8in conjunction withFIGS.6and7A-7F, an example flowchart outlining operations of a process800is shown, in accordance with some embodiments of the present disclosure. The process800may include additional, other, or different operations depending upon the embodiment.
The process800may be used to create a cloned database from a source database. The process800may be implemented by the clone management systems635of the database protection system instances630. The process800starts at operation805with the database protection system instances630receiving a user request for creating a clone. As discussed inFIG.7A, the user may request creation of a clone of a source database via the dashboard700. Upon receiving the user request, the one of the database protection system instances630corresponding to the source database being cloned is activated. The activated one of the database protection system instances630receives selection from the user of creating the clone from either a point in time or from an available snapshot.

If the user selects the point in time option, the process800proceeds to operation810. At the operation810, the activated one of the database protection system instances630presents a user interface, via the dashboard700, to the user to receive selection of a specific time at which the clone is to be created. The clone is created based on the state of the source database at that specific time. At operation815, the activated one of the database protection system instances630retrieves the snapshot corresponding to that specific time or the snapshot that is available closest to that specific time. At operation820, the activated one of the database protection system instances630retrieves any transactional logs that may be needed. For example, if the snapshot that is retrieved at the operation815is captured at the specific time selected by the user at the operation810, then the source database may be cloned from the snapshot of the operation815. However, if the snapshot of the operation815is created before or after the specific time selected by the user, then one or more transactional logs may exist between the time the snapshot is captured and the specific time. For example, if the specific time is 11:00 AM and the closest snapshot to 11:00 AM is from 10:00 AM, the database protection system instance630may determine if there are transactional logs available between 10:00 AM and 11:00 AM. The database protection system instance630may retrieve any transactional logs that may be available. If no transactional logs are available between 10:00 AM and 11:00 AM, the database protection system instance630may create the clone from the snapshot of 10:00 AM. Thus, at the operation820, the activated one of the database protection system instances630determines if a transactional log is needed or exists, and retrieves those transactional log(s).

Additionally, at operation825, the activated one of the database protection system instances630receives selection for a target database VM from the user. As indicated above, the user may either create a new target database VM or use an existing target database VM for storing the cloned database. If the activated one of the database protection system instances630receives selection for creating a new target database VM from the user, the database protection system instance may solicit information (e.g., name, profiles, etc.) from the user to create the target database VM. The target database VM may be associated with the target database storage. Alternatively, if the user desires to use an existing target database VM, the activated one of the database protection system instances630may present a list of existing target database VMs created previously for the source database.
The user may select one of the existing target database VMs from the list. At operation830, the activated one of the database protection system instances630creates the cloned database from the snapshot of the operation815and any transactional logs of the operation820. The activated one of the database protection system instances630stores the cloned database, at operation835, on the target database VM, and the process800ends at operation840.

If, at the operation805, the user selects the option of creating a clone from an available snapshot, the process800proceeds to operation845. At the operation845, the activated one of the database protection system instances630solicits the user to select the snapshot from which the clone is to be created. Upon receiving the user's selection of the available snapshot, at operation850, the activated one of the database protection system instances630retrieves that snapshot. Before, after, or along with retrieving the snapshot, at the operation825, the database protection system instance630may either create a new target database VM or identify an existing target database VM to use. At the operations830and835, the database protection system instance630creates the clone of the source database and stores the cloned database on the target database VM. Again, the process800ends at the operation840.

Returning back toFIG.6, the cloned databases660of the source databases650are created from snapshots and transactional logs. Thus, to be able to create the cloned databases660, snapshots and transactional logs of the source databases650are needed. The snapshots and transactional logs may be captured via the snapshot/log capturing systems640. The snapshot/log capturing systems640may be configured with the protection schedule and the SLA that are defined by the user when the source databases650are created or registered with the database system600. The protection schedule defines, among other things, the frequency of capturing snapshots and transactional logs each day. Thus, based upon the protection schedule, the snapshot/log capturing systems640may instruct a database manager of the source databases650to capture snapshots and transactional logs automatically.

As discussed above, an instance of a database manager may be associated with each source database that is created on the source database storage645. For example, the source database650A may be associated with a database manager665A, the source database650B may be associated with a database manager665B, and the source database650N may be associated with a database manager665N. The database managers665A-665N are collectively referred to herein as database managers665. Although not shown, in some embodiments, the cloned databases660may each be associated with a database manager as well. The database managers665are configured to capture the snapshots and the transactional logs upon instruction from the snapshot/log capturing systems640. The database managers665may include an agent that captures the snapshots and the transactional logs based on the protection schedule received from the snapshot/log capturing systems640. Thus, the database manager665A may include an agent670A, the database manager665B may include an agent670B, and the database manager665N may include an agent670N. The agents670A-670N are collectively referred to herein as agents670. Each of the agents670is an autonomous software program that is configured for performing one or more specific and approved operations.
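A hedged sketch of the overall clone flow outlined in the process800is given below. The repository object and every method on it (get_snapshot, nearest_snapshot, logs_between, materialize, apply, attach) are assumed helpers invented for illustration, not a real API of the disclosed system:

```python
def create_clone(repository, target_vm, requested_time=None, snapshot_id=None):
    """Sketch of the process800: clone either from a named available
    snapshot or to a point in time. All names here are assumptions."""
    if snapshot_id is not None:
        # Operations 845-850: clone from a specific available snapshot.
        snapshot = repository.get_snapshot(snapshot_id)
        logs = []
    else:
        # Operations 810-820: clone to a point in time; fetch the closest
        # snapshot and any transactional logs between it and that time.
        snapshot = repository.nearest_snapshot(requested_time)
        logs = repository.logs_between(snapshot.captured_at, requested_time)
    clone = snapshot.materialize()       # operation 830: build the clone
    for log in logs:
        clone.apply(log)                 # replay changes up to the request
    target_vm.attach(clone)              # operation 835: store on target VM
    return clone
```

If no transactional logs exist between the snapshot and the requested time, the loop simply does nothing and the clone is based on the snapshot alone, matching the 10:00 AM example above.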
For example, each of the agents670may be configured to capture snapshots and transactional logs based upon the protection schedule, and store the captured snapshots and transactional logs in a repository associated therewith. The clone management systems635may retrieve the snapshots and transactional logs from the repositories when creating a clone of the source databases650. For example, the agent670A may be configured to store the captured snapshots and transactional logs in a repository675A that is configured for the source database650A, the agent670B may be configured to store the captured snapshots and transactional logs in a repository675B that is configured for the source database650B, and the agent670N may be configured to store the captured snapshots and transactional logs in a repository675N that is configured for the source database650N. The repositories675A-675N are collectively referred to herein as repositories675.

For example, if the protection schedule specifies capturing 2 snapshots every day and capturing a transactional log every 2 hours for the source database650A, the agent670A may capture 2 snapshots of the source database650A and a transactional log every 2 hours such that in a 24-hour period, the agent captures 2 snapshots and 12 transactional logs. Further, if the continuous protection parameter in the SLA specifies a continuous protection of 30 days, the agent670A may be configured to save all snapshots and transactional logs captured in the previous 30 days (not including the current day). Thereafter, the agent670A may purge (e.g., delete) some of the snapshots and transactional logs based upon the protection parameters defined in the SLA level. For example, if the SLA level specifies a daily protection parameter of 30 days after the duration of the continuous protection parameter expires, the agent670A may be configured to delete, for each day before the previous 30 days, all but one of the snapshots captured that day, as well as delete all of the transactional logs captured before the previous 30 days. The snapshot that is saved as the daily snapshot may be the snapshot that is closest to the daily snapshot time specified in the protection schedule. For example, if the time specified in the protection schedule is 11:00 AM for the daily snapshot, the snapshot that is captured at 11:00 AM or closest to 11:00 AM is saved and all other snapshots captured on that day are deleted. Thus, the agent670A is configured to capture and manage snapshots and transactional logs.

Referring toFIGS.9A-9Gin conjunction withFIG.6, an example flow diagram outlining operations of how an agent (e.g., the agents670) of a database (e.g., the source databases650) may capture, store, and purge snapshots and transactional logs based upon the protection schedule and protection parameters defined in the SLA level is shown, in accordance with some embodiments of the present disclosure. Simply for purposes of explanation,FIGS.9A-9Gshow the flow for an SLA that requires a continuous protection of 7 days and a daily protection thereafter for 7 days. Thus, the SLA defines the continuous protection parameter as 7 days and the daily protection parameter as 7 days. Further,FIGS.9A-9Gare based on a protection schedule that specifies capturing 1 snapshot every day and 3 transactional logs every day. It is to be understood that the SLA definitions and the protection schedule above are only examples and are not intended to be limiting in any way.

FIG.9Ashows the contents of a database900on a first day of the continuous 7 days.
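The purge behavior described above may be sketched as follows; the object shapes (snapshots and logs exposing a captured_at timestamp) are assumptions for illustration, and expiry of the daily and longer-term windows is omitted for brevity:

```python
from datetime import datetime, timedelta, time

def retention_pass(snapshots, logs, sla, daily_time: time, now: datetime):
    """Hypothetical sketch of an agent's purge: keep everything inside the
    continuous window, then thin each older day to the one snapshot closest
    to the scheduled daily time and drop that day's transactional logs."""
    cutoff = now - timedelta(days=sla.continuous_days)
    keep, older_by_day = [], {}
    for snap in snapshots:
        if snap.captured_at >= cutoff:
            keep.append(snap)  # continuous window: keep every snapshot
        else:
            older_by_day.setdefault(snap.captured_at.date(), []).append(snap)
    for day, day_snaps in older_by_day.items():
        target = datetime.combine(day, daily_time)  # e.g., 11:00 AM
        # Keep only the snapshot captured closest to the daily time.
        keep.append(min(day_snaps, key=lambda s: abs(s.captured_at - target)))
    # Transactional logs older than the continuous window are deleted.
    kept_logs = [log for log in logs if log.captured_at >= cutoff]
    return keep, kept_logs
```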
Since the protection schedule specifies capturing 1 snapshot every day, on the first day, the agent associated with the database900captures a snapshot905. The time of the day at which the snapshot905is captured may either be defined in the protection schedule or may be pre-defined within the agent capturing the snapshot. Additionally, as shown inFIG.9B, on the first day, the agent associated with the database900also captures 3 transactional logs (e.g., one transactional log every 8 hours)910. On the second day, as shown inFIG.9C, the agent associated with the database900captures another snapshot915and 3 transactional logs920shown inFIG.9D. The agent associated with the database900continues to capture snapshots and transactional logs for 7 days to satisfy the continuous protection parameter and provide continuous protection for 7 days as defined in the SLA level. Thus, as shown inFIG.9D, by the end of the seventh day, the agent associated with the database900has captured the snapshots905,915, snapshots925A-925E, the transactional logs910,920, and930A-930E.

On the eighth and following days, the agent associated with the database900continues to capture snapshots (e.g., snapshot935) and 3 transactional logs940shown inFIG.9E. However, since the SLA requires 7 days of continuous protection, after the 7 days, the continuous protection is not required and the agent may purge some of the captured snapshots and transactional logs, again based on the definitions in the SLA. As indicated above, the SLA ofFIGS.9A-9Gdefines a daily protection parameter of 7 days for daily protection after the expiration of the 7 days of continuous protection. Thus, on the ninth day, the snapshot905captured on the first day is greater than 7 days old. Since only one snapshot is captured every day, the agent associated with the database900maintains the snapshot905as the daily snapshot. If multiple snapshots are captured each day, the agent associated with the database900may delete all snapshots except one. Further, since the daily protection parameter provides a guarantee of a daily snapshot, the agent may delete all of the transactional logs that were captured that day. Therefore, as shown inFIGS.9E and9F, on the ninth day, the snapshot905may continue to be stored as the daily snapshot but the transactional logs910may be deleted. Further, on the ninth day, the agent associated with the database900captures another snapshot945to continue to provide continuous protection for the past 7 days. Similarly, on each day, from days 10-14, the agent associated with the database900continues to delete the transactional logs that are older than 7 days, and capture a new snapshot and 3 transactional logs (e.g., snapshots950A-950E and transactional logs955A-955E shown inFIG.9G). Thus, the agent associated with the database900continues to capture snapshots and transactional logs based upon the SLA level and the protection schedule, and deletes some of the snapshots and transactional logs that are no longer required to satisfy the SLA level.

Snapshots of a source database may be captured by creating a snapshot of the source database VM and a snapshot of the source database itself. Specifically, a snapshot may be an image/copy of the location of the storage files associated with the virtual disks of the source database and an image/copy of the location of the configuration files associated with the source database VM. The virtual disk(s) on which the source database is stored may be composed of or divided into one or more memory blocks.
The snapshot/log capturing systems640may capture images or copies of the memory blocks for capturing snapshots. A copy of the memory blocks may be made by identifying the memory pointer (e.g., location) assigned to each memory block and copying the memory pointer to a repository (e.g., the repositories675). During a first snapshot of the memory blocks, the contents of all the memory blocks may be copied to the repositories675. After the first snapshot is captured, transactional logs may be captured based upon the protection schedule to record all transactions or changes in the source database after the capture of the first snapshot. Capturing a transactional log may be referred to herein as a log catch up operation.

For example, say the protection schedule defines capturing 2 snapshots each day and 2 transactional logs between the 2 snapshot captures. If the source database includes 1000 memory blocks, the first snapshot creates copies of all the 1000 memory blocks. Capturing a snapshot involves pausing the source database such that no user operations are performed while the source database is being snapshotted, creating copies of the memory blocks (and other information such as the configuration file of the source database VM, etc.), and unpausing the source database. Since snapshots temporarily halt operation of the source database, taking frequent snapshots of the source database is not practical or desirable. However, to accurately capture the state of the source database in between two snapshot captures and allow creation of cloned databases to satisfy the SLA (e.g., the continuous protection parameter), transactional logs may be captured between two snapshot captures. The frequency of capturing transactional logs may be higher than the frequency of capturing the snapshots.

Thus, for example, and continuing the example above, if after capturing the first snapshot, 4 out of the 1000 memory blocks of the source database have changed (e.g., due to data being updated, new data being added, etc.), the agents670create a first transactional log based upon the protection schedule. The first transactional log may reflect that the 4 blocks have changed since the last snapshot capture. Specifically, the first transactional log may include memory pointers to the 4 memory blocks that have changed. Thus, instead of copying all of the 1000 memory blocks, the first transactional log only copies the changes since the last snapshot capture, thereby saving space and time. Similarly, based upon the protection schedule, the agents670may capture a second transactional log after the first transactional log. The second transactional log may determine which memory blocks have changed since the first snapshot capture. For example, if the agents670determine that 6 memory blocks have changed since the first snapshot capture, the second transactional log may include memory pointers back to the first snapshot indicating which 6 of the memory blocks have changed. The 6 memory blocks that changed at the time of capturing the second transactional log may or may not include the 4 memory blocks that changed at the time of capturing the first transactional log. Thus, each transactional log that is captured identifies the memory blocks that have changed since the previous snapshot capture and includes memory pointers to those changed memory blocks.
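A minimal sketch of such an incremental capture is shown below. Hashing each block against a baseline is an assumed stand-in for whatever change detection the system actually uses; the function and parameter names are invented for illustration:

```python
import hashlib

def capture_transactional_log(blocks, baseline_digests):
    """Hypothetical sketch: record only the memory blocks that changed
    since the last snapshot (e.g., the 4, then 6, changed blocks of the
    1000-block example above)."""
    changed = {}
    for index, data in enumerate(blocks):
        digest = hashlib.sha256(data).hexdigest()
        if baseline_digests[index] != digest:
            changed[index] = data  # pointer/copy for the changed block only
    return changed  # e.g., {17: b'...', 402: b'...'} rather than all 1000
```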
When the source database is cloned, say to a state when the second transactional log is captured, the associated one of the clone management systems635may recreate the source database from the first snapshot and the second transactional log. Specifically, the associated one of the clone management systems635may determine (e.g., from the memory pointers in the second transactional log) which particular memory blocks have changed from the first snapshot. In the example above, the second transactional log includes memory pointers of the 6 memory blocks that have changed since the first snapshot capture. Thus, the agents670may create the clone based on the 994 memory blocks from the first snapshot that have not changed plus the 6 memory blocks in the second transactional log that have changed. Thus, the cloned database reflects an accurate state of the source database at the time of the second transactional log capture.

Further, and continuing with the example above of capturing 2 snapshots and 2 transactional logs each day, the agents670may capture a second snapshot based upon the protection schedule. In some embodiments, the second snapshot may be a copy of all the memory blocks (e.g., all 1000 memory blocks in the example above) again, and transactional logs that are captured after the second snapshot may identify changes in the memory blocks relative to the second snapshot. Thus, any changes made between the capture of the first snapshot and the capture of the second snapshot are reflected in the second snapshot. In other embodiments, the second snapshot may also be an incremental snapshot that reflects only which memory blocks have changed since the first snapshot capture. Thus, the second snapshot in such cases may take less time to create, as well as less space to store. The subsequent transactional logs may continue to make pointers to the first snapshot to reflect the memory blocks that change.

Advantageously, the snapshot and transactional log capturing efficiently copies only the changes in the memory blocks. Furthermore, all transactions are recorded using transactional logs such that when a clone is created, the source database may be recovered based on both the snapshots and the transactional logs to any desired state. Further, since capturing a transactional log does not require pausing the source database, the transactional logs may be captured in the background while the source database is operating, and the transactional logs may be captured at a greater frequency than the snapshots.

To capture a transactional log, the agents670maintain a small staging disk. The staging disk may be part of the repositories675or may be separately provisioned from those repositories. The staging disk may be dedicated or shared amongst the various source databases650and may be used to temporarily store the transactional logs. The agents670may sweep (e.g., collect) the transactional logs from the source database to the staging disk based upon the protection schedule. From the staging disk, the agents670may start the log catch up operation to move the transactional logs from the staging disk to the repositories675. Thus, based upon a combination of snapshots and transactional logs, the state of the source database may be effectively and accurately replicated. While the snapshots and transactional logs may be automatically captured based upon the protection schedule, in some embodiments, the database engine605may allow the users to manually capture snapshots and transactional logs.
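The reconstruction side of the 994-plus-6 example above can be sketched in a few lines; this is an illustrative assumption about the mechanics, not the disclosed implementation:

```python
def materialize_state(snapshot_blocks, transactional_log):
    """Hypothetical sketch of point-in-time reconstruction: start from the
    snapshot's blocks (the 994 unchanged of 1000 in the example above) and
    overlay the blocks recorded in the transactional log (the 6 changed)."""
    state = list(snapshot_blocks)         # copy of the snapshotted blocks
    for index, data in transactional_log.items():
        state[index] = data               # overlay each changed block
    return state

# Example: two of four blocks changed since the snapshot.
snapshot = [b"a", b"b", b"c", b"d"]
log = {1: b"B", 3: b"D"}
assert materialize_state(snapshot, log) == [b"a", b"B", b"c", b"D"]
```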
Such an operation may be particularly useful if the user desires a state that falls between the capture of two snapshots and transactional logs.

Referring still toFIG.6, the database storage system620may also include an external database manager680. The source databases650that are created within the database system600are already configured to be snapshotted. However, databases that were created outside of the database system600and registered with the database system may not have been configured to be snapshotted. Thus, such databases need to be reconfigured to be able to be snapshotted and protected by the database system600. The reconfiguration may be part of the registration process or performed after those databases have been registered. To reconfigure the externally created databases, a process1000ofFIG.10may be used.

Referring toFIG.10in conjunction withFIG.6, a flowchart outlining the operations of the process1000is shown, in accordance with some embodiments of the present disclosure. The process1000may include other, additional, or fewer operations depending upon the particular embodiment. The process1000may be implemented by the database engine605. The process1000starts at operation1005, and at operation1010, a complete back-up of the external database is made by the external database manager680. The complete back-up includes a complete copy of the external database. In other words, the actual data of the database is copied when creating a back-up. Knowledge about the structure of the underlying storage (e.g., the virtual disks) of a database is not needed when a back-up is created. In contrast, snapshotting requires knowledge of the underlying storage (e.g., virtual disks) since no actual copy of the data is made. Rather, when a snapshot is captured, a copy of the location of the virtual disk where the data is stored is made. Thus, to configure the external database for capturing snapshots and transactional logs, the external database is backed up to a storage disk (e.g., virtual disk), the structure of which is known to the database engine605. The back-up copy may be considered the source database for purposes of protection and may be part of the source database storage645. Further, an instance of a database manager may be associated with the back-up copy to create clones from the back-up copy. In some embodiments, the operation1010may be performed as part of the registration process when the external database is registered with the database system600. In other embodiments, the operation1010may be performed after the registration process.

Since the back-up of the external database is to a known structure of the storage disk, snapshots and transactional logs may be created from the back-up copy. Thus, at operation1015, snapshots and transactional logs may be captured of the external database from the back-up copy. The process of capturing snapshots and transactional logs from the back-up copy is the same as that of a database created internally within the database system600. The snapshots and transactional logs may be stored within the repositories675and clones may be created from those snapshots or a combination of snapshots and transactional logs. Thus, at operation1020, upon receiving a user request to create a clone of the external database, the database manager associated with the back-up copy of the external database may create a cloned database from the back-up copy. The clone creation process is the same as discussed above.
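For orientation only, the onboarding path of the process1000might be sketched as below; every helper name on the engine object is an assumption invented for this illustration:

```python
def onboard_external_database(external_db, engine):
    """Hypothetical sketch of the process1000: back the external database
    up onto storage whose virtual-disk structure the engine understands,
    then protect the back-up copy like an internally created database."""
    backup_copy = engine.full_backup(external_db)        # operation 1010
    source = engine.register_as_source(backup_copy)      # joins source storage
    engine.start_snapshot_and_log_capture(source)        # operation 1015
    return source  # clones may now be created per operation 1020
```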
The cloned database may be stored within the target database storage655. In some embodiments, the user may desire to store the external database outside of the database system600or on a system that is not compatible with the database system. Thus, at operation1025, a user may request storing the cloned database to an external location. The database engine605and particularly the external database manager680may reconfigure the cloned database to a form that is compatible with the external location and, at operation1030, send the reconfigured database to the external location. The database engine605may continue to make snapshots and transactional logs of the back-up copy created at the operation1010based upon the SLA and protection schedule defined for the external database during the registration process. When the user request of the operation1020is received, the database engine605may create the clone of the external database, reconfigure the cloned database, and send it to the external location. The process1000ends at operation1035. Thus, the database system of the present disclosure is a versatile system for creating and managing databases. A variety of operations may be performed on the databases associated with the database system using a user-friendly and intuitive user interface. It is to be understood that any examples used herein are simply for purposes of explanation and are not intended to be limiting in any way. Further, although the present disclosure has been discussed with respect to memory usage, in other embodiments, the teachings of the present disclosure may be applied to adjust other resources, such as power, processing capacity, etc. The herein described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably couplable,” to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components. With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to inventions containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should typically be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.” Further, unless otherwise noted, the use of the words “approximate,” “about,” “around,” “substantially,” etc., means plus or minus ten percent. The foregoing description of illustrative embodiments has been presented for purposes of illustration and of description.
It is not intended to be exhaustive or limiting with respect to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosed embodiments. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.
DETAILED DESCRIPTION Disclosed herein are systems, methods, and computer program products related to distributed databases. In an embodiment, a distributed database system comprises an application programming interface with a function for storing a data item indexed by a key generated to identify a selected database node. The distributed database may, for example, select a node with capacity suitable for storing the data item and then generate a key that corresponds to the selected node. The generated key, in an embodiment, is then provided via an application programming interface to the application that invoked the interface for storing the data item. The application programming interface may further comprise a function for retrieving the stored data item using the generated key. In an embodiment, a system comprises a hardware processor and a memory on which machine-readable instructions may be stored. The processor, as a result of performing the instructions, may cause the system at least to select a node of a distributed database for storing a data item. The system may, in an embodiment, select the node having the most capacity for storing and retrieving data items, relative to other nodes. The processor, as a result of performing the instructions, may further cause the system to at least generate a key to identify the data item, where the generated key also comprises information indicative of the selected node. For example, the generated key may comprise data such that a hash of the key refers to the selected node. The processor, as a result of performing the instructions, may further cause the system to provide the generated key via the application programming interface. In an embodiment, upon receiving a request to retrieve the data item based on the generated key, the processor may, as a result of performing the instructions, determine which node the data item was stored on using the key, and provide access to the stored data item. In an embodiment, a method of operating a distributed database may comprise selecting, based on a request to store a data item in any of the nodes of a distributed database, a particular node on which to store the data item. The method may further comprise generating a key which identifies the data item and which comprises information that is indicative of the selected node. The method may further comprise providing the generated key in response to the request. In an embodiment, a hash function is used, in both get operations and put operations, to identify the node of a distributed database. In the case of the put operation, where a data item is to be stored by the distributed database, the hash function is applied to a key to determine where a corresponding data item is to be stored. In the case of a get operation, the hash function is applied to a key to determine where the corresponding data item has been stored, so that the data item may be retrieved. In an embodiment, the same hash function is used for keys supplied as a parameter to a put operation and for keys generated automatically and returned from a put operation. The distributed database system may therefore operate without knowing the source of a key. FIG.1depicts an embodiment of a distributed database with automatic node selection and key generation. In an embodiment, a distributed database100may comprise a plurality of nodes102-108. In an embodiment, each of the nodes102-108is a computing node configured to maintain a subset of a collection of data.
Nodes are sometimes referred to as partitions, because a collection of data may be subdivided, or partitioned, between the nodes102-108of the database. In an embodiment, an application programming interface116facilitates interaction with the distributed database100. Operations facilitated by the application programming interface116may include operations to store data, sometimes referred to as “put” or “insert” operations. In an embodiment, the application programming interface116may comprise a put function for causing the distributed database to store a data item on one of the nodes102-108. In some instances, the put function may receive a key value and a data item as parameters. In such instances, the distributed database may select one of the nodes102-108for storing the data item based on a hash of the key value. For example, the put function might be supplied with a “customer id” key value and a data item comprising customer information corresponding to the customer id. The customer id might then be hashed and the result of the hash used to select a node for storing the customer information. In an embodiment, a get operation retrieves data items previously stored using a put operation. In an embodiment, the application programming interface116may comprise a get function which accepts a key value as input and returns the data item stored in association with the key. Continuing the previous example, the get function may receive the customer id as a parameter. Applying the hash function to the customer id, the distributed database system may use the resulting value to determine which one of the nodes102-108the corresponding customer information was stored on. Once the node has been determined, the corresponding customer information may be retrieved from the node and provided as a return value of the get function. In an embodiment, the application programming interface116may comprise a put function which accepts a data item as an input parameter and provides, as a return value, an automatically generated key that corresponds to the data item and which may be used in subsequent get operations. This variant of the put function may further comprise the distributed database100selecting a node suited for storing the data item, based on factors such as the available read and/or write capacity of the selected node. In an embodiment, a node selection module112may receive information indicating which of the nodes102-108may be suitable for storing a data item. In an embodiment, a monitoring module114may record usage and/or capacity statistics associated with the nodes102-108and provide the information to the node selection module112. In an embodiment, the node selection module112may then identify a node with the greatest capacity for storing data items. For example, the node selection module112may determine which of the nodes102-108is least populated, and select it for storing a data item. In an embodiment, a key generation module110may receive a request to generate a key that may be used to identify a data item and which further identifies the node selected for storing the data item. As noted, in an embodiment a hash function may be applied to a key in order to identify a node on which a data item may be stored. In an embodiment, the key generation module110may perform a converse operation in which a node is first selected and then a key is generated to comprise information mapping to the selected node. For example, the generated key, when hashed, may identify the selected node.
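As a purely illustrative sketch of such an interface (the names, the SHA-256 hash, and the modulo mapping below are assumptions, not the disclosure's own implementation), the put and get operations just described might look like the following in Python:

    import hashlib
    import uuid

    NUM_NODES = 4
    nodes = [dict() for _ in range(NUM_NODES)]    # stand-ins for the nodes 102-108

    def node_for(key):
        # The hash of a key maps it to exactly one node.
        digest = hashlib.sha256(key.encode()).digest()
        return int.from_bytes(digest[:8], "big") % NUM_NODES

    def select_node():
        # Placeholder policy: pick the least-populated node.
        return min(range(NUM_NODES), key=lambda n: len(nodes[n]))

    def generate_key(target_node):
        # Draw random candidates until one hashes to the selected node.
        while True:
            candidate = uuid.uuid4().hex
            if node_for(candidate) == target_node:
                return candidate

    def put(key, item):
        if key == "AUTO-GEN":                     # automatic key generation indicator
            key = generate_key(select_node())
        nodes[node_for(key)][key] = item
        return key                                # returned to the invoking application

    def get(key):
        # The same hash function locates the node the item was stored on.
        return nodes[node_for(key)][key]

    key = put("AUTO-GEN", {"customer": "example"})
    assert get(key) == {"customer": "example"}

Because put and get share the same hash function, a key generated by put routes a later get to the same node, regardless of whether the key was supplied by the application or generated automatically.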
In an embodiment, the nodes102-108are computing nodes each configured as a database that maintains a portion of the data maintained by the distributed database100. In an embodiment, a node102-108comprises storage devices on which is stored a portion of the data maintained by the distributed database. FIG.2depicts an embodiment of an application programming interface comprising interfaces for automatic key generation and node selection. In an embodiment, an application programming interface comprises functions, routines, protocols, and/or interfaces for initiating various operations involving a distributed database, such as put and get operations. In an embodiment, the application programming interface is a component of a distributed database system. In an embodiment, a computing node of the distributed database is configured to receive signals conforming to an interaction protocol. In an embodiment, the application programming interface is partially located on a client computing device. For example, software routines operative on the client computing device may be activated, and when activated interact with a protocol of a distributed database system to perform the requested operation. In an embodiment, an application programming interface comprises a put interface202and a get interface204. In an embodiment, an interface comprises a defined means by which two software modules may interact. For example, an interface may conform to a defined means by which an application programming interface200and an application program (not depicted) may interact. Although the term interface is sometimes used to refer only to the defined means of interacting, as used herein the term interface may also comprise software instructions which implement the defined means of interaction. In an embodiment, a put interface202may include instructions for receiving input parameters206-208and for providing a return value210. The input parameters206-208may comprise an automatic key generation indicator212and a data item214. In an embodiment, an automatic key generation indicator212may comprise a flag or other value indicating that a key should be automatically generated. For example, the put interface202may accept string values for the input parameter206. In some instances, the application invoking the put interface202might supply the value of an existing key. If so, the put interface202might then cause the corresponding data item to be stored on whatever node was indicated by hashing the supplied key value. However, the client application might instead supply a string value indicating that the key should be automatically generated. In an embodiment, a client application may supply a pre-defined value, such as “A” or “AUTO-GEN,” to indicate that the key should be automatically generated. It will be appreciated that these examples are intended to be illustrative, and should not be construed as limiting the scope of the present disclosure to include only those examples explicitly provided. In an embodiment, procedures used to define or specify data types are used to specify that a key value is to be auto-generated. For example, in an embodiment the distributed database may receive metadata that describes a collection of data that is to be maintained. The metadata may further describe the types of key values, such as strings, integers, and so forth. In an embodiment, the metadata may indicate that the key is to be auto-generated.
In an embodiment, the key may be specified to be of an auto-generated type, where the distributed database determines what data type is actually used. In an embodiment, the metadata may specify both a type indication and an indication that the key is to be auto-generated. In an embodiment, a client application may supply data that indicates not only that the key should be generated, but also supplies criteria for selecting a node. For example, a client application might supply values such as CRITERIA=READ_FREQUENCY, and so forth to indicate what criteria should be considered by the system when selecting a node to store a data item. It will be appreciated that these examples are intended to be illustrative, and should not be construed as limiting the scope of the present disclosure to include only those examples explicitly provided. In an embodiment, the put interface202may provide, as a return value210, a generated key216. The key216may be provided, in an example embodiment, as the return value of a function. In an embodiment, the key216,218may be provided as an output parameter, or by some other means. It will be appreciated that these examples are intended to be illustrative, and should not be construed as limiting the scope of the present disclosure to include only those examples explicitly provided. In an embodiment, the application programming interface may comprise a get interface204comprising instructions for receiving an input parameter220and for providing a return value222. As depicted inFIG.2, a client application may provide a key218that corresponds to the key216,218returned by the put interface202. The get interface204may, accordingly, use the key216,218to identify the node on which the data item214was stored. The get interface204may then provide a copy of the stored data item214,224as a return value222. In an embodiment, a hash function used by the put interface202is also used by the get interface204, such that, given the same key216,218, the output of the hash function is consistent between the two interfaces. FIGS.3A and3Bdepict an embodiment of a process for generating a key. In an embodiment, a key generation process300may be performed by a key generation module302. The key generation module302may iteratively generate candidate keys until a key is determined to comprise information identifying a node selected to store a data item. In an embodiment, the key generation module302may comprise a randomization function304and a key-generation function306. In an embodiment, the randomization function304generates random or pseudo-random numbers or strings that may be used by the key-generation function306to generate keys. The key-generation function306, using output of the randomization function304, produces a candidate key308based on the output. In an embodiment, the key-generation function306is configured such that it generates candidate keys that, when hashed, refer to nodes312-318with a desired random distribution. In an embodiment, the distribution is uniform between all possible nodes312-318. In an embodiment, the distribution is biased toward a node316selected to store the data item. It will be appreciated that these examples are intended to be illustrative, and should not be construed as limiting the scope of the present disclosure to include only those examples explicitly provided. FIG.3Adepicts that the candidate key308, when hashed by the hash function310, refers to a node314other than the node316selected to store the data item.
The key generation process300may then proceed, as depicted inFIG.3B, to produce an additional candidate key320. The key generation module302may then apply the hash function310to the candidate key320and determine that the candidate key320corresponds, as desired, to the node316selected to store the data item. The key generation module302may then accept the candidate key320as the key for the data item, which may then be provided as a return value from the put function. In an embodiment, the key generation module302may accept a key even when it does not correspond to the node316selected to store the data item. Instead, the key generation module302may employ a randomization function304and key-generation function306which produce key values that, when hashed, are biased towards the selected node316, but which may at times be randomly generated to point to other nodes. In an embodiment, the key generation module302is configured so that it is biased toward a plurality of candidate nodes. FIG.4depicts an embodiment of a process for automatically generating keys in a distributed database. Although depicted as a sequence of blocks, those of ordinary skill in the art will appreciate that the depicted order should not be construed as limiting the scope of the present disclosure to only embodiments which precisely match the depicted order, and that at least some of the operations referred to in the depicted blocks may be altered, omitted, reordered, supplemented with additional operations, or performed in parallel. Block400depicts an embodiment receiving a request to store a data item. In an embodiment, an application interface comprising a put interface may be provided. The put interface, upon invocation, may cause execution of machine-readable instructions which cause a distributed database to store the data item on a selected node. In an embodiment, the put interface may comprise input parameters which may be used to indicate that a key should be automatically generated for the data item that is to be stored. In an embodiment, the put interface comprises a flag parameter indicative of using an automatically generated key. Block402depicts an embodiment selecting, in response to the request, a node on which the data item may be stored. In an embodiment, the application programming interface, or another component of the distributed database, determines that a node has greater capacity to store the data item than at least one other node of the plurality of nodes. In an embodiment, the application programming interface, or another component of the distributed database, determines that a node has greater capacity to retrieve the data item than at least one other node. A node may therefore be selected based on its read capacity and/or its write capacity. The selected node may be the node with the greatest read capacity and/or write capacity, or a node with relatively good read capacity and/or write capacity compared to other nodes. Block404depicts an embodiment generating a key that identifies the data item and comprises information indicative of the node selected for storing the data item. In an embodiment, the key is generated based at least in part on a randomization function, and used for identifying the data item based on a determination that a hash of the key identifies the selected node. In an embodiment, a candidate key is generated based on a randomization function and rejected upon determining that the candidate key does not identify the selected node.
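The acceptance test ofFIGS.3A and3B, together with the rejection of candidates described at block404, might be sketched as follows. This is illustrative only; the hash function, the bound on attempts, and all names are assumptions rather than the disclosure's implementation.

    import hashlib
    import uuid

    NUM_NODES = 8                                  # stand-ins for nodes 312-318

    def hash_to_node(candidate_key):
        digest = hashlib.sha256(candidate_key.encode()).digest()
        return int.from_bytes(digest[:8], "big") % NUM_NODES

    def generate_key_for(selected_node, stored_keys, max_attempts=100000):
        for _ in range(max_attempts):
            candidate = uuid.uuid4().hex           # output of the randomization function
            if candidate in stored_keys:
                continue                           # reject: key already identifies an item
            if hash_to_node(candidate) != selected_node:
                continue                           # reject: hash refers to another node (FIG. 3A)
            return candidate                       # accept: hash refers to the selected node (FIG. 3B)
        raise RuntimeError("no acceptable candidate key found")

    key = generate_key_for(selected_node=5, stored_keys=set())
    assert hash_to_node(key) == 5

With a uniform randomization function, the expected number of candidates tried is roughly the number of nodes, which is why a biased generator of the kind described above can reduce the number of rejected candidates.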
In an embodiment, a candidate key is generated based on a randomization function and rejected upon determining that the candidate key corresponds to a second data item that has already been stored in the distributed database. In an embodiment, the candidate key is selected based on a randomization function that is biased towards the selected node. In this instance, a key may be accepted for identifying the data item even if the key, when hashed, does not correspond to the initially selected node. In that case, the node indicated by the candidate key may be used as the selected node. Block406depicts an embodiment storing the data item on the selected node. This may typically comprise writing the data item to a storage device and updating one or more indexes. The indexes may be updated to map from the generated key to the data item. Block408depicts an embodiment providing the generated key in response to the request. In an embodiment, the generated key may be returned to the application that invoked the put interface, so that the application may subsequently refer to the paired key and data item. Block410depicts that, in an embodiment, subsequent requests to retrieve the item may be based on the generated key. Upon receiving a request to retrieve a data item, the distributed database may determine which node the data item is stored on by applying a hash function to the key. That node may then be instructed to retrieve the data item, and the data item provided in response to the request. FIG.5depicts an embodiment of a process for identifying a node of a distributed database for storing a data item. Although depicted as a sequence of blocks, those of ordinary skill in the art will appreciate that the depicted order should not be construed as limiting the scope of the present disclosure to only embodiments which precisely match the depicted order, and that at least some of the operations referred to in the depicted blocks may be altered, omitted, reordered, supplemented with additional operations, or performed in parallel. Block500depicts an embodiment receiving selection criteria. In an embodiment, the selection criteria may be provided from a user interface. For example, an administrator might indicate, through the user interface, that the workload of the distributed database is expected to be write-heavy, read-heavy, or some combination thereof. In an embodiment, the administrator might indicate that storage capacity should be considered as criteria for selecting a node on which a data item should be stored. It will be appreciated that these examples are intended to be illustrative, and should not be viewed as limiting the scope of the present disclosure to only those examples explicitly described. Block502depicts an embodiment receiving usage data from the nodes of a distributed database. In an embodiment, the usage information includes data indicating amounts of available storage capacity on a node, available processing capacity, amount of read traffic, amount of write traffic, and so forth. It will be appreciated that these examples are intended to be illustrative, and should not be viewed as limiting the scope of the present disclosure to only those examples explicitly described. Block504depicts an embodiment analyzing the usage data with respect to the selection criteria. In an embodiment, a node selection module identifies those nodes of the distributed database which are suitable for storing the data item.
In an embodiment, a best-fit analysis is performed with respect to the criteria. In an embodiment, one or more nodes having the greatest amount of a desired capacity, such as storage space or available processing capacity, may be selected. Block506depicts an embodiment selecting one or more candidate nodes to store the data item, based on the analysis of the usage data. In an embodiment, a single node is selected. In another aspect, an embodiment of the node selection module may identify a list of nodes that are suitable for storing the data item. Block508depicts an embodiment generating a key that, when hashed, refers to a candidate node. In the event that a list of candidate nodes was identified, the generated key may refer to any one of the selected candidate nodes. In the event that a single node was identified, the generated key may refer to that node. FIG.6depicts an embodiment of a process for generating a key comprising information indicative of a selected node. Although depicted as a sequence of blocks, those of ordinary skill in the art will appreciate that the depicted order should not be construed as limiting the scope of the present disclosure to only embodiments which precisely match the depicted order, and that at least some of the operations referred to in the depicted blocks may be altered, omitted, reordered, supplemented with additional operations, or performed in parallel. Block600depicts an embodiment using the hash function used by the distributed database in get and put operations. In an embodiment, a distributed database comprises N nodes. A selected hash function, in this embodiment, may map from an input string to a numeric value. The numeric value may in turn correspond to one of the N nodes. In this manner, application of the hash function to a key value maps a given key value to one and only one node of the distributed database. Block602depicts an embodiment using a randomization function to generate a random portion of a candidate key. In an embodiment, the random portion constitutes the entire key. In another embodiment, the random portion constitutes a subset of the key. In an embodiment, the randomization function and the randomized portion of the key are selected so that the output of the hash function randomly refers to one of the N nodes of the distributed database. Block604depicts an embodiment generating a candidate key using the randomization function. Block606depicts an embodiment determining if a hash of the candidate key is unique and identifies a candidate node. In an embodiment, a randomly generated key may at times correspond to a data item already stored. In an embodiment, a key generation module may determine if the key has already been used and if so, reject the candidate key. In an embodiment, this operation is performed after determining that the candidate key identifies a selected node. In an embodiment, the operation is combined with other operations, such as storing the data item. In an embodiment, the key generation module determines if the candidate key identifies one of the candidate nodes. This step may, in an embodiment, comprise applying the hash function to the candidate key and checking to see that the result corresponds to a selected node. Then, as depicted by branch608, the process may continue with generating another candidate key at block604, or using the candidate key to identify a data item at block610. If the candidate key matches at least one of the candidate nodes, it may be used as the key that will identify the data item.
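The criteria-driven selection of blocks500-506 can be illustrated with a short sketch. The criterion names and usage fields below are invented for illustration; the disclosure does not prescribe them.

    # Hypothetical usage records of the kind block 502 describes.
    usage = {
        "node-a": {"free_storage_gb": 120, "write_ops": 900, "read_ops": 400},
        "node-b": {"free_storage_gb": 640, "write_ops": 150, "read_ops": 700},
        "node-c": {"free_storage_gb": 310, "write_ops": 300, "read_ops": 100},
    }

    def select_candidates(usage, criterion="free_storage_gb", count=2):
        # Rank nodes by the administrator-supplied criterion (block 500) and
        # return the best-fitting candidates (block 506).
        if criterion == "write_ops":
            # A write-heavy workload favors nodes with the least write traffic.
            ranked = sorted(usage, key=lambda n: usage[n]["write_ops"])
        else:
            ranked = sorted(usage, key=lambda n: usage[n][criterion], reverse=True)
        return ranked[:count]

    print(select_candidates(usage))                         # ['node-b', 'node-c']
    print(select_candidates(usage, criterion="write_ops"))  # ['node-b', 'node-c']

A generated key that hashes to any node in the returned candidate list would then be acceptable under block508.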
FIG.7is a diagram depicting an example of a distributed computing environment on which aspects of the present invention may be practiced. Various users700amay interact with various client applications, operating on any type of computing device702a, to communicate over communications network704with processes executing on various computing nodes710a,710b, and710cwithin a data center720. Alternatively, client applications702bmay communicate without user intervention. Communications network704may comprise any combination of communications technology, including the Internet, wired and wireless local area networks, fiber optic networks, satellite communications, and so forth. Any number of networking protocols may be employed. Communication with processes executing on the computing nodes710a,710b, and710c, operating within data center720, may be provided via gateway706and router708. Numerous other network configurations may also be employed. Although not explicitly depicted inFIG.7, various authentication mechanisms, web service layers, business objects, or other intermediate layers may be provided to mediate communication with the processes executing on computing nodes710a,710b, and710c. Some of these intermediate layers may themselves comprise processes executing on one or more of the computing nodes. Computing nodes710a,710b, and710c, and processes executing thereon, may also communicate with each other via router708. Alternatively, separate communication paths may be employed. In some embodiments, data center720may be configured to communicate with additional data centers, such that the computing nodes and processes executing thereon may communicate with computing nodes and processes operating within other data centers. Computing node710ais depicted as residing on physical hardware comprising one or more processors716, one or more memories718, and one or more storage devices714. Processes on computing node710amay execute in conjunction with an operating system or alternatively may execute as a bare-metal process that directly interacts with physical resources, such as processors716, memories718, or storage devices714. Computing nodes710band710care depicted as operating on virtual machine host712, which may provide shared access to various physical resources, such as physical processors, memory, and storage devices. Any number of virtualization mechanisms might be employed to host the computing nodes. The various computing nodes depicted inFIG.7may be configured to host web services, database management systems, business objects, monitoring and diagnostic facilities, and so forth. A computing node may refer to various types of computing resources, such as personal computers, servers, clustered computing devices, and so forth. A computing node may, for example, refer to various computing devices, such as cell phones, smartphones, tablets, embedded devices, and so on. When implemented in hardware form, computing nodes are generally associated with one or more memories configured to store computer-readable instructions and one or more processors configured to read and execute the instructions. A hardware-based computing node may also comprise one or more storage devices, network interfaces, communications buses, user interface devices, and so forth. Computing nodes also encompass virtualized computing resources, such as virtual machines implemented with or without a hypervisor, virtualized bare-metal environments, and so forth.
A virtualization-based computing node may have virtualized access to hardware resources as well as non-virtualized access. The computing node may be configured to execute an operating system as well as one or more application programs. In some embodiments, a computing node might also comprise bare-metal application programs. In at least some embodiments, a server that implements a portion or all of one or more of the technologies described herein may include a general-purpose computer system that includes or is configured to access one or more computer-accessible media.FIG.8depicts a general-purpose computer system that includes or is configured to access one or more computer-accessible media. In the illustrated embodiment, computing device800includes one or more processors810a,810b, and/or810n(which may be referred to herein singularly as a processor810or in the plural as the processors810) coupled to a system memory820via an input/output (“I/O”) interface830. Computing device800further includes a network interface840coupled to I/O interface830. In various embodiments, computing device800may be a uniprocessor system including one processor810or a multiprocessor system including several processors810(e.g., two, four, eight, or another suitable number). Processors810may be any suitable processors capable of executing instructions. For example, in various embodiments, processors810may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (“ISAs”), such as the x86, PowerPC, SPARC or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors810may commonly, but not necessarily, implement the same ISA. In some embodiments, a graphics processing unit (“GPU”)812may participate in providing graphics rendering and/or physics processing capabilities. A GPU may, for example, comprise a highly parallelized processor architecture specialized for graphical computations. In some embodiments, processors810and GPU812may be implemented as one or more of the same type of device. System memory820may be configured to store instructions and data accessible by processor(s)810. In various embodiments, system memory820may be implemented using any suitable memory technology, such as static random access memory (“SRAM”), synchronous dynamic RAM (“SDRAM”), nonvolatile/Flash®-type memory, or any other type of memory. In the illustrated embodiment, program instructions and data implementing one or more desired functions, such as those methods, techniques, and data described above, are shown stored within system memory820as code825and data826. In one embodiment, I/O interface830may be configured to coordinate I/O traffic between processor810, system memory820, and any peripherals in the device, including network interface840or other peripheral interfaces. In some embodiments, I/O interface830may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory820) into a format suitable for use by another component (e.g., processor810). In some embodiments, I/O interface830may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (“PCI”) bus standard or the Universal Serial Bus (“USB”) standard, for example. In some embodiments, the function of I/O interface830may be split into two or more separate components, such as a north bridge and a south bridge, for example.
Also, in some embodiments some or all of the functionality of I/O interface830, such as an interface to system memory820, may be incorporated directly into processor810. Network interface840may be configured to allow data to be exchanged between computing device800and other device or devices860attached to a network or networks850, such as other computer systems or devices, for example. In various embodiments, network interface840may support communication via any suitable wired or wireless general data networks, such as types of Ethernet networks, for example. Additionally, network interface840may support communication via telecommunications/telephony networks, such as analog voice networks or digital fiber communications networks, via storage area networks, such as Fibre Channel SANs (storage area networks), or via any other suitable type of network and/or protocol. In some embodiments, system memory820may be one embodiment of a computer-accessible medium configured to store program instructions and data as described above for implementing embodiments of the corresponding methods and apparatus. However, in other embodiments, program instructions and/or data may be received, sent, or stored upon different types of computer-accessible media. Generally speaking, a computer-accessible medium may include non-transitory storage media or memory media, such as magnetic or optical media, e.g., disk or DVD/CD coupled to computing device800via I/O interface830. A non-transitory computer-accessible storage medium may also include any volatile or non-volatile media, such as RAM (e.g., SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computing device800as system memory820or another type of memory. Further, a computer-accessible medium may include transmission media or signals, such as electrical, electromagnetic or digital signals, conveyed via a communication medium, such as a network and/or a wireless link, such as those that may be implemented via network interface840. Portions or all of multiple computing devices, such as those illustrated inFIG.8, may be used to implement the described functionality in various embodiments; for example, software components running on a variety of different devices and servers may collaborate to provide the functionality. In some embodiments, portions of the described functionality may be implemented using storage devices, network devices or special-purpose computer systems, in addition to or instead of being implemented using general-purpose computer systems. The term “computing device,” as used herein, refers to at least all these types of devices and is not limited to these types of devices. The system memory820may be reconfigured by the operation of one or more of the processors810. The processors810may execute the instructions of a code module and thereby reconfigure the system memory820to form data structures and data elements. Forming a data element may therefore refer to operations of the processor810to reconfigure the system memory820. The GPU812, network interface840, and I/O interface830may also, in some cases, form data structures by reconfiguring the system memory820. Accordingly, the terms “form” and “forming” may also refer to the operations of these and other devices860which may cause a data structure or data element to be stored in the system memory820.
A compute node, which may be referred to also as a computing node, may be implemented on a wide variety of computing environments, such as tablet computers, personal computers, smartphones, game consoles, commodity-hardware computers, virtual machines, web services, computing clusters, and computing appliances. Any of these computing devices or environments may, for convenience, be described as compute nodes or as computing nodes. A network set up by an entity, such as a company or a public sector organization, to provide one or more web services (such as various types of cloud-based computing or storage) accessible via the Internet and/or other networks to a distributed set of clients may be termed a provider network. Such a provider network may include numerous data centers hosting various resource pools, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment, and the like, needed to implement and distribute the infrastructure and web services offered by the provider network. The resources may in some embodiments be offered to clients in various units related to the web service, such as an amount of storage capacity for storage, processing capability for processing, as instances, as sets of related services, and the like. A virtual computing instance may, for example, comprise one or more servers with a specified computational capacity (which may be specified by indicating the type and number of CPUs, the main memory size, and so on) and a specified software stack (e.g., a particular version of an operating system, which may in turn run on top of a hypervisor). A number of different types of computing devices may be used singly or in combination to implement the resources of the provider network in different embodiments, including general-purpose or special-purpose computer servers, storage devices, network devices, and the like. In some embodiments a client or user may be provided direct access to a resource instance, e.g., by giving a user an administrator login and password. In other embodiments the provider network operator may allow clients to specify execution requirements for specified client applications and schedule execution of the applications on behalf of the client on execution platforms (such as application server instances, Java™ virtual machines (“JVMs”), general-purpose or special-purpose operating systems, platforms that support various interpreted or compiled programming languages, such as Ruby, Perl, Python, C, C++, and the like, or high-performance computing platforms) suitable for the applications, without, for example, requiring the client to access an instance or an execution platform directly. A given execution platform may utilize one or more resource instances in some implementations; in other implementations multiple execution platforms may be mapped to a single resource instance. In many environments, operators of provider networks that implement different types of virtualized computing, storage and/or other network-accessible functionality may allow customers to reserve or purchase access to resources in various resource acquisition modes. The computing resource provider may provide facilities for customers to select and launch the desired computing resources, deploy application components to the computing resources, and maintain an application executing in the environment. 
In addition, the computing resource provider may provide further facilities for the customer to quickly and easily scale up or scale down the numbers and types of resources allocated to the application, either manually or through automatic scaling, as demand for or capacity requirements of the application change. The computing resources provided by the computing resource provider may be made available in discrete units, which may be referred to as instances. An instance may represent a physical server hardware platform, a virtual machine instance executing on a server, or some combination of the two. Various types and configurations of instances may be made available, including different sizes of resources executing different operating systems (“OS”) and/or hypervisors, and with various installed software applications, runtimes, and the like. Instances may further be available in specific availability zones, representing a logical region, a fault tolerant region, a data center, or other geographic location of the underlying computing hardware, for example. Instances may be copied within an availability zone or across availability zones to improve the redundancy of the instance, and instances may be migrated within a particular availability zone or across availability zones. As one example, the latency for client communications with a particular server in an availability zone may be less than the latency for client communications with a different server. As such, an instance may be migrated from the higher latency server to the lower latency server to improve the overall client experience. In some embodiments the provider network may be organized into a plurality of geographical regions, and each region may include one or more availability zones. An availability zone (which may also be referred to as an availability container) in turn may comprise one or more distinct locations or data centers, configured in such a way that the resources in a given availability zone may be isolated or insulated from failures in other availability zones. That is, a failure in one availability zone may not be expected to result in a failure in any other availability zone. Thus, the availability profile of a resource instance is intended to be independent of the availability profile of a resource instance in a different availability zone. Clients may be able to protect their applications from failures at a single location by launching multiple application instances in respective availability zones. At the same time, in some implementations inexpensive and low latency network connectivity may be provided between resource instances that reside within the same geographical region (and network transmissions between resources of the same availability zone may be even faster). Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code modules executed by one or more computers or computer processors. The code modules may be stored on any type of non-transitory computer-readable medium or computer storage device, such as hard drives, solid state memory, optical disc, and/or the like. The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The results of the disclosed processes and process steps may be stored, persistently or otherwise, in any type of non-transitory computer storage, such as, e.g., volatile or non-volatile storage. 
The various features and processes described above may be used independently of one another, or may be combined in various ways. All possible combinations and sub-combinations are intended to fall within the scope of this disclosure. In addition, certain methods or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate. For example, described blocks or states may be performed in an order other than that specifically disclosed, or multiple blocks or states may be combined in a single block or state. The example blocks or states may be performed in serial, in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The example systems and components described herein may be configured differently than described. For example, elements may be added to, removed from, or rearranged compared to the disclosed example embodiments. It will also be appreciated that various items are illustrated as being stored in memory or on storage while being used, and that these items or portions thereof may be transferred between memory and other storage devices for purposes of memory management and data integrity. Alternatively, in other embodiments some or all of the software modules and/or systems may execute in memory on another device and communicate with the illustrated computing systems via inter-computer communication. Furthermore, in some embodiments, some or all of the systems and/or modules may be implemented or provided in other ways, such as at least partially in firmware and/or hardware, including, but not limited to, one or more application-specific integrated circuits (“ASICs”), standard integrated circuits, controllers (e.g., by executing appropriate instructions, and including microcontrollers and/or embedded controllers), field-programmable gate arrays (“FPGAs”), complex programmable logic devices (“CPLDs”), etc. Some or all of the modules, systems, and data structures may also be stored (e.g., as software instructions or structured data) on a computer-readable medium, such as a hard disk, a memory, a network, or a portable media article to be read by an appropriate device or via an appropriate connection. The systems, modules, and data structures may also be transmitted as generated data signals (e.g., as part of a carrier wave or other analog or digital propagated signal) on a variety of computer-readable transmission media, including wireless-based and wired/cable-based media, and may take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames). Such computer program products may also take other forms in other embodiments. Accordingly, the present invention may be practiced with other computer system configurations. Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements, and/or steps. 
Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. While certain example embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions disclosed herein. Thus, nothing in the foregoing description is intended to imply that any particular feature, characteristic, step, module, or block is necessary or indispensable. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions, and changes in the form of the methods and systems described herein may be made without departing from the spirit of the inventions disclosed herein. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of certain of the inventions disclosed herein.
DESCRIPTION OF EMBODIMENTS Example methods, apparatus, and products for processing data through a storage system in a data pipeline in accordance with embodiments of the present disclosure are described with reference to the accompanying drawings, beginning withFIG.1A.FIG.1Aillustrates an example system for data storage, in accordance with some implementations. System100(also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system100may include the same, more, or fewer elements configured in the same or different manner in other implementations. System100includes a number of computing devices164A-B. Computing devices (also referred to as “client devices” herein) may be embodied as, for example, a server in a data center, a workstation, a personal computer, a notebook, or the like. Computing devices164A-B may be coupled for data communications to one or more storage arrays102A-B through a storage area network (‘SAN’)158or a local area network (‘LAN’)160. The SAN158may be implemented with a variety of data communications fabrics, devices, and protocols. For example, the fabrics for SAN158may include Fibre Channel, Ethernet, Infiniband, Serial Attached Small Computer System Interface (‘SAS’), or the like. Data communications protocols for use with SAN158may include Advanced Technology Attachment (‘ATA’), Fibre Channel Protocol, Small Computer System Interface (‘SCSI’), Internet Small Computer System Interface (‘iSCSI’), HyperSCSI, Non-Volatile Memory Express (‘NVMe’) over Fabrics, or the like. It may be noted that SAN158is provided for illustration, rather than limitation. Other data communication couplings may be implemented between computing devices164A-B and storage arrays102A-B. The LAN160may also be implemented with a variety of fabrics, devices, and protocols. For example, the fabrics for LAN160may include Ethernet (802.3), wireless (802.11), or the like. Data communication protocols for use in LAN160may include Transmission Control Protocol (‘TCP’), User Datagram Protocol (‘UDP’), Internet Protocol (‘IP’), HyperText Transfer Protocol (‘HTTP’), Wireless Access Protocol (‘WAP’), Handheld Device Transport Protocol (‘HDTP’), Session Initiation Protocol (‘SIP’), Real Time Protocol (‘RTP’), or the like. Storage arrays102A-B may provide persistent data storage for the computing devices164A-B. Storage array102A may be contained in a chassis (not shown), and storage array102B may be contained in another chassis (not shown), in implementations. Storage arrays102A and102B may include one or more storage array controllers110A-D (also referred to as “controller” herein). A storage array controller110A-D may be embodied as a module of automated computing machinery comprising computer hardware, computer software, or a combination of computer hardware and software. In some implementations, the storage array controllers110A-D may be configured to carry out various storage tasks. Storage tasks may include writing data received from the computing devices164A-B to storage array102A-B, erasing data from storage array102A-B, retrieving data from storage array102A-B and providing data to computing devices164A-B, monitoring and reporting of disk utilization and performance, performing redundancy operations, such as Redundant Array of Independent Drives (‘RAID’) or RAID-like data redundancy operations, compressing data, encrypting data, and so forth.
Storage array controller110A-D may be implemented in a variety of ways, including as a Field Programmable Gate Array (‘FPGA’), a Programmable Logic Chip (‘PLC’), an Application Specific Integrated Circuit (‘ASIC’), System-on-Chip (‘SOC’), or any computing device that includes discrete components such as a processing device, central processing unit, computer memory, or various adapters. Storage array controller110A-D may include, for example, a data communications adapter configured to support communications via the SAN158or LAN160. In some implementations, storage array controller110A-D may be independently coupled to the LAN160. In implementations, storage array controller110A-D may include an I/O controller or the like that couples the storage array controller110A-D for data communications, through a midplane (not shown), to a persistent storage resource170A-B (also referred to as a “storage resource” herein). The persistent storage resource170A-B may include any number of storage drives171A-F (also referred to as “storage devices” herein) and any number of non-volatile Random Access Memory (‘NVRAM’) devices (not shown). In some implementations, the NVRAM devices of a persistent storage resource170A-B may be configured to receive, from the storage array controller110A-D, data to be stored in the storage drives171A-F. In some examples, the data may originate from computing devices164A-B. In some examples, writing data to the NVRAM device may be carried out more quickly than directly writing data to the storage drive171A-F. In implementations, the storage array controller110A-D may be configured to utilize the NVRAM devices as a quickly accessible buffer for data destined to be written to the storage drives171A-F. Latency for write requests using NVRAM devices as a buffer may be improved relative to a system in which a storage array controller110A-D writes data directly to the storage drives171A-F. In some implementations, the NVRAM devices may be implemented with computer memory in the form of high bandwidth, low latency RAM. The NVRAM device is referred to as “non-volatile” because the NVRAM device may receive or include a unique power source that maintains the state of the RAM after main power loss to the NVRAM device. Such a power source may be a battery, one or more capacitors, or the like. In response to a power loss, the NVRAM device may be configured to write the contents of the RAM to a persistent storage, such as the storage drives171A-F. In implementations, storage drive171A-F may refer to any device configured to record data persistently, where “persistently” or “persistent” refers to a device's ability to maintain recorded data after loss of power. In some implementations, storage drive171A-F may correspond to non-disk storage media. For example, the storage drive171A-F may be one or more solid-state drives (‘SSDs’), flash memory based storage, any type of solid-state non-volatile memory, or any other type of non-mechanical storage device. In other implementations, storage drive171A-F may include mechanical or spinning hard disk, such as hard-disk drives (‘HDD’). In some implementations, the storage array controllers110A-D may be configured for offloading device management responsibilities from storage drive171A-F in storage array102A-B. For example, storage array controllers110A-D may manage control information that may describe the state of one or more memory blocks in the storage drives171A-F. 
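To make the NVRAM write-buffering path just described concrete, a minimal Python sketch follows. It is an illustration only: the class, its capacity model, and the flush policy are invented for the example and are not drawn from any particular implementation described herein.

from collections import deque

class NvramBufferedController:
    """Toy model of a controller that stages writes in battery-backed RAM."""
    def __init__(self, nvram_capacity):
        self.nvram = deque()              # stands in for the NVRAM device
        self.nvram_capacity = nvram_capacity
        self.drives = {}                  # stands in for the storage drives

    def write(self, address, data):
        # Acknowledge once the data reaches NVRAM; the drive write happens
        # later, which is why buffered writes see lower latency than a
        # controller that writes directly to the storage drives.
        if len(self.nvram) >= self.nvram_capacity:
            self.flush()                  # make room by destaging to drives
        self.nvram.append((address, data))
        return "ack"

    def flush(self):
        # Destage buffered writes to the slower persistent drives.
        while self.nvram:
            address, data = self.nvram.popleft()
            self.drives[address] = data

    def on_power_loss(self):
        # The NVRAM's own power source keeps it alive long enough to write
        # its contents to persistent storage, as noted above.
        self.flush()

A caller would see "ack" returned as soon as the NVRAM append completes, with durability to the drives provided by flush() either in the background or on power loss.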
The control information may indicate, for example, that a particular memory block has failed and should no longer be written to, that a particular memory block contains boot code for a storage array controller110A-D, the number of program-erase (‘P/E’) cycles that have been performed on a particular memory block, the age of data stored in a particular memory block, the type of data that is stored in a particular memory block, and so forth. In some implementations, the control information may be stored with an associated memory block as metadata. In other implementations, the control information for the storage drives171A-F may be stored in one or more particular memory blocks of the storage drives171A-F that are selected by the storage array controller110A-D. The selected memory blocks may be tagged with an identifier indicating that the selected memory block contains control information. The identifier may be utilized by the storage array controllers110A-D in conjunction with storage drives171A-F to quickly identify the memory blocks that contain control information. For example, the storage controllers110A-D may issue a command to locate memory blocks that contain control information. It may be noted that control information may be so large that parts of the control information may be stored in multiple locations, that the control information may be stored in multiple locations for purposes of redundancy, for example, or that the control information may otherwise be distributed across multiple memory blocks in the storage drive171A-F. In implementations, storage array controllers110A-D may offload device management responsibilities from storage drives171A-F of storage array102A-B by retrieving, from the storage drives171A-F, control information describing the state of one or more memory blocks in the storage drives171A-F. Retrieving the control information from the storage drives171A-F may be carried out, for example, by the storage array controller110A-D querying the storage drives171A-F for the location of control information for a particular storage drive171A-F. The storage drives171A-F may be configured to execute instructions that enable the storage drive171A-F to identify the location of the control information. The instructions may be executed by a controller (not shown) associated with or otherwise located on the storage drive171A-F and may cause the storage drive171A-F to scan a portion of each memory block to identify the memory blocks that store control information for the storage drives171A-F. The storage drives171A-F may respond by sending a response message to the storage array controller110A-D that includes the location of control information for the storage drive171A-F. Responsive to receiving the response message, storage array controllers110A-D may issue a request to read data stored at the address associated with the location of control information for the storage drives171A-F. In other implementations, the storage array controllers110A-D may further offload device management responsibilities from storage drives171A-F by performing, in response to receiving the control information, a storage drive management operation. A storage drive management operation may include, for example, an operation that is typically performed by the storage drive171A-F (e.g., the controller (not shown) associated with a particular storage drive171A-F). 
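The discovery exchange described above (the controller queries the drive, the drive scans its blocks and responds with the tagged locations, and the controller then reads those addresses) can be sketched roughly as follows. The tag value, message shapes, and class names are assumptions made for the example, not details taken from the text.

CONTROL_TAG = b"CTRL"    # assumed identifier marking control-info blocks

class Drive:
    def __init__(self, blocks):
        self.blocks = blocks  # list of bytes objects modeling memory blocks

    def locate_control_info(self):
        # Scan a portion of each memory block and report which are tagged.
        return [i for i, blk in enumerate(self.blocks)
                if blk[:len(CONTROL_TAG)] == CONTROL_TAG]

class ArrayController:
    def offload_device_management(self, drive):
        locations = drive.locate_control_info()   # drive's response message
        # Issue reads at the reported addresses to fetch the control info.
        return [drive.blocks[i][len(CONTROL_TAG):] for i in locations]

drive = Drive([b"CTRLp/e=1500", b"user data", b"CTRLage=90d"])
print(ArrayController().offload_device_management(drive))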
A storage drive management operation may include, for example, ensuring that data is not written to failed memory blocks within the storage drive171A-F, ensuring that data is written to memory blocks within the storage drive171A-F in such a way that adequate wear leveling is achieved, and so forth. In implementations, storage array102A-B may implement two or more storage array controllers110A-D. For example, storage array102A may include storage array controllers110A and storage array controllers110B. At a given instance, a single storage array controller110A-D (e.g., storage array controller110A) of a storage system100may be designated with primary status (also referred to as “primary controller” herein), and other storage array controllers110A-D (e.g., storage array controller110B) may be designated with secondary status (also referred to as “secondary controller” herein). The primary controller may have particular rights, such as permission to alter data in persistent storage resource170A-B (e.g., writing data to persistent storage resource170A-B). At least some of the rights of the primary controller may supersede the rights of the secondary controller. For instance, the secondary controller may not have permission to alter data in persistent storage resource170A-B when the primary controller has that right. The status of storage array controllers110A-D may change. For example, storage array controller110A may be designated with secondary status, and storage array controller110B may be designated with primary status. In some implementations, a primary controller, such as storage array controller110A, may serve as the primary controller for one or more storage arrays102A-B, and a second controller, such as storage array controller110B, may serve as the secondary controller for the one or more storage arrays102A-B. For example, storage array controller110A may be the primary controller for storage array102A and storage array102B, and storage array controller110B may be the secondary controller for storage array102A and102B. In some implementations, storage array controllers110C and110D (also referred to as “storage processing modules”) may have neither primary nor secondary status. Storage array controllers110C and110D, implemented as storage processing modules, may act as a communication interface between the primary and secondary controllers (e.g., storage array controllers110A and110B, respectively) and storage array102B. For example, storage array controller110A of storage array102A may send a write request, via SAN158, to storage array102B. The write request may be received by both storage array controllers110C and110D of storage array102B. Storage array controllers110C and110D facilitate the communication, e.g., send the write request to the appropriate storage drive171A-F. It may be noted that in some implementations storage processing modules may be used to increase the number of storage drives controlled by the primary and secondary controllers. In implementations, storage array controllers110A-D are communicatively coupled, via a midplane (not shown), to one or more storage drives171A-F and to one or more NVRAM devices (not shown) that are included as part of a storage array102A-B. The storage array controllers110A-D may be coupled to the midplane via one or more data communication links and the midplane may be coupled to the storage drives171A-F and the NVRAM devices via one or more data communications links. 
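The primary/secondary rights model above can be illustrated with a small Python sketch. The status strings and failover trigger are invented for the example; a real designation change would of course involve cluster coordination rather than a local function call.

class Controller:
    def __init__(self, name, status):
        self.name, self.status = name, status

    def write(self, store, key, value):
        # Only the controller holding primary status may alter data.
        if self.status != "primary":
            raise PermissionError(f"{self.name} is secondary; writes denied")
        store[key] = value

def failover(primary, secondary):
    # Swap designations, e.g., when the primary becomes unreachable.
    primary.status, secondary.status = "secondary", "primary"

a = Controller("110A", "primary")
b = Controller("110B", "secondary")
store = {}
a.write(store, "k", "v")
failover(a, b)
b.write(store, "k", "v2")     # 110B now holds the write right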
The data communications links described herein are collectively illustrated by data communications links108A-D and may include a Peripheral Component Interconnect Express (‘PCIe’) bus, for example. FIG.1Billustrates an example system for data storage, in accordance with some implementations. Storage array controller101illustrated inFIG.1Bmay be similar to the storage array controllers110A-D described with respect toFIG.1A. In one example, storage array controller101may be similar to storage array controller110A or storage array controller110B. Storage array controller101includes numerous elements for purposes of illustration rather than limitation. It may be noted that storage array controller101may include the same, more, or fewer elements configured in the same or different manner in other implementations. It may be noted that elements ofFIG.1Amay be included below to help illustrate features of storage array controller101. Storage array controller101may include one or more processing devices104and random access memory (‘RAM’)111. Processing device104(or controller101) represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device104(or controller101) may be a complex instruction set computing (‘CISC’) microprocessor, reduced instruction set computing (‘RISC’) microprocessor, very long instruction word (‘VLIW’) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device104(or controller101) may also be one or more special-purpose processing devices such as an application specific integrated circuit (‘ASIC’), a field programmable gate array (‘FPGA’), a digital signal processor (‘DSP’), network processor, or the like. The processing device104may be connected to the RAM111via a data communications link106, which may be embodied as a high speed memory bus such as a Double-Data Rate4(‘DDR4’) bus. Stored in RAM111is an operating system112. In some implementations, instructions113are stored in RAM111. Instructions113may include computer program instructions for performing operations in a direct-mapped flash storage system. In one embodiment, a direct-mapped flash storage system is one that addresses data blocks within flash drives directly and without an address translation performed by the storage controllers of the flash drives. In implementations, storage array controller101includes one or more host bus adapters103A-C that are coupled to the processing device104via a data communications link105A-C. In implementations, host bus adapters103A-C may be computer hardware that connects a host system (e.g., the storage array controller) to other network and storage arrays. In some examples, host bus adapters103A-C may be a Fibre Channel adapter that enables the storage array controller101to connect to a SAN, an Ethernet adapter that enables the storage array controller101to connect to a LAN, or the like. Host bus adapters103A-C may be coupled to the processing device104via a data communications link105A-C such as, for example, a PCIe bus. In implementations, storage array controller101may include a host bus adapter114that is coupled to an expander115. The expander115may be used to attach a host system to a larger number of storage drives. 
The expander115may, for example, be a SAS expander utilized to enable the host bus adapter114to attach to storage drives in an implementation where the host bus adapter114is embodied as a SAS controller. In implementations, storage array controller101may include a switch116coupled to the processing device104via a data communications link109. The switch116may be a computer hardware device that can create multiple endpoints out of a single endpoint, thereby enabling multiple devices to share a single endpoint. The switch116may, for example, be a PCIe switch that is coupled to a PCIe bus (e.g., data communications link109) and presents multiple PCIe connection points to the midplane. In implementations, storage array controller101includes a data communications link107for coupling the storage array controller101to other storage array controllers. In some examples, data communications link107may be a QuickPath Interconnect (QPI) interconnect. A traditional storage system that uses traditional flash drives may implement a process across the flash drives that are part of the traditional storage system. For example, a higher level process of the storage system may initiate and control a process across the flash drives. However, a flash drive of the traditional storage system may include its own storage controller that also performs the process. Thus, for the traditional storage system, a higher level process (e.g., initiated by the storage system) and a lower level process (e.g., initiated by a storage controller of the storage system) may both be performed. To resolve various deficiencies of a traditional storage system, operations may be performed by higher level processes and not by the lower level processes. For example, the flash storage system may include flash drives that do not include storage controllers that provide the process. Thus, the operating system of the flash storage system itself may initiate and control the process. This may be accomplished by a direct-mapped flash storage system that addresses data blocks within the flash drives directly and without an address translation performed by the storage controllers of the flash drives. The operating system of the flash storage system may identify and maintain a list of allocation units across multiple flash drives of the flash storage system. The allocation units may be entire erase blocks or multiple erase blocks. The operating system may maintain a map or address range that directly maps addresses to erase blocks of the flash drives of the flash storage system. Direct mapping to the erase blocks of the flash drives may be used to rewrite data and erase data. For example, the operations may be performed on one or more allocation units that include a first data and a second data where the first data is to be retained and the second data is no longer being used by the flash storage system. The operating system may initiate the process to write the first data to new locations within other allocation units, erase the second data, and mark the allocation units as being available for use for subsequent data. Thus, the process may only be performed by the higher level operating system of the flash storage system without an additional lower level process being performed by controllers of the flash drives. 
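A rough Python sketch of that higher-level rewrite/erase cycle follows, with allocation units modeled as lists of slots. The layout, the "live set" argument, and the free-unit bookkeeping are assumptions made solely to show the shape of the process; they are not the document's design.

class DirectMappedFlash:
    def __init__(self, num_units, unit_size):
        # Each allocation unit stands in for an erase block (or a group).
        self.unit_size = unit_size
        self.units = [[None] * unit_size for _ in range(num_units)]
        self.free_units = set(range(num_units))

    def collect(self, unit_index, live):
        # Copy the retained ("first") data forward to another allocation
        # unit, erase the whole unit, and mark it available for reuse --
        # all driven by the operating system, with no drive-level process.
        survivors = [v for v in self.units[unit_index] if v in live]
        target = self.free_units.pop()            # new home for live data
        self.units[target][:len(survivors)] = survivors
        self.units[unit_index] = [None] * self.unit_size   # block erase
        self.free_units.add(unit_index)

fs = DirectMappedFlash(num_units=4, unit_size=4)
fs.free_units.discard(0)                          # unit 0 already holds data
fs.units[0] = ["keep", "stale", "keep2", "stale2"]
fs.collect(0, live={"keep", "keep2"})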
Advantages of the process being performed only by the operating system of the flash storage system include increased reliability of the flash drives of the flash storage system as unnecessary or redundant write operations are not being performed during the process. One possible point of novelty here is the concept of initiating and controlling the process at the operating system of the flash storage system. In addition, the process can be controlled by the operating system across multiple flash drives. This is in contrast to the process being performed by a storage controller of a flash drive. A storage system can consist of two storage array controllers that share a set of drives for failover purposes, or it could consist of a single storage array controller that provides a storage service that utilizes multiple drives, or it could consist of a distributed network of storage array controllers each with some number of drives or some amount of Flash storage where the storage array controllers in the network collaborate to provide a complete storage service and collaborate on various aspects of a storage service including storage allocation and garbage collection. FIG.1Cillustrates a third example system117for data storage in accordance with some implementations. System117(also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system117may include the same, more, or fewer elements configured in the same or different manner in other implementations. In one embodiment, system117includes a dual Peripheral Component Interconnect (‘PCI’) flash storage device118with separately addressable fast write storage. System117may include a storage controller119. In one embodiment, storage controller119A-D may be a CPU, ASIC, FPGA, or any other circuitry that may implement control structures necessary according to the present disclosure. In one embodiment, system117includes flash memory devices (e.g., including flash memory devices120a-n), operatively coupled to various channels of the storage device controller119. Flash memory devices120a-n, may be presented to the controller119A-D as an addressable collection of Flash pages, erase blocks, and/or control elements sufficient to allow the storage device controller119A-D to program and retrieve various aspects of the Flash. In one embodiment, storage device controller119A-D may perform operations on flash memory devices120a-nincluding storing and retrieving data content of pages, arranging and erasing any blocks, tracking statistics related to the use and reuse of Flash memory pages, erase blocks, and cells, tracking and predicting error codes and faults within the Flash memory, controlling voltage levels associated with programming and retrieving contents of Flash cells, etc. In one embodiment, system117may include RAM121to store separately addressable fast-write data. In one embodiment, RAM121may be one or more separate discrete devices. In another embodiment, RAM121may be integrated into storage device controller119A-D or multiple storage device controllers. The RAM121may be utilized for other purposes as well, such as temporary program memory for a processing device (e.g., a CPU) in the storage device controller119. In one embodiment, system117may include a stored energy device122, such as a rechargeable battery or a capacitor. 
Stored energy device122may store energy sufficient to power the storage device controller119, some amount of the RAM (e.g., RAM121), and some amount of Flash memory (e.g., Flash memory120a-120n) for sufficient time to write the contents of RAM to Flash memory. In one embodiment, storage device controller119A-D may write the contents of RAM to Flash Memory if the storage device controller detects loss of external power. In one embodiment, system117includes two data communications links123a,123b. In one embodiment, data communications links123a,123bmay be PCI interfaces. In another embodiment, data communications links123a,123bmay be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). Data communications links123a,123bmay be based on non-volatile memory express (‘NVMe’) or NVMe over fabrics (‘NVMf’) specifications that allow external connection to the storage device controller119A-D from other components in the storage system117. It should be noted that data communications links may be interchangeably referred to herein as PCI buses for convenience. System117may also include an external power source (not shown), which may be provided over one or both data communications links123a,123b, or which may be provided separately. An alternative embodiment includes a separate Flash memory (not shown) dedicated for use in storing the content of RAM121. The storage device controller119A-D may present a logical device over a PCI bus which may include an addressable fast-write logical device, or a distinct part of the logical address space of the storage device118, which may be presented as PCI memory or as persistent storage. In one embodiment, operations to store into the device are directed into the RAM121. On power failure, the storage device controller119A-D may write stored content associated with the addressable fast-write logical storage to Flash memory (e.g., Flash memory120a-n) for long-term persistent storage. In one embodiment, the logical device may include some presentation of some or all of the content of the Flash memory devices120a-n, where that presentation allows a storage system including a storage device118(e.g., storage system117) to directly address Flash memory pages and directly reprogram erase blocks from storage system components that are external to the storage device through the PCI bus. The presentation may also allow one or more of the external components to control and retrieve other aspects of the Flash memory including some or all of: tracking statistics related to use and reuse of Flash memory pages, erase blocks, and cells across all the Flash memory devices; tracking and predicting error codes and faults within and across the Flash memory devices; controlling voltage levels associated with programming and retrieving contents of Flash cells; etc. In one embodiment, the stored energy device122may be sufficient to ensure completion of in-progress operations to the Flash memory devices120a-120n. The stored energy device122may power storage device controller119A-D and associated Flash memory devices (e.g.,120a-n) for those operations, as well as for the storing of fast-write RAM to Flash memory. Stored energy device122may be used to store accumulated statistics and other parameters kept and tracked by the Flash memory devices120a-nand/or the storage device controller119. 
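The power-loss destage path described above can be sketched as a simple energy-budgeted loop. The per-page energy cost and the error path are invented figures used only to show the shape of the check; nothing here quantifies the actual stored energy device.

JOULES_PER_PAGE_WRITE = 0.002     # assumed cost to flush one RAM page

def on_external_power_loss(ram_pages, stored_energy_joules, flash):
    # Write RAM contents to flash while the stored energy device lasts.
    budget = stored_energy_joules
    for i, page in enumerate(ram_pages):
        if budget < JOULES_PER_PAGE_WRITE:
            raise RuntimeError("energy exhausted before destage finished")
        flash.append((i, page))
        budget -= JOULES_PER_PAGE_WRITE
    return budget                  # energy left after a successful dump

flash = []
remaining = on_external_power_loss([b"page0", b"page1"], 1.0, flash)

Sizing the stored energy so that this loop always completes is exactly why, as the next paragraph notes, the advertised fast-write capacity may shrink as the energy source degrades.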
Separate capacitors or stored energy devices (such as smaller capacitors near or embedded within the Flash memory devices themselves) may be used for some or all of the operations described herein. Various schemes may be used to track and optimize the life span of the stored energy component, such as adjusting voltage levels over time, partially discharging the storage energy device122to measure corresponding discharge characteristics, etc. If the available energy decreases over time, the effective available capacity of the addressable fast-write storage may be decreased to ensure that it can be written safely based on the currently available stored energy. FIG.1Dillustrates a fourth example system124for data storage in accordance with some implementations. In one embodiment, system124includes storage controllers125a,125b. In one embodiment, storage controllers125a,125bare operatively coupled to Dual PCI storage devices119a,119band119c,119d, respectively. Storage controllers125a,125bmay be operatively coupled (e.g., via a storage network130) to some number of host computers127a-n. In one embodiment, two storage controllers (e.g.,125aand125b) provide storage services, such as a SCSI block storage array, a file server, an object server, a database or data analytics service, etc. The storage controllers125a,125bmay provide services through some number of network interfaces (e.g.,126a-d) to host computers127a-noutside of the storage system124. Storage controllers125a,125bmay provide integrated services or an application entirely within the storage system124, forming a converged storage and compute system. The storage controllers125a,125bmay utilize the fast write memory within or across storage devices119a-dto journal in progress operations to ensure the operations are not lost on a power failure, storage controller removal, storage controller or storage system shutdown, or some fault of one or more software or hardware components within the storage system124. In one embodiment, controllers125a,125boperate as PCI masters to one or the other PCI buses128a,128b. In another embodiment,128aand128bmay be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). Other storage system embodiments may operate storage controllers125a,125bas multi-masters for both PCI buses128a,128b. Alternately, a PCI/NVMe/NVMf switching infrastructure or fabric may connect multiple storage controllers. Some storage system embodiments may allow storage devices to communicate with each other directly rather than communicating only with storage controllers. In one embodiment, a storage device controller119amay be operable under direction from a storage controller125ato synthesize and transfer data to be stored into Flash memory devices from data that has been stored in RAM (e.g., RAM121ofFIG.1C). For example, a recalculated version of RAM content may be transferred after a storage controller has determined that an operation has fully committed across the storage system, or when fast-write memory on the device has reached a certain used capacity, or after a certain amount of time, to improve safety of the data or to release addressable fast-write capacity for reuse. This mechanism may be used, for example, to avoid a second transfer over a bus (e.g.,128a,128b) from the storage controllers125a,125b. In one embodiment, a recalculation may include compressing data, attaching indexing or other metadata, combining multiple data segments together, performing erasure code calculations, etc. 
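The destage triggers just described (commit, used-capacity threshold, or age) and the recalculation step can be sketched as follows. The thresholds, the use of compression as the stand-in recalculation, and all names are illustrative assumptions, not the embodiment's actual policy.

import time, zlib

class FastWriteDestager:
    def __init__(self, capacity_bytes, max_age_seconds):
        self.buffer = []                    # (timestamp, payload) entries
        self.capacity = capacity_bytes
        self.max_age = max_age_seconds

    def used(self):
        return sum(len(p) for _, p in self.buffer)

    def should_destage(self):
        if self.used() >= 0.8 * self.capacity:       # capacity trigger
            return True
        return any(time.time() - t > self.max_age    # age trigger
                   for t, _ in self.buffer)

    def destage(self, flash):
        # Recalculate (here: compress) and transfer to flash, which also
        # releases the addressable fast-write capacity for reuse.
        for _, payload in self.buffer:
            flash.append(zlib.compress(payload))
        self.buffer.clear()

d = FastWriteDestager(capacity_bytes=64, max_age_seconds=5.0)
d.buffer.append((time.time(), b"journaled client write"))
flash = []
d.destage(flash)          # in practice gated by should_destage()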
In one embodiment, under direction from a storage controller125a,125b, a storage device controller119a,119bmay be operable to calculate and transfer data to other storage devices from data stored in RAM (e.g., RAM121ofFIG.1C) without involvement of the storage controllers125a,125b. This operation may be used to mirror data stored in one controller125ato another controller125b, or it could be used to offload compression, data aggregation, and/or erasure coding calculations and transfers to storage devices to reduce load on storage controllers or the storage controller interface129a,129bto the PCI bus128a,128b. A storage device controller119A-D may include mechanisms for implementing high availability primitives for use by other parts of a storage system external to the Dual PCI storage device118. For example, reservation or exclusion primitives may be provided so that, in a storage system with two storage controllers providing a highly available storage service, one storage controller may prevent the other storage controller from accessing or continuing to access the storage device. This could be used, for example, in cases where one controller detects that the other controller is not functioning properly or where the interconnect between the two storage controllers may itself not be functioning properly. In one embodiment, a storage system for use with Dual PCI direct mapped storage devices with separately addressable fast write storage includes systems that manage erase blocks or groups of erase blocks as allocation units for storing data on behalf of the storage service, or for storing metadata (e.g., indexes, logs, etc.) associated with the storage service, or for proper management of the storage system itself. Flash pages, which may be a few kilobytes in size, may be written as data arrives or as the storage system is to persist data for long intervals of time (e.g., above a defined threshold of time). To commit data more quickly, or to reduce the number of writes to the Flash memory devices, the storage controllers may first write data into the separately addressable fast write storage on one or more storage devices. In one embodiment, the storage controllers125a,125bmay initiate the use of erase blocks within and across storage devices (e.g.,118) in accordance with an age and expected remaining lifespan of the storage devices, or based on other statistics. The storage controllers125a,125bmay initiate garbage collection and data migration between storage devices in accordance with pages that are no longer needed as well as to manage Flash page and erase block lifespans and to manage overall system performance. In one embodiment, the storage system124may utilize mirroring and/or erasure coding schemes as part of storing data into addressable fast write storage and/or as part of writing data into allocation units associated with erase blocks. Erasure codes may be used across storage devices, as well as within erase blocks or allocation units, or within and across Flash memory devices on a single storage device, to provide redundancy against single or multiple storage device failures or to protect against internal corruptions of Flash memory pages resulting from Flash memory operations or from degradation of Flash memory cells. Mirroring and erasure coding at various levels may be used to recover from multiple types of failures that occur separately or in combination. 
The embodiments depicted with reference toFIGS.2A-Gillustrate a storage cluster that stores user data, such as user data originating from one or more user or client systems or other sources external to the storage cluster. The storage cluster distributes user data across storage nodes housed within a chassis, or across multiple chassis, using erasure coding and redundant copies of metadata. Erasure coding refers to a method of data protection or reconstruction in which data is stored across a set of different locations, such as disks, storage nodes or geographic locations. Flash memory is one type of solid-state memory that may be integrated with the embodiments, although the embodiments may be extended to other types of solid-state memory or other storage medium, including non-solid state memory. Control of storage locations and workloads is distributed across the storage locations in a clustered peer-to-peer system. Tasks such as mediating communications between the various storage nodes, detecting when a storage node has become unavailable, and balancing I/Os (inputs and outputs) across the various storage nodes, are all handled on a distributed basis. Data is laid out or distributed across multiple storage nodes in data fragments or stripes that support data recovery in some embodiments. Ownership of data can be reassigned within a cluster, independent of input and output patterns. This architecture, described in more detail below, allows a storage node in the cluster to fail, with the system remaining operational, since the data can be reconstructed from other storage nodes and thus remain available for input and output operations. In various embodiments, a storage node may be referred to as a cluster node, a blade, or a server. The storage cluster may be contained within a chassis, i.e., an enclosure housing one or more storage nodes. A mechanism to provide power to each storage node, such as a power distribution bus, and a communication mechanism, such as a communication bus that enables communication between the storage nodes, are included within the chassis. The storage cluster can run as an independent system in one location according to some embodiments. In one embodiment, a chassis contains at least two instances of both the power distribution and the communication bus which may be enabled or disabled independently. The internal communication bus may be an Ethernet bus; however, other technologies such as PCIe, InfiniBand, and others are equally suitable. The chassis provides a port for an external communication bus for enabling communication between multiple chassis, directly or through a switch, and with client systems. The external communication may use a technology such as Ethernet, InfiniBand, Fibre Channel, etc. In some embodiments, the external communication bus uses different communication bus technologies for inter-chassis and client communication. If a switch is deployed within or between chassis, the switch may act as a translation between multiple protocols or technologies. When multiple chassis are connected to define a storage cluster, the storage cluster may be accessed by a client using either proprietary interfaces or standard interfaces such as network file system (‘NFS’), common internet file system (‘CIFS’), small computer system interface (‘SCSI’) or hypertext transfer protocol (‘HTTP’). Translation from the client protocol may occur at the switch, chassis external communication bus or within each storage node. 
In some embodiments, multiple chassis may be coupled or connected to each other through an aggregator switch. A portion and/or all of the coupled or connected chassis may be designated as a storage cluster. As discussed above, each chassis can have multiple blades; each blade has a media access control (‘MAC’) address, but the storage cluster is presented to an external network as having a single cluster IP address and a single MAC address in some embodiments. Each storage node may be one or more storage servers and each storage server is connected to one or more non-volatile solid state memory units, which may be referred to as storage units or storage devices. One embodiment includes a single storage server in each storage node and between one and eight non-volatile solid state memory units; however, this one example is not meant to be limiting. The storage server may include a processor, DRAM and interfaces for the internal communication bus and power distribution for each of the power buses. Inside the storage node, the interfaces and storage unit share a communication bus, e.g., PCI Express, in some embodiments. The non-volatile solid state memory units may directly access the internal communication bus interface through a storage node communication bus, or request the storage node to access the bus interface. The non-volatile solid state memory unit contains an embedded CPU, solid state storage controller, and a quantity of solid state mass storage, e.g., between 2-32 terabytes (‘TB’) in some embodiments. An embedded volatile storage medium, such as DRAM, and an energy reserve apparatus are included in the non-volatile solid state memory unit. In some embodiments, the energy reserve apparatus is a capacitor, super-capacitor, or battery that enables transferring a subset of DRAM contents to a stable storage medium in the case of power loss. In some embodiments, the non-volatile solid state memory unit is constructed with a storage class memory, such as phase change or magnetoresistive random access memory (‘MRAM’) that substitutes for DRAM and enables a reduced power hold-up apparatus. One of many features of the storage nodes and non-volatile solid state storage is the ability to proactively rebuild data in a storage cluster. The storage nodes and non-volatile solid state storage can determine when a storage node or non-volatile solid state storage in the storage cluster is unreachable, independent of whether there is an attempt to read data involving that storage node or non-volatile solid state storage. The storage nodes and non-volatile solid state storage then cooperate to recover and rebuild the data in at least partially new locations. This constitutes a proactive rebuild, in that the system rebuilds data without waiting until the data is needed for a read access initiated from a client system employing the storage cluster. These and further details of the storage memory and operation thereof are discussed below. FIG.2Ais a perspective view of a storage cluster161, with multiple storage nodes150and internal solid-state memory coupled to each storage node to provide network attached storage or storage area network, in accordance with some embodiments. A network attached storage, storage area network, or a storage cluster, or other storage memory, could include one or more storage clusters161, each having one or more storage nodes150, in a flexible and reconfigurable arrangement of both the physical components and the amount of storage memory provided thereby. 
The storage cluster161is designed to fit in a rack, and one or more racks can be set up and populated as desired for the storage memory. The storage cluster161has a chassis138having multiple slots142. It should be appreciated that chassis138may be referred to as a housing, enclosure, or rack unit. In one embodiment, the chassis138has fourteen slots142, although other numbers of slots are readily devised. For example, some embodiments have four slots, eight slots, sixteen slots, thirty-two slots, or other suitable number of slots. Each slot142can accommodate one storage node150in some embodiments. Chassis138includes flaps148that can be utilized to mount the chassis138on a rack. Fans144provide air circulation for cooling of the storage nodes150and components thereof, although other cooling components could be used, or an embodiment could be devised without cooling components. A switch fabric146couples storage nodes150within chassis138together and to a network for communication to the memory. In the embodiment depicted herein, the slots142to the left of the switch fabric146and fans144are shown occupied by storage nodes150, while the slots142to the right of the switch fabric146and fans144are empty and available for insertion of storage node150for illustrative purposes. This configuration is one example, and one or more storage nodes150could occupy the slots142in various further arrangements. The storage node arrangements need not be sequential or adjacent in some embodiments. Storage nodes150are hot pluggable, meaning that a storage node150can be inserted into a slot142in the chassis138, or removed from a slot142, without stopping or powering down the system. Upon insertion or removal of storage node150from slot142, the system automatically reconfigures in order to recognize and adapt to the change. Reconfiguration, in some embodiments, includes restoring redundancy and/or rebalancing data or load. Each storage node150can have multiple components. In the embodiment shown here, the storage node150includes a printed circuit board159populated by a CPU156, i.e., processor, a memory154coupled to the CPU156, and a non-volatile solid state storage152coupled to the CPU156, although other mountings and/or components could be used in further embodiments. The memory154has instructions which are executed by the CPU156and/or data operated on by the CPU156. As further explained below, the non-volatile solid state storage152includes flash or, in further embodiments, other types of solid-state memory. Referring toFIG.2A, storage cluster161is scalable, meaning that storage capacity with non-uniform storage sizes is readily added, as described above. One or more storage nodes150can be plugged into or removed from each chassis and the storage cluster self-configures in some embodiments. Plug-in storage nodes150, whether installed in a chassis as delivered or later added, can have different sizes. For example, in one embodiment a storage node150can have any multiple of 4 TB, e.g., 8 TB, 12 TB, 16 TB, 32 TB, etc. In further embodiments, a storage node150could have any multiple of other storage amounts or capacities. Storage capacity of each storage node150is broadcast, and influences decisions of how to stripe the data. For maximum storage efficiency, an embodiment can self-configure as wide as possible in the stripe, subject to a predetermined requirement of continued operation with loss of up to one, or up to two, non-volatile solid state storage units152or storage nodes150within the chassis. 
FIG.2Bis a block diagram showing a communications interconnect173and power distribution bus172coupling multiple storage nodes150. Referring back toFIG.2A, the communications interconnect173can be included in or implemented with the switch fabric146in some embodiments. Where multiple storage clusters161occupy a rack, the communications interconnect173can be included in or implemented with a top of rack switch, in some embodiments. As illustrated inFIG.2B, storage cluster161is enclosed within a single chassis138. External port176is coupled to storage nodes150through communications interconnect173, while external port174is coupled directly to a storage node. External power port178is coupled to power distribution bus172. Storage nodes150may include varying amounts and differing capacities of non-volatile solid state storage152as described with reference toFIG.2A. In addition, one or more storage nodes150may be a compute only storage node as illustrated inFIG.2B. Authorities168are implemented on the non-volatile solid state storages152, for example as lists or other data structures stored in memory. In some embodiments the authorities are stored within the non-volatile solid state storage152and supported by software executing on a controller or other processor of the non-volatile solid state storage152. In a further embodiment, authorities168are implemented on the storage nodes150, for example as lists or other data structures stored in the memory154and supported by software executing on the CPU156of the storage node150. Authorities168control how and where data is stored in the non-volatile solid state storages152in some embodiments. This control assists in determining which type of erasure coding scheme is applied to the data, and which storage nodes150have which portions of the data. Each authority168may be assigned to a non-volatile solid state storage152. Each authority may control a range of inode numbers, segment numbers, or other data identifiers which are assigned to data by a file system, by the storage nodes150, or by the non-volatile solid state storage152, in various embodiments. Every piece of data, and every piece of metadata, has redundancy in the system in some embodiments. In addition, every piece of data and every piece of metadata has an owner, which may be referred to as an authority. If that authority is unreachable, for example through failure of a storage node, there is a plan of succession for how to find that data or that metadata. In various embodiments, there are redundant copies of authorities168. Authorities168have a relationship to storage nodes150and non-volatile solid state storage152in some embodiments. Each authority168, covering a range of data segment numbers or other identifiers of the data, may be assigned to a specific non-volatile solid state storage152. In some embodiments the authorities168for all of such ranges are distributed over the non-volatile solid state storages152of a storage cluster. Each storage node150has a network port that provides access to the non-volatile solid state storage(s)152of that storage node150. Data can be stored in a segment, which is associated with a segment number and that segment number is an indirection for a configuration of a RAID (redundant array of independent disks) stripe in some embodiments. The assignment and use of the authorities168thus establishes an indirection to data. 
Indirection may be referred to as the ability to reference data indirectly, in this case via an authority168, in accordance with some embodiments. A segment identifies a set of non-volatile solid state storage152and a local identifier into the set of non-volatile solid state storage152that may contain data. In some embodiments, the local identifier is an offset into the device and may be reused sequentially by multiple segments. In other embodiments the local identifier is unique for a specific segment and never reused. The offsets in the non-volatile solid state storage152are applied to locating data for writing to or reading from the non-volatile solid state storage152(in the form of a RAID stripe). Data is striped across multiple units of non-volatile solid state storage152, which may include or be different from the non-volatile solid state storage152having the authority168for a particular data segment. If there is a change in where a particular segment of data is located, e.g., during a data move or a data reconstruction, the authority168for that data segment should be consulted, at that non-volatile solid state storage152or storage node150having that authority168. In order to locate a particular piece of data, embodiments calculate a hash value for a data segment or apply an inode number or a data segment number. The output of this operation points to a non-volatile solid state storage152having the authority168for that particular piece of data. In some embodiments there are two stages to this operation. The first stage maps an entity identifier (ID), e.g., a segment number, inode number, or directory number to an authority identifier. This mapping may include a calculation such as a hash or a bit mask. The second stage is mapping the authority identifier to a particular non-volatile solid state storage152, which may be done through an explicit mapping. The operation is repeatable, so that when the calculation is performed, the result of the calculation repeatably and reliably points to a particular non-volatile solid state storage152having that authority168. The operation may include the set of reachable storage nodes as input. If the set of reachable non-volatile solid state storage units changes, the optimal set changes. In some embodiments, the persisted value is the current assignment (which is always true) and the calculated value is the target assignment the cluster will attempt to reconfigure towards. This calculation may be used to determine the optimal non-volatile solid state storage152for an authority in the presence of a set of non-volatile solid state storage152that are reachable and constitute the same cluster. The calculation also determines an ordered set of peer non-volatile solid state storage152that will also record the authority to non-volatile solid state storage mapping so that the authority may be determined even if the assigned non-volatile solid state storage is unreachable. A duplicate or substitute authority168may be consulted if a specific authority168is unavailable in some embodiments. With reference toFIGS.2A and2B, two of the many tasks of the CPU156on a storage node150are to break up write data, and reassemble read data. When the system has determined that data is to be written, the authority168for that data is located as above. When the segment ID for data is already determined, the request to write is forwarded to the non-volatile solid state storage152currently determined to be the host of the authority168determined from the segment. 
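The two-stage lookup described above (entity ID to authority ID via a hash or bit mask, then authority ID to a particular storage unit via an explicit, reconfigurable map) can be sketched in a few lines of Python. The authority count, the use of SHA-256, and the unit-naming scheme are assumptions for the example only.

import hashlib

NUM_AUTHORITIES = 256                      # assumed fixed set of authorities

def authority_for(entity_id: str) -> int:
    # Stage 1: repeatable mapping from an entity ID to an authority ID;
    # taking one digest byte acts as the bit mask to the range 0..255.
    digest = hashlib.sha256(entity_id.encode()).digest()
    return digest[0]

# Stage 2: explicit assignment of authorities to storage units; this is
# the table the cluster reconfigures as membership changes.
authority_to_storage_unit = {a: f"nvss-{a % 8}" for a in range(NUM_AUTHORITIES)}

def locate(entity_id: str) -> str:
    return authority_to_storage_unit[authority_for(entity_id)]

print(locate("inode:42"))    # the same input always lands on the same unit

Because stage 1 is a pure function and stage 2 is shared state, every node performing this calculation reliably arrives at the same storage unit for a given piece of data.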
The host CPU156of the storage node150, on which the non-volatile solid state storage152and corresponding authority168reside, then breaks up or shards the data and transmits the data out to various non-volatile solid state storage152. The transmitted data is written as a data stripe in accordance with an erasure coding scheme. In some embodiments, data is requested to be pulled, and in other embodiments, data is pushed. In reverse, when data is read, the authority168for the segment ID containing the data is located as described above. The host CPU156of the storage node150on which the non-volatile solid state storage152and corresponding authority168reside requests the data from the non-volatile solid state storage and corresponding storage nodes pointed to by the authority. In some embodiments the data is read from flash storage as a data stripe. The host CPU156of storage node150then reassembles the read data, correcting any errors (if present) according to the appropriate erasure coding scheme, and forwards the reassembled data to the network. In further embodiments, some or all of these tasks can be handled in the non-volatile solid state storage152. In some embodiments, the segment host requests the data be sent to storage node150by requesting pages from storage and then sending the data to the storage node making the original request. In some systems, for example in UNIX-style file systems, data is handled with an index node or inode, which specifies a data structure that represents an object in a file system. The object could be a file or a directory, for example. Metadata may accompany the object, as attributes such as permission data and a creation timestamp, among other attributes. A segment number could be assigned to all or a portion of such an object in a file system. In other systems, data segments are handled with a segment number assigned elsewhere. For purposes of discussion, the unit of distribution is an entity, and an entity can be a file, a directory or a segment. That is, entities are units of data or metadata stored by a storage system. Entities are grouped into sets called authorities. Each authority has an authority owner, which is a storage node that has the exclusive right to update the entities in the authority. In other words, a storage node contains the authority, and the authority, in turn, contains entities. A segment is a logical container of data in accordance with some embodiments. A segment is an address space between medium address space and physical flash locations, i.e., data segment numbers are in this address space. Segments may also contain meta-data, which enables data redundancy to be restored (rewritten to different flash locations or devices) without the involvement of higher level software. In one embodiment, an internal format of a segment contains client data and medium mappings to determine the position of that data. Each data segment is protected, e.g., from memory and other failures, by breaking the segment into a number of data and parity shards, where applicable. The data and parity shards are distributed, i.e., striped, across non-volatile solid state storage152coupled to the host CPUs156(SeeFIGS.2E and2G) in accordance with an erasure coding scheme. Usage of the term segments refers to the container and its place in the address space of segments in some embodiments. 
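The break-up/reassemble duties described above can be illustrated with a toy stripe that uses single-parity XOR in place of a real erasure code; XOR parity here is a deliberate stand-in, not the scheme the embodiments use, and the shard count and padding are assumptions made to keep the sketch short.

def xor_bytes(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def shard(data, n):
    # Break data into n equal data shards plus one XOR parity shard.
    size = -(-len(data) // n)                 # ceiling division
    data = data.ljust(size * n, b"\0")
    shards = [data[i * size:(i + 1) * size] for i in range(n)]
    shards.append(xor_bytes(shards))          # parity shard
    return shards

def reassemble(shards, lost_index=None):
    # Rebuild the stripe, recovering one missing shard from parity.
    if lost_index is not None:
        present = [s for i, s in enumerate(shards) if i != lost_index]
        shards[lost_index] = xor_bytes(present)
    return b"".join(shards[:-1]).rstrip(b"\0")   # rstrip only for the demo

stripe = shard(b"hello storage cluster", 4)
stripe[2] = None                              # one storage unit unreachable
print(reassemble(stripe, lost_index=2))       # b'hello storage cluster'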
Usage of the term stripe refers to the same set of shards as a segment and includes how the shards are distributed along with redundancy or parity information in accordance with some embodiments. A series of address-space transformations takes place across an entire storage system. At the top are the directory entries (file names) which link to an inode. Inodes point into medium address space, where data is logically stored. Medium addresses may be mapped through a series of indirect mediums to spread the load of large files, or implement data services like deduplication or snapshots. Segment addresses are then translated into physical flash locations. Physical flash locations have an address range bounded by the amount of flash in the system in accordance with some embodiments. Medium addresses and segment addresses are logical containers, and in some embodiments use a 128 bit or larger identifier so as to be practically infinite, with a likelihood of reuse calculated as longer than the expected life of the system. Addresses from logical containers are allocated in a hierarchical fashion in some embodiments. Initially, each non-volatile solid state storage unit152may be assigned a range of address space. Within this assigned range, the non-volatile solid state storage152is able to allocate addresses without synchronization with other non-volatile solid state storage152. Data and metadata are stored by a set of underlying storage layouts that are optimized for varying workload patterns and storage devices. These layouts incorporate multiple redundancy schemes, compression formats and index algorithms. Some of these layouts store information about authorities and authority masters, while others store file metadata and file data. The redundancy schemes include error correction codes that tolerate corrupted bits within a single storage device (such as a NAND flash chip), erasure codes that tolerate the failure of multiple storage nodes, and replication schemes that tolerate data center or regional failures. In some embodiments, low density parity check (‘LDPC’) code is used within a single storage unit. Reed-Solomon encoding is used within a storage cluster, and mirroring is used within a storage grid in some embodiments. Metadata may be stored using an ordered log structured index (such as a Log Structured Merge Tree), and large data may not be stored in a log structured layout. In order to maintain consistency across multiple copies of an entity, the storage nodes agree implicitly on two things through calculations: (1) the authority that contains the entity, and (2) the storage node that contains the authority. The assignment of entities to authorities can be done by pseudo randomly assigning entities to authorities, by splitting entities into ranges based upon an externally produced key, or by placing a single entity into each authority. Examples of pseudorandom schemes are linear hashing and the Replication Under Scalable Hashing (‘RUSH’) family of hashes, including Controlled Replication Under Scalable Hashing (‘CRUSH’). In some embodiments, pseudo-random assignment is utilized only for assigning authorities to nodes because the set of nodes can change. The set of authorities cannot change so any subjective function may be applied in these embodiments. Some placement schemes automatically place authorities on storage nodes, while other placement schemes rely on an explicit mapping of authorities to storage nodes. 
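The hierarchical, synchronization-free allocation mentioned above can be sketched by carving a very large logical address space into per-unit ranges. The 128-bit width comes from the text; the even range-carving scheme and never-reuse policy are assumptions for the example.

SPACE_BITS = 128

class StorageUnitAllocator:
    def __init__(self, unit_index, num_units):
        # Carve the 128-bit space into disjoint per-unit ranges up front.
        span = (1 << SPACE_BITS) // num_units
        self.next = unit_index * span          # start of this unit's range
        self.limit = self.next + span

    def allocate(self):
        # Hand out a never-reused logical address from the local range,
        # with no coordination with any other storage unit.
        if self.next >= self.limit:
            raise RuntimeError("local range exhausted")
        addr, self.next = self.next, self.next + 1
        return addr

u0, u1 = StorageUnitAllocator(0, 4), StorageUnitAllocator(1, 4)
print(hex(u0.allocate()), hex(u1.allocate()))   # disjoint, no coordination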
In some embodiments, a pseudorandom scheme is utilized to map from each authority to a set of candidate authority owners. A pseudorandom data distribution function related to CRUSH may assign authorities to storage nodes and create a list of where the authorities are assigned. Each storage node has a copy of the pseudorandom data distribution function, and can arrive at the same calculation for distributing, and later finding or locating an authority. Each of the pseudorandom schemes requires the reachable set of storage nodes as input in some embodiments in order to conclude the same target nodes. Once an entity has been placed in an authority, the entity may be stored on physical devices so that no expected failure will lead to unexpected data loss. In some embodiments, rebalancing algorithms attempt to store the copies of all entities within an authority in the same layout and on the same set of machines. Examples of expected failures include device failures, stolen machines, datacenter fires, and regional disasters, such as nuclear or geological events. Different failures lead to different levels of acceptable data loss. In some embodiments, a stolen storage node impacts neither the security nor the reliability of the system, while depending on system configuration, a regional event could lead to no loss of data, a few seconds or minutes of lost updates, or even complete data loss. In the embodiments, the placement of data for storage redundancy is independent of the placement of authorities for data consistency. In some embodiments, storage nodes that contain authorities do not contain any persistent storage. Instead, the storage nodes are connected to non-volatile solid state storage units that do not contain authorities. The communications interconnect between storage nodes and non-volatile solid state storage units consists of multiple communication technologies and has non-uniform performance and fault tolerance characteristics. In some embodiments, as mentioned above, non-volatile solid state storage units are connected to storage nodes via PCI express, storage nodes are connected together within a single chassis using Ethernet backplane, and chassis are connected together to form a storage cluster. Storage clusters are connected to clients using Ethernet or fiber channel in some embodiments. If multiple storage clusters are configured into a storage grid, the multiple storage clusters are connected using the Internet or other long-distance networking links, such as a “metro scale” link or private link that does not traverse the internet. Authority owners have the exclusive right to modify entities, to migrate entities from one non-volatile solid state storage unit to another non-volatile solid state storage unit, and to add and remove copies of entities. This allows for maintaining the redundancy of the underlying data. When an authority owner fails, is going to be decommissioned, or is overloaded, the authority is transferred to a new storage node. Transient failures make it non-trivial to ensure that all non-faulty machines agree upon the new authority location. The ambiguity that arises due to transient failures can be resolved automatically by a consensus protocol such as Paxos, hot-warm failover schemes, via manual intervention by a remote system administrator, or by a local hardware administrator (such as by physically removing the failed machine from the cluster, or pressing a button on the failed machine). 
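A pseudorandom placement function in the spirit of the CRUSH-related scheme described at the start of this passage can be sketched with rendezvous (highest-random-weight) hashing. Rendezvous hashing is a stand-in chosen for brevity, not the embodiments' exact algorithm; the copy count and node names are likewise invented.

import hashlib

def score(authority_id: int, node: str) -> int:
    h = hashlib.sha256(f"{authority_id}:{node}".encode()).digest()
    return int.from_bytes(h[:8], "big")

def candidate_owners(authority_id: int, reachable_nodes, copies=3):
    # Every storage node holds this same function, so given the same
    # reachable set as input, every node concludes the same target nodes.
    ranked = sorted(reachable_nodes,
                    key=lambda n: score(authority_id, n), reverse=True)
    return ranked[:copies]

nodes = ["node-a", "node-b", "node-c", "node-d"]
print(candidate_owners(17, nodes))
# Removing a node from the reachable set only disturbs the authorities
# that node owned; the rest keep their assignments:
print(candidate_owners(17, [n for n in nodes if n != "node-b"]))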
In some embodiments, a consensus protocol is used, and failover is automatic. If too many failures or replication events occur in too short a time period, the system goes into a self-preservation mode and halts replication and data movement activities until an administrator intervenes in accordance with some embodiments.

As authorities are transferred between storage nodes and authority owners update entities in their authorities, the system transfers messages between the storage nodes and non-volatile solid state storage units. With regard to persistent messages, messages that have different purposes are of different types. Depending on the type of the message, the system maintains different ordering and durability guarantees. As the persistent messages are being processed, the messages are temporarily stored in multiple durable and non-durable storage hardware technologies. In some embodiments, messages are stored in RAM, NVRAM and on NAND flash devices, and a variety of protocols are used in order to make efficient use of each storage medium. Latency-sensitive client requests may be persisted in replicated NVRAM, and then later in NAND, while background rebalancing operations are persisted directly to NAND. Persistent messages are persistently stored prior to being transmitted. This allows the system to continue to serve client requests despite failures and component replacement.

Although many hardware components contain unique identifiers that are visible to system administrators, manufacturers, the hardware supply chain, and ongoing quality-control monitoring infrastructure, applications running on top of the infrastructure address virtualized addresses. These virtualized addresses do not change over the lifetime of the storage system, regardless of component failures and replacements. This allows each component of the storage system to be replaced over time without reconfiguration or disruptions of client request processing, i.e., the system supports non-disruptive upgrades. In some embodiments, the virtualized addresses are stored with sufficient redundancy. A continuous monitoring system correlates hardware and software status and the hardware identifiers. This allows detection and prediction of failures due to faulty components and manufacturing details. The monitoring system also enables the proactive transfer of authorities and entities away from impacted devices before failure occurs by removing the component from the critical path in some embodiments.

FIG.2Cis a multiple level block diagram, showing contents of a storage node150and contents of a non-volatile solid state storage152of the storage node150. Data is communicated to and from the storage node150by a network interface controller (‘NIC’)202in some embodiments. Each storage node150has a CPU156, and one or more non-volatile solid state storage152, as discussed above. Moving down one level inFIG.2C, each non-volatile solid state storage152has a relatively fast non-volatile solid state memory, such as nonvolatile random access memory (‘NVRAM’)204, and flash memory206. In some embodiments, NVRAM204may be a component that does not require program/erase cycles (DRAM, MRAM, PCM), and can be a memory that can support being written vastly more often than the memory is read from. Moving down another level inFIG.2C, the NVRAM204is implemented in one embodiment as high speed volatile memory, such as dynamic random access memory (DRAM)216, backed up by energy reserve218.
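The routing of persistent messages to media with different durability guarantees, described above, might be sketched as follows. The message types, tier names, and routing table are assumptions for illustration only.

# Hypothetical routing of messages to storage media by durability need.
TIERS = {
    "client_write": ["replicated_nvram", "nand"],  # latency-sensitive: NVRAM first, NAND later
    "rebalance":    ["nand"],                      # background work: straight to NAND
    "heartbeat":    ["ram"],                       # transient: RAM only
}

def persist(message_type, payload, stores):
    """Record the payload on every tier its message type requires, in order."""
    for tier in TIERS[message_type]:
        stores[tier].append(payload)

stores = {"ram": [], "replicated_nvram": [], "nand": []}
persist("client_write", b"update-1", stores)
persist("rebalance", b"move-shard-7", stores)
print({tier: len(msgs) for tier, msgs in stores.items()})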
Energy reserve218provides sufficient electrical power to keep the DRAM216powered long enough for contents to be transferred to the flash memory206in the event of power failure. In some embodiments, energy reserve218is a capacitor, super-capacitor, battery, or other device, that supplies energy sufficient to enable the transfer of the contents of DRAM216to a stable storage medium in the case of power loss. The flash memory206is implemented as multiple flash dies222, which may be referred to as packages of flash dies222or an array of flash dies222. It should be appreciated that the flash dies222could be packaged in any number of ways, with a single die per package, multiple dies per package (i.e., multichip packages), in hybrid packages, as bare dies on a printed circuit board or other substrate, as encapsulated dies, etc. In the embodiment shown, the non-volatile solid state storage152has a controller212or other processor, and an input output (I/O) port210coupled to the controller212. I/O port210is coupled to the CPU156and/or the network interface controller202of the flash storage node150. Flash input output (I/O) port220is coupled to the flash dies222, and a direct memory access unit (DMA)214is coupled to the controller212, the DRAM216and the flash dies222. In the embodiment shown, the I/O port210, controller212, DMA unit214and flash I/O port220are implemented on a programmable logic device (‘PLD’)208, e.g., a field programmable gate array (FPGA). In this embodiment, each flash die222has pages, organized as sixteen kB (kilobyte) pages224, and a register226through which data can be written to or read from the flash die222. In further embodiments, other types of solid-state memory are used in place of, or in addition to, flash memory illustrated within flash die222.

Storage clusters161, in various embodiments as disclosed herein, can be contrasted with storage arrays in general. The storage nodes150are part of a collection that creates the storage cluster161. Each storage node150owns a slice of data and computing required to provide the data. Multiple storage nodes150cooperate to store and retrieve the data. Storage memory or storage devices, as used in storage arrays in general, are less involved with processing and manipulating the data. Storage memory or storage devices in a storage array receive commands to read, write, or erase data. The storage memory or storage devices in a storage array are not aware of a larger system in which they are embedded, or what the data means. Storage memory or storage devices in storage arrays can include various types of storage memory, such as RAM, solid state drives, hard disk drives, etc. The storage units152described herein have multiple interfaces active simultaneously and serving multiple purposes. In some embodiments, some of the functionality of a storage node150is shifted into a storage unit152, transforming the storage unit152into a combination of storage unit152and storage node150. Placing computing (relative to storage data) into the storage unit152places this computing closer to the data itself. The various system embodiments have a hierarchy of storage node layers with different capabilities. By contrast, in a storage array, a controller owns and knows everything about all of the data that the controller manages in a shelf or storage devices.
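Returning to the energy reserve218described at the start of this passage, the power-fail handling can be sketched roughly as follows; the class, its methods, and the trigger mechanism are hypothetical abstractions, not the disclosed hardware.

# Hypothetical sketch of the power-fail dump described above: the energy
# reserve holds the DRAM up just long enough to copy its contents to flash,
# and the next power-on restores them.
class StorageUnit:
    def __init__(self):
        self.dram = {}    # fast volatile working copy (the NVRAM region)
        self.flash = {}   # stable storage

    def on_power_fail(self):
        # Runs on reserve energy: transfer the DRAM contents to flash.
        self.flash["nvram_image"] = dict(self.dram)

    def on_power_on(self):
        # Recover the NVRAM contents saved at the last power failure.
        self.dram = dict(self.flash.get("nvram_image", {}))

unit = StorageUnit()
unit.dram["pending-write"] = b"data"
unit.on_power_fail()
unit.dram.clear()   # primary power is gone
unit.on_power_on()
print(unit.dram)    # {'pending-write': b'data'}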
In a storage cluster161, as described herein, multiple controllers in multiple storage units152and/or storage nodes150cooperate in various ways (e.g., for erasure coding, data sharding, metadata communication and redundancy, storage capacity expansion or contraction, data recovery, and so on).

FIG.2Dshows a storage server environment, which uses embodiments of the storage nodes150and storage units152ofFIGS.2A-C. In this version, each storage unit152has a processor such as controller212(seeFIG.2C), an FPGA (field programmable gate array), flash memory206, and NVRAM204(which is super-capacitor backed DRAM216, seeFIGS.2B and2C) on a PCIe (peripheral component interconnect express) board in a chassis138(seeFIG.2A). The storage unit152may be implemented as a single board containing storage, and may be the largest tolerable failure domain inside the chassis. In some embodiments, up to two storage units152may fail and the device will continue with no data loss.

The physical storage is divided into named regions based on application usage in some embodiments. The NVRAM204is a contiguous block of reserved memory in the storage unit152DRAM216, and is backed by NAND flash. NVRAM204is logically divided into multiple memory regions written as spools (e.g., a spool region). Space within the NVRAM204spools is managed by each authority168independently. Each device provides an amount of storage space to each authority168. That authority168further manages lifetimes and allocations within that space. Examples of a spool include distributed transactions or notions. When the primary power to a storage unit152fails, onboard super-capacitors provide a short duration of power holdup. During this holdup interval, the contents of the NVRAM204are flushed to flash memory206. On the next power-on, the contents of the NVRAM204are recovered from the flash memory206.

As for the storage unit controller, the responsibility of the logical “controller” is distributed across each of the blades containing authorities168. This distribution of logical control is shown inFIG.2Das a host controller242, mid-tier controller244and storage unit controller(s)246. Management of the control plane and the storage plane are treated independently, although parts may be physically co-located on the same blade. Each authority168effectively serves as an independent controller. Each authority168provides its own data and metadata structures, its own background workers, and maintains its own lifecycle.

FIG.2Eis a blade252hardware block diagram, showing a control plane254, compute and storage planes256,258, and authorities168interacting with underlying physical resources, using embodiments of the storage nodes150and storage units152ofFIGS.2A-Cin the storage server environment ofFIG.2D. The control plane254is partitioned into a number of authorities168which can use the compute resources in the compute plane256to run on any of the blades252. The storage plane258is partitioned into a set of devices, each of which provides access to flash206and NVRAM204resources. In one embodiment, the compute plane256may perform the operations of a storage array controller, as described herein, on one or more devices of the storage plane258(e.g., a storage array).

In the compute and storage planes256,258ofFIG.2E, the authorities168interact with the underlying physical resources (i.e., devices). From the point of view of an authority168, its resources are striped over all of the physical devices.
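The per-authority management of NVRAM spool space described above might look like the following sketch; the quota value, names, and error handling are assumptions for illustration.

# Hypothetical per-authority spool accounting: a device grants each
# authority a slice of NVRAM spool space, and the authority manages
# lifetimes and allocations within that slice on its own.
class Spool:
    def __init__(self, quota_bytes):
        self.quota = quota_bytes
        self.used = 0
        self.entries = []

    def allocate(self, size, tag):
        if self.used + size > self.quota:
            raise MemoryError(f"spool quota exceeded for {tag}")
        self.used += size
        self.entries.append((tag, size))

spools = {authority: Spool(quota_bytes=1 << 20) for authority in ("auth-1", "auth-2")}
spools["auth-1"].allocate(4096, "txn-0001")  # e.g., a distributed transaction record
print(spools["auth-1"].used)  # 4096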
From the point of view of a device, it provides resources to all authorities168, irrespective of where the authorities happen to run. Each authority168has allocated, or has been allocated, one or more partitions260of storage memory in the storage units152, e.g. partitions260in flash memory206and NVRAM204. Each authority168uses those allocated partitions260that belong to it, for writing or reading user data. Authorities can be associated with differing amounts of physical storage of the system. For example, one authority168could have a larger number of partitions260or larger sized partitions260in one or more storage units152than one or more other authorities168.

FIG.2Fdepicts elasticity software layers in blades252of a storage cluster, in accordance with some embodiments. In the elasticity structure, elasticity software is symmetric, i.e., each blade's compute module270runs the three identical layers of processes depicted inFIG.2F. Storage managers274execute read and write requests from other blades252for data and metadata stored in local storage unit152NVRAM204and flash206. Authorities168fulfill client requests by issuing the necessary reads and writes to the blades252on whose storage units152the corresponding data or metadata resides. Endpoints272parse client connection requests received from switch fabric146supervisory software, relay the client connection requests to the authorities168responsible for fulfillment, and relay the authorities'168responses to clients. The symmetric three-layer structure enables the storage system's high degree of concurrency. Elasticity scales out efficiently and reliably in these embodiments. In addition, elasticity implements a unique scale-out technique that balances work evenly across all resources regardless of client access pattern, and maximizes concurrency by eliminating much of the need for inter-blade coordination that typically occurs with conventional distributed locking.

Still referring toFIG.2F, authorities168running in the compute modules270of a blade252perform the internal operations required to fulfill client requests. One feature of elasticity is that authorities168are stateless, i.e., they cache active data and metadata in their own blades'252DRAMs for fast access, but the authorities store every update in their NVRAM204partitions on three separate blades252until the update has been written to flash206. All the storage system writes to NVRAM204are in triplicate to partitions on three separate blades252in some embodiments. With triple-mirrored NVRAM204and persistent storage protected by parity and Reed-Solomon RAID checksums, the storage system can survive concurrent failure of two blades252with no loss of data, metadata, or access to either. Because authorities168are stateless, they can migrate between blades252. Each authority168has a unique identifier. NVRAM204and flash206partitions are associated with authorities'168identifiers, not with the blades252on which they are running in some embodiments. Thus, when an authority168migrates, the authority168continues to manage the same storage partitions from its new location. When a new blade252is installed in an embodiment of the storage cluster, the system automatically rebalances load by: partitioning the new blade's252storage for use by the system's authorities168, migrating selected authorities168to the new blade252, starting endpoints272on the new blade252and including them in the switch fabric's146client connection distribution algorithm.
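A minimal sketch of the triplicated NVRAM write described above follows; blade selection, acknowledgment protocols, and the later flush to flash are elided, and all names are hypothetical.

# Hypothetical triple-mirrored NVRAM write: an update is acknowledged only
# after it sits in the authority's NVRAM partitions on three separate blades.
def write_update(update, blades, authority_id, copies=3):
    targets = blades[:copies]
    if len(targets) < copies:
        raise RuntimeError("not enough blades for triple mirroring")
    for blade in targets:
        blade.setdefault(authority_id, []).append(update)
    return len(targets)  # all three writes completed

blades = [{}, {}, {}, {}]
write_update(b"put key=v1", blades, "auth-9")
print(sum("auth-9" in blade for blade in blades))  # 3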
From their new locations, migrated authorities168persist the contents of their NVRAM204partitions on flash206, process read and write requests from other authorities168, and fulfill the client requests that endpoints272direct to them. Similarly, if a blade252fails or is removed, the system redistributes its authorities168among the system's remaining blades252. The redistributed authorities168continue to perform their original functions from their new locations.

FIG.2Gdepicts authorities168and storage resources in blades252of a storage cluster, in accordance with some embodiments. Each authority168is exclusively responsible for a partition of the flash206and NVRAM204on each blade252. The authority168manages the content and integrity of its partitions independently of other authorities168. Authorities168compress incoming data and preserve it temporarily in their NVRAM204partitions, and then consolidate, RAID-protect, and persist the data in segments of the storage in their flash206partitions. As the authorities168write data to flash206, storage managers274perform the necessary flash translation to optimize write performance and maximize media longevity. In the background, authorities168“garbage collect,” or reclaim space occupied by data that clients have made obsolete by overwriting the data. It should be appreciated that since authorities'168partitions are disjoint, there is no need for distributed locking to execute client reads and writes or to perform background functions.

The embodiments described herein may utilize various software, communication and/or networking protocols. In addition, the configuration of the hardware and/or software may be adjusted to accommodate various protocols. For example, the embodiments may utilize Active Directory, which is a database-based system that provides authentication, directory, policy, and other services in a WINDOWS™ environment. In these embodiments, LDAP (Lightweight Directory Access Protocol) is one example application protocol for querying and modifying items in directory service providers such as Active Directory. In some embodiments, a network lock manager (‘NLM’) is utilized as a facility that works in cooperation with the Network File System (‘NFS’) to provide a System V style of advisory file and record locking over a network. The Server Message Block (‘SMB’) protocol, one version of which is also known as Common Internet File System (‘CIFS’), may be integrated with the storage systems discussed herein. SMB operates as an application-layer network protocol typically used for providing shared access to files, printers, and serial ports and miscellaneous communications between nodes on a network. SMB also provides an authenticated inter-process communication mechanism.

AMAZON™ S3 (Simple Storage Service) is a web service offered by Amazon Web Services, and the systems described herein may interface with Amazon S3 through web services interfaces (REST (representational state transfer), SOAP (simple object access protocol), and BitTorrent). A RESTful API (application programming interface) breaks down a transaction to create a series of small modules. Each module addresses a particular underlying part of the transaction. The control or permissions provided with these embodiments, especially for object data, may include utilization of an access control list (‘ACL’).
The ACL is a list of permissions attached to an object and the ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. The systems may utilize Internet Protocol version 6 (‘IPv6’), as well as IPv4, for the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. The routing of packets between networked systems may include Equal-cost multi-path routing (‘ECMP’), which is a routing strategy where next-hop packet forwarding to a single destination can occur over multiple “best paths” which tie for top place in routing metric calculations. Multi-path routing can be used in conjunction with most routing protocols, because it is a per-hop decision limited to a single router.

The software may support multi-tenancy, which is an architecture in which a single instance of a software application serves multiple customers. Each customer may be referred to as a tenant. Tenants may be given the ability to customize some parts of the application, but may not customize the application's code, in some embodiments.

The embodiments may maintain audit logs. An audit log is a document that records an event in a computing system. In addition to documenting what resources were accessed, audit log entries typically include destination and source addresses, a timestamp, and user login information for compliance with various regulations. The embodiments may support various key management policies, such as encryption key rotation. In addition, the system may support dynamic root passwords or some variation of dynamically changing passwords.

FIG.3Asets forth a diagram of a storage system306that is coupled for data communications with a cloud services provider302in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system306depicted inFIG.3Amay be similar to the storage systems described above with reference toFIGS.1A-1DandFIGS.2A-2G. In some embodiments, the storage system306depicted inFIG.3Amay be embodied as a storage system that includes imbalanced active/active controllers, as a storage system that includes balanced active/active controllers, as a storage system that includes active/active controllers where less than all of each controller's resources are utilized such that each controller has reserve resources that may be used to support failover, as a storage system that includes fully active/active controllers, as a storage system that includes dataset-segregated controllers, as a storage system that includes dual-layer architectures with front-end controllers and back-end integrated storage controllers, as a storage system that includes scale-out clusters of dual-controller arrays, as well as combinations of such embodiments.

In the example depicted inFIG.3A, the storage system306is coupled to the cloud services provider302via a data communications link304. The data communications link304may be embodied as a dedicated data communications link, as a data communications pathway that is provided through the use of one or more data communications networks such as a wide area network (‘WAN’) or local area network (‘LAN’), or as some other mechanism capable of transporting digital information between the storage system306and the cloud services provider302. Such a data communications link304may be fully wired, fully wireless, or some aggregation of wired and wireless data communications pathways.
In such an example, digital information may be exchanged between the storage system306and the cloud services provider302via the data communications link304using one or more data communications protocols. For example, digital information may be exchanged between the storage system306and the cloud services provider302via the data communications link304using the handheld device transfer protocol (‘HDTP’), hypertext transfer protocol (‘HTTP’), internet protocol (‘IP’), real-time transfer protocol (‘RTP’), transmission control protocol (‘TCP’), user datagram protocol (‘UDP’), wireless application protocol (‘WAP’), or other protocol.

The cloud services provider302depicted inFIG.3Amay be embodied, for example, as a system and computing environment that provides services to users of the cloud services provider302through the sharing of computing resources via the data communications link304. The cloud services provider302may provide on-demand access to a shared pool of configurable computing resources such as computer networks, servers, storage, applications and services, and so on. The shared pool of configurable resources may be rapidly provisioned and released to a user of the cloud services provider302with minimal management effort. Generally, the user of the cloud services provider302is unaware of the exact computing resources utilized by the cloud services provider302to provide the services. Although in many cases such a cloud services provider302may be accessible via the Internet, readers of skill in the art will recognize that any system that abstracts the use of shared resources to provide services to a user through any data communications link may be considered a cloud services provider302.

In the example depicted inFIG.3A, the cloud services provider302may be configured to provide a variety of services to the storage system306and users of the storage system306through the implementation of various service models. For example, the cloud services provider302may be configured to provide services to the storage system306and users of the storage system306through the implementation of an infrastructure as a service (‘IaaS’) service model where the cloud services provider302offers computing infrastructure such as virtual machines and other resources as a service to subscribers. In addition, the cloud services provider302may be configured to provide services to the storage system306and users of the storage system306through the implementation of a platform as a service (‘PaaS’) service model where the cloud services provider302offers a development environment to application developers. Such a development environment may include, for example, an operating system, programming-language execution environment, database, web server, or other components that may be utilized by application developers to develop and run software solutions on a cloud platform. Furthermore, the cloud services provider302may be configured to provide services to the storage system306and users of the storage system306through the implementation of a software as a service (‘SaaS’) service model where the cloud services provider302offers application software, databases, as well as the platforms that are used to run the applications to the storage system306and users of the storage system306, providing the storage system306and users of the storage system306with on-demand software and eliminating the need to install and run the application on local computers, which may simplify maintenance and support of the application.
The cloud services provider302may be further configured to provide services to the storage system306and users of the storage system306through the implementation of an authentication as a service (‘AaaS’) service model where the cloud services provider302offers authentication services that can be used to secure access to applications, data sources, or other resources. The cloud services provider302may also be configured to provide services to the storage system306and users of the storage system306through the implementation of a storage as a service model where the cloud services provider302offers access to its storage infrastructure for use by the storage system306and users of the storage system306. Readers will appreciate that the cloud services provider302may be configured to provide additional services to the storage system306and users of the storage system306through the implementation of additional service models, as the service models described above are included only for explanatory purposes and in no way represent a limitation of the services that may be offered by the cloud services provider302or a limitation as to the service models that may be implemented by the cloud services provider302.

In the example depicted inFIG.3A, the cloud services provider302may be embodied, for example, as a private cloud, as a public cloud, or as a combination of a private cloud and public cloud. In an embodiment in which the cloud services provider302is embodied as a private cloud, the cloud services provider302may be dedicated to providing services to a single organization rather than providing services to multiple organizations. In an embodiment where the cloud services provider302is embodied as a public cloud, the cloud services provider302may provide services to multiple organizations. Public cloud and private cloud deployment models may differ and may come with various advantages and disadvantages. For example, because a public cloud deployment involves the sharing of a computing infrastructure across different organizations, such a deployment may not be ideal for organizations with security concerns, mission-critical workloads, demanding uptime requirements, and so on. While a private cloud deployment can address some of these issues, a private cloud deployment may require on-premises staff to manage the private cloud. In still alternative embodiments, the cloud services provider302may be embodied as a mix of private and public cloud services in a hybrid cloud deployment.

Although not explicitly depicted inFIG.3A, readers will appreciate that additional hardware components and additional software components may be necessary to facilitate the delivery of cloud services to the storage system306and users of the storage system306. For example, the storage system306may be coupled to (or even include) a cloud storage gateway. Such a cloud storage gateway may be embodied, for example, as a hardware-based or software-based appliance that is located on premises with the storage system306. Such a cloud storage gateway may operate as a bridge between local applications that are executing on the storage array306and remote, cloud-based storage that is utilized by the storage array306. Through the use of a cloud storage gateway, organizations may move primary iSCSI or NAS to the cloud services provider302, thereby enabling the organization to save space on their on-premises storage systems.
Such a cloud storage gateway may be configured to emulate a disk array, a block-based device, a file server, or other storage system that can translate the SCSI commands, file server commands, or other appropriate command into REST-space protocols that facilitate communications with the cloud services provider302.

In order to enable the storage system306and users of the storage system306to make use of the services provided by the cloud services provider302, a cloud migration process may take place during which data, applications, or other elements from an organization's local systems (or even from another cloud environment) are moved to the cloud services provider302. In order to successfully migrate data, applications, or other elements to the cloud services provider's302environment, middleware such as a cloud migration tool may be utilized to bridge gaps between the cloud services provider's302environment and an organization's environment. Such cloud migration tools may also be configured to address potentially high network costs and long transfer times associated with migrating large volumes of data to the cloud services provider302, as well as addressing security concerns associated with transferring sensitive data to the cloud services provider302over data communications networks.

In order to further enable the storage system306and users of the storage system306to make use of the services provided by the cloud services provider302, a cloud orchestrator may also be used to arrange and coordinate automated tasks in pursuit of creating a consolidated process or workflow. Such a cloud orchestrator may perform tasks such as configuring various components, whether those components are cloud components or on-premises components, as well as managing the interconnections between such components. The cloud orchestrator can simplify the inter-component communication and connections to ensure that links are correctly configured and maintained.

In the example depicted inFIG.3A, and as described briefly above, the cloud services provider302may be configured to provide services to the storage system306and users of the storage system306through the usage of a SaaS service model, providing the storage system306and users of the storage system306with on-demand software as described above. Such applications may take many forms in accordance with various embodiments of the present disclosure. For example, the cloud services provider302may be configured to provide access to data analytics applications to the storage system306and users of the storage system306. Such data analytics applications may be configured, for example, to receive telemetry data phoned home by the storage system306. Such telemetry data may describe various operating characteristics of the storage system306and may be analyzed, for example, to determine the health of the storage system306, to identify workloads that are executing on the storage system306, to predict when the storage system306will run out of various resources, to recommend configuration changes, hardware or software upgrades, workflow migrations, or other actions that may improve the operation of the storage system306.
The cloud services provider302may also be configured to provide access to virtualized computing environments to the storage system306and users of the storage system306. Such virtualized computing environments may be embodied, for example, as a virtual machine or other virtualized computer hardware platforms, virtual storage devices, virtualized computer network resources, and so on. Examples of such virtualized environments can include virtual machines that are created to emulate an actual computer, virtualized desktop environments that separate a logical desktop from a physical machine, virtualized file systems that allow uniform access to different types of concrete file systems, and many others.

For further explanation,FIG.3Bsets forth a diagram of a storage system306in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system306depicted inFIG.3Bmay be similar to the storage systems described above with reference toFIGS.1A-1DandFIGS.2A-2Gas the storage system may include many of the components described above. The storage system306depicted inFIG.3Bmay include storage resources308, which may be embodied in many forms. For example, in some embodiments the storage resources308can include nano-RAM or another form of nonvolatile random access memory that utilizes carbon nanotubes deposited on a substrate. In some embodiments, the storage resources308may include 3D crosspoint non-volatile memory in which bit storage is based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. In some embodiments, the storage resources308may include flash memory, including single-level cell (‘SLC’) NAND flash, multi-level cell (‘MLC’) NAND flash, triple-level cell (‘TLC’) NAND flash, quad-level cell (‘QLC’) NAND flash, and others. In some embodiments, the storage resources308may include non-volatile magnetoresistive random-access memory (‘MRAM’), including spin transfer torque (‘STT’) MRAM, in which data is stored through the use of magnetic storage elements. In some embodiments, the example storage resources308may include non-volatile phase-change memory (‘PCM’) that may have the ability to hold multiple bits in a single cell as cells can achieve a number of distinct intermediary states. In some embodiments, the storage resources308may include quantum memory that allows for the storage and retrieval of photonic quantum information. In some embodiments, the example storage resources308may include resistive random-access memory (‘ReRAM’) in which data is stored by changing the resistance across a dielectric solid-state material. In some embodiments, the storage resources308may include storage class memory (‘SCM’) in which solid-state nonvolatile memory may be manufactured at a high density using some combination of sub-lithographic patterning techniques, multiple bits per cell, multiple layers of devices, and so on. Readers will appreciate that other forms of computer memories and storage devices may be utilized by the storage systems described above, including DRAM, SRAM, EEPROM, universal memory, and many others.

The storage resources308depicted inFIG.3Bmay be embodied in a variety of form factors, including but not limited to, dual in-line memory modules (‘DIMMs’), non-volatile dual in-line memory modules (‘NVDIMMs’), M.2, U.2, and others. The storage resources308depicted inFIG.3Bmay include various forms of storage-class memory (‘SCM’).
SCM may effectively treat fast, non-volatile memory (e.g., NAND flash) as an extension of DRAM such that an entire dataset may be treated as an in-memory dataset that resides entirely in DRAM. SCM may include non-volatile media such as, for example, NAND flash. Such NAND flash may be accessed utilizing NVMe that can use the PCIe bus as its transport, providing for relatively low access latencies compared to older protocols. In fact, the network protocols used for SSDs in all-flash arrays can include NVMe over Ethernet (RoCE, iWARP, NVMe/TCP), NVMe over Fibre Channel (NVMe FC), NVMe over InfiniBand, and others that make it possible to treat fast, non-volatile memory as an extension of DRAM. In view of the fact that DRAM is often byte-addressable and fast, non-volatile memory such as NAND flash is block-addressable, a controller software/hardware stack may be needed to convert the block data to the bytes that are stored in the media. Examples of media and software that may be used as SCM can include, for example, 3D XPoint, Intel Memory Drive Technology, Samsung's Z-SSD, and others.

The example storage system306depicted inFIG.3Bmay implement a variety of storage architectures. For example, storage systems in accordance with some embodiments of the present disclosure may utilize block storage where data is stored in blocks, and each block essentially acts as an individual hard drive. Storage systems in accordance with some embodiments of the present disclosure may utilize object storage, where data is managed as objects. Each object may include the data itself, a variable amount of metadata, and a globally unique identifier, where object storage can be implemented at multiple levels (e.g., device level, system level, interface level). Storage systems in accordance with some embodiments of the present disclosure may utilize file storage in which data is stored in a hierarchical structure. Such data may be saved in files and folders, and presented to both the system storing it and the system retrieving it in the same format.

The example storage system306depicted inFIG.3Bmay be embodied as a storage system in which additional storage resources can be added through the use of a scale-up model, additional storage resources can be added through the use of a scale-out model, or through some combination thereof. In a scale-up model, additional storage may be added by adding additional storage devices. In a scale-out model, however, additional storage nodes may be added to a cluster of storage nodes, where such storage nodes can include additional processing resources, additional networking resources, and so on.

The storage system306depicted inFIG.3Balso includes communications resources310that may be useful in facilitating data communications between components within the storage system306, as well as data communications between the storage system306and computing devices that are outside of the storage system306. The communications resources310may be configured to utilize a variety of different protocols and data communication fabrics to facilitate data communications between components within the storage systems as well as computing devices that are outside of the storage system. For example, the communications resources310can include fibre channel (‘FC’) technologies such as FC fabrics and FC protocols that can transport SCSI commands over FC networks. The communications resources310can also include FC over ethernet (‘FCoE’) technologies through which FC frames are encapsulated and transmitted over Ethernet networks.
The communications resources310can also include InfiniBand (‘IB’) technologies in which a switched fabric topology is utilized to facilitate transmissions between channel adapters. The communications resources310can also include NVM Express (‘NVMe’) technologies and NVMe over fabrics (‘NVMeoF’) technologies through which non-volatile storage media attached via a PCI express (‘PCIe’) bus may be accessed. The communications resources310can also include mechanisms for accessing storage resources308within the storage system306utilizing serial attached SCSI (‘SAS’), serial ATA (‘SATA’) bus interfaces for connecting storage resources308within the storage system306to host bus adapters within the storage system306, internet small computer systems interface (‘iSCSI’) technologies to provide block-level access to storage resources308within the storage system306, and other communications resources that may be useful in facilitating data communications between components within the storage system306, as well as data communications between the storage system306and computing devices that are outside of the storage system306.

The storage system306depicted inFIG.3Balso includes processing resources312that may be useful in executing computer program instructions and performing other computational tasks within the storage system306. The processing resources312may include one or more application-specific integrated circuits (‘ASICs’) that are customized for some particular purpose as well as one or more central processing units (‘CPUs’). The processing resources312may also include one or more digital signal processors (‘DSPs’), one or more field-programmable gate arrays (‘FPGAs’), one or more systems on a chip (‘SoCs’), or other form of processing resources312. The storage system306may utilize the processing resources312to perform a variety of tasks including, but not limited to, supporting the execution of software resources314that will be described in greater detail below.

The storage system306depicted inFIG.3Balso includes software resources314that, when executed by processing resources312within the storage system306, may perform various tasks. The software resources314may include, for example, one or more modules of computer program instructions that when executed by processing resources312within the storage system306are useful in carrying out various data protection techniques to preserve the integrity of data that is stored within the storage systems. Readers will appreciate that such data protection techniques may be carried out, for example, by system software executing on computer hardware within the storage system, by a cloud services provider, or in other ways.
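One of the data protection techniques enumerated in the passage that follows, snapshotting, can be sketched minimally as a copy of a volume's block map: the snapshot shares block contents with the live volume until the live volume is overwritten. This is an illustrative sketch under assumed semantics, not the disclosed implementation.

# Hypothetical copy-on-write-style snapshot of a volume's block map.
class Volume:
    def __init__(self):
        self.blocks = {}
        self.snapshots = []

    def snapshot(self):
        view = dict(self.blocks)      # copies the map, shares the data
        self.snapshots.append(view)
        return view

    def write(self, block_no, data):
        self.blocks[block_no] = data  # snapshots keep the old mapping

vol = Volume()
vol.write(0, b"v1")
snap = vol.snapshot()
vol.write(0, b"v2")
print(snap[0], vol.blocks[0])  # b'v1' b'v2'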
Such data protection techniques can include, for example, data archiving techniques that cause data that is no longer actively used to be moved to a separate storage device or separate storage system for long-term retention, data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe with the storage system, data replication techniques through which data stored in the storage system is replicated to another storage system such that the data may be accessible via multiple storage systems, data snapshotting techniques through which the state of data within the storage system is captured at various points in time, data and database cloning techniques through which duplicate copies of data and databases may be created, and other data protection techniques. Through the use of such data protection techniques, business continuity and disaster recovery objectives may be met as a failure of the storage system may not result in the loss of data stored in the storage system.

The software resources314may also include software that is useful in implementing software-defined storage (‘SDS’). In such an example, the software resources314may include one or more modules of computer program instructions that, when executed, are useful in policy-based provisioning and management of data storage that is independent of the underlying hardware. Such software resources314may be useful in implementing storage virtualization to separate the storage hardware from the software that manages the storage hardware.

The software resources314may also include software that is useful in facilitating and optimizing I/O operations that are directed to the storage resources308in the storage system306. For example, the software resources314may include software modules that carry out various data reduction techniques such as, for example, data compression, data deduplication, and others. The software resources314may include software modules that intelligently group together I/O operations to facilitate better usage of the underlying storage resource308, software modules that perform data migration operations to migrate data from within a storage system, as well as software modules that perform other functions. Such software resources314may be embodied as one or more software containers or in many other ways.

Readers will appreciate that the presence of such software resources314may provide for an improved user experience of the storage system306, an expansion of functionality supported by the storage system306, and many other benefits. Consider the specific example of the software resources314carrying out data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe. In such an example, the systems described herein may more reliably (and with less burden placed on the user) perform backup operations relative to interactive backup management systems that require high degrees of user interactivity, offer less robust automation and feature sets, and so on.

For further explanation,FIG.3Csets forth an example of a cloud-based storage system318in accordance with some embodiments of the present disclosure.
In the example depicted inFIG.3C, the cloud-based storage system318is created entirely in a cloud computing environment316such as, for example, Amazon Web Services (‘AWS’), Microsoft Azure, Google Cloud Platform, IBM Cloud, Oracle Cloud, and others. The cloud-based storage system318may be used to provide services similar to the services that may be provided by the storage systems described above. For example, the cloud-based storage system318may be used to provide block storage services to users of the cloud-based storage system318, the cloud-based storage system318may be used to provide storage services to users of the cloud-based storage system318through the use of solid-state storage, and so on. The cloud-based storage system318depicted inFIG.3Cincludes two cloud computing instances320,322that each are used to support the execution of a storage controller application324,326. The cloud computing instances320,322may be embodied, for example, as instances of cloud computing resources (e.g., virtual machines) that may be provided by the cloud computing environment316to support the execution of software applications such as the storage controller application324,326. In one embodiment, the cloud computing instances320,322may be embodied as Amazon Elastic Compute Cloud (‘EC2’) instances. In such an example, an Amazon Machine Image (‘AMI’) that includes the storage controller application324,326may be booted to create and configure a virtual machine that may execute the storage controller application324,326. In the example method depicted inFIG.3C, the storage controller application324,326may be embodied as a module of computer program instructions that, when executed, carries out various storage tasks. For example, the storage controller application324,326may be embodied as a module of computer program instructions that, when executed, carries out the same tasks as the controllers110A,110B inFIG.1Adescribed above such as writing data received from the users of the cloud-based storage system318to the cloud-based storage system318, erasing data from the cloud-based storage system318, retrieving data from the cloud-based storage system318and providing such data to users of the cloud-based storage system318, monitoring and reporting of disk utilization and performance, performing redundancy operations, such as RAID or RAID-like data redundancy operations, compressing data, encrypting data, deduplicating data, and so forth. Readers will appreciate that because there are two cloud computing instances320,322that each include the storage controller application324,326, in some embodiments one cloud computing instance320may operate as the primary controller as described above while the other cloud computing instance322may operate as the secondary controller as described above. In such an example, in order to save costs, the cloud computing instance320that operates as the primary controller may be deployed on a relatively high-performance and relatively expensive cloud computing instance while the cloud computing instance322that operates as the secondary controller may be deployed on a relatively low-performance and relatively inexpensive cloud computing instance. Readers will appreciate that the storage controller application324,326depicted inFIG.3Cmay include identical source code that is executed within different cloud computing instances320,322. Consider an example in which the cloud computing environment316is embodied as AWS and the cloud computing instances are embodied as EC2 instances. 
In such an example, AWS offers many types of EC2 instances. For example, AWS offers a suite of general purpose EC2 instances that include varying levels of memory and processing power. In such an example, the cloud computing instance320that operates as the primary controller may be deployed on one of the instance types that has a relatively large amount of memory and processing power while the cloud computing instance322that operates as the secondary controller may be deployed on one of the instance types that has a relatively small amount of memory and processing power. In such an example, upon the occurrence of a failover event where the roles of primary and secondary are switched, a double failover may actually be carried out such that: 1) a first failover event occurs where the cloud computing instance322that formerly operated as the secondary controller begins to operate as the primary controller, and 2) a second failover event occurs where a third cloud computing instance (not shown) that is of an instance type that has a relatively large amount of memory and processing power is spun up with a copy of the storage controller application, where the third cloud computing instance begins operating as the primary controller while the cloud computing instance322that originally operated as the secondary controller begins operating as the secondary controller again. In such an example, the cloud computing instance320that formerly operated as the primary controller may be terminated.

Readers will appreciate that in alternative embodiments, the cloud computing instance320that is operating as the secondary controller after the failover event may continue to operate as the secondary controller and the cloud computing instance322that operated as the primary controller after the occurrence of the failover event may be terminated once the primary role has been assumed by the third cloud computing instance (not shown).

Readers will appreciate that while the embodiments described above relate to embodiments where one cloud computing instance320operates as the primary controller and the second cloud computing instance322operates as the secondary controller, other embodiments are within the scope of the present disclosure. For example, each cloud computing instance320,322may operate as a primary controller for some portion of the address space supported by the cloud-based storage system318, each cloud computing instance320,322may operate as a primary controller where the servicing of I/O operations directed to the cloud-based storage system318are divided in some other way, and so on. In fact, in other embodiments where cost savings may be prioritized over performance demands, only a single cloud computing instance may exist that contains the storage controller application. In such an example, a controller failure may take more time to recover from as a new cloud computing instance that includes the storage controller application would need to be spun up rather than having an already created cloud computing instance take on the role of servicing I/O operations that would have otherwise been handled by the failed cloud computing instance.

The cloud-based storage system318depicted inFIG.3Cincludes cloud computing instances340a,340b,340nwith local storage330,334,338. The cloud computing instances340a,340b,340ndepicted inFIG.3Cmay be embodied, for example, as instances of cloud computing resources that may be provided by the cloud computing environment316to support the execution of software applications.
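The double failover sequence described above can be sketched as follows; the instance-size labels stand in for relative memory and processing power, and every name here is a hypothetical illustration.

# Hypothetical sequencing of the double failover described above.
def double_failover(former_secondary, spawn_instance):
    acting_primary = former_secondary            # first failover: the secondary takes over
    new_primary = spawn_instance(size="large")   # spun up with a copy of the controller application
    new_secondary = acting_primary               # second failover: the new instance assumes primary
    return new_primary, new_secondary

spawn = lambda size: {"size": size, "app": "storage-controller"}
primary, secondary = double_failover({"size": "small", "app": "storage-controller"}, spawn)
print(primary["size"], secondary["size"])  # large small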
The cloud computing instances340a,340b,340nofFIG.3Cmay differ from the cloud computing instances320,322described above as the cloud computing instances340a,340b,340nofFIG.3Chave local storage330,334,338resources whereas the cloud computing instances320,322that support the execution of the storage controller application324,326need not have local storage resources. The cloud computing instances340a,340b,340nwith local storage330,334,338may be embodied, for example, as EC2 M5 instances that include one or more SSDs, as EC2 R5 instances that include one or more SSDs, as EC2 I3 instances that include one or more SSDs, and so on. In some embodiments, the local storage330,334,338must be embodied as solid-state storage (e.g., SSDs) rather than storage that makes use of hard disk drives.

In the example depicted inFIG.3C, each of the cloud computing instances340a,340b,340nwith local storage330,334,338can include a software daemon328,332,336that, when executed by a cloud computing instance340a,340b,340ncan present itself to the storage controller applications324,326as if the cloud computing instance340a,340b,340nwere a physical storage device (e.g., one or more SSDs). In such an example, the software daemon328,332,336may include computer program instructions similar to those that would normally be contained on a storage device such that the storage controller applications324,326can send and receive the same commands that a storage controller would send to storage devices. In such a way, the storage controller applications324,326may include code that is identical to (or substantially identical to) the code that would be executed by the controllers in the storage systems described above. In these and similar embodiments, communications between the storage controller applications324,326and the cloud computing instances340a,340b,340nwith local storage330,334,338may utilize iSCSI, NVMe over TCP, messaging, a custom protocol, or some other mechanism.

In the example depicted inFIG.3C, each of the cloud computing instances340a,340b,340nwith local storage330,334,338may also be coupled to block-storage342,344,346that is offered by the cloud computing environment316. The block-storage342,344,346that is offered by the cloud computing environment316may be embodied, for example, as Amazon Elastic Block Store (‘EBS’) volumes. For example, a first EBS volume may be coupled to a first cloud computing instance340a, a second EBS volume may be coupled to a second cloud computing instance340b, and a third EBS volume may be coupled to a third cloud computing instance340n. In such an example, the block-storage342,344,346that is offered by the cloud computing environment316may be utilized in a manner that is similar to how the NVRAM devices described above are utilized, as the software daemon328,332,336(or some other module) that is executing within a particular cloud computing instance340a,340b,340nmay, upon receiving a request to write data, initiate a write of the data to its attached EBS volume as well as a write of the data to its local storage330,334,338resources. In some alternative embodiments, data may only be written to the local storage330,334,338resources within a particular cloud computing instance340a,340b,340n.
In an alternative embodiment, rather than using the block-storage342,344,346that is offered by the cloud computing environment316as NVRAM, actual RAM on each of the cloud computing instances340a,340b,340nwith local storage330,334,338may be used as NVRAM, thereby decreasing network utilization costs that would be associated with using an EBS volume as the NVRAM.

In the example depicted inFIG.3C, the cloud computing instances340a,340b,340nwith local storage330,334,338may be utilized, by cloud computing instances320,322that support the execution of the storage controller application324,326to service I/O operations that are directed to the cloud-based storage system318. Consider an example in which a first cloud computing instance320that is executing the storage controller application324is operating as the primary controller. In such an example, the first cloud computing instance320that is executing the storage controller application324may receive (directly or indirectly via the secondary controller) requests to write data to the cloud-based storage system318from users of the cloud-based storage system318. In such an example, the first cloud computing instance320that is executing the storage controller application324may perform various tasks such as, for example, deduplicating the data contained in the request, compressing the data contained in the request, determining where to write the data contained in the request, and so on, before ultimately sending a request to write a deduplicated, encrypted, or otherwise possibly updated version of the data to one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338. Either cloud computing instance320,322, in some embodiments, may receive a request to read data from the cloud-based storage system318and may ultimately send a request to read data to one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338.

Readers will appreciate that when a request to write data is received by a particular cloud computing instance340a,340b,340nwith local storage330,334,338, the software daemon328,332,336or some other module of computer program instructions that is executing on the particular cloud computing instance340a,340b,340nmay be configured to not only write the data to its own local storage330,334,338resources and any appropriate block-storage342,344,346that are offered by the cloud computing environment316, but the software daemon328,332,336or some other module of computer program instructions that is executing on the particular cloud computing instance340a,340b,340nmay also be configured to write the data to cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340n. The cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340nmay be embodied, for example, as Amazon Simple Storage Service (‘S3’) storage that is accessible by the particular cloud computing instance340a,340b,340n. In other embodiments, the cloud computing instances320,322that each include the storage controller application324,326may initiate the storage of the data in the local storage330,334,338of the cloud computing instances340a,340b,340nand the cloud-based object storage348. Readers will appreciate that, as described above, the cloud-based storage system318may be used to provide block storage services to users of the cloud-based storage system318.
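A condensed sketch of the write path described above follows: the software daemon lands incoming data in the attached block-storage volume (used like NVRAM), the local storage, and the cloud-based object storage. Dictionaries stand in for the three tiers, and the function name is an assumption for illustration.

# Hypothetical write path for one virtual-drive instance.
def handle_write(key, data, local_ssd, block_volume, object_store):
    block_volume[key] = data   # staged durably, NVRAM-style
    local_ssd[key] = data      # fast local copy that serves reads
    object_store[key] = data   # durable cloud copy
    return "ack"

local, ebs, s3 = {}, {}, {}
handle_write("vol0/block17", b"\x00" * 16, local, ebs, s3)
print(len(local), len(ebs), len(s3))  # 1 1 1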
While the local storage330,334,338resources and the block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nmay support block-level access, the cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340nsupports only object-based access. In order to address this, the software daemon328,332,336or some other module of computer program instructions that is executing on the particular cloud computing instance340a,340b,340nmay be configured to take blocks of data, package those blocks into objects, and write the objects to the cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340n. Consider an example in which data is written to the local storage330,334,338resources and the block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nin 1 MB blocks. In such an example, assume that a user of the cloud-based storage system318issues a request to write data that, after being compressed and deduplicated by the storage controller application324,326, results in the need to write 5 MB of data. In such an example, writing the data to the local storage330,334,338resources and the block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nis relatively straightforward as 5 blocks that are 1 MB in size are written to the local storage330,334,338resources and the block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. In such an example, the software daemon328,332,336or some other module of computer program instructions that is executing on the particular cloud computing instance340a,340b,340nmay be configured to: 1) create a first object that includes the first 1 MB of data and write the first object to the cloud-based object storage348, 2) create a second object that includes the second 1 MB of data and write the second object to the cloud-based object storage348, 3) create a third object that includes the third 1 MB of data and write the third object to the cloud-based object storage348, and so on. As such, in some embodiments, each object that is written to the cloud-based object storage348may be identical (or nearly identical) in size. Readers will appreciate that in such an example, metadata that is associated with the data itself may be included in each object (e.g., the first 1 MB of the object is data and the remaining portion is metadata associated with the data).
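A minimal sketch of this block-to-object packaging, assuming a hypothetical bucket name and an illustrative metadata layout (the actual object format is not specified here):

    import json
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "cloud-based-storage-system-318"  # hypothetical bucket name
    BLOCK_SIZE = 1 << 20                       # 1 MB blocks, as in the example above

    def blocks_to_objects(data: bytes, base_key: str) -> None:
        """Split a write into 1 MB blocks and write each block as one object.

        Each object carries the block data plus a small metadata trailer (the
        trailer layout is purely illustrative), so objects remain identical
        (or nearly identical) in size.
        """
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            trailer = json.dumps({"key": base_key,
                                  "index": i // BLOCK_SIZE,
                                  "length": len(block)}).encode()
            s3.put_object(Bucket=BUCKET,
                          Key=f"{base_key}/obj-{i // BLOCK_SIZE}",
                          Body=block + trailer)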
Readers will appreciate that the cloud-based object storage348may be incorporated into the cloud-based storage system318to increase the durability of the cloud-based storage system318. Continuing with the example described above where the cloud computing instances340a,340b,340nare EC2 instances, readers will understand that EC2 instances are only guaranteed to have a monthly uptime of 99.9% and data stored in the local instance store only persists during the lifetime of the EC2 instance. As such, relying on the cloud computing instances340a,340b,340nwith local storage330,334,338as the only source of persistent data storage in the cloud-based storage system318may result in a relatively unreliable storage system. Likewise, EBS volumes are designed for 99.999% availability. As such, even relying on EBS as the persistent data store in the cloud-based storage system318may result in a storage system that is not sufficiently durable. Amazon S3, however, is designed to provide 99.999999999% durability, meaning that a cloud-based storage system318that can incorporate S3 into its pool of storage is substantially more durable than various other options. Readers will appreciate that while a cloud-based storage system318that can incorporate S3 into its pool of storage is substantially more durable than various other options, utilizing S3 as the primary pool of storage may result in a storage system that has relatively slow response times and relatively long I/O latencies. As such, the cloud-based storage system318depicted inFIG.3Cnot only stores data in S3 but the cloud-based storage system318also stores data in local storage330,334,338resources and block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n, such that read operations can be serviced from the local storage330,334,338resources and the block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n, thereby reducing read latency when users of the cloud-based storage system318attempt to read data from the cloud-based storage system318. In some embodiments, all data that is stored by the cloud-based storage system318may be stored in both: 1) the cloud-based object storage348, and 2) at least one of the local storage330,334,338resources or block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. In such embodiments, the local storage330,334,338resources and block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nmay effectively operate as a cache that generally includes all data that is also stored in S3, such that all reads of data may be serviced by the cloud computing instances340a,340b,340nwithout requiring the cloud computing instances340a,340b,340nto access the cloud-based object storage348. Readers will appreciate that in other embodiments, however, all data that is stored by the cloud-based storage system318may be stored in the cloud-based object storage348, but less than all data that is stored by the cloud-based storage system318may be stored in at least one of the local storage330,334,338resources or block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. In such an example, various policies may be utilized to determine which subset of the data that is stored by the cloud-based storage system318should reside in both: 1) the cloud-based object storage348, and 2) at least one of the local storage330,334,338resources or block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n.
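The caching behavior described above amounts to a read path that prefers the local tier and falls back to the object store. A minimal sketch, again with hypothetical paths and bucket names:

    import os
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "cloud-based-storage-system-318"  # hypothetical bucket name
    LOCAL_STORE_DIR = "/mnt/local-ssd"         # hypothetical local-storage mount point

    def read_block(block_id: int) -> bytes:
        """Service a read from local storage when possible, else from S3."""
        path = os.path.join(LOCAL_STORE_DIR, f"block-{block_id}")
        if os.path.exists(path):                   # cache hit: serve locally
            with open(path, "rb") as f:
                return f.read()
        # Cache miss: fall back to the cloud-based object storage.
        data = s3.get_object(Bucket=BUCKET, Key=f"block-{block_id}")["Body"].read()
        with open(path, "wb") as f:                # repopulate the local cache
            f.write(data)
        return data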
As described above, when the cloud computing instances340a,340b,340nwith local storage330,334,338are embodied as EC2 instances, the cloud computing instances340a,340b,340nwith local storage330,334,338are only guaranteed to have a monthly uptime of 99.9% and data stored in the local instance store only persists during the lifetime of each cloud computing instance340a,340b,340nwith local storage330,334,338. As such, one or more modules of computer program instructions that are executing within the cloud-based storage system318(e.g., a monitoring module that is executing on its own EC2 instance) may be designed to handle the failure of one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338. In such an example, the monitoring module may handle the failure of one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338by creating one or more new cloud computing instances with local storage, retrieving data that was stored on the failed cloud computing instances340a,340b,340nfrom the cloud-based object storage348, and storing the data retrieved from the cloud-based object storage348in local storage on the newly created cloud computing instances. Readers will appreciate that many variants of this process may be implemented. Consider an example in which all cloud computing instances340a,340b,340nwith local storage330,334,338failed. In such an example, the monitoring module may create new cloud computing instances with local storage, where high-bandwidth instance types are selected that allow for the maximum data transfer rates between the newly created high-bandwidth cloud computing instances with local storage and the cloud-based object storage348. Readers will appreciate that instance types are selected that allow for the maximum data transfer rates between the new cloud computing instances and the cloud-based object storage348such that the new high-bandwidth cloud computing instances can be rehydrated with data from the cloud-based object storage348as quickly as possible. Once the new high-bandwidth cloud computing instances are rehydrated with data from the cloud-based object storage348, less expensive lower-bandwidth cloud computing instances may be created, data may be migrated to the less expensive lower-bandwidth cloud computing instances, and the high-bandwidth cloud computing instances may be terminated. Readers will appreciate that in some embodiments, the number of new cloud computing instances that are created may substantially exceed the number of cloud computing instances that are needed to locally store all of the data stored by the cloud-based storage system318. The number of new cloud computing instances that are created may substantially exceed the number of cloud computing instances that are needed to locally store all of the data stored by the cloud-based storage system318in order to more rapidly pull data from the cloud-based object storage348and into the new cloud computing instances, as each new cloud computing instance can (in parallel) retrieve some portion of the data stored by the cloud-based storage system318. In such embodiments, once the data stored by the cloud-based storage system318has been pulled into the newly created cloud computing instances, the data may be consolidated within a subset of the newly created cloud computing instances and those newly created cloud computing instances that are excessive may be terminated. Consider an example in which 1,000 cloud computing instances are needed in order to locally store all valid data that users of the cloud-based storage system318have written to the cloud-based storage system318. In such an example, assume that all 1,000 cloud computing instances fail. In such an example, the monitoring module may cause 100,000 cloud computing instances to be created, where each cloud computing instance is responsible for retrieving, from the cloud-based object storage348, a distinct 1/100,000th chunk of the valid data that users of the cloud-based storage system318have written to the cloud-based storage system318and locally storing the distinct chunk of the dataset that it retrieved.
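One way to sketch the parallel rehydration performed by each of those replacement instances is to partition the object keys with a stable hash, so that each instance pulls a distinct chunk of the dataset. The bucket name and the store_locally helper are hypothetical:

    import zlib
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "cloud-based-storage-system-318"  # hypothetical bucket name

    def rehydrate_partition(worker_id: int, num_workers: int) -> None:
        """Retrieve the subset of objects assigned to one replacement instance."""
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=BUCKET):
            for entry in page.get("Contents", []):
                key = entry["Key"]
                # Stable hash so every instance agrees on the partitioning.
                if zlib.crc32(key.encode()) % num_workers != worker_id:
                    continue  # another replacement instance owns this key
                data = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
                store_locally(key, data)  # hypothetical local-storage helper

    # For example, replacement instance 7 of 100,000 would run:
    # rehydrate_partition(worker_id=7, num_workers=100_000)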
In such an example, because each of the 100,000 cloud computing instances can retrieve data from the cloud-based object storage348in parallel, the caching layer may be restored 100 times faster as compared to an embodiment where the monitoring module only creates 1,000 replacement cloud computing instances. In such an example, over time the data that is stored locally in the 100,000 cloud computing instances could be consolidated into 1,000 cloud computing instances and the remaining 99,000 cloud computing instances could be terminated. Readers will appreciate that various performance aspects of the cloud-based storage system318may be monitored (e.g., by a monitoring module that is executing in an EC2 instance) such that the cloud-based storage system318can be scaled-up or scaled-out as needed. Consider an example in which the monitoring module monitors the performance of the cloud-based storage system318via communications with one or more of the cloud computing instances320,322that are each used to support the execution of a storage controller application324,326, via monitoring communications between cloud computing instances320,322,340a,340b,340n, via monitoring communications between cloud computing instances320,322,340a,340b,340nand the cloud-based object storage348, or in some other way. In such an example, assume that the monitoring module determines that the cloud computing instances320,322that are used to support the execution of a storage controller application324,326are undersized and not sufficiently servicing the I/O requests that are issued by users of the cloud-based storage system318. In such an example, the monitoring module may create a new, more powerful cloud computing instance (e.g., a cloud computing instance of a type that includes more processing power, more memory, etc.) that includes the storage controller application such that the new, more powerful cloud computing instance can begin operating as the primary controller. Likewise, if the monitoring module determines that the cloud computing instances320,322that are used to support the execution of a storage controller application324,326are oversized and that cost savings could be gained by switching to a smaller, less powerful cloud computing instance, the monitoring module may create a new, less powerful (and less expensive) cloud computing instance that includes the storage controller application such that the new, less powerful cloud computing instance can begin operating as the primary controller. Consider, as an additional example of dynamically sizing the cloud-based storage system318, an example in which the monitoring module determines that the utilization of the local storage that is collectively provided by the cloud computing instances340a,340b,340nhas reached a predetermined utilization threshold (e.g., 95%). In such an example, the monitoring module may create additional cloud computing instances with local storage to expand the pool of local storage that is offered by the cloud computing instances. Alternatively, the monitoring module may create one or more new cloud computing instances that have larger amounts of local storage than the already existing cloud computing instances340a,340b,340n, such that data stored in an already existing cloud computing instance340a,340b,340ncan be migrated to the one or more new cloud computing instances and the already existing cloud computing instance340a,340b,340ncan be terminated, thereby expanding the pool of local storage that is offered by the cloud computing instances.
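The scaling decisions described above could be condensed into a policy function along the following lines; the thresholds are purely illustrative and a real monitoring module would consider many more signals:

    def scaling_actions(controller_cpu_pct: float, local_util_pct: float) -> list:
        """Toy sizing policy for the monitoring module (illustrative thresholds)."""
        actions = []
        if controller_cpu_pct > 90.0:
            actions.append("promote a new, more powerful controller instance")
        elif controller_cpu_pct < 10.0:
            actions.append("promote a new, smaller and less expensive controller instance")
        if local_util_pct >= 95.0:  # the predetermined utilization threshold
            actions.append("add cloud computing instances with local storage")
        return actions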
Likewise, if the pool of local storage that is offered by the cloud computing instances is unnecessarily large, data can be consolidated and some cloud computing instances can be terminated. Readers will appreciate that the cloud-based storage system318may be sized up and down automatically by a monitoring module applying a predetermined set of rules that may be relatively simple or relatively complicated. In fact, the monitoring module may not only take into account the current state of the cloud-based storage system318, but the monitoring module may also apply predictive policies that are based on, for example, observed behavior (e.g., every night from 10 PM until 6 AM usage of the storage system is relatively light), predetermined fingerprints (e.g., every time a virtual desktop infrastructure adds 100 virtual desktops, the number of IOPS directed to the storage system increases by X), and so on. In such an example, the dynamic scaling of the cloud-based storage system318may be based on current performance metrics, predicted workloads, and many other factors, including combinations thereof. Readers will further appreciate that because the cloud-based storage system318may be dynamically scaled, the cloud-based storage system318may even operate in ways that a storage system of fixed capacity cannot. Consider the example of garbage collection. In a traditional storage system, the amount of storage is fixed. As such, at some point the storage system may be forced to perform garbage collection as the amount of available storage has become so constrained that the storage system is on the verge of running out of storage. In contrast, the cloud-based storage system318described here can always ‘add’ additional storage (e.g., by adding more cloud computing instances with local storage). Because the cloud-based storage system318described here can always ‘add’ additional storage, the cloud-based storage system318can make more intelligent decisions regarding when to perform garbage collection. For example, the cloud-based storage system318may implement a policy that garbage collection only be performed when the number of IOPS being serviced by the cloud-based storage system318falls below a certain level. In some embodiments, other system-level functions (e.g., deduplication, compression) may also be turned off and on in response to system load, given that the size of the cloud-based storage system318is not constrained in the same way that traditional storage systems are constrained.
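A policy of this sort reduces to a simple gate; a minimal sketch, with an illustrative IOPS threshold and the nightly quiet window mentioned above treated as a predictive hint:

    from datetime import datetime

    IOPS_THRESHOLD = 1_000.0  # illustrative level below which GC is allowed

    def should_run_garbage_collection(current_iops: float) -> bool:
        """Gate garbage collection on system load rather than on free space."""
        in_quiet_window = datetime.now().hour >= 22 or datetime.now().hour < 6
        return current_iops < IOPS_THRESHOLD or in_quiet_window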
Readers will appreciate that embodiments of the present disclosure resolve an issue with block-storage services offered by some cloud computing environments as some cloud computing environments only allow for one cloud computing instance to connect to a block-storage volume at a time. For example, in Amazon AWS, only a single EC2 instance may be connected to an EBS volume. Through the use of EC2 instances with local storage, embodiments of the present disclosure can offer multi-connect capabilities where multiple EC2 instances can connect to another EC2 instance with local storage (‘a drive instance’). In such embodiments, the drive instances may include software executing within the drive instance that allows the drive instance to support I/O directed to a particular volume from each connected EC2 instance. As such, some embodiments of the present disclosure may be embodied as multi-connect block storage services that may not include all of the components depicted inFIG.3C. In some embodiments, especially in embodiments where the cloud-based object storage348resources are embodied as Amazon S3, the cloud-based storage system318may include one or more modules (e.g., a module of computer program instructions executing on an EC2 instance) that are configured to ensure that when the local storage of a particular cloud computing instance is rehydrated with data from S3, the appropriate data is actually in S3. This issue arises largely because S3 implements an eventual consistency model where, when overwriting an existing object, reads of the object will eventually (but not necessarily immediately) become consistent and will eventually (but not necessarily immediately) return the updated version of the object. To address this issue, in some embodiments of the present disclosure, objects in S3 are never overwritten. Instead, a traditional ‘overwrite’ would result in the creation of a new object (that includes the updated version of the data) and the eventual deletion of the old object (that includes the previous version of the data). In some embodiments of the present disclosure, as part of an attempt to never (or almost never) overwrite an object, when data is written to S3 the resultant object may be tagged with a sequence number. In some embodiments, these sequence numbers may be persisted elsewhere (e.g., in a database) such that at any point in time, the sequence number associated with the most up-to-date version of some piece of data can be known. In such a way, a determination can be made as to whether S3 has the most recent version of some piece of data by merely reading the sequence number associated with an object—and without actually reading the data from S3. The ability to make this determination may be particularly important when a cloud computing instance with local storage crashes, as it would be undesirable to rehydrate the local storage of a replacement cloud computing instance with out-of-date data. In fact, because the cloud-based storage system318does not need to access the data to verify its validity, the data can stay encrypted and access charges can be avoided.
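A minimal sketch of the sequence-number scheme, with an in-memory dict standing in for the external database and a hypothetical bucket name; the key naming convention is illustrative:

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "cloud-based-storage-system-318"  # hypothetical bucket name
    sequence_db = {}  # stand-in for the database that persists sequence numbers

    def write_versioned(key: str, data: bytes) -> None:
        """Never overwrite: each update creates a new object tagged with the
        next sequence number, which is also persisted elsewhere."""
        seq = sequence_db.get(key, 0) + 1
        s3.put_object(Bucket=BUCKET, Key=f"{key}.seq-{seq}", Body=data)
        sequence_db[key] = seq  # record the most up-to-date sequence number

    def s3_has_current_version(key: str) -> bool:
        """Decide whether S3 holds the newest version without reading the data,
        by checking for an object carrying the recorded sequence number."""
        seq = sequence_db.get(key, 0)
        resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=f"{key}.seq-{seq}")
        return resp.get("KeyCount", 0) > 0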
The storage systems described above may carry out intelligent data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe. For example, the storage systems described above may be configured to examine each backup to avoid restoring the storage system to an undesirable state. Consider an example in which malware infects the storage system. In such an example, the storage system may include software resources314that can scan each backup to identify backups that were captured before the malware infected the storage system and those backups that were captured after the malware infected the storage system. In such an example, the storage system may restore itself from a backup that does not include the malware—or at least not restore the portions of a backup that contained the malware. In such an example, the storage system may include software resources314that can scan each backup to identify the presence of malware (or a virus, or some other undesirable element), for example, by identifying write operations that were serviced by the storage system and originated from a network subnet that is suspected to have delivered the malware, by identifying write operations that were serviced by the storage system and originated from a user that is suspected to have delivered the malware, by identifying write operations that were serviced by the storage system and examining the content of the write operation against fingerprints of the malware, and in many other ways. Readers will further appreciate that the backups (often in the form of one or more snapshots) may also be utilized to perform rapid recovery of the storage system. Consider an example in which the storage system is infected with ransomware that locks users out of the storage system. In such an example, software resources314within the storage system may be configured to detect the presence of ransomware and may be further configured to restore the storage system to a point-in-time, using the retained backups, prior to the point-in-time at which the ransomware infected the storage system. In such an example, the presence of ransomware may be explicitly detected through the use of software tools utilized by the system, through the use of a key (e.g., a USB drive) that is inserted into the storage system, or in a similar way. Likewise, the presence of ransomware may be inferred in response to system activity meeting a predetermined fingerprint such as, for example, no reads or writes coming into the system for a predetermined period of time.
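The activity-fingerprint inference and the selection of a clean restore point might be sketched as follows; the quiet-window length is illustrative and the backup catalog format is hypothetical:

    from datetime import datetime, timedelta

    QUIET_LIMIT = timedelta(hours=6)  # illustrative 'no I/O' fingerprint window

    def ransomware_suspected(last_io_time: datetime) -> bool:
        """Infer ransomware from a predetermined activity fingerprint: no reads
        or writes coming into the system for a prolonged period."""
        return datetime.now() - last_io_time > QUIET_LIMIT

    def latest_clean_backup(backups, infection_time: datetime):
        """Pick the most recent retained backup captured before the infection.

        `backups` is assumed to be an iterable of (timestamp, backup_id) pairs."""
        clean = [b for b in backups if b[0] < infection_time]
        return max(clean, default=None)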
Readers will appreciate that the various components depicted inFIG.3Bmay be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system306while also reducing various costs associated with the establishment and operation of the storage system306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways. Readers will appreciate that the storage system306depicted inFIG.3Bmay be useful for supporting various types of software applications. For example, the storage system306may be useful in supporting artificial intelligence (‘AI’) applications, database applications, DevOps projects, electronic design automation tools, event-driven software applications, high performance computing applications, simulation applications, high-speed data capture and analysis applications, machine learning applications, media production applications, media serving applications, picture archiving and communication systems (‘PACS’) applications, software development applications, virtual reality applications, augmented reality applications, and many other types of applications by providing storage resources to such applications. The storage systems described above may operate to support a wide variety of applications. In view of the fact that the storage systems include compute resources, storage resources, and a wide variety of other resources, the storage systems may be well suited to support applications that are resource intensive such as, for example, AI applications. Such AI applications may enable devices to perceive their environment and take actions that maximize their chance of success at some goal. Examples of such AI applications can include IBM Watson, Microsoft Oxford, Google DeepMind, Baidu Minwa, and others. The storage systems described above may also be well suited to support other types of applications that are resource intensive such as, for example, machine learning applications. Machine learning applications may perform various types of data analysis to automate analytical model building. Using algorithms that iteratively learn from data, machine learning applications can enable computers to learn without being explicitly programmed. One particular area of machine learning is referred to as reinforcement learning, which involves taking suitable actions to maximize reward in a particular situation. Reinforcement learning may be employed to find the best possible behavior or path that a particular software application or machine should take in a specific situation. Reinforcement learning differs from other areas of machine learning (e.g., supervised learning, unsupervised learning) in that correct input/output pairs need not be presented for reinforcement learning and sub-optimal actions need not be explicitly corrected. In addition to the resources already described, the storage systems described above may also include graphics processing units (‘GPUs’), occasionally referred to as visual processing units (‘VPUs’). Such GPUs may be embodied as specialized electronic circuits that rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. Such GPUs may be included within any of the computing devices that are part of the storage systems described above, including as one of many individually scalable components of a storage system, where other examples of individually scalable components of such a storage system can include storage components, memory components, compute components (e.g., CPUs, FPGAs, ASICs), networking components, software components, and others. In addition to GPUs, the storage systems described above may also include neural network processors (‘NNPs’) for use in various aspects of neural network processing. Such NNPs may be used in place of (or in addition to) GPUs and may also be independently scalable. As described above, the storage systems described herein may be configured to support artificial intelligence applications, machine learning applications, big data analytics applications, and many other types of applications. The rapid growth in these sorts of applications is being driven by three technologies: deep learning (DL), GPU processors, and Big Data. Deep learning is a computing model that makes use of massively parallel neural networks inspired by the human brain. Instead of experts handcrafting software, a deep learning model writes its own software by learning from lots of examples. A GPU is a modern processor with thousands of cores, well-suited to run algorithms that loosely represent the parallel nature of the human brain. Advances in deep neural networks have ignited a new wave of algorithms and tools for data scientists to tap into their data with artificial intelligence (AI).
With improved algorithms, larger data sets, and various frameworks (including open-source software libraries for machine learning across a range of tasks), data scientists are tackling new use cases like autonomous driving vehicles, natural language processing and understanding, computer vision, machine reasoning, strong AI, and many others. Applications of such techniques may include: machine and vehicular object detection, identification and avoidance; visual recognition, classification and tagging; algorithmic financial trading strategy performance management; simultaneous localization and mapping; predictive maintenance of high-value machinery; prevention against cyber security threats; expertise automation; image recognition and classification; question answering; robotics; text analytics (extraction, classification) and text generation and translation; and many others. Applications of AI techniques have materialized in a wide array of products including, for example, Amazon Echo's speech recognition technology that allows users to talk to their machines, Google Translate™, which allows for machine-based language translation, Spotify's Discover Weekly that provides recommendations on new songs and artists that a user may like based on the user's usage and traffic analysis, Quill's text generation offering that takes structured data and turns it into narrative stories, chatbots that provide real-time, contextually specific answers to questions in a dialog format, and many others. Furthermore, AI may impact a wide variety of industries and sectors. For example, AI solutions may be used in healthcare to take clinical notes, patient files, research data, and other inputs to generate potential treatment options for doctors to explore. Likewise, AI solutions may be used by retailers to personalize consumer recommendations based on a person's digital footprint of behaviors, profile data, or other data. Training deep neural networks, however, requires both high quality input data and large amounts of computation. GPUs are massively parallel processors capable of operating on large amounts of data simultaneously. When GPUs are combined into a multi-GPU cluster, a high throughput pipeline may be required to feed input data from storage to the compute engines. Deep learning is more than just constructing and training models. There also exists an entire data pipeline that must be designed for the scale, iteration, and experimentation necessary for a data science team to succeed. Data is the heart of modern AI and deep learning algorithms. Before training can begin, one problem that must be addressed revolves around collecting the labeled data that is crucial for training an accurate AI model. A full-scale AI deployment may be required to continuously collect, clean, transform, label, and store large amounts of data. Adding additional high quality data points directly translates to more accurate models and better insights.
Data samples may undergo a series of processing steps including, but not limited to: 1) ingesting the data from an external source into the training system and storing the data in raw form, 2) cleaning and transforming the data in a format convenient for training, including linking data samples to the appropriate label, 3) exploring parameters and models, quickly testing with a smaller dataset, and iterating to converge on the most promising models to push into the production cluster, 4) executing training phases to select random batches of input data, including both new and older samples, and feeding those into production GPU servers for computation to update model parameters, and 5) evaluating, including using a holdout portion of the data that was not used in training in order to evaluate model accuracy on the holdout data. This lifecycle may apply for any type of parallelized machine learning, not just neural networks or deep learning. For example, standard machine learning frameworks may rely on CPUs instead of GPUs but the data ingest and training workflows may be the same. Readers will appreciate that a single shared storage data hub creates a coordination point throughout the lifecycle without the need for extra data copies among the ingest, preprocessing, and training stages. Rarely is the ingested data used for only one purpose, and shared storage gives the flexibility to train multiple different models or apply traditional analytics to the data. Readers will appreciate that each stage in the AI data pipeline may have varying requirements from the data hub (e.g., the storage system or collection of storage systems). Scale-out storage systems must deliver uncompromising performance for all manner of access types and patterns—from small, metadata-heavy to large files, from random to sequential access patterns, and from low to high concurrency. The storage systems described above may serve as an ideal AI data hub as the systems may service unstructured workloads. In the first stage, data is ideally ingested and stored on to the same data hub that following stages will use, in order to avoid excess data copying. The next two steps can be done on a standard compute server that optionally includes a GPU, and then in the fourth stage, full training production jobs are run on powerful GPU-accelerated servers. Often, there is a production pipeline alongside an experimental pipeline operating on the same dataset. Further, the GPU-accelerated servers can be used independently for different models or joined together to train on one larger model, even spanning multiple systems for distributed training. If the shared storage tier is slow, then data must be copied to local storage for each phase, resulting in wasted time staging data onto different servers. The ideal data hub for the AI training pipeline delivers performance similar to data stored locally on the server node while also having the simplicity and performance to enable all pipeline stages to operate concurrently. A data scientist works to improve the usefulness of the trained model through a wide variety of approaches: more data, better data, smarter training, and deeper models. In many cases, there will be teams of data scientists sharing the same datasets and working in parallel to produce new and improved training models, often working within these phases concurrently on the same shared datasets.
Multiple, concurrent workloads of data processing, experimentation, and full-scale training layer the demands of multiple access patterns on the storage tier. In other words, storage cannot just satisfy large file reads, but must contend with a mix of large and small file reads and writes. Finally, with multiple data scientists exploring datasets and models, it may be critical to store data in its native format to provide flexibility for each user to transform, clean, and use the data in a unique way. The storage systems described above may provide a natural shared storage home for the dataset, with data protection redundancy (e.g., by using RAID6) and the performance necessary to be a common access point for multiple developers and multiple experiments. Using the storage systems described above may avoid the need to carefully copy subsets of the data for local work, saving both engineering time and the use time of GPU-accelerated servers. These copies become a constant and growing tax as the raw data set and desired transformations constantly update and change. Readers will appreciate that a fundamental reason why deep learning has seen a surge in success is the continued improvement of models with larger data set sizes. In contrast, classical machine learning algorithms, like logistic regression, stop improving in accuracy at smaller data set sizes. As such, the separation of compute resources and storage resources may also allow independent scaling of each tier, avoiding many of the complexities inherent in managing both together. As the data set size grows or new data sets are considered, a scale-out storage system must be able to expand easily. Similarly, if more concurrent training is required, additional GPUs or other compute resources can be added without concern for their internal storage. Furthermore, the storage systems described above may make building, operating, and growing an AI system easier due to the random read bandwidth provided by the storage systems, the ability of the storage systems to randomly read small files (50 KB) at high rates (meaning that no extra effort is required to aggregate individual data points to make larger, storage-friendly files), the ability of the storage systems to scale capacity and performance as either the dataset grows or the throughput requirements grow, the ability of the storage systems to support files or objects, the ability of the storage systems to tune performance for large or small files (i.e., no need for the user to provision filesystems), the ability of the storage systems to support non-disruptive upgrades of hardware and software even during production model training, and for many other reasons. Small file performance of the storage tier may be critical as many types of inputs, including text, audio, or images will be natively stored as small files. If the storage tier does not handle small files well, an extra step will be required to pre-process and group samples into larger files. Storage that is built on top of spinning disks and that relies on SSDs as a caching tier may fall short of the performance needed. Because training with random input batches results in more accurate models, the entire data set must be accessible with full performance. SSD caches only provide high performance for a small subset of the data and will be ineffective at hiding the latency of spinning drives.
Although the preceding paragraphs discuss deep learning applications, readers will appreciate that the storage systems described herein may also be part of a distributed deep learning (‘DDL’) platform to support the execution of DDL algorithms. Distributed deep learning can be used to significantly accelerate deep learning with distributed computing on GPUs (or other forms of accelerators or computer program instruction executors), such that parallelism can be achieved. In addition, the output of training machine learning and deep learning models, such as a fully trained machine learning model, may be used for a variety of purposes and in conjunction with other tools. For example, trained machine learning models may be used in conjunction with tools like Core ML to integrate a broad variety of machine learning model types into an application. In fact, trained models may be run through Core ML converter tools and inserted into a custom application that can be deployed on compatible devices. The storage systems described above may also be paired with other technologies such as TensorFlow, an open-source software library for dataflow programming across a range of tasks that may be used for machine learning applications such as neural networks, to facilitate the development of such machine learning models, applications, and so on. Readers will further appreciate that the systems described above may be deployed in a variety of ways to support the democratization of AI, as AI becomes more available for mass consumption. The democratization of AI may include, for example, the ability to offer AI as a Platform-as-a-Service, the growth of artificial general intelligence offerings, the proliferation of Autonomous level 4 and Autonomous level 5 vehicles, the availability of autonomous mobile robots, the development of conversational AI platforms, and many others. For example, the systems described above may be deployed in cloud environments, edge environments, or other environments that are useful in supporting the democratization of AI. As part of the democratization of AI, a movement may occur from narrow AI that consists of highly scoped machine learning solutions that target a particular task to artificial general intelligence where the use of machine learning is expanded to handle a broad range of use cases that could essentially perform any intelligent task that a human could perform and could learn dynamically, much like a human. The storage systems described above may also be used in a neuromorphic computing environment. Neuromorphic computing is a form of computing that mimics brain cells. To support neuromorphic computing, an architecture of interconnected “neurons” replaces traditional computing models with low-powered signals that go directly between neurons for more efficient computation. Neuromorphic computing may make use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system, as well as analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems for perception, motor control, or multisensory integration. Readers will appreciate that the storage systems described above may be configured to support the storage or use of (among other types of data) blockchains. Such blockchains may be embodied as a continuously growing list of records, called blocks, which are linked and secured using cryptography.
Each block in a blockchain may contain a hash pointer as a link to a previous block, a timestamp, transaction data, and so on. Blockchains may be designed to be resistant to modification of the data and can serve as an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way. This makes blockchains potentially suitable for the recording of events, medical records, and other records management activities, such as identity management, transaction processing, and others. In addition to supporting the storage and use of blockchain technologies, the storage systems described above may also support the storage and use of derivative items such as, for example, open source blockchains and related tools that are part of IBM's Hyperledger project, permissioned blockchains in which a certain number of trusted parties are allowed to access the blockchain, blockchain products that enable developers to build their own distributed ledger projects, and others. Readers will appreciate that blockchain technologies may impact a wide variety of industries and sectors. For example, blockchain technologies may be used in real estate transactions as blockchain-based contracts whose use can eliminate the need for third parties and enable self-executing actions when conditions are met. Likewise, universal health records can be created by aggregating and placing a person's health history onto a blockchain ledger for any healthcare provider, or permissioned health care providers, to access and update. Readers will appreciate that the usage of blockchains is not limited to financial transactions, contracts, and the like. In fact, blockchains may be leveraged to enable the decentralized aggregation, ordering, timestamping and archiving of any type of information, including structured data, correspondence, documentation, or other data. Through the usage of blockchains, participants can provably and permanently agree on exactly what data was entered, when and by whom, without relying on a trusted intermediary. For example, SAP's recently launched blockchain platform, which supports MultiChain and Hyperledger Fabric, targets a broad range of supply chain and other non-financial applications. One way to use a blockchain for recording data is to embed each piece of data directly inside a transaction. Every blockchain transaction may be digitally signed by one or more parties, replicated to a plurality of nodes, ordered and timestamped by the chain's consensus algorithm, and stored permanently in a tamper-proof way. Any data within the transaction will therefore be stored identically but independently by every node, along with a proof of who wrote it and when. The chain's users are able to retrieve this information at any future time. This type of storage may be referred to as on-chain storage. On-chain storage may not be particularly practical, however, when attempting to store a very large dataset. As such, in accordance with embodiments of the present disclosure, blockchains and the storage systems described herein may be leveraged to support on-chain storage of data as well as off-chain storage of data. Off-chain storage of data can be implemented in a variety of ways and can occur when the data itself is not stored within the blockchain. For example, in one embodiment, a hash function may be utilized and the data itself may be fed into the hash function to generate a hash value.
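A hash-based commitment of this kind is straightforward to sketch; SHA-256 is chosen here only as a representative hash function:

    import hashlib

    def commit_off_chain(data: bytes) -> str:
        # Only this hash is embedded in a (digitally signed) transaction; the
        # data itself is stored off-chain, e.g., on the storage systems above.
        return hashlib.sha256(data).hexdigest()

    def verify_off_chain(data: bytes, on_chain_hash: str) -> bool:
        # Anyone who retrieves the off-chain data can confirm it against the
        # on-chain commitment; the hash alone cannot reproduce the data.
        return hashlib.sha256(data).hexdigest() == on_chain_hash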
In such an example, the hashes of large pieces of data may be embedded within transactions, instead of the data itself. Each hash may serve as a commitment to its input data, with the data itself being stored outside of the blockchain. Readers will appreciate that any blockchain participant that needs an off-chain piece of data cannot reproduce the data from its hash, but if the data can be retrieved in some other way, then the on-chain hash serves to confirm who created it and when. Just like regular on-chain data, the hash may be embedded inside a digitally signed transaction, which was included in the chain by consensus. Readers will appreciate that, in other embodiments, alternatives to blockchains may be used to facilitate the decentralized storage of information. For example, one alternative to a blockchain that may be used is a blockweave. While conventional blockchains store every transaction to achieve validation, a blockweave permits secure decentralization without the usage of the entire chain, thereby enabling low cost on-chain storage of data. Such blockweaves may utilize a consensus mechanism that is based on proof of access (PoA) and proof of work (PoW). While typical PoW systems only depend on the previous block in order to generate each successive block, the PoA algorithm may incorporate data from a randomly chosen previous block. Combined with the blockweave data structure, miners do not need to store all blocks (forming a blockchain), but rather can store any previous blocks forming a weave of blocks (a blockweave). This enables increased levels of scalability and speed at low cost, and reduces the cost of data storage, in part because miners need not store all blocks. It also results in a substantial reduction in the amount of electricity that is consumed during the mining process because, as the network expands, a blockweave demands less and less hashing power for consensus as data is added to the system. Furthermore, blockweaves may be deployed on a decentralized storage network in which incentives are created to encourage rapid data sharing. Such decentralized storage networks may also make use of blockshadowing techniques, where nodes only send a minimal block “shadow” to other nodes that allows peers to reconstruct a full block, instead of transmitting the full block itself. The storage systems described above may, either alone or in combination with other computing devices, be used to support in-memory computing applications. In-memory computing involves the storage of information in RAM that is distributed across a cluster of computers. In-memory computing helps business customers, including retailers, banks and utilities, to quickly detect patterns, analyze massive data volumes on the fly, and perform their operations quickly. Readers will appreciate that the storage systems described above, especially those that are configurable with customizable amounts of processing resources, storage resources, and memory resources (e.g., those systems in which blades contain configurable amounts of each type of resource), may be configured in a way so as to provide an infrastructure that can support in-memory computing.
Likewise, the storage systems described above may include component parts (e.g., NVDIMMs, 3D crosspoint storage that provide fast random access memory that is persistent) that can actually provide for an improved in-memory computing environment as compared to in-memory computing environments that rely on RAM distributed across dedicated servers. In some embodiments, the storage systems described above may be configured to operate as a hybrid in-memory computing environment that includes a universal interface to all storage media (e.g., RAM, flash storage, 3D crosspoint storage). In such embodiments, users may have no knowledge regarding the details of where their data is stored but they can still use the same full, unified API to address data. In such embodiments, the storage system may (in the background) move data to the fastest layer available—including intelligently placing the data in dependence upon various characteristics of the data or in dependence upon some other heuristic. In such an example, the storage systems may even make use of existing products such as Apache Ignite and GridGain to move data between the various storage layers, or the storage systems may make use of custom software to move data between the various storage layers. The storage systems described herein may implement various optimizations to improve the performance of in-memory computing such as, for example, having computations occur as close to the data as possible.
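A placement heuristic of the kind alluded to above might look like the following toy sketch; the tier names and thresholds are illustrative only:

    def choose_tier(accesses_per_hour: float, size_bytes: int) -> str:
        """Place data behind the unified API in dependence upon its characteristics."""
        if accesses_per_hour > 100.0:   # hot data belongs in RAM
            return "ram"
        if size_bytes < (1 << 20):      # small, warm data suits persistent memory
            return "3d-crosspoint"
        return "flash"                  # everything else lands on flash storage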
Readers will further appreciate that in some embodiments, the storage systems described above may be paired with other resources to support the applications described above. For example, one infrastructure could include primary compute in the form of servers and workstations which specialize in using general-purpose computing on graphics processing units (‘GPGPU’) to accelerate deep learning applications that are interconnected into a computation engine to train parameters for deep neural networks. Each system may have Ethernet external connectivity, InfiniBand external connectivity, some other form of external connectivity, or some combination thereof. In such an example, the GPUs can be grouped for a single large training or used independently to train multiple models. The infrastructure could also include a storage system such as those described above to provide, for example, a scale-out all-flash file or object store through which data can be accessed via high-performance protocols such as NFS, S3, and so on. The infrastructure can also include, for example, redundant top-of-rack Ethernet switches connected to storage and compute via ports in MLAG port channels for redundancy. The infrastructure could also include additional compute in the form of whitebox servers, optionally with GPUs, for data ingestion, pre-processing, and model debugging. Readers will appreciate that additional infrastructures are also possible. Readers will appreciate that the systems described above may be better suited for the applications described above relative to other systems that may include, for example, a distributed direct-attached storage (DDAS) solution deployed in server nodes. Such DDAS solutions may be built for handling large, sequential accesses but may be less able to handle small, random accesses. Readers will further appreciate that the storage systems described above may be utilized to provide a platform for the applications described above that is preferable to the utilization of cloud-based resources as the storage systems may be included in an on-site or in-house infrastructure that is more secure, more locally and internally managed, more robust in feature sets and performance, or otherwise preferable to the utilization of cloud-based resources as part of a platform to support the applications described above. For example, services built on platforms such as IBM's Watson may require a business enterprise to distribute individual user information, such as financial transaction information or identifiable patient records, to other institutions. As such, cloud-based offerings of AI as a service may be less desirable than internally managed and offered AI as a service that is supported by storage systems such as the storage systems described above, for a wide array of technical reasons as well as for various business reasons. Readers will appreciate that the storage systems described above, either alone or in coordination with other computing machinery, may be configured to support other AI related tools. For example, the storage systems may make use of tools like ONNX or other open neural network exchange formats that make it easier to transfer models written in different AI frameworks. Likewise, the storage systems may be configured to support tools like Amazon's Gluon that allow developers to prototype, build, and train deep learning models. In fact, the storage systems described above may be part of a larger platform, such as IBM™ Cloud Private for Data, that includes integrated data science, data engineering and application building services. Such platforms may seamlessly collect, organize, secure, and analyze data across an enterprise, as well as simplify hybrid data management, unified data governance and integration, data science and business analytics with a single solution. Readers will further appreciate that the storage systems described above may also be deployed as an edge solution. Such an edge solution may be in place to optimize cloud computing systems by performing data processing at the edge of the network, near the source of the data. Edge computing can push applications, data and computing power (i.e., services) away from centralized points to the logical extremes of a network. Through the use of edge solutions such as the storage systems described above, computational tasks may be performed using the compute resources provided by such storage systems, data may be stored using the storage resources of the storage system, and cloud-based services may be accessed through the use of various resources of the storage system (including networking resources). By performing computational tasks on the edge solution, storing data on the edge solution, and generally making use of the edge solution, the consumption of expensive cloud-based resources may be avoided and, in fact, performance improvements may be experienced relative to a heavier reliance on cloud-based resources. While many tasks may benefit from the utilization of an edge solution, some particular uses may be especially suited for deployment in such an environment. For example, devices like drones, autonomous cars, robots, and others may require extremely rapid processing—so fast, in fact, that sending data up to a cloud environment and back to receive data processing support may simply be too slow.
Likewise, machines like locomotives and gas turbines that generate large amounts of information through the use of a wide array of data-generating sensors may benefit from the rapid data processing capabilities of an edge solution. As an additional example, some IoT devices such as connected video cameras may not be well-suited for the utilization of cloud-based resources as it may be impractical (whether from a privacy perspective, a security perspective, or a financial perspective) to send the data to the cloud simply because of the pure volume of data that is involved. As such, many tasks that rely on data processing, storage, or communications may be better served by platforms that include edge solutions such as the storage systems described above. Consider a specific example of inventory management in a warehouse, distribution center, or similar location. A large inventory, warehousing, shipping, order-fulfillment, manufacturing or other operation has a large amount of inventory on inventory shelves, and high resolution digital cameras that produce a firehose of large data. All of this data may be taken into an image processing system, which may reduce the amount of data to a firehose of small data. All of the small data may be stored on-premises in storage. The on-premises storage, at the edge of the facility, may be coupled to the cloud, for external reports, real-time control and cloud storage. Inventory management may be performed with the results of the image processing, so that inventory can be tracked on the shelves and restocked, moved, shipped, modified with new products, or discontinued/obsolescent products deleted, etc. The above scenario is a prime candidate for an embodiment of the configurable processing and storage systems described above. A combination of compute-only blades and offload blades suited for the image processing, perhaps with deep learning on offload-FPGA or offload-custom blade(s), could take in the firehose of large data from all of the digital cameras, and produce the firehose of small data. All of the small data could then be stored by storage nodes, operating with storage units in whichever combination of types of storage blades best handles the data flow. This is an example of storage and function acceleration and integration. Depending on external communication needs with the cloud, and external processing in the cloud, and depending on reliability of network connections and cloud resources, the system could be sized for storage and compute management with bursty workloads and variable connectivity reliability. Also, depending on other inventory management aspects, the system could be configured for scheduling and resource management in a hybrid edge/cloud environment. The storage systems described above may, alone or in combination with other computing resources, serve as a network edge platform that combines compute resources, storage resources, networking resources, cloud technologies and network virtualization technologies, and so on. As part of the network, the edge may take on characteristics similar to other network facilities, from the customer premise and backhaul aggregation facilities to Points of Presence (PoPs) and regional data centers. Readers will appreciate that network workloads, such as Virtual Network Functions (VNFs) and others, will reside on the network edge platform.
Enabled by a combination of containers and virtual machines, the network edge platform may rely on controllers and schedulers that are no longer geographically co-located with the data processing resources. The functions, as microservices, may split into control planes, user and data planes, or even state machines, allowing for independent optimization and scaling techniques to be applied. Such user and data planes may be enabled through an increased use of accelerators, both those residing in server platforms, such as FPGAs and Smart NICs, and through SDN-enabled merchant silicon and programmable ASICs. The storage systems described above may also be optimized for use in big data analytics. Big data analytics may be generally described as the process of examining large and varied data sets to uncover hidden patterns, unknown correlations, market trends, customer preferences and other useful information that can help organizations make more-informed business decisions. Big data analytics applications enable data scientists, predictive modelers, statisticians and other analytics professionals to analyze growing volumes of structured transaction data, plus other forms of data that are often left untapped by conventional business intelligence (BI) and analytics programs. As part of that process, semi-structured and unstructured data such as, for example, internet clickstream data, web server logs, social media content, text from customer emails and survey responses, mobile-phone call-detail records, IoT sensor data, and other data may be converted to a structured form. Big data analytics is a form of advanced analytics, which involves complex applications with elements such as predictive models, statistical algorithms and what-if analyses powered by high-performance analytics systems. The storage systems described above may also support (including implementing as a system interface) applications that perform tasks in response to human speech. For example, the storage systems may support the execution of intelligent personal assistant applications such as, for example, Amazon's Alexa, Apple Siri, Google Voice, Samsung Bixby, Microsoft Cortana, and others. While the examples described in the previous sentence make use of voice as input, the storage systems described above may also support chatbots, talkbots, chatterbots, artificial conversational entities, or other applications that are configured to conduct a conversation via auditory or textual methods. Likewise, the storage system may actually execute such an application to enable a user such as a system administrator to interact with the storage system via speech. Such applications are generally capable of voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic, and other real-time information, such as news, although in embodiments in accordance with the present disclosure, such applications may be utilized as interfaces to various system management operations. The storage systems described above may also implement AI platforms for delivering on the vision of self-driving storage. Such AI platforms may be configured to deliver global predictive intelligence by collecting and analyzing large amounts of storage system telemetry data points to enable effortless management, analytics and support.
In fact, such storage systems may be capable of predicting both capacity and performance, as well as generating intelligent advice on workload deployment, interaction and optimization. Such AI platforms may be configured to scan all incoming storage system telemetry data against a library of issue fingerprints to predict and resolve incidents in real-time, before they impact customer environments, and to capture hundreds of variables related to performance that are used to forecast performance load. The storage systems described above may support the serialized or simultaneous execution of artificial intelligence applications, machine learning applications, data analytics applications, data transformations, and other tasks that collectively may form an AI ladder. Such an AI ladder may effectively be formed by combining such elements to form a complete data science pipeline, where dependencies exist between elements of the AI ladder. For example, AI may require that some form of machine learning has taken place, machine learning may require that some form of analytics has taken place, analytics may require that some form of data and information architecting has taken place, and so on. As such, each element may be viewed as a rung in an AI ladder that collectively can form a complete and sophisticated AI solution. The storage systems described above may also, either alone or in combination with other computing environments, be used to deliver an AI everywhere experience where AI permeates wide and expansive aspects of business and life. For example, AI may play an important role in the delivery of deep learning solutions, deep reinforcement learning solutions, artificial general intelligence solutions, autonomous vehicles, cognitive computing solutions, commercial UAVs or drones, conversational user interfaces, enterprise taxonomies, ontology management solutions, machine learning solutions, smart dust, smart robots, smart workplaces, and many others. The storage systems described above may also, either alone or in combination with other computing environments, be used to deliver a wide range of transparently immersive experiences where technology can introduce transparency between people, businesses, and things. Such transparently immersive experiences may be delivered as augmented reality technologies, connected homes, virtual reality technologies, brain-computer interfaces, human augmentation technologies, nanotube electronics, volumetric displays, 4D printing technologies, or others. The storage systems described above may also, either alone or in combination with other computing environments, be used to support a wide variety of digital platforms. Such digital platforms can include, for example, 5G wireless systems and platforms, digital twin platforms, edge computing platforms, IoT platforms, quantum computing platforms, serverless PaaS, software-defined security, neuromorphic computing platforms, and so on. Readers will appreciate that some transparently immersive experiences may involve the use of digital twins of various “things” such as people, places, processes, systems, and so on. Such digital twins and other immersive technologies can alter the way that humans interact with technology, as conversational platforms, augmented reality, virtual reality and mixed reality provide a more natural and immersive interaction with the digital world.
In fact, digital twins may be linked with the real-world, perhaps even in real-time, to understand the state of a thing or system, respond to changes, and so on. Because digital twins consolidate massive amounts of information on individual assets and groups of assets (even possibly providing control of those assets), digital twins may communicate with each other to form digital factory models of multiple linked digital twins. The storage systems described above may also be part of a multi-cloud environment in which multiple cloud computing and storage services are deployed in a single heterogeneous architecture. In order to facilitate the operation of such a multi-cloud environment, DevOps tools may be deployed to enable orchestration across clouds. Likewise, continuous development and continuous integration tools may be deployed to standardize processes around continuous integration and delivery, new feature rollout and provisioning cloud workloads. By standardizing these processes, a multi-cloud strategy may be implemented that enables the utilization of the best provider for each workload. Furthermore, application monitoring and visibility tools may be deployed to move application workloads around different clouds, identify performance issues, and perform other tasks. In addition, security and compliance tools may be deployed to ensure compliance with security requirements, government regulations, and so on. Such a multi-cloud environment may also include tools for application delivery and smart workload management to ensure efficient application delivery and help direct workloads across the distributed and heterogeneous infrastructure, as well as tools that ease the deployment and maintenance of packaged and custom applications in the cloud and enable portability amongst clouds. The multi-cloud environment may similarly include tools for data portability. The storage systems described above may be used as a part of a platform to enable the use of crypto-anchors that may be used to authenticate a product's origins and contents to ensure that it matches a blockchain record associated with the product. Such crypto-anchors may take many forms including, for example, as edible ink, as a mobile sensor, as a microchip, and others. Similarly, as part of a suite of tools to secure data stored on the storage system, the storage systems described above may implement various encryption technologies and schemes, including lattice cryptography. Lattice cryptography can involve constructions of cryptographic primitives that involve lattices, either in the construction itself or in the security proof. Unlike public-key schemes such as the RSA, Diffie-Hellman or Elliptic-Curve cryptosystems, which are easily attacked by a quantum computer, some lattice-based constructions appear to be resistant to attack by both classical and quantum computers. A quantum computer is a device that performs quantum computing. Quantum computing is computing using quantum-mechanical phenomena, such as superposition and entanglement. Quantum computers differ from traditional computers that are based on transistors, as such traditional computers require that data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1). In contrast to traditional computers, quantum computers use quantum bits, which can be in superpositions of states.
A quantum computer maintains a sequence of qubits, where a single qubit can represent a one, a zero, or any quantum superposition of those two qubit states. A pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8 states. A quantum computer with n qubits can generally be in an arbitrary superposition of up to 2^n different states simultaneously, whereas a traditional computer can only be in one of these states at any one time. A quantum Turing machine is a theoretical model of such a computer. The storage systems described above may also be paired with FPGA-accelerated servers as part of a larger AI or ML infrastructure. Such FPGA-accelerated servers may reside near the storage systems described above (e.g., in the same data center) or even be incorporated into an appliance that includes one or more storage systems, one or more FPGA-accelerated servers, networking infrastructure that supports communications between the one or more storage systems and the one or more FPGA-accelerated servers, as well as other hardware and software components. Alternatively, FPGA-accelerated servers may reside within a cloud computing environment that may be used to perform compute-related tasks for AI and ML jobs. Any of the embodiments described above may be used to collectively serve as an FPGA-based AI or ML platform. Readers will appreciate that, in some embodiments of the FPGA-based AI or ML platform, the FPGAs that are contained within the FPGA-accelerated servers may be reconfigured for different types of ML models (e.g., LSTMs, CNNs, GRUs). The ability to reconfigure the FPGAs that are contained within the FPGA-accelerated servers may enable the acceleration of a ML or AI application based on the optimal numerical precision and memory model being used. Readers will appreciate that by treating the collection of FPGA-accelerated servers as a pool of FPGAs, any CPU in the data center may utilize the pool of FPGAs as a shared hardware microservice, rather than limiting a server to dedicated accelerators plugged into it. The FPGA-accelerated servers and the GPU-accelerated servers described above may implement a model of computing where, rather than keeping a small amount of data in a CPU and running a long stream of instructions over it as occurred in more traditional computing models, the machine learning model and parameters are pinned into the high-bandwidth on-chip memory with lots of data streaming through the high-bandwidth on-chip memory. FPGAs may even be more efficient than GPUs for this computing model, as the FPGAs can be programmed with only the instructions needed to run this kind of computing model. The storage systems described above may be configured to provide parallel storage, for example, through the use of a parallel file system such as BeeGFS. Such parallel file systems may include a distributed metadata architecture. For example, the parallel file system may include a plurality of metadata servers across which metadata is distributed, as well as components that include services for clients and storage servers. Through the use of a parallel file system, file contents may be distributed over a plurality of storage servers using striping and metadata may be distributed over a plurality of metadata servers on a directory level, with each server storing a part of the complete file system tree.
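To make the striping idea concrete, the following is a minimal sketch, in Python, of how file contents might be distributed round-robin across a plurality of storage servers; the chunk size, server names, and round-robin policy are illustrative assumptions rather than details of any particular parallel file system.

# Minimal sketch of striping file contents across storage servers.
# The chunk size, server names, and round-robin placement are assumed
# for illustration and are not taken from any particular file system.
CHUNK_SIZE = 512 * 1024  # 512 KiB stripe unit (assumed)

def stripe(data: bytes, servers: list) -> dict:
    """Assign consecutive chunks of a file round-robin across servers."""
    placement = {server: [] for server in servers}
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        server = servers[(i // CHUNK_SIZE) % len(servers)]
        placement[server].append(chunk)
    return placement

if __name__ == "__main__":
    servers = ["storage-1", "storage-2", "storage-3"]
    layout = stripe(b"x" * (3 * CHUNK_SIZE + 100), servers)
    for server, chunks in layout.items():
        print(server, [len(chunk) for chunk in chunks])

In a real parallel file system, the stripe layout itself would typically be recorded by the metadata servers so that clients can locate every chunk of a file.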
Readers will appreciate that in some embodiments, the storage servers and metadata servers may run in userspace on top of an existing local file system. Furthermore, dedicated hardware is not required for the client services, the metadata servers, or the storage servers, as metadata servers, storage servers, and even the client services may be run on the same machines. Readers will appreciate that, in part due to the emergence of many of the technologies discussed above including mobile devices, cloud services, social networks, big data analytics, and so on, an information technology platform may be needed to integrate all of these technologies and drive new business opportunities by quickly delivering revenue-generating products, services, and experiences, rather than merely providing the technology to automate internal business processes. Information technology organizations may need to balance resources and investments needed to keep core legacy systems up and running while also integrating technologies to build an information technology platform that can provide the speed and flexibility in areas such as, for example, exploiting big data, managing unstructured data, and working with cloud applications and services. One possible embodiment of such an information technology platform is a composable infrastructure that includes fluid resource pools, such as many of the systems described above, that can meet the changing needs of applications by allowing for the composition and recomposition of blocks of disaggregated compute, storage, and fabric infrastructure. Such a composable infrastructure can also include a single management interface to eliminate complexity and a unified API to discover, search, inventory, configure, provision, update, and diagnose the composable infrastructure. The systems described above can support the execution of a wide array of software applications. Such software applications can be deployed in a variety of ways, including container-based deployment models. Containerized applications may be managed using a variety of tools. For example, containerized applications may be managed using Docker Swarm, a clustering and scheduling tool for Docker containers that enables IT administrators and developers to establish and manage a cluster of Docker nodes as a single virtual system. Likewise, containerized applications may be managed through the use of Kubernetes, a container-orchestration system for automating deployment, scaling and management of containerized applications. Kubernetes may execute on top of operating systems such as, for example, Red Hat Enterprise Linux, Ubuntu Server, SUSE Linux Enterprise Servers, and others. In such examples, a master node may assign tasks to worker/minion nodes. Kubernetes can include a set of components (e.g., kubelet, kube-proxy, cAdvisor) that manage individual nodes as well as a set of components (e.g., etcd, API server, Scheduler, Control Manager) that form a control plane. Various controllers (e.g., Replication Controller, DaemonSet Controller) can drive the state of a Kubernetes cluster by managing a set of pods that includes one or more containers that are deployed on a single node. Containerized applications may be used to facilitate a serverless, cloud native computing deployment and management model for software applications.
In support of a serverless, cloud native computing deployment and management model for software applications, containers may be used as part of an event handling mechanism (e.g., AWS Lambda) such that various events cause a containerized application to be spun up to operate as an event handler. The systems described above may be deployed in a variety of ways, including being deployed in ways that support fifth generation (‘5G’) networks. 5G networks may support substantially faster data communications than previous generations of mobile communications networks and, as a consequence, may lead to the disaggregation of data and computing resources as modern massive data centers may become less prominent and may be replaced, for example, by more-local, micro data centers that are close to the mobile-network towers. The systems described above may be included in such local, micro data centers and may be part of or paired to multi-access edge computing (‘MEC’) systems. Such MEC systems may enable cloud computing capabilities and an IT service environment at the edge of the cellular network. By running applications and performing related processing tasks closer to the cellular customer, network congestion may be reduced and applications may perform better. MEC technology is designed to be implemented at the cellular base stations or other edge nodes, and enables flexible and rapid deployment of new applications and services for customers. MEC may also allow cellular operators to open their radio access network (‘RAN’) to authorized third-parties, such as application developers and content providers. Furthermore, edge computing and micro data centers may substantially reduce the cost of smartphones that work with the 5G network because customers may not need devices with such intensive processing power and the expensive requisite components. Readers will appreciate that 5G networks may generate more data than previous network generations, especially in view of the fact that the high network bandwidth offered by 5G networks may cause the 5G networks to handle amounts and types of data (e.g., sensor data from self-driving cars, data generated by AR/VR technologies) that were not as feasible for previous generation networks. In such examples, the scalability offered by the systems described above may be very valuable as the amount of data increases, adoption of emerging technologies increases, and so on. For further explanation, FIG. 3D illustrates an exemplary computing device 350 that may be specifically configured to perform one or more of the processes described herein. As shown in FIG. 3D, computing device 350 may include a communication interface 352, a processor 354, a storage device 356, and an input/output (“I/O”) module 358 communicatively connected one to another via a communication infrastructure 360. While an exemplary computing device 350 is shown in FIG. 3D, the components illustrated in FIG. 3D are not intended to be limiting. Additional or alternative components may be used in other embodiments. Components of computing device 350 shown in FIG. 3D will now be described in additional detail. Communication interface 352 may be configured to communicate with one or more computing devices. Examples of communication interface 352 include, without limitation, a wired network interface (such as a network interface card), a wireless network interface (such as a wireless network interface card), a modem, an audio/video connection, and any other suitable interface.
Processor 354 generally represents any type or form of processing unit capable of processing data and/or interpreting, executing, and/or directing execution of one or more of the instructions, processes, and/or operations described herein. Processor 354 may perform operations by executing computer-executable instructions 362 (e.g., an application, software, code, and/or other executable data instance) stored in storage device 356. Storage device 356 may include one or more data storage media, devices, or configurations and may employ any type, form, and combination of data storage media and/or devices. For example, storage device 356 may include, but is not limited to, any combination of the non-volatile media and/or volatile media described herein. Electronic data, including data described herein, may be temporarily and/or permanently stored in storage device 356. For example, data representative of computer-executable instructions 362 configured to direct processor 354 to perform any of the operations described herein may be stored within storage device 356. In some examples, data may be arranged in one or more databases residing within storage device 356. I/O module 358 may include one or more I/O modules configured to receive user input and provide user output. I/O module 358 may include any hardware, firmware, software, or combination thereof supportive of input and output capabilities. For example, I/O module 358 may include hardware and/or software for capturing user input, including, but not limited to, a keyboard or keypad, a touchscreen component (e.g., touchscreen display), a receiver (e.g., an RF or infrared receiver), motion sensors, and/or one or more input buttons. I/O module 358 may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers. In certain embodiments, I/O module 358 is configured to provide graphical data to a display for presentation to a user. The graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation. In some examples, any of the systems, computing devices, and/or other components described herein may be implemented by computing device 350. For further explanation, FIG. 4 sets forth a flow chart illustrating an example method for processing data through a storage system in a data pipeline according to some embodiments of the present disclosure. Although depicted in less detail, the storage system (408) depicted in FIG. 4 may be similar to the storage systems described above with reference to FIGS. 1A-1D, FIGS. 2A-2G, FIGS. 3A-3C, or any combination thereof. In fact, the storage system depicted in FIG. 4 may include the same, fewer, or additional components as the storage systems described above. For example, the storage system (408) may be entirely within a single chassis or single blade as described above. The storage system (408), along with the data analyzer (410) and the data indexer (412), makes up a portion of a data pipeline configured to process large amounts (e.g., 30 terabytes per day) of unstructured data in the form of datasets. Such datasets may include data in the form of log lines. The dataset may be part of a data stream generated by a data producer (400).
The example method depicted in FIG. 4 includes receiving (420), by the storage system (408), a dataset (404) from a collector (402) on a data producer (400), wherein the dataset (404) is disaggregated from metadata (406) for the dataset (404) by the collector (402). Receiving (420), by the storage system (408), a dataset (404) from a collector (402) on a data producer (400), wherein the dataset (404) is disaggregated from metadata (406) for the dataset (404) by the collector (402), may be carried out by the collector (402) transmitting the dataset (404) directly to the storage system (408). The data producer (400) depicted in FIG. 4 is a system that generates the dataset (404) and transmits the dataset (404) to the storage system (408) via the collector. The data producer (400) may be one of many data producers, each executing a collector, providing datasets to the same storage system (408). The data producer (400) depicted in FIG. 4 may also be embodied, for example, as a simulation of a storage system that is executed in order to test hardware and software components within the storage system that is being tested. Consider an example in which software for a storage system is developed and tested utilizing a continuous integration (‘CI’) model in which all developer working copies of system software are frequently merged to a shared mainline. In such an example, such software may be tested by running a simulation of the storage system and running automated tests against the simulated storage system, thereby generating a very large dataset (404) that consists of log files, error logs, or some other form of data that describes the operational state of the simulated storage system. Although the data producer (400) is depicted as residing outside of the storage system (408) in the embodiment depicted in FIG. 4, in other embodiments, the data producer (400) may actually be executing on the storage system (408) itself and may even write the dataset directly to storage resources within the storage system (408). The dataset may be received directly from the collector (402) on the data producer (400). The collector (402) is an application executing within the data producer (400) that accesses storage on the data producer (400) and transmits the data to the storage system (408). The collector (402) may monitor a particular directory on the data producer (400) for new data to transmit. The collector (402) enables file streaming even as files are being written to. If data communications between the collector (402) and the storage system (408) fail, the collector (402) may pause the transmission of the datasets, and resume transmission once data communications are reestablished. Consequently, a communications failure does not cause data generated by the data producer to be lost. Further, planned or simulated connectivity losses similarly do not cause data generated by the data producer to be lost. The metadata (406) describes the data in the dataset (404). The metadata may include information that may be used to access a dataset or a particular location within a dataset. Specifically, the metadata may include an identifier of the data producer that generated a particular entry, an identifier of the application that was executing, the time and date of the entry, etc. The metadata may then be queried for some portion of the dataset to provide relevant entries.
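By way of illustration only, a metadata entry of the kind described above might be modeled as follows; the field names and the query helper are assumptions made for this sketch, not elements recited by the embodiments.

# Illustrative sketch of dataset metadata: each entry records which
# producer and application generated it, when, and where the
# corresponding data lives in the dataset. All field names are assumed.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MetadataEntry:
    producer_id: str      # identifier of the data producer
    application: str      # application that was executing
    timestamp: datetime   # time and date of the entry
    offset: int           # location of the entry within the dataset

def query(entries, application=None, since=None):
    """Return entries matching the given application and/or time filter."""
    results = []
    for entry in entries:
        if application is not None and entry.application != application:
            continue
        if since is not None and entry.timestamp < since:
            continue
        results.append(entry)
    return results

entries = [
    MetadataEntry("producer-7", "sim-test", datetime(2019, 5, 1, tzinfo=timezone.utc), 0),
    MetadataEntry("producer-7", "error-log", datetime(2019, 5, 2, tzinfo=timezone.utc), 4096),
]
print(query(entries, application="error-log"))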
The collector (402) itself may use the metadata to track the progress of the transmission of the dataset and to demarcate the last sent data. The metadata (406) may then be provided to the data analyzer (410) or other entities so that those entities may also demarcate the last sent data. The dataset (404) is disaggregated from the metadata (406) for the dataset (404) by the collector (402) in that the dataset (404) and the metadata (406) are transmitted separately to separate entities. Specifically, disaggregation means that the dataset (404) and metadata (406) are not sent from the collector (402) together in the same data transmission. Rather, the dataset (404) is transmitted to one target and the metadata (406) is transmitted to a separate, second target. The separate targets may be separate systems or different applications within the same system (e.g., using different ports). Consequently, both storage and network resources are conserved. The collector (402) may send metadata (406) to the data analyzer (410). The data analyzer (410) depicted in FIG. 4 may be embodied, for example, as a system that examines datasets in order to draw conclusions about the information contained in the datasets, including drawing conclusions about the data producer (400). The data analyzer (410) may include artificial intelligence or machine learning components, components that transform unstructured data into structured or semi-structured data, big data components, and many others. The data analyzer (410) may examine datasets as they are generated (i.e., in real-time), analyze datasets periodically as a whole, or both. The data analyzer (410) may be part of the storage system (408) or may be external to the storage system (408). Specifically, the data analyzer (410) may execute within the storage system (408) and receive the metadata (406) from the collector (402) separately from other parts of the storage system (408) receiving the dataset (404). The collector (402) may send metadata (406) indirectly (e.g., via the data analyzer (410)) or directly to the data indexer (412). The data indexer (412) depicted in FIG. 4 may be embodied, for example, as a system that indexes the dataset (404) by retrieving data in the dataset (404) from the storage system (408), organizing the dataset (404) into an indexed directory structure, and sending the indexed data back to the storage system (408). The indexed directory structure may be created by generating the indexed data in such a way so as to facilitate fast and accurate searching of the directory structure. In fact, large datasets such as the log files that are generated during testing may be generated with names that include elements such as a timestamp, an identification of the cluster that generated the log file, and so on, and may be organized in the directory structure according to an indexing scheme. As such, the indexed file system may essentially be used as a database that can be quickly searched, but without the limitations that cause databases to perform poorly on very large datasets. The method of FIG. 4 further includes storing (422) the dataset (404) on the storage system (408). Storing (422) the dataset (404) on the storage system (408) may be carried out by the storage system (408) placing the dataset (404) on storage devices within the storage system. The dataset (404) may be stored within the storage system (408) in a variety of ways.
For example, the dataset (404) may be stored by the collector (402) itself accessing the storage system (408) directly, by system software and system hardware on the storage system causing the dataset (404) (or the slices thereof) to be written to storage devices in the storage system (408), or in some other way. The method of FIG. 4 further includes receiving (424), by the storage system (408) from a data indexer (412), a request (414) for data (416) from the dataset (404), wherein the request (414) for the data (416) comprises the metadata (406) gathered by the collector (402) on the data producer (400). Receiving (424), by the storage system (408) from a data indexer (412), a request (414) for data (416) from the dataset (404), wherein the request (414) for the data (416) comprises the metadata (406) gathered by the collector (402) on the data producer (400), may be carried out by the storage system (408) receiving the request that targets particular data within the dataset (404) using the metadata (406). Using the metadata (406), the data indexer (412) is able to locate the data from within the storage system (408). Consequently, the data indexer (412) may also be able to read and write directly to the storage system (408). If the storage system (408) receives the metadata (406) (e.g., via a data analyzer (410) within the storage system (408)), the storage system (408) may expose the metadata (406) to the data indexer (412). Specifically, the storage system (408) may make the metadata (406) available to the data indexer (412) for search or retrieval. The data indexer (412) may then utilize the exposed metadata (406) to access and index the data. The storage system (408) may provide notifications to the data indexer (412) or other systems in response to data from the dataset (404) matching set conditions. For example, the storage system (408) may notify the data indexer (412) that a dataset (404) or data (416) in a dataset (404) is prepared for indexing. The method of FIG. 4 further includes servicing (426), by the storage system (408), the request (414) for the data (416) by locating the data (416) using the metadata (406) gathered by the collector (402) on the data producer (400) and received in the request (414) for the data (416). Servicing (426), by the storage system (408), the request (414) for the data (416) by locating the data (416) using the metadata (406) gathered by the collector (402) on the data producer (400) and received in the request (414) for the data (416), may be carried out by the storage system (408) accessing the data (416) within the dataset (404) using the received metadata (406). The metadata received from the data indexer (412) may be altered from the version of the metadata received from the collector (402). Further, the location identified in the metadata received from the data indexer (412) may require interpretation or conversion by the storage system (408) in order to locate the data (416). For example, the metadata received from the data indexer (412) may include an offset to an identified location within the dataset. The data indexer (412) may perform various tasks using the requested data (416) in addition to indexing the data (416). For example, the data indexer (412) may parse, filter, and transform the data (416). Further, the data indexer (412) may use a data specification language to transform data as it is processed by the data indexer (412).
Transformation using a data specification language may include converting unstructured data into structured data (e.g., data that conforms to a pre-defined data model) that may be organized and queried. The transformation may be performed using the metadata (406) gathered by the collector (402) on the data producer (400). For example, a data model for unstructured data may be identified using metadata such as the application that generated the unstructured data. The method of FIG. 4 further includes receiving (428), from the data indexer (412), indexed data (418) indexed using the metadata (406) gathered by the collector (402) on the data producer (400). Receiving (428), from the data indexer (412), indexed data (418) indexed using the metadata (406) gathered by the collector (402) on the data producer (400) may be carried out by the storage system (408) storing the indexed data (418) on storage devices within the storage system (408). The indexed data (418) may then be available for other systems and applications to extract information about the data producer (400). The above limitations improve the operation of a computer system by minimizing the transmission of data between systems in order to generate indexed data hosted on a shared storage system. Using the above process, an indexed dataset may be received and stored on the storage system after a minimal number of copies from the data producer. Further, by employing a collector on the data producer, a communications failure does not result in data loss from the data producer because the collector manages data transmission to the storage system. For further explanation, FIG. 5 sets forth a flow chart illustrating a further exemplary method for processing data through a storage system in a data pipeline according to embodiments of the present invention that includes receiving (420), by the storage system (408), a dataset (404) from a collector (402) on a data producer (400), wherein the dataset (404) is disaggregated from metadata (406) for the dataset (404) by the collector (402); storing (422) the dataset (404) on the storage system (408); receiving (424), by the storage system (408) from a data indexer (412), a request (414) for data (416) from the dataset (404), wherein the request (414) for the data (416) comprises the metadata (406) gathered by the collector (402) on the data producer (400); servicing (426), by the storage system (408), the request (414) for the data (416) by locating the data (416) using the metadata (406) gathered by the collector (402) on the data producer (400) and received in the request (414) for the data (416); and receiving (428), from the data indexer (412), indexed data (418) indexed using the metadata (406) gathered by the collector (402) on the data producer (400). The method of FIG. 5 differs from the method of FIG. 4, however, in that receiving (420), by the storage system (408), a dataset (404) from a collector (402) on a data producer (400), wherein the dataset (404) is disaggregated from metadata (406) for the dataset (404) by the collector (402), includes receiving (502), by the storage system (408), the dataset (404) as a continuation of a previous dataset interrupted by a pause in data communications.
Receiving (502), by the storage system (408), the dataset (404) as a continuation of a previous dataset interrupted by a pause in data communications may be carried out by the storage system (408) detecting that transmission from the collector (402) has stopped and resumed, and once the data transmission has resumed, receiving and storing the newly received dataset as a continuation of the previously received dataset. The resulting continued dataset will include no missing data due to the data communication interruption. For further explanation, FIG. 6 sets forth a flow chart illustrating a further exemplary method for processing data through a storage system in a data pipeline according to embodiments of the present invention that includes receiving (420), by the storage system (408), a dataset (404) from a collector (402) on a data producer (400), wherein the dataset (404) is disaggregated from metadata (406) for the dataset (404) by the collector (402); storing (422) the dataset (404) on the storage system (408); receiving (424), by the storage system (408) from a data indexer (412), a request (414) for data (416) from the dataset (404), wherein the request (414) for the data (416) comprises the metadata (406) gathered by the collector (402) on the data producer (400); servicing (426), by the storage system (408), the request (414) for the data (416) by locating the data (416) using the metadata (406) gathered by the collector (402) on the data producer (400) and received in the request (414) for the data (416); and receiving (428), from the data indexer (412), indexed data (418) indexed using the metadata (406) gathered by the collector (402) on the data producer (400). The method of FIG. 6 differs from the method of FIG. 4, however, in that receiving (420), by the storage system (408), a dataset (404) from a collector (402) on a data producer (400), wherein the dataset (404) is disaggregated from metadata (406) for the dataset (404) by the collector (402), includes receiving (602) the dataset (404) as a line delineated stream; and organizing (604) the line delineated stream into one or more data objects. Receiving (602) the dataset (404) as a line delineated stream may be carried out by the collector (402) transmitting the dataset (404) as a data stream to the storage system (408). The data stream may be made up of line delineated data and the storage system (408) may include a port or socket that accepts the line delineated data stream. Organizing (604) the line delineated stream into one or more data objects may be carried out by the storage system (408) breaking down the line delineated data from the data stream into data objects of a predetermined or dynamic size. The data objects may be created dynamically by beginning each data object at a particular point within the data stream. The data objects may then be stored on the storage devices of the storage system (408).
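A minimal sketch of this organization step, assuming a fixed predetermined object size and a byte-oriented line stream, might look like the following; the names and sizes are illustrative assumptions, not part of the described embodiments.

# Minimal sketch of organizing a line delineated stream into data objects
# of a predetermined size. The object size and the generator-based
# framing are assumptions made for illustration.
OBJECT_SIZE = 4 * 1024 * 1024  # 4 MiB per data object (assumed)

def objects_from_stream(lines):
    """Accumulate whole lines into data objects of roughly OBJECT_SIZE bytes."""
    buffer = bytearray()
    for line in lines:
        buffer.extend(line)
        if len(buffer) >= OBJECT_SIZE:
            yield bytes(buffer)   # a complete data object, ready to store
            buffer = bytearray()
    if buffer:
        yield bytes(buffer)       # final, partially filled object

# usage (hypothetical): for obj in objects_from_stream(open("producer.log", "rb")): store(obj)

A dynamic variant could instead close out an object at a chosen point in the stream, such as a timestamp boundary, rather than at a fixed byte count.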
For further explanation, FIG. 7 sets forth a flow chart illustrating a further exemplary method for processing data through a storage system in a data pipeline according to embodiments of the present invention that includes receiving (420), by the storage system (408), a dataset (404) from a collector (402) on a data producer (400), wherein the dataset (404) is disaggregated from metadata (406) for the dataset (404) by the collector (402); storing (422) the dataset (404) on the storage system (408); receiving (424), by the storage system (408) from a data indexer (412), a request (414) for data (416) from the dataset (404), wherein the request (414) for the data (416) comprises the metadata (406) gathered by the collector (402) on the data producer (400); servicing (426), by the storage system (408), the request (414) for the data (416) by locating the data (416) using the metadata (406) gathered by the collector (402) on the data producer (400) and received in the request (414) for the data (416); and receiving (428), from the data indexer (412), indexed data (418) indexed using the metadata (406) gathered by the collector (402) on the data producer (400). The method of FIG. 7 differs from the method of FIG. 4, however, in that storing (422) the dataset (404) on the storage system (408) includes receiving (702) a log line from the dataset (404); identifying (704) a log line type for the log line; and generating (706) a structure for the log line using the log line type. Receiving (702) a log line from the dataset (404) may be carried out by the storage system (408) identifying a segment of received data as a log line. There may be a finite number of log line types received by the storage system (408) and some or all log lines received may correspond to a known log line type. Identifying (704) a log line type for the log line may be carried out by comparing elements of the log line to elements of known log line types. If a threshold number of elements match between the log line and a known log line type, the received log line may be identified as being of the matched log line type. Generating (706) a structure for the log line using the log line type may be carried out by altering the log line to conform to the structure for the matched log line type. The log line types and log line structures may be extrapolated from a code base for the data producer (400).
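The following sketch illustrates one way such threshold matching could work, assuming whitespace-separated elements, a hypothetical set of known types, and an arbitrary match threshold; none of these specifics come from the embodiments above.

# Illustrative sketch of identifying a log line type by counting matching
# elements against known types, then structuring the line accordingly.
# The element split, threshold, and known types are all assumptions.
KNOWN_TYPES = {
    "io_error": ["ERROR", "io", "device"],
    "gc_cycle": ["INFO", "gc", "collected"],
}
MATCH_THRESHOLD = 2  # minimum number of matching elements (assumed)

def identify_type(log_line: str):
    """Return the name of the first known type with enough matching elements."""
    elements = set(log_line.split())
    for name, markers in KNOWN_TYPES.items():
        if sum(1 for marker in markers if marker in elements) >= MATCH_THRESHOLD:
            return name
    return None

def structure(log_line: str) -> dict:
    """Attach the matched type so the line conforms to a known structure."""
    return {"type": identify_type(log_line), "raw": log_line}

print(structure("ERROR io timeout on device sda"))  # matches "io_error"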
For further explanation, FIG. 8 sets forth a flow chart illustrating a further exemplary method for processing data through a storage system in a data pipeline according to embodiments of the present invention that includes receiving (420), by the storage system (408), a dataset (404) from a collector (402) on a data producer (400), wherein the dataset (404) is disaggregated from metadata (406) for the dataset (404) by the collector (402); storing (422) the dataset (404) on the storage system (408); receiving (424), by the storage system (408) from a data indexer (412), a request (414) for data (416) from the dataset (404), wherein the request (414) for the data (416) comprises the metadata (406) gathered by the collector (402) on the data producer (400); servicing (426), by the storage system (408), the request (414) for the data (416) by locating the data (416) using the metadata (406) gathered by the collector (402) on the data producer (400) and received in the request (414) for the data (416); and receiving (428), from the data indexer (412), indexed data (418) indexed using the metadata (406) gathered by the collector (402) on the data producer (400). The method of FIG. 8 differs from the method of FIG. 4, however, in that storing (422) the dataset (404) on the storage system (408) includes organizing (802) the dataset (404) in tiers within the storage system (408) based on previously received requests for data. Organizing (802) the dataset (404) in tiers within the storage system (408) based on previously received requests for data may be carried out by the storage system (408) moving data between tiers based on the frequency of the requests for that data. Different tiers of data storage in the storage system may be subject to different read speeds or to other efficiencies or inefficiencies. The storage system (408) may track the requests received and detect that a certain category of data is requested at a higher or lower rate. The storage system (408) may move frequently requested data from datasets or indexed data to tiers with lower latency and move less frequently requested data from datasets or indexed data to tiers with higher latency. The storage system (408) may also store incoming data from datasets or indexed data that matches elements of the frequently requested data to lower latency tiers. Similarly, the storage system (408) may store incoming data from datasets or indexed data that matches elements of the less frequently requested data to higher latency tiers.
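As an illustrative sketch only, tier selection based on previously received requests might be implemented along the following lines; the counters, thresholds, and tier names are assumptions, not details of the described system.

# Minimal sketch of tiering by request frequency: frequently requested
# categories map to a low-latency tier, rarely requested categories to a
# high-latency tier. Thresholds and tier names are assumed.
from collections import Counter

request_counts = Counter()   # requests seen per data category
HOT_THRESHOLD = 100          # promote at or above this count (assumed)
COLD_THRESHOLD = 5           # demote at or below this count (assumed)

def record_request(category: str):
    """Track one more request for the given category of data."""
    request_counts[category] += 1

def tier_for(category: str) -> str:
    """Choose a storage tier for a category based on its request history."""
    count = request_counts[category]
    if count >= HOT_THRESHOLD:
        return "low-latency"     # e.g., a flash tier
    if count <= COLD_THRESHOLD:
        return "high-latency"    # e.g., a capacity tier
    return "default"

Incoming data that matches a hot category could be placed directly on the low-latency tier by consulting the same counters at write time.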
For further explanation, FIG. 9 sets forth a flow chart illustrating a further exemplary method for processing data through a storage system in a data pipeline according to embodiments of the present invention that includes receiving (420), by the storage system (408), a dataset (404) from a collector (402) on a data producer (400), wherein the dataset (404) is disaggregated from metadata (406) for the dataset (404) by the collector (402); storing (422) the dataset (404) on the storage system (408); receiving (424), by the storage system (408) from a data indexer (412), a request (414) for data (416) from the dataset (404), wherein the request (414) for the data (416) comprises the metadata (406) gathered by the collector (402) on the data producer (400); servicing (426), by the storage system (408), the request (414) for the data (416) by locating the data (416) using the metadata (406) gathered by the collector (402) on the data producer (400) and received in the request (414) for the data (416); and receiving (428), from the data indexer (412), indexed data (418) indexed using the metadata (406) gathered by the collector (402) on the data producer (400). The method of FIG. 9 differs from the method of FIG. 4, however, in that the method of FIG. 9 further includes receiving (902) a request for data, wherein the request comprises a pattern and an index; servicing (904) the request; and prefetching (906) additional data based on the pattern and index. Receiving (902) a request for data, wherein the request comprises a pattern and an index, may be carried out by a requesting system sending the storage system (408) a request for data that includes a pattern associated with the data and an index for the data. Servicing (904) the request may be carried out by using the pattern and index to retrieve the requested data and transmit the requested data to the requesting system. Prefetching (906) additional data based on the pattern and index may be carried out by accessing the additional data from a different pattern using the same or similar index. The additional data may then be copied to a location that may be read at a lower latency in preparation for receiving a request for the additional data. For example, a file system may organize indexed data (418) using the file structure “/<pattern_name>/<year>/<month>/<day>/<host>/<software>/<file_hour>”. If the storage system (408) receives one or more requests for data for a particular index (e.g., <day>), the storage system (408) may then begin to prefetch data for a different pattern using the same or similar index (a minimal sketch of this scheme appears after the list of benefits below). Prefetching (906) additional data may also be based on object tags. The received request may include a number of object tags, and additional data objects may be prefetched using a combination of matching and similar object tags. In view of the explanations set forth above, readers will recognize that the benefits of processing data through a storage system in a data pipeline according to embodiments of the present invention include:
Improving the operation of a computing system by minimizing the transmission of data between systems in order to generate indexed data hosted on a shared storage system, increasing computing system efficiency and functionality.
Improving the operation of a computing system by employing a collector on the data producer that provides resiliency against data loss from communications failures, increasing computing system reliability and robustness.
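Returning to the prefetching of FIG. 9, the sketch below shows one possible reading of the scheme: after servicing a request for one pattern at a given index, data for the remaining patterns at the same index is fetched into a cache. The pattern names, the cache, and the storage read placeholder are all hypothetical.

# Illustrative sketch of prefetching by pattern and index. Paths follow
# the example file structure above; the cache and the read placeholder
# are assumptions, not an actual storage system interface.
import posixpath

PATTERNS = ["cpu_load", "disk_io", "net_traffic"]   # assumed pattern names
prefetch_cache = {}

def read_from_storage(path: str) -> bytes:
    ...  # placeholder for an actual storage system read

def service_request(pattern: str, index: str) -> bytes:
    """Serve one request, then prefetch the same index under other patterns."""
    path = posixpath.join("/", pattern, index)
    data = prefetch_cache.pop(path, None) or read_from_storage(path)
    for other in PATTERNS:
        if other != pattern:
            other_path = posixpath.join("/", other, index)
            prefetch_cache[other_path] = read_from_storage(other_path)
    return data

# e.g., service_request("cpu_load", "2019/05/01/host-3/syslog/14")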
Example embodiments are described largely in the context of a fully functional computer system. Readers of skill in the art will recognize, however, that the present disclosure also may be embodied in a computer program product disposed upon computer readable storage media for use with any suitable data processing system. Such computer readable storage media may be any storage medium for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of such media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method as embodied in a computer program product. Persons skilled in the art will recognize also that, although some of the example embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present disclosure. Embodiments can be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure. Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to some embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. 
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. Advantages and features of the present disclosure can be further described by the following statements: 1. A method of receiving, by the storage system, a dataset from a collector on a data producer, wherein the dataset is disaggregated from metadata for the dataset by the collector; storing the dataset on the storage system; receiving, by the storage system from a data indexer, a request for data from the dataset, wherein the request for the data comprises the metadata gathered by the collector on the data producer; servicing, by the storage system, the request for the data by locating the data using the metadata gathered by the collector on the data producer and received in the request for the data; and receiving, from the data indexer, indexed data indexed using the metadata gathered by the collector on the data producer. 2. The method of statement 1 wherein receiving, by the storage system, the dataset from the collector on the data producer comprises receiving, by the storage system, the dataset as a continuation of a previous dataset interrupted by a pause in data communications. 3. The method of statement 2 or statement 1 wherein receiving, by the storage system, the dataset from the collector on the data producer comprises: receiving the dataset as a line delineated stream; and organizing the line delineated stream into one or more data objects. 4. The method of statement 3, statement 2, or statement 1 wherein storing the dataset on the storage system comprises: receiving a log line from the dataset; identifying a log line type for the log line; and generating a structure for the log line using the log line type. 5. The method of statement 4, statement 3, statement 2, or statement 1 wherein storing the dataset on the storage system comprises organizing the dataset in tiers within the storage system based on previously received requests for data. 6. 
The method of statement 5, statement 4, statement 3, statement 2, or statement 1 further comprising receiving a request for data, wherein the request comprises a pattern and an index; servicing the request; and prefetching additional data based on the pattern and index. 7. The method of statement 6, statement 5, statement 4, statement 3, statement 2, or statement 1 wherein the metadata is received by the storage system separately from the dataset and exposed to the data indexer.
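By way of a non-limiting illustration, the following Python sketch shows how a log line type might be identified and a structure generated for the line, as in statements 3 and 4 above. The two log line patterns are hypothetical and stand in for whatever line types a collector actually recognizes.

import re

# Hypothetical log line patterns; the statements only require that a log line
# type be identified and a structure generated from it (statement 4).
LOG_LINE_TYPES = {
    "apache_access": re.compile(
        r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
        r'(?P<status>\d{3}) (?P<bytes>\d+|-)'
    ),
    "syslog": re.compile(
        r'(?P<time>\w{3} +\d+ [\d:]+) (?P<host>\S+) '
        r'(?P<proc>[\w/\-]+)(?:\[\d+\])?: (?P<msg>.*)'
    ),
}

def structure_log_line(line):
    """Identify a log line type and generate a structure for the line."""
    for line_type, pattern in LOG_LINE_TYPES.items():
        match = pattern.match(line)
        if match:
            return {"type": line_type, **match.groupdict()}
    return {"type": "unknown", "raw": line}  # retain unrecognized lines as-is

print(structure_log_line(
    '127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET / HTTP/1.0" 200 2326'
))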
DETAILED DESCRIPTION A computer-implemented method for deploying an application to a data intake and query system comprises receiving a source application package comprising code and a plurality of files and identifying a plurality of server groups of the data intake and query system, wherein a first server group receives data from a data source and forwards the received data to a second server group, wherein the second server group indexes the received data and stores the indexed data in a data store, and wherein a third server group searches the indexed data. The computer-implemented method further comprises accessing one or more partitioning rules that associate each of the server groups with one or more of the plurality of files and one or more portions of the code and generating a target application package for each server group, wherein each target application package includes the one or more of the plurality of files and the one or more portions of code that are associated with the server group based on the partitioning rules. A non-transitory computer-readable storage medium comprises instructions stored thereon, which when executed by one or more processors, cause the one or more processors to perform operations comprising receiving a source application package comprising code and a plurality of files and identifying a plurality of server groups of a data intake and query system, wherein a first server group receives data from a data source and forwards the received data to a second server group, wherein the second server group indexes the received data and stores the indexed data in a data store, and wherein a third server group searches the indexed data. The instructions further cause the one or more processors to perform operations comprising accessing one or more partitioning rules that associate each of the server groups with one or more of the plurality of files and one or more portions of the code and generating a target application package for each server group, wherein each target application package includes the one or more of the plurality of files and the one or more portions of code that are associated with the server group based on the partitioning rules. A system for deploying an application to a data intake and query system comprises at least one memory having instructions stored thereon and at least one processor configured to execute the instructions to receive a source application package comprising code and a plurality of files and identify a plurality of server groups of the data intake and query system, wherein a first server group receives data from a data source and forwards the received data to a second server group, wherein the second server group indexes the received data and stores the indexed data in a data store, and wherein a third server group searches the indexed data. The at least one processor is further configured to execute the instructions to access one or more partitioning rules that associate each of the server groups with one or more of the plurality of files and one or more portions of the code and generate a target application package for each server group, wherein each target application package includes the one or more of the plurality of files and the one or more portions of code that are associated with the server group based on the partitioning rules.
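As a minimal sketch of the partitioning just summarized (the rule, group, and file names below are hypothetical, since the disclosure does not fix any particular format), the partitioning rules can be modeled as a mapping from server groups to the files and code portions each group needs:

# A minimal sketch of the partitioning step, with hypothetical rule and file
# names. Each rule associates a server group with the subset of files and
# code portions that group needs.
SOURCE_PACKAGE = {
    "inputs.conf": "...", "indexes.conf": "...", "savedsearches.conf": "...",
    "bin/collect.py": "...", "bin/report.py": "...",
}

PARTITIONING_RULES = {
    "forwarders":   ["inputs.conf", "bin/collect.py"],         # receive and forward data
    "indexers":     ["indexes.conf"],                          # index and store data
    "search_heads": ["savedsearches.conf", "bin/report.py"],   # search indexed data
}

def generate_target_packages(source, rules):
    """Generate one target application package per server group."""
    return {
        group: {name: source[name] for name in names if name in source}
        for group, names in rules.items()
    }

for group, package in generate_target_packages(SOURCE_PACKAGE, PARTITIONING_RULES).items():
    print(group, "->", sorted(package))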
1.0. General Overview Modern data centers and other computing environments can comprise anywhere from a few host computer systems to thousands of systems configured to process data, service requests from remote clients, and perform numerous other computational tasks. During operation, various components within these computing environments often generate significant volumes of machine-generated data. For example, machine data is generated by various components in information technology (IT) environments, such as servers, sensors, routers, mobile devices, Internet of Things (IoT) devices, etc. Machine-generated data can include system logs, network packet data, sensor data, application program data, error logs, stack traces, system performance data, etc. In general, machine-generated data can also include performance data, diagnostic information, and many other types of data that can be analyzed to diagnose performance problems, monitor user interactions, and derive other insights. A number of tools are available to analyze machine data, that is, machine-generated data. In order to reduce the size of the potentially vast amount of machine data that may be generated, many of these tools typically pre-process the data based on anticipated data-analysis needs. For example, pre-specified data items may be extracted from the machine data and stored in a database to facilitate efficient retrieval and analysis of those data items at search time. However, the rest of the machine data typically is not saved, and is instead discarded during pre-processing. As storage capacity becomes progressively cheaper and more plentiful, there are fewer incentives to discard these portions of machine data and many reasons to retain more of the data. This plentiful storage capacity is presently making it feasible to store massive quantities of minimally processed machine data for later retrieval and analysis. In general, storing minimally processed machine data and performing analysis operations at search time can provide greater flexibility because it enables an analyst to search all of the machine data, instead of searching only a pre-specified set of data items. This may enable an analyst to investigate different aspects of the machine data that previously were unavailable for analysis. However, analyzing and searching massive quantities of machine data presents a number of challenges. For example, a data center, servers, or network appliances may generate many different types and formats of machine data (e.g., system logs, network packet data (e.g., wire data, etc.), sensor data, application program data, error logs, stack traces, system performance data, operating system data, virtualization data, etc.) from thousands of different components, which can collectively be very time-consuming to analyze. In another example, mobile devices may generate large amounts of information relating to data accesses, application performance, operating system performance, network performance, etc. There can be millions of mobile devices that report these types of information. These challenges can be addressed by using an event-based data intake and query system, such as the SPLUNK® ENTERPRISE system developed by Splunk Inc. of San Francisco, California. The SPLUNK® ENTERPRISE system is the leading platform for providing real-time operational intelligence that enables organizations to collect, index, and search machine-generated data from various websites, applications, servers, networks, and mobile devices that power their businesses.
The SPLUNK® ENTERPRISE system is particularly useful for analyzing data which is commonly found in system log files, network data, and other data input sources. Although many of the techniques described herein are explained with reference to a data intake and query system similar to the SPLUNK® ENTERPRISE system, these techniques are also applicable to other types of data systems. In the SPLUNK® ENTERPRISE system, machine-generated data are collected and stored as “events”. An event comprises a portion of the machine-generated data and is associated with a specific point in time. For example, events may be derived from “time series data,” where the time series data comprises a sequence of data points (e.g., performance measurements from a computer system, etc.) that are associated with successive points in time. In general, each event can be associated with a timestamp that is derived from the raw data in the event, determined through interpolation between temporally proximate events having known timestamps, or determined based on other configurable rules for associating timestamps with events, etc. In some instances, machine data can have a predefined format, where data items with specific data formats are stored at predefined locations in the data. For example, the machine data may include data stored as fields in a database table. In other instances, machine data may not have a predefined format, that is, the data is not at fixed, predefined locations, but the data does have repeatable patterns and is not random. This means that some machine data can comprise various data items of different data types and that may be stored at different locations within the data. For example, when the data source is an operating system log, an event can include one or more lines from the operating system log containing raw data that includes different types of performance and diagnostic information associated with a specific point in time. Examples of components which may generate machine data from which events can be derived include, but are not limited to, web servers, application servers, databases, firewalls, routers, operating systems, and software applications that execute on computer systems, mobile devices, sensors, Internet of Things (IoT) devices, etc. The data generated by such data sources can include, for example and without limitation, server log files, activity log files, configuration files, messages, network packet data, performance measurements, sensor measurements, etc. The SPLUNK® ENTERPRISE system uses flexible schema to specify how to extract information from the event data. A flexible schema may be developed and redefined as needed. Note that a flexible schema may be applied to event data “on the fly,” when it is needed (e.g., at search time, index time, ingestion time, etc.). When the schema is not applied to event data until search time it may be referred to as a “late-binding schema.” During operation, the SPLUNK® ENTERPRISE system starts with raw input data (e.g., one or more system logs, streams of network packet data, sensor data, application program data, error logs, stack traces, system performance data, etc.). The system divides this raw data into blocks (e.g., buckets of data, each associated with a specific time frame, etc.), and parses the raw data to produce timestamped events. The system stores the timestamped events in a data store. 
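One of the timestamp rules mentioned above is interpolation between temporally proximate events having known timestamps. A minimal Python sketch, assuming each event is a dictionary whose "ts" value may be missing and that every gap is bounded by events with known timestamps:

# A small sketch of timestamp interpolation between temporally proximate
# events; assumes each missing timestamp has known neighbors on both sides.
def interpolate_timestamps(events):
    """Fill missing event timestamps by interpolating between neighbors."""
    for i, event in enumerate(events):
        if event["ts"] is None:
            prev = next(e["ts"] for e in reversed(events[:i]) if e["ts"] is not None)
            nxt = next(e["ts"] for e in events[i + 1:] if e["ts"] is not None)
            event["ts"] = prev + (nxt - prev) / 2  # midpoint of known neighbors
    return events

events = [{"ts": 100.0, "raw": "a"}, {"ts": None, "raw": "b"}, {"ts": 104.0, "raw": "c"}]
print(interpolate_timestamps(events))  # event "b" is assigned ts 102.0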
The system enables users to run queries against the stored data to, for example, retrieve events that meet criteria specified in a query, such as containing certain keywords or having specific values in defined fields. As used herein throughout, data that is part of an event is referred to as “event data”. In this context, the term “field” refers to a location in the event data containing one or more values for a specific data item. As will be described in more detail herein, the fields are defined by extraction rules (e.g., regular expressions) that derive one or more values from the portion of raw machine data in each event that has a particular field specified by an extraction rule. The set of values so produced are semantically-related (such as IP address), even though the raw machine data in each event may be in different formats (e.g., semantically-related values may be in different positions in the events derived from different sources). As noted above, the SPLUNK® ENTERPRISE system utilizes a late-binding schema to event data while performing queries on events. One aspect of a late-binding schema is applying “extraction rules” to event data to extract values for specific fields during search time. More specifically, the extraction rules for a field can include one or more instructions that specify how to extract a value for the field from the event data. An extraction rule can generally include any type of instruction for extracting values from data in events. In some cases, an extraction rule comprises a regular expression where a sequence of characters form a search pattern, in which case the rule is referred to as a “regex rule.” The system applies the regex rule to the event data to extract values for associated fields in the event data by searching the event data for the sequence of characters defined in the regex rule. In the SPLUNK® ENTERPRISE system, a field extractor may be configured to automatically generate extraction rules for certain field values in the events when the events are being created, indexed, or stored, or possibly at a later time. Alternatively, a user may manually define extraction rules for fields using a variety of techniques. In contrast to a conventional schema for a database system, a late-binding schema is not defined at data ingestion time. Instead, the late-binding schema can be developed on an ongoing basis until the time a query is actually executed. This means that extraction rules for the fields in a query may be provided in the query itself, or may be located during execution of the query. Hence, as a user learns more about the data in the events, the user can continue to refine the late-binding schema by adding new fields, deleting fields, or modifying the field extraction rules for use the next time the schema is used by the system. Because the SPLUNK® ENTERPRISE system maintains the underlying raw data and uses late-binding schema for searching the raw data, it enables a user to continue investigating and learn valuable insights about the raw data. In some embodiments, a common field name may be used to reference two or more fields containing equivalent data items, even though the fields may be associated with different types of events that possibly have different data formats and different extraction rules. 
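As an illustrative sketch of such extraction rules (the event formats and regex rules below are hypothetical), a regex rule can produce the same semantically-related field, such as an IP address, from events whose raw data place that value in different positions:

import re

# A sketch of late-binding field extraction: regex rules applied to raw event
# text at search time. The event formats and rules here are hypothetical.
REGEX_RULES = {
    "web":      re.compile(r"^(?P<clientip>\d{1,3}(?:\.\d{1,3}){3}) "),
    "firewall": re.compile(r"src=(?P<clientip>\d{1,3}(?:\.\d{1,3}){3})"),
}

def extract(event, sourcetype, field="clientip"):
    """Apply the extraction rule for the event's source type at search time."""
    match = REGEX_RULES[sourcetype].search(event)
    return match.group(field) if match else None

events = [
    ("web",      '10.0.1.2 - - [01/Jan/2017] "GET /a" 200'),
    ("firewall", "action=blocked src=10.0.1.2 dst=10.0.1.9"),
]
# The same semantically-related field is produced from differently formatted events.
for sourcetype, raw in events:
    print(sourcetype, "->", extract(raw, sourcetype))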
By enabling a common field name to be used to identify equivalent fields from different types of events generated by disparate data sources, the system facilitates use of a “common information model” (CIM) across the disparate data sources (further discussed with respect toFIG.5). 2.0. Operating Environment FIG.1illustrates a networked computer system100in which an embodiment may be implemented. Those skilled in the art would understand thatFIG.1represents one example of a networked computer system and other embodiments may use different arrangements. The networked computer system100comprises one or more computing devices. These one or more computing devices comprise any combination of hardware and software configured to implement the various logical components described herein. For example, the one or more computing devices may include one or more memories that store instructions for implementing the various components described herein, one or more hardware processors configured to execute the instructions stored in the one or more memories, and various data repositories in the one or more memories for storing data structures utilized and manipulated by the various components. In an embodiment, one or more client devices102are coupled to one or more host devices106and a data intake and query system108via one or more networks104. Networks104broadly represent one or more LANs, WANs, cellular networks (e.g., LTE, HSPA, 3G, and other cellular technologies), and/or networks using any of wired, wireless, terrestrial microwave, or satellite links, and may include the public Internet. 2.1. Host Devices In the illustrated embodiment, a system100includes one or more host devices106. Host devices106may broadly include any number of computers, virtual machine instances, and/or data centers that are configured to host or execute one or more instances of host applications114. In general, a host device106may be involved, directly or indirectly, in processing requests received from client devices102. Each host device106may comprise, for example, one or more of a network device, a web server, an application server, a database server, etc. A collection of host devices106may be configured to implement a network-based service. For example, a provider of a network-based service may configure one or more host devices106and host applications114(e.g., one or more web servers, application servers, database servers, etc.) to collectively implement the network-based application. In general, client devices102communicate with one or more host applications114to exchange information. The communication between a client device102and a host application114may, for example, be based on the Hypertext Transfer Protocol (HTTP) or any other network protocol. Content delivered from the host application114to a client device102may include, for example, HTML documents, media content, etc. The communication between a client device102and host application114may include sending various requests and receiving data packets. For example, in general, a client device102or application running on a client device may initiate communication with a host application114by making a request for a specific resource (e.g., based on an HTTP request), and the application server may respond with the requested content stored in one or more response packets. 
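Such a request/response exchange might look like the following minimal Python sketch, in which the URL is purely illustrative:

# A minimal sketch of a client device requesting a specific resource from a
# host application over HTTP; the URL is hypothetical and the call requires
# network access.
from urllib import request

def fetch_resource(url):
    with request.urlopen(url) as response:        # client initiates the request
        return response.status, response.read()   # host returns response packets

status, content = fetch_resource("http://example.com/index.html")
print(status, len(content), "bytes")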
In the illustrated embodiment, one or more of host applications114may generate various types of performance data during operation, including event logs, network data, sensor data, and other types of machine-generated data. For example, a host application114comprising a web server may generate one or more web server logs in which details of interactions between the web server and any number of client devices102are recorded. As another example, a host device106comprising a router may generate one or more router logs that record information related to network traffic managed by the router. As yet another example, a host application114comprising a database server may generate one or more logs that record information related to requests sent from other host applications114(e.g., web servers or application servers) for data managed by the database server. 2.2. Client Devices Client devices102ofFIG.1represent any computing device capable of interacting with one or more host devices106via a network104. Examples of client devices102may include, without limitation, smart phones, tablet computers, handheld computers, wearable devices, laptop computers, desktop computers, servers, portable media players, gaming devices, and so forth. In general, a client device102can provide access to different content, for instance, content provided by one or more host devices106, etc. Each client device102may comprise one or more client applications110, described in more detail in a separate section hereinafter. 2.3. Client Device Applications In an embodiment, each client device102may host or execute one or more client applications110that are capable of interacting with one or more host devices106via one or more networks104. For instance, a client application110may be or comprise a web browser that a user may use to navigate to one or more websites or other resources provided by one or more host devices106. As another example, a client application110may comprise a mobile application or “app.” For example, an operator of a network-based service hosted by one or more host devices106may make available one or more mobile apps that enable users of client devices102to access various resources of the network-based service. As yet another example, client applications110may include background processes that perform various operations without direct interaction from a user. A client application110may include a “plug-in” or “extension” to another application, such as a web browser plug-in or extension. In an embodiment, a client application110may include a monitoring component112. At a high level, the monitoring component112comprises a software component or other logic that facilitates generating performance data related to a client device's operating state, including monitoring network traffic sent and received from the client device and collecting other device and/or application-specific information. Monitoring component112may be an integrated component of a client application110, a plug-in, an extension, or any other type of add-on component. Monitoring component112may also be a stand-alone process. In one embodiment, a monitoring component112may be created when a client application110is developed, for example, by an application developer using a software development kit (SDK). The SDK may include custom monitoring code that can be incorporated into the code implementing a client application110.
When the code is converted to an executable application, the custom code implementing the monitoring functionality can become part of the application itself. In some cases, an SDK or other code for implementing the monitoring functionality may be offered by a provider of a data intake and query system, such as a system108. In such cases, the provider of the system108can implement the custom code so that performance data generated by the monitoring functionality is sent to the system108to facilitate analysis of the performance data by a developer of the client application or other users. In an embodiment, the custom monitoring code may be incorporated into the code of a client application110in a number of different ways, such as the insertion of one or more lines in the client application code that call or otherwise invoke the monitoring component112. As such, a developer of a client application110can add one or more lines of code into the client application110to trigger the monitoring component112at desired points during execution of the application. Code that triggers the monitoring component may be referred to as a monitor trigger. For instance, a monitor trigger may be included at or near the beginning of the executable code of the client application110such that the monitoring component112is initiated or triggered as the application is launched, or included at other points in the code that correspond to various actions of the client application, such as sending a network request or displaying a particular interface. In an embodiment, the monitoring component112may monitor one or more aspects of network traffic sent and/or received by a client application110. For example, the monitoring component112may be configured to monitor data packets transmitted to and/or from one or more host applications114. Incoming and/or outgoing data packets can be read or examined to identify network data contained within the packets, for example, and other aspects of data packets can be analyzed to determine a number of network performance statistics. Monitoring network traffic may enable the gathering of information particular to the network performance associated with a client application110or set of applications. In an embodiment, network performance data refers to any type of data that indicates information about the network and/or network performance. Network performance data may include, for instance, a URL requested, a connection type (e.g., HTTP, HTTPS, etc.), a connection start time, a connection end time, an HTTP status code, request length, response length, request headers, response headers, connection status (e.g., completion, response time(s), failure, etc.), and the like. Upon obtaining network performance data indicating performance of the network, the network performance data can be transmitted to a data intake and query system108for analysis. Upon developing a client application110that incorporates a monitoring component112, the client application110can be distributed to client devices102. Applications generally can be distributed to client devices102in any manner, or they can be pre-loaded. In some cases, the application may be distributed to a client device102via an application marketplace or other application distribution system. For instance, an application marketplace or other application distribution system might distribute the application to a client device based on a request from the client device to download the application.
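A minimal Python sketch of a monitor trigger and of the network performance data items listed above; MonitoringComponent and its method are hypothetical stand-ins for whatever an SDK actually provides:

import time

# A sketch of a monitor trigger inserted into client application code.
class MonitoringComponent:
    def __init__(self):
        self.records = []

    def record_request(self, url, connection_type, start, end, status_code):
        # One field-value pair per item of collected performance data.
        self.records.append({
            "url": url,
            "connectionType": connection_type,
            "connectionStart": start,
            "connectionEnd": end,
            "httpStatusCode": status_code,
        })

monitoring = MonitoringComponent()

def fetch(url):
    start = time.time()
    status_code = 200            # placeholder for an actual HTTP request
    end = time.time()
    # monitor trigger inserted by the developer at a desired point:
    monitoring.record_request(url, "HTTP", start, end, status_code)
    return status_code

fetch("https://example.com/api/items")
print(monitoring.records[0]["url"], monitoring.records[0]["httpStatusCode"])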
Examples of functionality that enables monitoring performance of a client device are described in U.S. patent application Ser. No. 14/524,748, entitled “UTILIZING PACKET HEADERS TO MONITOR NETWORK TRAFFIC IN ASSOCIATION WITH A CLIENT DEVICE”, filed on 27 Oct. 2014, and which is hereby incorporated by reference in its entirety for all purposes. In an embodiment, the monitoring component112may also monitor and collect performance data related to one or more aspects of the operational state of a client application110and/or client device102. For example, a monitoring component112may be configured to collect device performance information by monitoring one or more client device operations, or by making calls to an operating system and/or one or more other applications executing on a client device102for performance information. Device performance information may include, for instance, a current wireless signal strength of the device, a current connection type and network carrier, current memory performance information, a geographic location of the device, a device orientation, and any other information related to the operational state of the client device. In an embodiment, the monitoring component112may also monitor and collect other device profile information including, for example, a type of client device, a manufacturer and model of the device, versions of various software applications installed on the device, and so forth. In general, a monitoring component112may be configured to generate performance data in response to a monitor trigger in the code of a client application110or other triggering application event, as described above, and to store the performance data in one or more data records. Each data record, for example, may include a collection of field-value pairs, each field-value pair storing a particular item of performance data in association with a field for the item. For example, a data record generated by a monitoring component112may include a “networkLatency” field (not shown in the Figure) in which a value is stored. This field indicates a network latency measurement associated with one or more network requests. The data record may include a “state” field to store a value indicating a state of a network connection, and so forth for any number of aspects of collected performance data. 2.4. Data Server System FIG.2depicts a block diagram of an exemplary data intake and query system108, similar to the SPLUNK® ENTERPRISE system. System108includes one or more forwarders204that receive data from a variety of input data sources202, and one or more indexers206that process and store the data in one or more data stores208. These forwarders and indexers can comprise separate computer systems, or may alternatively comprise separate processes executing on one or more computer systems. Each data source202broadly represents a distinct source of data that can be consumed by a system108. Examples of a data source202include, without limitation, data files, directories of files, data sent over a network, event logs, registries, etc. During operation, the forwarders204identify which indexers206receive data collected from a data source202and forward the data to the appropriate indexers. Forwarders204can also perform operations on the data before forwarding, including removing extraneous data, detecting timestamps in the data, parsing data, indexing data, routing data based on criteria relating to the data being routed, and/or performing other data transformations. 
In an embodiment, a forwarder204may comprise a service accessible to client devices102and host devices106via a network104. For example, one type of forwarder204may be capable of consuming vast amounts of real-time data from a potentially large number of client devices102and/or host devices106. The forwarder204may, for example, comprise a computing device which implements multiple data pipelines or “queues” to handle forwarding of network data to indexers206. A forwarder204may also perform many of the functions that are performed by an indexer. For example, a forwarder204may perform keyword extractions on raw data or parse raw data to create events. A forwarder204may generate time stamps for events. Additionally or alternatively, a forwarder204may perform routing of events to indexers. Data store208may contain events derived from machine data from a variety of sources all pertaining to the same component in an IT environment, and this data may be produced by the machine in question or by other components in the IT environment. 2.5. Data Ingestion FIG.3depicts a flow chart illustrating an example data flow performed by Data Intake and Query system108, in accordance with the disclosed embodiments. The data flow illustrated inFIG.3is provided for illustrative purposes only; those skilled in the art would understand that one or more of the steps of the processes illustrated inFIG.3may be removed or the ordering of the steps may be changed. Furthermore, for the purposes of illustrating a clear example, one or more particular system components are described in the context of performing various operations during each of the data flow stages. For example, a forwarder is described as receiving and processing data during an input phase; an indexer is described as parsing and indexing data during parsing and indexing phases; and a search head is described as performing a search query during a search phase. However, other system arrangements and distributions of the processing steps across system components may be used. 2.5.1. Input At block302, a forwarder receives data from an input source, such as a data source202shown inFIG.2. A forwarder initially may receive the data as a raw data stream generated by the input source. For example, a forwarder may receive a data stream from a log file generated by an application server, from a stream of network data from a network device, or from any other source of data. In one embodiment, a forwarder receives the raw data and may segment the data stream into “blocks”, or “buckets,” possibly of a uniform data size, to facilitate subsequent processing steps. At block304, a forwarder or other system component annotates each block generated from the raw data with one or more metadata fields. These metadata fields may, for example, provide information related to the data block as a whole and may apply to each event that is subsequently derived from the data in the data block. For example, the metadata fields may include separate fields specifying each of a host, a source, and a source type related to the data block. A host field may contain a value identifying a host name or IP address of a device that generated the data. A source field may contain a value identifying a source of the data, such as a pathname of a file or a protocol and port related to received network data. A source type field may contain a value specifying a particular source type label for the data. 
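The input-phase behavior just described might be sketched as follows in Python; the block size and field values are illustrative only:

# A minimal sketch of the input phase: segmenting a raw stream into blocks
# and annotating each block with host, source, and source type metadata.
def annotate_blocks(raw_stream, host, source, sourcetype, block_size=4):
    lines = raw_stream.splitlines()
    for i in range(0, len(lines), block_size):
        yield {
            "host": host,               # device that generated the data
            "source": source,           # e.g., pathname of the log file
            "sourcetype": sourcetype,   # source type label for the data
            "data": lines[i:i + block_size],
        }

stream = "\n".join(f"line {n}" for n in range(10))
for block in annotate_blocks(stream, "web01", "/var/log/app.log", "app_log"):
    print(block["host"], block["source"], len(block["data"]))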
Additional metadata fields may also be included during the input phase, such as a character encoding of the data, if known, and possibly other values that provide information relevant to later processing steps. In an embodiment, a forwarder forwards the annotated data blocks to another system component (typically an indexer) for further processing. The SPLUNK® ENTERPRISE system allows forwarding of data from one SPLUNK® ENTERPRISE instance to another, or even to a third-party system. The SPLUNK® ENTERPRISE system can employ different types of forwarders in a configuration. In an embodiment, a forwarder may contain the essential components needed to forward data. It can gather data from a variety of inputs and forward the data to a SPLUNK® ENTERPRISE server for indexing and searching. It also can tag metadata (e.g., source, source type, host, etc.). Additionally or optionally, in an embodiment, a forwarder has the capabilities of the aforementioned forwarder as well as additional capabilities. The forwarder can parse data before forwarding the data (e.g., associate a time stamp with a portion of data and create an event, etc.) and can route data based on criteria such as source or type of event. It can also index data locally while forwarding the data to another indexer. 2.5.2. Parsing At block306, an indexer receives data blocks from a forwarder and parses the data to organize the data into events. In an embodiment, to organize the data into events, an indexer may determine a source type associated with each data block (e.g., by extracting a source type label from the metadata fields associated with the data block, etc.) and refer to a source type definition corresponding to the identified source type. The source type definition may include one or more properties that instruct the indexer to automatically determine the boundaries of events within the data. In general, these properties may include regular expression-based rules or delimiter rules where, for example, event boundaries may be indicated by predefined characters or character strings. These predefined characters may include punctuation marks or other special characters including, for example, carriage returns, tabs, spaces, line breaks, etc. If a source type for the data is unknown to the indexer, an indexer may infer a source type for the data by examining the structure of the data. Then, it can apply an inferred source type definition to the data to create the events. At block308, the indexer determines a timestamp for each event. Similar to the process for creating events, an indexer may again refer to a source type definition associated with the data to locate one or more properties that indicate instructions for determining a timestamp for each event. The properties may, for example, instruct an indexer to extract a time value from a portion of data in the event, to interpolate time values based on timestamps associated with temporally proximate events, to create a timestamp based on a time the event data was received or generated, to use the timestamp of a previous event, or to use any other rules for determining timestamps.
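A sketch of this parsing phase under an assumed source type definition, in which event boundaries are found with a regular expression and a timestamp is extracted from each event (the source type name, rules, and formats are hypothetical):

import re
from datetime import datetime

# A sketch of the parsing phase: event boundaries from a regex rule, then a
# timestamp extracted per the source type definition. Requires Python 3.7+
# (re.split on a zero-width lookahead pattern).
SOURCETYPE_DEFS = {
    "app_log": {
        "boundary": re.compile(r"(?m)^(?=\d{4}-\d{2}-\d{2} )"),  # new event at each leading date
        "time": re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"),
        "time_format": "%Y-%m-%d %H:%M:%S",
    }
}

def parse_events(block_text, sourcetype):
    defn = SOURCETYPE_DEFS[sourcetype]
    for raw in filter(None, defn["boundary"].split(block_text)):
        m = defn["time"].match(raw)
        ts = datetime.strptime(m.group(1), defn["time_format"]) if m else None
        yield {"ts": ts, "raw": raw.rstrip("\n")}

text = ("2017-03-01 08:00:01 ERROR disk full\n  trace line\n"
        "2017-03-01 08:00:05 INFO retry\n")
for ev in parse_events(text, "app_log"):
    print(ev["ts"], "|", ev["raw"].splitlines()[0])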
At block310, the indexer associates with each event one or more metadata fields including a field containing the timestamp (in some embodiments, a timestamp may be included in the metadata fields) determined for the event. These metadata fields may include a number of “default fields” that are associated with all events, and may also include one or more custom fields as defined by a user. Similar to the metadata fields associated with the data blocks at block304, the default metadata fields associated with each event may include a host, source, and source type field, in addition to a field storing the timestamp. At block312, an indexer may optionally apply one or more transformations to data included in the events created at block306. For example, such transformations can include removing a portion of an event (e.g., a portion used to define event boundaries, extraneous characters from the event, other extraneous text, etc.), masking a portion of an event (e.g., masking a credit card number), removing redundant portions of an event, etc. The transformations applied to event data may, for example, be specified in one or more configuration files and referenced by one or more source type definitions. 2.5.3. Indexing At blocks314and316, an indexer can optionally generate a keyword index to facilitate fast keyword searching for event data. To build a keyword index, at block314, the indexer identifies a set of keywords in each event. At block316, the indexer includes the identified keywords in an index, which associates each stored keyword with reference pointers to events containing that keyword (or to locations within events where that keyword is located, other location identifiers, etc.). When an indexer subsequently receives a keyword-based query, the indexer can access the keyword index to quickly identify events containing the keyword. In some embodiments, the keyword index may include entries for name-value pairs found in events, where a name-value pair can include a pair of keywords connected by a symbol, such as an equals sign or colon. This way, events containing these name-value pairs can be quickly located. In some embodiments, fields can automatically be generated for some or all of the name-value pairs at the time of indexing. For example, if the string “dest=10.0.1.2” is found in an event, a field named “dest” may be created for the event, and assigned a value of “10.0.1.2”. At block318, the indexer stores the events with an associated timestamp in a data store208. Timestamps enable a user to search for events based on a time range. In one embodiment, the stored events are organized into “buckets,” where each bucket stores events associated with a specific time range based on the timestamps associated with each event. This not only improves time-based searching, but also allows events with recent timestamps, which may have a higher likelihood of being accessed, to be stored in a faster memory to facilitate faster retrieval. For example, buckets containing the most recent events can be stored in flash memory rather than on a hard disk. Each indexer206may be responsible for storing and searching a subset of the events contained in a corresponding data store208. By distributing events among the indexers and data stores, the indexers can analyze events for a query in parallel. For example, using map-reduce techniques, each indexer returns partial responses for a subset of events to a search head that combines the results to produce an answer for the query. By storing events in buckets for specific time ranges, an indexer may further optimize the data retrieval process by searching buckets corresponding to time ranges that are relevant to a query. Moreover, events and buckets can also be replicated across different indexers and data stores to facilitate high availability and disaster recovery as described in U.S. patent application Ser. No. 14/266,812, entitled “SITE-BASED SEARCH AFFINITY”, filed on 30 Apr. 2014, and in U.S. patent application Ser. No. 14/266,817, entitled “MULTI-SITE CLUSTERING”, also filed on 30 Apr. 2014, each of which is hereby incorporated by reference in its entirety for all purposes.
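A small Python sketch of the keyword indexing and automatic name-value field generation described above, using the “dest=10.0.1.2” example:

import re
from collections import defaultdict

# A sketch of keyword indexing: each keyword maps to the events containing
# it, and name=value pairs become fields at index time.
NAME_VALUE = re.compile(r"(\w+)=([\w.:]+)")

def index_events(events):
    keyword_index = defaultdict(set)
    fields = defaultdict(dict)
    for i, event in enumerate(events):
        for token in re.findall(r"[\w.:=]+", event):
            keyword_index[token].add(i)                 # reference pointer to event i
        for name, value in NAME_VALUE.findall(event):
            fields[i][name] = value                     # auto-generated field
    return keyword_index, fields

events = ["action=blocked dest=10.0.1.2", "action=allowed dest=10.0.1.3"]
kw, fields = index_events(events)
print(sorted(kw["dest=10.0.1.2"]))  # [0]
print(fields[0]["dest"])            # 10.0.1.2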
2.6. Query Processing FIG.4is a flow diagram that illustrates an exemplary process that a search head and one or more indexers may perform during a search query. At block402, a search head receives a search query from a client. At block404, the search head analyzes the search query to determine what portion(s) of the query can be delegated to indexers and what portions of the query can be executed locally by the search head. At block406, the search head distributes the determined portions of the query to the appropriate indexers. In an embodiment, a search head cluster may take the place of an independent search head where each search head in the search head cluster coordinates with peer search heads in the search head cluster to schedule jobs, replicate search results, update configurations, fulfill search requests, etc. In an embodiment, the search head (or each search head) communicates with a master node (also known as a cluster master; not shown in the figures) that provides the search head with a list of indexers to which the search head can distribute the determined portions of the query. The master node maintains a list of active indexers and can also designate which indexers may have responsibility for responding to queries over certain sets of events. A search head may communicate with the master node before the search head distributes queries to indexers to discover the addresses of active indexers. At block408, the indexers to which the query was distributed search their associated data stores for events that are responsive to the query. To determine which events are responsive to the query, the indexer searches for events that match the criteria specified in the query. These criteria can include matching keywords or specific values for certain fields. The searching operations at block408may use the late-binding schema to extract values for specified fields from events at the time the query is processed. In an embodiment, one or more rules for extracting field values may be specified as part of a source type definition. The indexers may then either send the relevant events back to the search head, or use the events to determine a partial result, and send the partial result back to the search head. At block410, the search head combines the partial results and/or events received from the indexers to produce a final result for the query. This final result may comprise different types of data depending on what the query requested. For example, the results can include a listing of matching events returned by the query, or some type of visualization of the data from the returned events. In another example, the final result can include one or more calculated values derived from the matching events. The results generated by the system108can be returned to a client using different techniques. For example, one technique streams results or relevant events back to a client in real-time as they are identified. Another technique waits to report the results to the client until a complete set of results (which may include a set of relevant events or a result based on relevant events) is ready to return to the client. Yet another technique streams interim results or relevant events back to the client in real-time until a complete set of results is ready, and then returns the complete set of results to the client. In another technique, certain results are stored as “search jobs” and the client may retrieve the results by referring to the search jobs.
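The distributed search described above can be sketched in a map-reduce style, with each indexer computing a partial result over its own events and the search head combining them (the event layout and the counted field are hypothetical):

from collections import Counter

# A sketch of distributed search: map step on each indexer, reduce step on
# the search head that combines the partial results.
def indexer_partial(events, keyword):
    """Map step run on one indexer: count matching events per status field."""
    return Counter(e["status"] for e in events if keyword in e["raw"])

def search_head(indexer_stores, keyword):
    """Reduce step: combine partial results into the final answer."""
    total = Counter()
    for store in indexer_stores:          # query distributed to each indexer
        total += indexer_partial(store, keyword)
    return total

stores = [
    [{"raw": "GET /a error", "status": "500"}, {"raw": "GET /b ok", "status": "200"}],
    [{"raw": "GET /c error", "status": "500"}],
]
print(search_head(stores, "error"))  # Counter({'500': 2})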
The search head can also perform various operations to make the search more efficient. For example, before the search head begins execution of a query, the search head can determine a time range for the query and a set of common keywords that all matching events include. The search head may then use these parameters to query the indexers to obtain a superset of the eventual results. Then, during a filtering stage, the search head can perform field-extraction operations on the superset to produce a reduced set of search results. This speeds up queries that are performed on a periodic basis. 2.7. Field Extraction The search head210allows users to search and visualize event data extracted from raw machine data received from homogeneous data sources. It also allows users to search and visualize event data extracted from raw machine data received from heterogeneous data sources. The search head210includes various mechanisms, which may additionally reside in an indexer206, for processing a query. Splunk Processing Language (SPL), used in conjunction with the SPLUNK® ENTERPRISE system, can be utilized to make a query. SPL is a pipelined search language in which a set of inputs is operated on by a first command in a command line, and then a subsequent command following the pipe symbol “|” operates on the results produced by the first command, and so on for additional commands. Other query languages, such as the Structured Query Language (“SQL”), can be used to create a query. In response to receiving the search query, search head210uses extraction rules to extract values for fields in the event data being searched. The search head210obtains extraction rules that specify how to extract a value for certain fields from an event. Extraction rules can comprise regex rules that specify how to extract values for the relevant fields. In addition to specifying how to extract field values, the extraction rules may also include instructions for deriving a field value by performing a function on a character string or value retrieved by the extraction rule. For example, a transformation rule may truncate a character string, or convert the character string into a different data format. In some cases, the query itself can specify one or more extraction rules. The search head210can apply the extraction rules to event data that it receives from indexers206. Indexers206may apply the extraction rules to events in an associated data store208. Extraction rules can be applied to all the events in a data store, or to a subset of the events that have been filtered based on some criteria (e.g., event time stamp values, etc.). Extraction rules can be used to extract one or more values for a field from events by parsing the event data and examining the event data for one or more patterns of characters, numbers, delimiters, etc., that indicate where the field begins and, optionally, ends. FIG.5illustrates an example of raw machine data received from disparate data sources. In this example, a user submits an order for merchandise using a vendor's shopping application program501running on the user's system.
In this example, the order was not delivered to the vendor's server due to a resource exception at the destination server that is detected by the middleware code502. The user then sends a message to the customer support503to complain about the order failing to complete. The three systems501,502, and503are disparate systems that do not have a common logging format. The order application501sends log data504to the SPLUNK® ENTERPRISE system in one format, the middleware code502sends error log data505in a second format, and the support server503sends log data506in a third format. Using the log data received at one or more indexers206from the three systems the vendor can uniquely obtain an insight into user activity, user experience, and system behavior. The search head210allows the vendor's administrator to search the log data from the three systems that one or more indexers206are responsible for searching, thereby obtaining correlated information, such as the order number and corresponding customer ID number of the person placing the order. The system also allows the administrator to see a visualization of related events via a user interface. The administrator can query the search head210for customer ID field value matches across the log data from the three systems that are stored at the one or more indexers206. The customer ID field value exists in the data gathered from the three systems, but the customer ID field value may be located in different areas of the data given differences in the architecture of the systems—there is a semantic relationship between the customer ID field values generated by the three systems. The search head210requests event data from the one or more indexers206to gather relevant event data from the three systems. It then applies extraction rules to the event data in order to extract field values that it can correlate. The search head may apply a different extraction rule to each set of events from each system when the event data format differs among systems. In this example, the user interface can display to the administrator the event data corresponding to the common customer ID field values507,508, and509, thereby providing the administrator with insight into a customer's experience. Note that query results can be returned to a client, a search head, or any other system component for further processing. In general, query results may include a set of one or more events, a set of one or more values obtained from the events, a subset of the values, statistics calculated based on the values, a report containing the values, or a visualization, such as a graph or chart, generated from the values. 3.0. Application Development and Deployment System FIG.6depicts an exemplary application development and deployment system for a data intake and query system in accordance with some embodiments of the present disclosure. Although an application development and deployment system may be implemented in any suitable environments for use by any suitable user types, in an embodiment the application development and deployment system may include an application development system602operated by application developers610and an application deployment system604operated by data intake and query system (DIQS) administrators612. An application developer610may operate the application development system602to create an application as a set of application packages (e.g., application packages630,632, and6341-634n) that are provided to the application deployment system604. 
The DIQS administrator may then utilize the application deployment system604to provide the application to the application partitioner624to generate deployment packages (e.g., deployment packages640,642, and6441-644m) that are distributed and installed at the data intake and query system606. As described herein, a data intake and query system may include a number of devices performing different functions within the data intake and query system. In the data intake and query system606ofFIG.6, the data intake and query system includes physical and logical groups. In an embodiment, physical groups may refer to classes of servers that perform a unique function within the data intake and query system, such that all indexers650may form one physical group, all search heads652may form another physical group, and all forwarders654may form yet another physical group. Because each physical group performs a unique function, it is not necessary for each physical group to receive and install a complete application, which may include a significant amount of code that is not germane to the operations of the physical group. Similarly, any of the physical groups may include one or more logical groups, each of which may define a unique sub-task that is performed by the servers within a particular logical group. In an embodiment, the physical group of forwarders654may include a plurality of logical groups6541-654m(e.g., forwarder groups). Each logical group6541-654mof forwarders may perform a unique sub-task, such as receiving and forwarding data from a particular data source. In an embodiment, the application development system602may include application development environment622and application partitioner624. An application developer610may interact with the application development environment622to create, develop, troubleshoot, test, and emulate an application for eventual deployment to data intake and query system606. In some embodiments, application development environment may exchange information with application partitioner624to assist in creating, developing, troubleshooting, testing, and emulating the application. Once an application has been developed, the source application package626(e.g., source code, configuration files, pre-deployment configuration settings, dependency settings, and an application manifest) may be provided to the application partitioner624from the application development environment622. In an embodiment, the source application package may be a single package that includes all of the necessary information to provide application code to each of the components of the data intake and query system606. Application partitioner624may utilize the information provided in the source application package626to generate targeted application packages to be provided to the application deployment system604, in order to provide targeted deployment packages to the physical and logical groups of the data intake and query system606. In an embodiment, the application partitioner624may generate an indexer application package630, a search head application package632, and forwarder group application packages6341-634n. The application packages630,632, and6341-634nmay be provided to the application deployment system604from the application partitioner624of the application development system602.
Application packages may be provided from the application partitioner624in such a way that the application packages may be deployed as deployment packages to a number of disparate data intake and query systems606, as long as the target system includes the appropriate physical and logical groups required for the application. The application deployment system604may be associated with and may configure the application packages for deployment to a particular data intake and query system606, i.e., to target the specific physical and logical groups of the particular data intake and query system606. In an embodiment, the application deployment system604may have information about the components of the data intake and query system606(e.g., indexers650, search heads652, and forwarder groups6541-654m). The application deployment system604may provide a user interface (e.g., a GUI and/or command line interface) that allows the DIQS administrator612to configure the deployment, update the application packages for the particular deployment, and provide deployment packages (e.g., indexer deployment package640, search head deployment package642, and forwarder group deployment packages6441-644m) to the data intake and query system606. The deployment packages may then be installed at the appropriate physical and logical groups of the data intake and query system (e.g., indexer deployment package640may be installed at each of indexers650, search head deployment package642may be installed at each of search heads652, and forwarder group deployment packages6441-644mmay be stored at corresponding forwarder groups6541-654m). 3.1. Application Development Environment FIG.7depicts an exemplary application development environment622in accordance with some embodiments of the present disclosure. Although application development environment622is depicted as a set of blocks including particular functionality, it will be understood that the application development environment may be implemented on suitable computing devices including processors executing instructions stored in memory, to perform the functions described herein. In an embodiment, application development environment622may be implemented as an integrated development environment in which application developers610may access aspects of the application development environment622from local devices (e.g., workstations, desktops, laptops, tablets, etc.) while other aspects of the application development environment are accessible from servers (e.g., local, remote, and/or cloud servers). In this manner, application developers610can work from shared resources of the application development environment622to create complex applications for eventual deployment at data intake and query systems. Although an application development environment622may include any suitable components, in an embodiment the application development environment622may include a set of modules that provide for aspects of the operation of the application development environment622. Although each module may be implemented in any suitable manner, in an embodiment the modules may be implemented as software instructions that are executed on one or more processors of one or more devices or servers of the application development environment, as well as data stored in memory that may be utilized by the modules.
In an embodiment, the modules of the application development environment may include an application generator module702, a configuration file module704, a source code module706, a pre-deployment configuration module708, a dependency management module710, and an application manifest module712. Although these modules are depicted and described herein as being components of the application development environment622, it will be understood that in some embodiments one or more of these modules or any suitable functionality thereof may be located elsewhere within the application development system602(e.g., at application partitioner624). In an embodiment, an application developer610accesses all of these modules through a single application development interface. In an embodiment, application generator module702may perform a variety of functions in order to generate the source application package626based on interactions with application developers610, the operation and information associated with the other modules of application development environment622, and communications with application partitioner624. In an embodiment, application generator module702may provide a rich user interface and/or information necessary to generate a rich user interface for application developers610to interact with modules of the application development environment. An exemplary user interface may allow a user to access and modify parameters and I/O for configuration files of configuration file module704, create source code for source code module706, set pre-deployment configuration parameters for physical or logical groups for pre-deployment configuration module708, define dependency relationships for applications with dependency management module710, and create an application manifest for use in defining deployment parameters with application manifest module712. Based on the user inputs of the application developer610, the application generator module702may update code, information, and parameters associated with the other modules, and generate (e.g., automatically or in response to a request by an application developer) a complete source application package626for the application. In some embodiments, application generator module702may automatically perform certain functionality, such as generating some or all sections of an application manifest, identifying and defining dependency relationships, setting available configuration parameters for pre-deployment configuration, and developing (e.g., emulating, testing, and debugging) source code and configuration files. In some embodiments, application generator module702may communicate with one or more external components (e.g., application partitioner624) to perform some aspects of this functionality. Configuration file module704may include configuration files that are customized for data intake and query systems, and that provide specific functionality for components of data intake and query systems. By utilizing configuration files of configuration file module704, an application developer610can develop applications that implement functionality that is known to be operational, and by providing inputs to and receiving outputs from those files, may more easily develop complex applications. Configuration file module704may include interfaces that define the portions of the configuration that may be modified, as well as inputs and outputs that the application developer610is permitted to utilize.
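By way of illustration, a configuration-file interface of the kind just described might expose only permitted settings, as in the following Python sketch (the stanza and the permitted-settings list are hypothetical):

import configparser
import io

# A minimal sketch of a configuration-file interface: reading a hypothetical
# inputs.conf-style stanza and exposing only the settings a developer may
# modify.
CONF_TEXT = """
[monitor:///var/log/app.log]
sourcetype = app_log
index = main
"""

PERMITTED = {"sourcetype", "index"}   # portions the developer may modify

parser = configparser.ConfigParser()
parser.read_file(io.StringIO(CONF_TEXT))
for stanza in parser.sections():
    settings = {k: v for k, v in parser[stanza].items() if k in PERMITTED}
    print(stanza, settings)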
An exemplary listing of configuration files and a summary of the functionality provided by these configuration files is provided in Table 1.

TABLE 1. Configuration Files

alert_actions.conf: Defines saved search actions. Saved searches are configured in savedsearches.conf.
app.conf: Identifies and defines the state of a given app. Allows aspects of the application to be customized.
authorize.conf: Configures roles and granular access controls.
collections.conf: Defines the KV Store collections for an app.
commands.conf: Declares custom search commands.
crawl.conf: Configures the crawl search command, which examines the file system to discover new sources to index, using built-in settings.
datamodels.conf: Defines data models.
datatypesbnf.conf: Configures how the search assistant shows the syntax for search commands.
distsearch.conf: Configures distributed search.
eventtypes.conf: Configures event types and their properties.
fields.conf: Configures multi-value field handling, distinguishes between indexed and extracted fields, and improves search performance by telling the search processor how to handle field values.
indexes.conf: Defines DIQS indexes and their properties.
inputs.conf: Defines inputs to DIQS.
limits.conf: Sets limits for search commands.
macros.conf: Defines search language macros.
multikv.conf: Defines rules by which table-like events are extracted at search time.
props.conf: Defines data processing rules.
restmap.conf: Defines REST endpoints. App use cases include management interfaces for custom search commands, modular alerts, and modular inputs.
savedsearches.conf: Defines saved searches.
searchbnf.conf: Configures the search assistant.
segmenters.conf: Defines segmentation rules to be applied at index time for a host, source, or sourcetype.
tags.conf: Sets any number of tags for indexed or extracted fields.
times.conf: Defines custom time ranges.
transactiontypes.conf: Defines transaction searches and their properties.
transforms.conf: Defines search- and index-time transformations.
ui-prefs.conf: Defines UI preferences for a view.
user-prefs.conf: Defines some of the settings that can be configured on a per-user basis for use by the DIQS Web UI.
viewstates.conf: Records the state of a view.
workflow_actions.conf: Configures workflow actions.
audit.conf: Configures auditing and event signing.
wmi.conf: Configures WMI data collection from a Windows machine.

Source code module706may interface source code written by an application developer610with the configuration files of configuration file module704as well as with other modules of the application development environment622. In an embodiment, coding tools may be provided that generate portions of code to assist developers in creating the source code and interfacing with the other modules. Menus, prompts, and sample code may also be provided to assist in developing the source code. Although any suitable programming language may be supported by source code module706, in an exemplary embodiment source code module706may support C++ and Python. Pre-deployment configuration module708may provide an interface for configuring the application for particular target resources of data intake and query systems. In some embodiments, the pre-deployment configuration module708may define the parameters that need to be configured for a particular installation (e.g., by a DIQS administrator612operating an application deployment system604).
For example, credentials may be associated with particular logical groups and may be defined by the application developer610via the pre-deployment configuration module708(e.g., a forwarder group servicing Microsoft Exchange servers, and requiring Microsoft Exchange credentials). Dependency management module710may provide an interface for an application developer to manage dependencies among applications. Numerous applications may be operational on a particular data intake and query system606, and in some embodiments, an application may be developed that calls a second level of applications. In some embodiments, the second level applications may call additional applications, and so on. Some applications may depend from a number of other applications, and applications may be situated at multiple different levels of dependency with respect to other applications. Furthermore, each application may define a version tolerance associated with dependent applications, which defines the range of versions of the dependent application that will are permissible for the calling application. These complex dependency relationships may result in a dependency tree for an application being developed within the application development environment622. In an embodiment, dependency management module710may provide tools for defining and resolving dependency relationships within an application. Dependency management module710may provide tools that allow an application developer610to define dependency relationships. Although the dependency management tools may be implemented in a variety of manners (e.g., dependency listings, etc.), in an embodiment dependency relationships may be defined based on text information provided in a dependency portion of an application manifest. Although the dependency relationships may be defined in any suitable manner, in an embodiment the information defining a dependency relationship may include information such as an application definition (e.g., a name defining the dependent application within a particular higher-level application), a dependency type (e.g., required or optional), an application group, an application name, a version tolerance (e.g., defining acceptable versions of the dependent application), and an application package (e.g., providing a location for application code for the dependent application). In some embodiments, the application developer610may also utilize dependency management module710to define anti-dependencies, which indicate that the application should not be installed if another particular application or dependency relationship is installed at the data intake and query system606. For example, an anti-dependency relationship may be defined for certain sophisticated “heavy” applications which consume a lot of system resources or deprecated application which the application developer is aware of and prefers not to have installed with the application. Each dependent application may include a similar manifest defining its own dependencies. 
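The version tolerance notation that appears in the manifest examples below (e.g., "[2.1,2.2)" for version 2.1 inclusive up to, but excluding, version 2.2, and "[6.3,)" for version 6.3 or later with no upper bound) can be evaluated mechanically. The following is a minimal sketch of one plausible reading of that notation; the parsing rules and function names are illustrative assumptions, not an interface defined by this disclosure.

```python
# Sketch of version-tolerance handling, assuming the interval notation used
# in the manifest examples: "[" / "]" are inclusive bounds, "(" / ")" are
# exclusive bounds, and a missing bound means the range is open-ended.
def parse_version(text):
    # "2.1.1" -> (2, 1, 1); tuple comparison then orders versions correctly.
    return tuple(int(part) for part in text.split("."))

def parse_tolerance(spec):
    lo_inclusive = spec[0] == "["
    hi_inclusive = spec[-1] == "]"
    lo_text, hi_text = spec[1:-1].split(",")
    return (parse_version(lo_text) if lo_text else None, lo_inclusive,
            parse_version(hi_text) if hi_text else None, hi_inclusive)

def version_allowed(version, spec):
    v = parse_version(version)
    lo, lo_inc, hi, hi_inc = parse_tolerance(spec)
    if lo is not None and (v < lo or (v == lo and not lo_inc)):
        return False
    if hi is not None and (v > hi or (v == hi and not hi_inc)):
        return False
    return True

assert version_allowed("2.1.1", "[2.1,2.2)")    # within tolerance
assert not version_allowed("2.2", "[2.1,2.2)")  # upper bound is exclusive
assert version_allowed("6.3.9600", "[6.3,)")    # open-ended upper bound
```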
An example of portions of an application manifest defining dependency relationships is provided in the example below, in which a "Splunk App for Microsoft Exchange" has dependent applications "Splunk Add-on for Microsoft IIS" and "Splunk Add-on for Microsoft Windows."

{
  "schemaVersion": "1.0",
  "info": {
    "title": "Splunk App for Microsoft Exchange",
    "id": {
      "group": "com.splunk.app",
      "name": "microsoft exchange",
      "version": "3.1.3"
    },
    ...
  },
  "dependencies": {
    "Splunk Add-on for Microsoft IIS": {
      "type": "required",
      "app": {
        "group": "com.splunk.addon",
        "name": "microsoft_iis",
        "version": "[2.1,2.2)"
      },
      "package": "com.splunk.addon-microsoft_iis-2.1.1.tar.gz"
    },
    "Splunk Add-on for Microsoft Windows": {
      "type": "required",
      "app": {
        "group": "com.splunk.addon",
        "name": "microsoft_windows",
        "version": "[4.7,4.8)"
      },
      "package": "com.splunk.addon-microsoft_windows-4.7.5.tar.gz"
    },
    ...
  }
}

Dependency management module710may utilize the dependency relationships (e.g., as provided in the application manifest) to determine dependent application compatibility based on a set of rules for resolving conflicts between dependencies. In an embodiment, the dependency management module710may identify each instance within the dependency tree in which a dependent application is required, and determine the version tolerance associated with each instance. The version tolerances may then be used to determine how or if the dependent application should be packaged within the source application package626. In an embodiment, the dependency management module710may access the version tolerances for each application and, for an application that is a dependent application of multiple applications, may determine a range of versions that is acceptable for all of the defined dependency relationships. If there is no overlap between acceptable versions, a warning or similar indication may be provided to the application developer610. In some embodiments, the dependency management module710may communicate with other components of the application development system to determine if the versions defined by the application developer are compatible with dependencies for existing applications that may also be installed at a data intake and query system606. In some embodiments, this analysis may be performed in conjunction with dependency rules module908of application deployment system604. Application manifest module712may provide for the creation and development of an application manifest for the application. Although an application manifest may include any suitable information for an application, in an exemplary embodiment the application manifest may provide descriptive information, group information, and dependency information, which may be parsed by the application partitioner624in order to determine how to partition the source application package626into a plurality of application packages (e.g., indexer application package630, search head application package632, and forwarder group application packages6341-634n). In some embodiments, aspects of the application manifest may be generated automatically, based on inputs from an application developer610, information about a target data intake and query system, standard application manifest formats, related applications, or any combination thereof. The application developer610may then work from this example application manifest to develop a complete application manifest.
An exemplary application manifest, including descriptive information (e.g., "schemaVersion," "info," "author," "releaseDate," "description," "classification," "license," and "privacyPolicy") and information regarding logical groups (e.g., inputs for forwarder groups "all," "Active Directory Domain Services," "DHCP Server," "Windows Event Log," "Windows Host Monitor," "Windows Performance Monitor," "Windows Network Monitor," "Windows Print Monitor," "Windows Registry," and "Windows Update Monitor") is provided below.

{
  "schemaVersion": "1.0",
  "info": {
    "title": "Splunk Add-on for Microsoft Windows",
    "id": {
      "group": "com.splunk.addon",
      "name": "microsoft_windows",
      "version": "4.7.5"
    },
    "author": [
      {
        "name": "author",
        "email": "[email protected]",
        "company": "Splunk, Inc."
      }
    ],
    "releaseDate": "10-22-2015",
    "description": "Splunk can be used to monitor your Windows machines for changes, performance over time, and important system information such as security audits and alerts. The Splunk Add-on for Windows is a collection of inputs and knowledge to accelerate your use of Splunk for common Windows monitoring tasks.",
    "classification": {
      "intendedAudience": "System Administrators",
      "categories": ["IT Operations", "Application Management", "Add-on"],
      "developmentStatus": "Production/Stable"
    },
    "license": {
      "name": "Splunk Software License Agreement",
      "text": "./app-license.html",
      "uri": "http://www.splunk.com/en_us/legal/splunk-software-license-agreement.html"
    },
    "privacyPolicy": {
      "name": "Splunk Privacy Policy",
      "text": "./app-privacy.html",
      "uri": "http://www.splunk.com/en_us/legal/splunk-software-privacy-policy.html"
    }
  },
  "forwarderGroups": {
    "all": {
      "inputs": [
        "admon://default",
        "monitor://$WINDIR\\System32\\DHCP",
        "monitor://$WINDIR\\WindowsUpdate.log",
        "perfmon://CPU", "perfmon://LogicalDisk", "perfmon://Memory",
        "perfmon://Network", "perfmon://PhysicalDisk", "perfmon://Process",
        "perfmon://System",
        "script://.\\bin\\win_installed_apps.bat",
        "script://.\\bin\\win_listening_ports.bat",
        "WinEventLog://Application", "WinEventLog://Security", "WinEventLog://System",
        "WinHostMon://Application", "WinHostMon://Computer", "WinHostMon://Disk",
        "WinHostMon://Driver", "WinHostMon://NetworkAdapter",
        "WinHostMon://OperatingSystem", "WinHostMon://Process",
        "WinHostMon://Processor", "WinHostMon://Roles", "WinHostMon://Service",
        "WinNetMon://inbound", "WinNetMon://outbound",
        "WinPrintMon://driver", "WinPrintMon://port", "WinPrintMon://printer",
        "WinRegMon://default", "WinRegMon://hkcu_run", "WinRegMon://hklm_run"
      ]
    },
    "Active Directory Domain Services": {
      "inputs": ["admon://default"]
    },
    "DHCP Server": {
      "inputs": ["monitor://$WINDIR\\System32\\DHCP"]
    },
    "Windows Event Log": {
      "inputs": ["WinEventLog://Application", "WinEventLog://Security",
                 "WinEventLog://System"]
    },
    "Windows Host Monitor": {
      "inputs": ["WinHostMon://Application", "WinHostMon://Computer",
                 "WinHostMon://Disk", "WinHostMon://Driver",
                 "WinHostMon://NetworkAdapter", "WinHostMon://OperatingSystem",
                 "WinHostMon://Process", "WinHostMon://Processor",
                 "WinHostMon://Roles", "WinHostMon://Service"]
    },
    "Windows Performance Monitor": {
      "inputs": ["perfmon://CPU", "perfmon://LogicalDisk", "perfmon://Memory",
                 "perfmon://Network", "perfmon://PhysicalDisk",
                 "perfmon://Process", "perfmon://System"]
    },
    "Windows Network Monitor": {
      "inputs": ["WinNetMon://inbound", "WinNetMon://outbound"]
    },
    "Windows Print Monitor": {
      "inputs": ["WinPrintMon://driver", "WinPrintMon://port", "WinPrintMon://printer"]
    },
    "Windows Registry": {
      "inputs": ["WinRegMon://default", "WinRegMon://hkcu_run", "WinRegMon://hklm_run"],
      {"name": "Windows", "version": "[6.3,)"}],
      "features": []
    },
    "Windows Update Monitor": {
      "inputs": ["monitor://$WINDIR\\WindowsUpdate.log"]
    }
  }
}
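As one illustration of how such a manifest might be consumed downstream, the sketch below reads the forwarderGroups section of a manifest like the one above, as an application partitioner might before building one forwarder application package per logical group. The file name, the printed summary, and the handling of missing sections are illustrative assumptions only.

```python
# Sketch: enumerate the logical (forwarder) groups and their inputs from an
# application manifest like the one above. Each group would receive only the
# configuration stanzas for its own inputs, e.g. an inputs.conf containing
# one stanza per entry in its "inputs" list.
import json

with open("app.manifest") as f:  # hypothetical file name
    manifest = json.load(f)

app_id = manifest["info"]["id"]
print(f'{app_id["group"]}:{app_id["name"]} version {app_id["version"]}')

for group_name, group in manifest.get("forwarderGroups", {}).items():
    inputs = group.get("inputs", [])
    print(f"forwarder group {group_name!r}: {len(inputs)} input(s)")
```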
An application developer610may test, emulate, and debug an application within the application development environment622. In some embodiments, a preliminary source application package626may be provided to the application partitioner624in order to determine whether the source application can be properly partitioned (e.g., based on properly defined dependencies, dependency version tolerances, logical groups, pre-deployment configuration settings, and source code/configuration file relationships). Once an application has been developed, the completed source application package626may be provided to the application partitioner624to be partitioned and distributed to the application deployment system604for eventual installation at the data intake and query system606.

3.2. Application Partitioner

FIG.8depicts an exemplary application partitioner624in accordance with some embodiments of the present disclosure. Although application partitioner624is depicted as a set of blocks including particular functionality, it will be understood that the application partitioner may be implemented on suitable computing devices including processors executing instructions stored in memory, to perform the functions described herein. In an embodiment, application partitioner624may be implemented as a rule-based automated system in which most or all of the operations required to partition a source application package626are performed automatically by the hardware and software of the application partitioner. In some embodiments, at least a portion of the operations of the application partitioner624may operate on some of the same devices and servers as the application development environment622of application development system602, or in some embodiments, the application partitioner624may operate entirely on separate devices and servers (e.g., local, remote, and/or cloud servers). Although an application partitioner624may include any suitable components, in an embodiment the application partitioner624may include a set of modules that provide for aspects of the operation of the application partitioner624. Although each module may be implemented in any suitable manner, in an embodiment the modules may be implemented as software instructions that are executed on one or more processors of one or more devices or servers of the application partitioner, as well as data stored in memory that may be utilized by the modules. In an embodiment, the modules of the application partitioner may include an application partitioning module802, a partitioning rules module804, and a DIQS configuration module806. Although these modules are depicted and described herein as being components of the application partitioner624, it will be understood that in some embodiments one or more of these modules or any suitable functionality thereof may be located elsewhere within the application development system602(e.g., at application development environment622) or at the application deployment system604. Application partitioning module802may receive the source application package626(e.g., including a package of the source code, configuration files, pre-deployment configuration settings, dependency definitions, and an application manifest) and may partition the received package into a plurality of packages that are targeted for particular physical and/or logical groups of a data intake and query system606.
In an embodiment, the partitioning of the source application package626may be performed automatically based on the information provided in the source application package626and the other modules of the application partitioner624. Although an application may be partitioned in any suitable manner for any suitable target system, in an embodiment the application may be partitioned into three physical groups, with each physical group targeted to the unique functionality of an indexer, search head, or forwarder. The resulting application packages (e.g., indexer application package630, search head application package632, and forwarder application package634) may include only the application components that are necessary for the targeted physical group. In some embodiments, one or more of the physical groups may also include a plurality of logical groups, each of which performs a unique sub-task of the physical group. In an embodiment in which forwarders constitute a physical group, the forwarder group may include a plurality of logical groups that handle particular forwarding sub-tasks, for example, based on the source of the data that is being provided to the data intake and query system. Each logical group may receive its own unique application package, which may include only the application components that are necessary for the targeted logical group (e.g., each of forwarder application packages6341-634n). Partitioning rules module804may create, update, and execute partitioning rules that may be used to determine which portions of the source application package626are allocated to each of the application packages630-634. In an embodiment, partitioning rules module804may dynamically generate and update the partitioning rules, for example, based on data and feedback from application developers610(e.g., via application development environment622), DIQS administrators (e.g., via application deployment system604), other operator-provided input (e.g., selections of an operator (not depicted) of the application partitioner), and machine-generated data from any of application development system602, application deployment system604, and data intake and query system606. In some embodiments, partitioning rules module804may use this data to automatically create or update partitioning rules using machine learning techniques (e.g., providing the input data to a neural network). In some embodiments, the partitioning rules may be created or further modified by a user of the partitioning rules module804. Although partitioning rules may be implemented in any suitable manner, in an embodiment partitioning rules may be implemented based on the configuration files and application manifest of the source application package626. In an embodiment, each configuration file may be known to implement functionality of particular target physical or logical groups (e.g., indexer, search head, or forwarder), and for certain configuration files, particular settings for the configuration file may be known to implement functionality of particular target physical or logical groups. For example, the configuration files of Table 1 above may be associated with physical groups as indicated in Table 2 below.
TABLE 2. Configuration File/Physical Group Associations

alert_actions.conf: Search Head
app.conf: Indexer, Search Head, Forwarder
authorize.conf: Search Head
collections.conf: Search Head
commands.conf: Search Head
crawl.conf: Search Head
datamodels.conf: Search Head
datatypesbnf.conf: Search Head
distsearch.conf: Search Head
eventtypes.conf: Search Head
fields.conf: Indexer, Search Head
indexes.conf: Indexer, Search Head
inputs.conf: Forwarder
limits.conf: Indexer, Search Head
macros.conf: Search Head
multikv.conf: Search Head
props.conf: Indexer, Search Head, or Forwarder, depending on setting
restmap.conf: Indexer, Search Head
savedsearches.conf: Search Head
searchbnf.conf: Search Head
segmenters.conf: Indexer
tags.conf: Search Head
times.conf: Search Head
transactiontypes.conf: Search Head
transforms.conf: Indexer or Search Head, depending on setting
ui-prefs.conf: Search Head
user-prefs.conf: Search Head
viewstates.conf: Search Head
workflow_actions.conf: Search Head
audit.conf: Indexer, Forwarder
wmi.conf: Forwarder

Based on the associations between configuration files (and settings of configuration files) and physical groups, the correct configuration files may be assigned only to the proper target application packages (e.g., indexer application package630, search head application package632, and/or forwarder application packages6341-634n). The source code from the source application package626may also be partitioned and assigned to particular target application packages. In an embodiment, the source code may be partitioned based on the configuration files and settings that the source code utilizes. Each portion of code (e.g., each subroutine, function call, etc.) may be associated with the configuration files or settings that it utilizes, and only distributed to the application package (e.g., indexer application package630, search head application package632, and/or forwarder application packages6341-634n) that is associated with the configuration file or setting. In an embodiment, partitioning rules module804may also coordinate the partitioning of the source application package626among logical groups (e.g., forwarder application packages6341-634n). In an embodiment, the application manifest may include information that associates particular logical groups with particular configuration files and/or portions of source code. Once the source application package has been partitioned into physical groups, it can be further partitioned into logical groups based on the application manifest. DIQS configuration module806may include information about the configurations of data intake and query systems for use by application partitioning module802. In an embodiment, DIQS configuration module806may include information relating to different types of installations of data intake and query systems that may be utilized to properly package the application portions for different target environments. In an embodiment, different partitioning or dependency rules may be applied to different target deployment topologies/environments. The DIQS configuration module806may obtain information about types of installations (e.g., based on an application manifest) that are targeted by the source application package626, and based on this information, may adjust settings of the application partitioning module802and partitioning rules module804.
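A partitioning rule table of this kind lends itself to a very small sketch. The following abbreviates the Table 2 associations into a Python mapping and assigns configuration files to per-group package manifests; files such as props.conf, whose group depends on individual settings, would require per-setting rules that are omitted here for brevity, and all names are illustrative.

```python
# Sketch of configuration-file partitioning driven by Table 2-style rules
# (abbreviated). Setting-dependent files like props.conf are not modeled.
GROUPS_BY_CONF = {
    "alert_actions.conf": {"search_head"},
    "app.conf": {"indexer", "search_head", "forwarder"},
    "indexes.conf": {"indexer", "search_head"},
    "inputs.conf": {"forwarder"},
    "savedsearches.conf": {"search_head"},
    "wmi.conf": {"forwarder"},
}

def partition(conf_files):
    """Assign each configuration file to the application package for each of
    its target physical groups."""
    packages = {"indexer": [], "search_head": [], "forwarder": []}
    for name in conf_files:
        for group in GROUPS_BY_CONF.get(name, set()):
            packages[group].append(name)
    return packages

# Usage: a source application package containing four configuration files.
source_package = ["app.conf", "inputs.conf", "savedsearches.conf", "indexes.conf"]
for group, files in partition(source_package).items():
    print(group, "->", sorted(files))
```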
3.3. Application Deployment System

FIG.9depicts an exemplary application deployment system604in accordance with some embodiments of the present disclosure. Although application deployment system604is depicted as a set of blocks including particular functionality, it will be understood that the application deployment system may be implemented on suitable computing devices including processors executing instructions stored in memory, to perform the functions described herein. In an embodiment, application deployment system604may be implemented as an integrated deployment environment in which a DIQS administrator612may access aspects of the application deployment system604from local devices (e.g., workstations, desktops, laptops, tablets, etc.) while other aspects of the application deployment system are accessible from servers (e.g., local, remote, and/or cloud servers). In this manner, DIQS administrators612can work from shared resources of the application deployment system604to deploy complex applications to a data intake and query system606. Although an application deployment system604may include any suitable components, in an embodiment the application deployment system604may include a set of modules that provide for aspects of the operation of the application deployment system604. Although each module may be implemented in any suitable manner, in an embodiment the modules may be implemented as software instructions that are executed on one or more processors of one or more devices or servers of the application deployment system, as well as data stored in memory that may be utilized by the modules. In an embodiment, the modules of the application deployment system604may include an application deployment module902, an application configuration module904, a DIQS configuration module906, and a dependency rules module908. Application deployment module902may interface with DIQS administrator612to control the deployment of the target application packages630-634to the target resources of the data intake and query system606as deployment packages640-644, as well as the configuration of the application and data intake and query system. In an embodiment, aspects of the deployment and configuration may be performed automatically based on information provided in the incoming target application packages, by the application configuration module904, and the DIQS configuration module906. Exemplary information that may be determined automatically includes the deployment of certain target application packages to certain physical groups (e.g., indexer application package630may be packaged as indexer deployment package640for installation at indexers650, and search head application package632may be packaged as search head deployment package642for installation at search heads652) as well as the determination of certain configuration information. Other information for the deployment and configuration of the application (e.g., assignment of forwarder group application packages6341-634nto forwarder groups6541-654m, for packaging as forwarder group deployment packages6441-644m) may be provided by the DIQS administrator via user interfaces (e.g., the user interfaces depicted inFIGS.10-17herein).
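One plausible shape for this assignment logic is sketched below: indexer and search head packages are routed to their physical groups automatically, while forwarder group packages follow an assignment that the DIQS administrator supplies through the user interface. All function names, variable names, and data shapes are illustrative assumptions rather than an interface defined by this disclosure.

```python
# Sketch of deployment-package assignment. Indexer and search head packages
# are routed automatically; forwarder packages use an administrator-supplied
# mapping from logical package name to forwarder group.
def build_deployment_plan(app_packages, topology, forwarder_assignment):
    plan = []
    for indexer in topology["indexers"]:
        plan.append((indexer, app_packages["indexer"]))
    for search_head in topology["search_heads"]:
        plan.append((search_head, app_packages["search_head"]))
    for package_name, group in forwarder_assignment.items():
        plan.append((group, app_packages["forwarders"][package_name]))
    return plan

# Usage with a hypothetical two-indexer, one-search-head topology.
topology = {"indexers": ["idx1", "idx2"], "search_heads": ["sh1"]}
app_packages = {
    "indexer": "indexer_pkg.tar.gz",
    "search_head": "search_head_pkg.tar.gz",
    "forwarders": {"Windows Event Log": "fwd_eventlog_pkg.tar.gz"},
}
assignment = {"Windows Event Log": "forwarder-group-windows"}
for target, package in build_deployment_plan(app_packages, topology, assignment):
    print(f"deploy {package} -> {target}")
```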
Once all of the target application packages630-634have been assigned to particular target resources (e.g., indexers650, search heads652, and forwarder groups6541-654m) of the data intake and query system606, the target application packages630-634may be packaged as indexer deployment package640, search head deployment package642, and forwarder group deployment packages6441-644m, based on the configuration information determined by the application deployment module902and DIQS administrator612. Application configuration module904may provide information relating to DIQS administrators612and a user interface for DIQS administrators to interact with application deployment system604(e.g., as depicted inFIGS.10-17). In an embodiment, DIQS administrators612may have login or other authentication information that allows them to access the application deployment system, and in some embodiments, different deployment and configuration options may be available based on permissions associated with particular DIQS administrators. DIQS administrators may be able to save common configuration settings for reuse, as well as partially complete configurations. Information provided via the application configuration module904may be used by the application deployment module902to configure and deploy the deployment packages to the components of the data intake and query system606(e.g., indexers650, search heads652, and forwarder groups6541-654m). DIQS configuration module906may include information related to a data intake and query system606that is associated with the application deployment system. In an embodiment, the DIQS configuration module may access information relating to each of the components of the data intake and query system606(e.g., indexers650, search heads652, and forwarder groups6541-654m) such as location, address, installed applications and their versions, license information, dependencies, configuration parameters, physical group, and logical group. This information may be used along with the incoming target application packages630-634to configure and deploy the deployment packages to the components of the data intake and query system606. Dependency rules module908may provide sets of rules that resolve conflicts between dependencies of the source application package626and installed applications in a data intake and query system606. In some embodiments, the dependency rules module908may have stored information (e.g., installed versions, dependencies, and version tolerances) relating to the applications that are currently located at different data intake and query systems606. Although the dependency rules module908may resolve dependencies in any suitable manner, in an embodiment the dependency rules module908may implement a series of dependency rules. One exemplary dependency rule may resolve conflicts where an application to be installed (e.g., any dependent application) is not compatible with existing dependencies and version tolerances of applications at the data intake and query system606. Dependency rules module908may compare the dependencies and version tolerances of the application to be installed at a particular physical or logical group to the dependencies and version tolerances of applications that are already installed at the particular physical or logical group.
If any version that is to be installed falls outside of the version tolerances of the applications that are already installed (e.g., an application A of the data intake and query system depends on B, with version tolerance [1.0-2.0], and the version of B to be installed is version 2.1), the installation may fail. In an embodiment, a message describing the conflict may be provided to an application developer610(e.g., via application development environment622) or to a DIQS administrator612(e.g., via application deployment system604). Another exemplary dependency rule may resolve conflicts where an application to be installed (e.g., any application or dependent application) is compatible with existing dependencies and version tolerances of applications at the data intake and query system606, but would cause the installed application to be modified (e.g., upgraded or downgraded). Dependency rules module908may compare the dependencies and version tolerances of the application to be installed at a particular physical or logical group to the dependencies and version tolerances of the applications that are already installed at the particular physical or logical group. If any version that is to be installed is within the version tolerances of the applications that are already installed (e.g., an application A of the data intake and query system depends on B, with version tolerance [1.0-2.0], the installed version of B is version 1.1, and the version of B to be installed is version 1.2), the installation is allowable but will require a change to an installed application. In an embodiment, a message describing the requested change may be provided to an application developer610(e.g., via application development environment622) or to a DIQS administrator612(e.g., via application deployment system604), and the application developer or DIQS administrator may decide whether to move forward with the installation. Another exemplary dependency rule may move forward with an installation where an application to be installed (e.g., any application or dependent application) is compatible with existing dependencies and version tolerances of applications at the data intake and query system606, and would not require any installed application to be modified (e.g., upgraded or downgraded). Dependency rules module908may compare the dependencies and version tolerances of the application to be installed at a particular physical or logical group to the dependencies and version tolerances of the applications that are already installed at the particular physical or logical group. If all of the versions that are to be installed are the same as the versions of the applications that are already installed (e.g., an application A of the data intake and query system depends on B, the installed version of B is version 1.1, and the version of B to be installed is version 1.1), the installation is allowable and may be performed automatically, without input from a user. Another exemplary dependency rule may define anti-dependencies to indicate that an application must not be installed under certain circumstances. In an embodiment, an anti-dependency may define certain applications, dependent applications, and/or application and dependency versions as anti-dependencies. For example, an anti-dependency relationship may be defined for certain sophisticated “heavy” applications that consume substantial system resources, or for deprecated applications that the application developer is aware of and prefers not to have installed with the application. If an anti-dependency is identified, the application installation may be aborted or the existing installation may be modified.
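The four rules above reduce to a small decision procedure. The sketch below is one possible rendering, reusing the version_allowed() helper from the earlier version-tolerance sketch; the outcome labels and data structures are illustrative assumptions, not a defined interface.

```python
# Sketch of the dependency rules described above. Outcomes mirror the rules:
# block on an anti-dependency, fail on a tolerance violation, proceed
# automatically when the installed version already matches, and ask for
# confirmation when an installed application would be modified.
def check_dependency(name, new_version, installed, tolerances, anti_dependencies):
    if name in anti_dependencies:
        return "BLOCK_ANTI_DEPENDENCY"
    for tolerance in tolerances.get(name, []):
        if not version_allowed(new_version, tolerance):
            return "FAIL_OUT_OF_TOLERANCE"   # e.g. B 2.1 vs. tolerance [1.0,2.0]
    current = installed.get(name)
    if current == new_version or current is None:
        return "AUTO_INSTALL"                # no installed application changes
    return "CONFIRM_MODIFICATION"            # in tolerance, but B would change

# Usage: application A of the data intake and query system depends on B.
installed = {"B": "1.1"}
tolerances = {"B": ["[1.0,2.0]"]}
print(check_dependency("B", "1.2", installed, tolerances, set()))  # CONFIRM_MODIFICATION
print(check_dependency("B", "2.1", installed, tolerances, set()))  # FAIL_OUT_OF_TOLERANCE
print(check_dependency("B", "1.1", installed, tolerances, set()))  # AUTO_INSTALL
```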
Once the DIQS administrator612and the application deployment module902have completed the configuration of the application for the data intake and query system, each of the incoming target application packages may be repackaged as deployment packages (e.g., indexer application package630may be packaged as indexer deployment package640, search head application package632may be packaged as search head deployment package642, and forwarder group application packages6341-634nmay be packaged as forwarder group deployment packages6441-644m). The deployment packages may then be distributed to the appropriate resources (e.g., workloads and/or servers, based on the deployment topology) of the data intake and query system606(e.g., indexers650, search heads652, and forwarder groups6541-654m). FIG.10depicts an exemplary application acquisition interface1000of an application deployment system604in accordance with some embodiments of the present disclosure. In an embodiment, the application acquisition interface is one of a series of user interface screens presented to a DIQS administrator612during configuration and deployment of an application. In an embodiment, application acquisition interface1000may include deployment interface tabs1002, deployment timeline1004, deployment navigation1006, and application locator1008. In an embodiment, deployment interface tabs1002may include tabs that allow a DIQS administrator to navigate between different functionality of the application deployment system604, such as application deployment (depicted), interface configuration, administrator settings (e.g., for application configuration module904), and DIQS settings (e.g., for DIQS configuration module906). In an embodiment, deployment timeline1004may provide a series of steps (e.g., a “wizard”) for deploying the application and depicting the progress of the DIQS administrator612in completing those steps. In an embodiment, deployment navigation1006may provide navigation tools for navigating between steps of the deployment timeline1004. In an embodiment, application locator1008may provide a tool to allow the DIQS administrator612to select an application to deploy. Although the application locator1008may utilize any suitable tool, in an embodiment the application locator1008may provide a pull-down menu that allows the DIQS administrator612to select an application for deployment. Once an application is selected, the DIQS administrator612may select next from deployment navigation1006to continue to the next screen of the deployment user interface. FIG.11depicts an exemplary application pre-deployment configuration interface1100of an application deployment system604in accordance with some embodiments of the present disclosure. In an embodiment, application pre-deployment configuration interface1100may include deployment interface tabs1102, deployment timeline1104, deployment navigation1106, and application settings1110. In an embodiment, each of deployment interface tabs1102, deployment timeline1104, and deployment navigation1106may function in the same manner as described herein with respect to application acquisition interface1000. Application settings1110may include any suitable user interface elements (e.g., text interfaces, pull-down menus, scroll bars, radio buttons, etc.) to configure the application for deployment.
Example application settings1110that may be configured include authentication credentials/tokens for integrating with third-party systems, geographical regions or business divisions to collect the data from, polling intervals, subsets of services to be monitored, or thresholds to use when triggering/throttling alerts. Once application selections are completed, the DIQS administrator612may select next from deployment navigation1106to continue to the next screen of the deployment user interface. FIG.12depicts an exemplary application staging interface1200of an application deployment system604in accordance with some embodiments of the present disclosure. In an embodiment, application staging interface1200may include deployment interface tabs1202, deployment timeline1204, deployment navigation1206, and forwarder selections1208-1212. In an embodiment, each of deployment interface tabs1202, deployment timeline1204, and deployment navigation1206may function in the same manner as described herein with respect to application acquisition interface1000. As described herein, the application deployment system604may receive forwarder application packages6341-634n. Forwarder selections1208-1212may populate a selection interface (e.g., a drop-down menu) with names of available forwarder groups6541-654m. Based on the selections of the DIQS administrator612, the forwarder application packages6341-634nmay be assigned to forwarder groups6541-654mas forwarder deployment packages6441-644m. Once forwarder groups have been selected, the DIQS administrator612may select next from deployment navigation1206to continue to the next screen of the deployment user interface. FIG.13depicts an exemplary dependency overlap interface1300of an application deployment system604in accordance with some embodiments of the present disclosure. If the application to be installed includes dependencies that require input from a user (e.g., as a result of a dependency overlap or a conflict), a dependency interface such as dependency overlap interface1300or dependency resolution interface1400may be displayed. In an embodiment, dependency overlap interface1300may include deployment interface tabs1302, deployment timeline1304, deployment navigation1306, dependency notifications1308, and dependency overlap selection1310. In an embodiment, each of deployment interface tabs1302, deployment timeline1304, and deployment navigation1306may function in the same manner as described herein with respect to application acquisition interface1000. Dependency notifications1308may provide information to DIQS administrator612about dependent applications that are required for the application. Dependency overlap selection1310may provide a notification that a dependency overlap exists that may result in installation of a dependent application version that is different than currently installed versions, but is nonetheless within the tolerance ranges of installed applications. DIQS administrator612may be provided with a selection of whether they wish to proceed with the installation. FIG.14depicts an exemplary dependency resolution interface1400of an application deployment system604in accordance with some embodiments of the present disclosure. In an embodiment, dependency resolution interface1400may include deployment interface tabs1402, deployment timeline1404, deployment navigation1406, dependency notifications1408, and dependency conflict selection1410.
In an embodiment, each of deployment interface tabs1402, deployment timeline1404, and deployment navigation1406may function in the same manner as described herein with respect to application acquisition interface1000. Dependency notifications1408may provide information to DIQS administrator612about dependent applications that are required for the application. Dependency conflict selection1410may provide a notification that there are conflicts between version tolerances of dependent applications. DIQS administrator612may be provided with a selection of whether they wish to keep the existing versions, replace the versions, or abort the installation. FIG.15depicts an exemplary deployment confirmation interface1500of an application deployment system604in accordance with some embodiments of the present disclosure. In an embodiment, deployment confirmation interface1500may include deployment interface tabs1502, deployment timeline1504, deployment navigation1506, and deployment timing selection1508. In an embodiment, each of deployment interface tabs1502, deployment timeline1504, and deployment navigation1506may function in the same manner as described herein with respect to application acquisition interface1000. Although the deployment timing may be determined in any suitable manner, in an embodiment the application may be deployed immediately or at a time and date set by DIQS administrator612based on deployment timing selection1508. FIG.16depicts an exemplary deployment review interface1600of an application deployment system604in accordance with some embodiments of the present disclosure. Deployment review interface1600may be accessed by a selection at the deployment confirmation interface (e.g., “REVIEW DEPLOYMENT INFO”). In an embodiment, deployment review interface1600may include deployment interface tabs1602, deployment timeline1604, deployment navigation1606, physical group deployment1610, logical group deployment1620, and deployment staging1630. In an embodiment, each of deployment interface tabs1602, deployment timeline1604, and deployment navigation1606may function in the same manner as described herein with respect to application acquisition interface1000. Physical group deployment1610may provide an interface for DIQS administrator612to review and deploy the deployment packages (e.g., indexer deployment package640and search head deployment package642) that are being deployed to the physical groups of the data intake and query system606(e.g., indexers650and search heads652). In an embodiment, the deployment packages may be automatically assigned to the corresponding physical group as depicted at1612and1614, while in other embodiments (not depicted), the DIQS administrator612may modify the assignment of deployment packages to physical groups. Physical group deployment1610may also allow the DIQS administrator612to view the contents of the deployment package and the physical groups. Logical group deployment1620may provide an interface for DIQS administrator612to review and deploy the deployment packages (e.g., forwarder deployment packages6441-644m) that are being deployed to the logical groups of the data intake and query system606(e.g., forwarder groups6541-654m). In an embodiment, the deployment packages may be named and an interface may be provided to view the contents of each deployment package.
Selection interfaces1622and1624may be provided for the DIQS administrator612to select between logical groups, as well as the ability to view information (e.g., configuration and hardware) about the logical group. Deployment staging1630may allow DIQS administrator612to determine the timing of the deployment of the application deployment packages to the components of the data intake and query system606. Although the deployment timing may be determined in any suitable manner, in an embodiment the application may be deployed immediately or at a time and date set by DIQS administrator612at staging interface1632. Once the application staging has been completed, the user interface may continue to the next screen to confirm deployment, and then to the confirmation screen. FIG.17depicts an exemplary deployment confirmation interface1700of an application deployment system604in accordance with some embodiments of the present disclosure. In an embodiment, deployment confirmation interface1700may include deployment interface tabs1702, deployment timeline1704, deployment navigation1706, confirmation message1708, update settings1710, and application management1712. In an embodiment, each of deployment interface tabs1702, deployment timeline1704, and deployment navigation1706may function in the same manner as described herein with respect to application acquisition interface1000. Confirmation message1708may include a message regarding whether or not the deployment was successful, and an interface (e.g., a “see log” selection) to view details about the application deployment. In an embodiment, update settings1710may allow DIQS administrator612to select settings for deploying application updates (e.g., selecting auto-update versus manual update). Application management1712may allow DIQS administrator612to navigate to an application management tab to manage the newly deployed application as well as other installed applications for the data intake and query system. In an embodiment, application management1712may also allow DIQS administrator612to uninstall an installed application and revert to earlier installed application deployments.

3.4. Application Development and Deployment Methods

FIGS.18-20depict exemplary steps for operating an application development and deployment system in accordance with some embodiments of the present disclosure. The steps depicted byFIGS.18-20are provided for illustrative purposes only; those skilled in the art will understand that additional steps may be included, that one or more steps may be removed, and that the ordering of the steps ofFIGS.18-20may be modified in any suitable manner. It will be understood that while particular hardware, software, system components, development techniques, partitioning methods, and deployment procedures may be described in the context ofFIGS.18-20, the steps described herein are not so limited. FIG.18depicts exemplary steps for creating an application for a data intake and query system in accordance with some embodiments of the present disclosure. Although the steps ofFIG.18may be performed in any suitable manner using any suitable equipment, in an embodiment the steps ofFIG.18may be performed by an application development environment622of an application development system602, based on inputs provided by an application developer610.
At step1802, application code may be created, based on an application developer610interacting with a configuration file module704, source code module706, and application generator module702of application development environment622. Based on those inputs, the application developer610may define settings for configuration files and draft source code that interfaces with the configuration files, as described herein. Once the application code has been created, processing may continue to step1804. At step1804, pre-deployment configuration settings may be created and modified based on an application developer610interacting with pre-deployment configuration module708and application generator module702of application development environment622. As described herein, the application developer610may define aspects of the configuration for certain components (e.g., logical groups). Once the pre-deployment configuration settings have been set, processing may continue to step1806. At step1806, dependency definitions may be provided by an application developer610interacting with dependency management module710and application generator module702of application development environment622. As described herein, the application developer610may define a variety of information about application dependencies (e.g., dependency names and version tolerances) for the application and its dependent applications. In an embodiment, the dependency definitions may be provided as a portion of the application manifest. Once the dependency definitions have been provided, processing may continue to step1808. At step1808, an application manifest may be provided by an application developer610interacting with application manifest module712and application generator module702of application development environment622. As described herein, the application developer610may provide information about an application (e.g., application name, version, and privacy policy) and logical groups (e.g., logical group name and inputs). Once the application manifest has been completed, processing may continue to step1810. At step1810, the application may be tested by application generator module702of application development environment622. Although the application may be tested in any suitable manner, in an embodiment the application generator module may run through a number of test routines for each of the configuration files, source code, pre-deployment configuration settings, dependency definitions, and application manifest. These tests may determine whether the application meets certain pre-defined criteria (e.g., that the settings provided by the source code have acceptable values and format for the configuration files, an initial confirmation that dependency version tolerances match, and a confirmation that the application manifest is in the proper format). In some embodiments, a test application package may be provided to application partitioner624, which may test whether the application is capable of being partitioned. Once the application has been tested, processing may continue to step1812, in which it may be determined whether the application has passed the required tests. If the application did not pass the required tests, one or more notifications regarding the errors may be provided to the application developer610and processing may return to step1802. If the application passed the tests, processing may continue to step1814. 
At step1814, application generator module702of application development environment622may generate the source application package626based on the source code, configuration files, pre-deployment configuration settings, dependency definitions, and application manifest, and transmit the source application package626to application partitioner624, as described herein. The processing ofFIG.18may then end. FIG.19depicts exemplary steps for partitioning an application for a data intake and query system in accordance with some embodiments of the present disclosure. Although the steps ofFIG.19may be performed in any suitable manner using any suitable equipment, in an embodiment the steps ofFIG.19may be performed by an application partitioner624of an application development system602, based on the source application package626from application development environment622. At step1902, source application package626may be received from application development environment622. The components of the received application package (e.g., source code, configuration files, pre-deployment configuration settings, dependency definitions, and the application manifest) may be identified by the application partitioning module802of application partitioner624, and processing may continue to step1904. At step1904, application partitioning module802of application partitioner624may access the DIQS configuration information from DIQS configuration module806. As described herein, the DIQS configuration module806may determine and store information about data intake and query systems that may be utilized by the application partitioning module802to partition the source application package626. Once the DIQS configuration information has been accessed, processing may continue to step1906. At step1906, application partitioning module802of application partitioner624may partition the configuration files of the source application package626. As described herein, the application partitioning module802and partitioning rules module804may associate each configuration file, or in some cases settings of certain configuration files, with physical groups and/or logical groups of the data intake and query system. Configuration files may be assigned to one or more of the indexer application package630, search head application package632, and forwarder group application packages6341-634n. Once the configuration files have been partitioned, processing may continue to step1908. At step1908, application partitioning module802of application partitioner624may partition the source code of the source application package626. As described herein, the application partitioning module802and partitioning rules module804may associate portions of source code with configuration files, based on the interactions between portions of the source code and configuration files and configuration file settings. Once each portion of source code is associated with one or more configuration files, the portion of source code may be assigned to one or more of the indexer application package630, search head application package632, and forwarder group application packages6341-634n, based on the assignments of the configuration files. Once the source code has been partitioned, processing may continue to step1910.
At step1910, application partitioning module802of application partitioner624may package each of the indexer application package630, search head application package632, and forwarder group application packages6341-634nbased on the DIQS configuration and partitioning rules, as described herein. Once the application has been partitioned and the application portions distributed to application deployment system604, the processing ofFIG.19may end. FIG.20depicts exemplary steps for deploying an application to a data intake and query system in accordance with some embodiments of the present disclosure. Although the steps ofFIG.20may be performed in any suitable manner using any suitable equipment, in an embodiment the steps ofFIG.20may be performed by an application deployment system604, based on the received indexer application package630, search head application package632, and forwarder group application packages6341-634nfrom application partitioner624, and based on inputs from the DIQS administrator612. At step2002, indexer application package630, search head application package632, and forwarder group application packages6341-634nmay be received from application partitioner624. The received application packages may be identified by the application deployment module902of application deployment system604, and processing may continue to step2004. At step2004, application deployment module902of application deployment system604may access the administrator configuration settings from application configuration module904. As described herein, the application configuration module904may include information such as predetermined configurations for an administrator, permissions, and saved work. Once the administrator configuration settings have been accessed, processing may continue to step2006. At step2006, application deployment module902of application deployment system604may access the DIQS configuration settings from DIQS configuration module906. As described herein, the DIQS configuration module906may include information relating to the components of the data intake and query system (e.g., indexers650, search heads652, and forwarder groups6541-654m), such as location, address, installed applications, dependencies, configuration parameters, physical group, and logical group. Once the DIQS configuration settings have been accessed, processing may continue to step2008. At step2008, application deployment module902of application deployment system604may access dependency information from dependency rules module908. As described herein, the dependency rules module908may include rules for determining how conflicts between dependent applications and dependencies should be resolved, and procedures for resolving those dependencies. Once the dependency information has been accessed, processing may continue to step2010. At step2010, application deployment module902of application deployment system604may provide interfaces to receive inputs from DIQS administrator612in order to configure and deploy the application for the data intake and query system606. As depicted and described herein, DIQS administrator612may select an application for deployment to the data intake and query system606, provide configuration settings for the application, assign target deployment packages of the application to physical and logical groups, and stage the deployment of the deployment packages. Once the user input has been received at step2010, processing may continue to step2012.
At step2012, application deployment module902of application deployment system604may package each of the indexer application package630, search head application package632, and forwarder group application packages6341-634nfor deployment based on the administrator configuration settings, dependency selections, DIQS configuration settings, and user inputs. As described herein, the packaging may result in the creation of indexer application package640, search head application package642, and forwarder group application packages6441-644m. These components may be respectively deployed to indexers650, search heads652, and forwarder groups6541-654m. Once the deployment packages have been distributed to data intake and query system606, processing of the steps ofFIG.20may end. The foregoing provides illustrative examples of the present disclosure, which are not presented for purposes of limitation. It will be understood by a person having ordinary skill in the art that various modifications may be made within the scope of the present disclosure. It will also be understood that the present disclosure need not take the specific form explicitly described herein, and the present disclosure is intended to include variations to and modifications thereof, consistent with the appended claims. It will also be understood that variations of the systems, apparatuses, and processes may be made to further optimize those systems, apparatuses, and processes. The disclosed subject matter is not limited to any single embodiment described herein, but rather should be construed in breadth and scope in accordance with the appended claims.
117,669
11860822
DETAILED DESCRIPTION OF ONE OR MORE EMBODIMENTS The disclosed system and method allow data that has been placed on a distributed ledger to subsequently be rendered inaccessible to the network, and effectively destroyed. The system and method allow data to be destroyed from a distributed ledger without violating the integrity and availability of the ledger. Specifically, the destruction process must not interfere with the existing ledger functions and operations. For example, the destruction process must not introduce processing delays, cause network downtime, require network reconfiguration, render unrelated data inaccessible temporarily or permanently, or otherwise prevent the execution of smart contracts. The disclosed system and method also enable the efficient destruction of data relating to a first individual from an immutable distributed ledger. Specifically, the destruction process must be automated and conducted quickly, for example in response to a time-sensitive “right to be forgotten” request issued by an individual. In this case, the destruction process must destroy all of the identified data that is relevant to the data subject. This data includes all copies, backups, logs, or replicas across all system components and storage systems including file systems, databases, and queues. The disclosed system and method also allow a user to restore their data back to the distributed ledger system after their data has previously been destroyed. Specifically, before destroying the data subject's data, the system may provide the data subject with a secret only known to the data subject, which the data subject can later use to restore their previously destroyed data. The disclosed system and method accordingly comprise the several steps and the relation of one or more of such steps with respect to each of the others, and the apparatus embodying features of construction, combinations of elements and arrangement of parts that are adapted to effect such steps, all as exemplified in the following detailed disclosure. FIG.1. illustrates an example immutable distributed ledger environment100that enables data destruction functions. The environment100conducts distributed ledger functions including the execution of smart contracts in the distributed ledger node108using a smart contract engine311that reads from and writes to a state database316shown inFIG.3. The environment supports a process that destroys data from the distributed ledger. A user101interacts with a GUI103using a device102that includes at least a processor104and memory105. For example, an employee at an insurance company uses a browser on his laptop to access a web portal that displays insurance claims. The device102connects to a network106over an interface110ato access and manage a distributed ledger oracle server, or server,109that is connected to the network106using an interface110d. The server109communicates with a data subject database (DSDB)107that is connected to the network106over an interface110b, and communicates with a distributed ledger node108that is connected to the network106over an interface110c. Within an environment100there are possibly multiple users101, devices102, servers109, and distributed ledger nodes108, connected over a single network106. In some embodiments, users101belong to one or more organizations, for example insurance companies, and operate and manage the components in the environment100on behalf of their respective organization.
In a preferred embodiment, a plurality of environments100connect to a single network106. For example, a first insurance company manages a first environment, a second insurance company manages a second environment, and the first environment and second environment are interconnected via a common network106. In a preferred embodiment, a plurality of environments100connect over a single network106and share a single common data subject database107. In a preferred embodiment, the data subject database107is maintained by distributed ledger nodes108, and is stored within a side database317. In some embodiments, a device102, data subject database107, and distributed ledger oracle server109, are physically located on the premises of an organization; and the distributed ledger node108is physically located on the premises of a Cloud infrastructure provider. In some embodiments, a device102, data subject database107, distributed ledger oracle server109, and distributed ledger node108, are physically located on the premises of an organization. In some embodiments, a device102, data subject database107, distributed ledger oracle server109, and distributed ledger node108, are physically located on the premises of a Cloud infrastructure provider. In some embodiments, distributed ledger oracle server109functions are executed on a device102. A distributed ledger node, or node,108communicates with possibly multiple other distributed ledger nodes via an interface110cand network106. A node108provides an execution environment for smart contracts311, and communicates with other nodes108to establish a blockchain network that coordinates the execution of smart contracts. In a preferred embodiment, nodes108coordinate the execution of smart contracts that run, among other workflows, steps from the workflows illustrated inFIG.5.,FIG.6., andFIG.8. Additionally, nodes108coordinate the execution of smart contracts that execute application specific business logic810, for example to process insurance claims. As shown inFIG.4, the data subject database (DSDB)107stores data subject records (DSR)330a,330bspecific to users101. These records are populated and managed by the distributed ledger oracle server109. A DSR330a,330bconsists of at least a data subject ID (DSID)331a,331b, data subject profile (DSP)332a,332b, and a data subject key (DSK)333a,333b. The DSID331a,331bis a unique identifier within the environment100that corresponds to a data subject. For a given DSID331a,331b, the server109or node108queries the DSDB107to lookup the corresponding DSR330a,330bthat contains that DSID331a,331b. The DSP332a,332bcontains identifying information about the data subject that is also used by the server109or node108to lookup a data subject's respective DSR330a,330b, for example when the DSID331a,331bis not available. The DSK333a,333bis a secret key that is used to encrypt507and decrypt607data that belongs to a data subject, where this data is eligible for subsequent destruction. In some embodiments, the DSP332a,332bincludes the DSID331a,331b. In a preferred embodiment, the DSID331a,331bis a Universally Unique Identifier (UUID). In some embodiments, the DSID331a,331bis constructed using a cryptographic hash algorithm. In a preferred embodiment, the DSP332a,332bincludes a user's email address. In some embodiments, the DSP332a,332bincludes a user ID that corresponds to user identity information maintained by an external system, for example by a separate Identity Provider (IdP).
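For purposes of illustration only, a DSR330a,330band the DSDB107lookup operations may be sketched in Python as follows; the field names, class names, and in-memory store are hypothetical, while the content reflects the preferred embodiments above (the DSID is a UUID, the DSP may be an email address, and the DSK is a random 32-byte AES-256 key):

import os
import uuid
from dataclasses import dataclass, field

@dataclass
class DataSubjectRecord:
    """Hypothetical in-memory form of a DSR: a unique DSID, an identifying DSP, and a secret DSK."""
    dsid: str = field(default_factory=lambda: str(uuid.uuid4()))  # UUID, per the preferred embodiment
    dsp: dict = field(default_factory=dict)                       # e.g., {"email": "user@example.com"}
    dsk: bytes = field(default_factory=lambda: os.urandom(32))    # random 32-byte AES-256 key

class DataSubjectDatabase:
    """Hypothetical DSDB supporting lookup by DSID or by matching DSP, plus deletion."""
    def __init__(self):
        self._records = {}

    def insert(self, record: DataSubjectRecord) -> None:
        self._records[record.dsid] = record

    def lookup_by_dsid(self, dsid: str):
        return self._records.get(dsid)

    def lookup_by_dsp(self, dsp: dict):
        return next((r for r in self._records.values() if r.dsp == dsp), None)

    def delete(self, dsid: str) -> None:
        # Deleting the record destroys the DSK, rendering the ledger ciphertext unrecoverable.
        self._records.pop(dsid, None)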
In a preferred embodiment, the DSK333a,333bincludes a random 32-byte sequence that is a private key that is used to perform AES-256 encryption. In a preferred embodiment, the DSDB107is located within, or directly managed by, a distributed ledger node108. For example, within a side database317that nodes108keep synchronized with the distributed ledger315using transactions323a,323b. In a preferred embodiment, the DSK333a,333bis generated by a server109and is passed to a smart contract as a transient field in a transaction. In this case, the smart contract stores the DSK333a,333bwithin a DSR330a,330bthat is stored in the DSDB107, and the DSDB107is contained within, and maintained by, the side database317. In some embodiments, the DSK333a,333bis generated by the node108within a smart contract. In this case, the smart contract is non-deterministic and the key is generated at random by the Smart Contract Execution Engine311. In this case, the blockchain network must support the execution of non-deterministic smart contracts, for example by setting the endorsement policy of the non-deterministic smart contract to allow the endorsement by a single organization. In some embodiments, the data subject database107is implemented as a Relational Database Management System (RDBMS), and data subject records330a,330bare records stored in that RDBMS. In some embodiments there are multiple DSRs330a,330bthat correspond to a single data subject and DSP332a,332b. FIG.2. illustrates an example distributed ledger oracle server, or server109, and its components. The server109consists of at least a processor201, memory202, and private keys stored in a wallet203. The server109communicates with one or more distributed ledger nodes108, a data subject database107, and devices102, to process and submit data corresponding to a user101to a distributed ledger node108for processing by one or more smart contracts. In some embodiments, the distributed ledger oracle server109consists of a number of services that intercommunicate over a network. In some embodiments, the distributed ledger oracle server109is managed and deployed using container orchestration software such as Kubernetes. The API (Application Programming Interface) Engine210receives (process701shown inFIG.7) formatted request messages, for example a first message, originally issued by one or more devices102. The first message consists of one or more fields which have corresponding values, for example illustrated inFIG.9. The API engine210verifies that the received messages conform to a predetermined message format, and returns an error to the device102that issued the first message if this message validation fails. The message may contain fields with sensitive data as values, where the values pertain to a data subject, and are stored on a distributed ledger, and are deleted at a later time. The reception of a message by the API Engine210triggers the server109to initiate the steps illustrated inFIG.7. The API Engine210sends valid messages to the Authorization Engine211for subsequent processing. The API Engine210receives a response corresponding to the first message request from the Response Decode Engine214, and sends this response back707to the original device102which issued the first message. In a preferred embodiment, the first message either requests data from, or sends data to, a smart contract311that receives and processes the first message.
In some embodiments, the API Engine210is implemented using an HTTP server that exposes a REST/JSON interface, a Google Remote Procedure Call (gRPC) interface, and a SOAP/XML interface. In a preferred embodiment, the server109uses the message format to determine a message type. The message type is used to lookup a configuration that determines which data fields within the message are sensitive and determines which data fields within the message pertain to what data subjects. In some embodiments, the message includes metadata that denotes which data fields are sensitive and which data fields pertain to what data subjects. The Authorization Engine211receives request messages from the API Engine210and determines whether or not the issuer of the request is authenticated and authorized to make the request702. As part of this authorization and authentication determination the Authorization Engine211examines both data about the authenticated issuer of the request and the type of request. If the request is a data destruction request, then the Authorization Engine211passes the message to the Destruction Engine217for subsequent processing. Otherwise, the Authorization Engine211passes the message to the Request Encode Engine212for subsequent processing. If the request message is not authorized, then the Authorization Engine211returns an authorization error to the API Engine210, which forwards the error to the original issuer device102. In a preferred embodiment, the issuer is a user101who has authenticated with the server109using multi-factor authentication (MFA). In some embodiments, the issuer is a process running on a device102. In some embodiments, the Authorization Engine211inspects a role that is defined within a JSON Web Token (JWT) that is included in the request and generated by the device102on behalf of the user101, to determine whether the user101has the necessary permissions to issue the request. In some embodiments, the Authorization Engine211communicates with one or more distributed ledger nodes108via an authorization service to make an authorization and authentication determination. In some embodiments, the Authorization Engine211communicates with one or more distributed ledger nodes108via a smart contract to make an authorization and authentication determination. In some embodiments, the Authorization Engine211makes a preliminary authorization and authentication determination, and a smart contract311running on one or more distributed ledger nodes108executes subsequent validation checks to determine whether the request is authorized. The Request Encode Engine212receives a first request message220a,220bshown inFIG.4and converts it into an encoded form703that is later included within a blockchain transaction payload. The Request Encode Engine212constructs an encoded message using an encoding process illustrated inFIG.5., and the engine passes the encoded message to the Transaction Engine216for subsequent placement and processing by the blockchain network704. Specifically, the Request Encode Engine212triggers an encoding process illustrated inFIG.5to transform the first message into an encoded message703. The Request Encode Engine212subsequently passes the encoded request message to the Transaction Engine213which submits the encoded request message to the blockchain network704. A smart contract311receives the encoded request message and executes the steps of the process illustrated inFIG.8.
As a result of the execution of the process illustrated inFIG.8, the Transaction Engine213receives705an encoded response, and passes the encoded response to the Response Decode Engine214. In a preferred embodiment, the encoding process illustrated inFIG.5. is executed by a smart contract311. Specifically, the Request Encode Engine212passes the first request message220a,220bto the Transaction Engine213. The Transaction Engine213places the first request message220a,220bwithin the transient data field of a transaction, and submits the transaction to the blockchain network. The Transaction Engine213generates secret DSKs333a,333b(e.g., using a secure random number generator) and places them as transient data within the transaction. A smart contract311running on the blockchain network receives the transaction, executes the steps illustrated inFIG.5., and returns the encoded response to the Transaction Engine213. If step505is executed, then each time this step is executed the smart contract will use a unique secret key contained within the transient data field of the transaction to generate a unique DSK333a,333b. In some embodiments, the DSDB107is not managed by or stored within a side database317. In this case, the encoding process illustrated inFIG.5is executed by the Request Encode Engine212on the Distributed Ledger Oracle Server109. The Request Encode Engine212does not interact with the Transaction Engine213to issue a transaction. Specifically, the Request Encode Engine212connects directly to an off-chain DSDB107to lookup and create the DSRs330a,330bin step505. In addition to all of the steps illustrated inFIG.5., the Request Encode Engine212performs the encryption operation507to generate the encoded response. The Transaction Engine216constructs distributed ledger transactions323a,323b, submits them to one or more distributed ledger nodes108for processing, and receives transaction responses which include the results of the network executing each transaction. The Transaction Engine216includes an encoded message within the transaction payload, as well as metadata that may include transient data and a smart contract identifier. The Transaction Engine216submits a transaction to one or more distributed ledger nodes108that run smart contracts311that receive messages contained within transaction payloads, execute workflows to process the encoded message on the ledger308and update the State Database316and the Side Database317, and generate transaction execution responses. Transactions are validated and confirmed by the network of distributed ledger nodes108and are placed into blocks320a,320bthat are stored on the distributed ledger315. Each block320acontains metadata321a,321bassociated with its transactions, along with a timestamp322awhich denotes when the block320awas created. The Transaction Engine216uses keys stored in a wallet203to generate digital signatures that are included within transactions, and to encrypt network106communication. In a preferred embodiment, the Transaction Engine216uses a permissioned blockchain, for example Hyperledger Fabric, to construct transactions323a,323band submit them to a distributed ledger node108running the peer software. In a preferred embodiment, the Transaction Engine216interacts with a blockchain that uses an EOV architecture. In this case, the Transaction Engine216first submits the transaction to one or more nodes108to collect endorsements.
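For illustration, the construction of such a transaction, with the encoded message in the persistent payload and secret material confined to the transient data field, may be sketched as follows; all names are hypothetical, and in Hyperledger Fabric the analogous mechanism is the transient map of a chaincode invocation, which endorsing peers can read but which is not written to the ledger:

import json
import os

def build_transaction(encoded_message: dict, dsks_by_dsid: dict) -> dict:
    """Hypothetical transaction construction: the persistent payload carries only the
    encoded message, while DSKs and IV material travel in the transient field, which
    endorsing nodes may read but which is never committed to the ledger."""
    return {
        "smart_contract_id": "encode_contract",                  # hypothetical identifier
        "payload": {"args": [json.dumps(encoded_message)]},      # ordered and committed
        "transient": {                                           # visible to endorsers only
            "dsks": {dsid: key.hex() for dsid, key in dsks_by_dsid.items()},
            "iv": os.urandom(16).hex(),                          # fresh IV for encryption507
        },
    }

tx = build_transaction({"message": {}, "transforms": []}, {"dsid-1": os.urandom(32)})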
The Transaction Engine216receives endorsement responses from one or more nodes108, inspects the responses to determine if the transaction has sufficient endorsements depending on the smart contract311endorsement policy, and then submits the endorsed transactions to the blockchain network for ordering and placement within a block320a,320b. In some embodiments, the Transaction Engine216includes DSKs333a,333bwithin the transient data field of the transaction. Specifically, the Transaction Engine216includes DSKs333a,333bthat correspond to DSIDs324a,324breferenced in the encoded request within the transaction payload. In a preferred embodiment, the Transaction Engine216includes initialization vector (IV) data generated securely at random on the server109within the transient data field of the transaction. This IV data is used by the encode process illustrated inFIG.5. to encrypt sensitive data507. In one embodiment, the Transaction Engine216includes a request message within the transient data field of a transaction. In this case, a smart contract311processes the transaction by executing the steps illustrated inFIG.5., and the smart contract311accesses the DSDB107and corresponding DSKs333a,333bdirectly through a side database317. In the case of executing step505, the smart contract311references DSKs333a,333bincluded within the transaction's transient data fields. The Transaction Engine216receives the resulting encoded request from the node108. In a preferred embodiment, the DSDB107is stored within a side database317and the Transaction Engine216includes an encoded response message within the payload of a transaction. In this case, a smart contract311processes the encoded response message by executing the steps illustrated inFIG.6where the smart contract311processes the encoded response as a decode request, and the smart contract311accesses the DSDB107and corresponding DSKs333a,333bdirectly through a side database. The Transaction Engine216receives the resulting decode response from the node108. The Transaction Engine216does not submit the transaction for ordering, commitment, or placement into a block, to avoid storing the sensitive details on the distributed ledger315. In a preferred embodiment, the Transaction Engine216includes an encoded request message within the payload of a transaction. In this case, a smart contract311processes the encoded message by executing the steps illustrated inFIG.8., and the smart contract311accesses the DSDB107and corresponding DSKs333a,333bdirectly through a side database317. The Transaction Engine216receives the resulting encoded response812from the node108. In a preferred embodiment, the Transaction Engine216includes a data destruction request message within the transient data field of a transaction. In this case, a smart contract311processes the data destruction request and accesses the DSDB107to delete the corresponding DSRs330a,330breferenced in the request by a DSID331a,331bor a DSP332a,332b. The Response Decode Engine214receives encoded response messages and decodes the message706to construct a decoded response. The Response Decode Engine214triggers the decode process illustrated inFIG.6. to construct the decoded response610. The Response Decode Engine214sends the decoded response610back to the API Engine210for subsequent processing. In a preferred embodiment, the DSDB107is stored within a side database317and the decode process illustrated inFIG.6is executed by a smart contract311.
The Response Decode Engine214passes the encoded response message to the Transaction Engine213which includes the encoded response message within a transaction payload, and the Transaction Engine213submits the transaction to the blockchain network. The Transaction Engine216does not submit the transaction for ordering, commitment, or placement into a block, to avoid storing the sensitive details on the distributed ledger315. In some embodiments, the DSDB107is not managed by or stored within a side database317. In this case, the decode process illustrated inFIG.6is executed by the Response Decode Engine214on the server109. In this case, the Response Decode Engine214does not interact with the Transaction Engine213to issue a transaction. Specifically, the Response Decode Engine214connects directly to the DSDB107to lookup604the DSK333a,333bin step606. In addition to all of the steps illustrated inFIG.6., the Response Decode Engine214uses the DSK333a,333bto perform the decryption operation607. The Destruction Engine217triggers a destruction process to destroy data stored on the blockchain corresponding to a data subject, by making the data inaccessible through the deletion of DSRs330a,330band their respective DSKs333a,333bstored in the DSDB107. The Destruction Engine217receives delete requests that specify the data subject whose data must be destroyed. These requests reference the data subject either by specifying the data subject's corresponding DSID331a,331b, or by specifying a DSP332a,332b. In the case that the destruction request specifies a DSID331a,331b, then the destruction process looks up the corresponding DSR330a,330bthat contains the specified DSID331a,331b. In the case that the destruction request specifies a DSP332a,332b, then the destruction process looks up the corresponding DSR330a,330bthat has a profile332a,332bthat matches the one specified in the request. The destruction process issues a delete operation to the DSDB107which subsequently deletes the DSR330a,330band corresponding DSK333a,333bbelonging to the data subject. In a preferred embodiment, the DSDB107is stored within a side database317and the destruction process is executed by a destruction smart contract311. The Destruction Engine217passes the destruction request to the Transaction Engine213which includes the request as a transient data field of a transaction. Specifically, the transient data field includes the DSID331a,331b, or DSP332a,332b. The Transaction Engine213submits the transaction to the blockchain network. A destruction smart contract311processes this transaction and deletes the corresponding DSR330a,330bfrom the DSDB107, using the DSID331a,331bor DSP332a,332bin the transient data field to reference the DSR330a,330b. In some embodiments, there are multiple DSRs330a,330bthat correspond to a data subject, and the Destruction Engine217deletes all of the data subject's DSRs330a,330b. In a preferred embodiment, the DSDB107is a distributed database that deletes the DSR330a,330bby overwriting the database records containing the DSR330a,330b, and overwriting all of the database record replicas in the environment100. In some embodiments, the Destruction Engine217supports a restoration operation to restore data that was previously deleted from the environment100. In this case, the Destruction Engine217receives a restoration request which includes a previously deleted DSR330a,330b. The Destruction Engine217triggers a restoration process.
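The destruction and restoration operations may be illustrated with the following minimal sketch, in which a plain dictionary stands in for the DSDB107and all names are hypothetical; deleting the DSR is a form of crypto-shredding, since the ciphertext on the immutable ledger remains but can no longer be decrypted:

class DestructionEngine:
    """Hypothetical sketch of crypto-shredding: deleting a DSR (and with it the DSK)
    makes every copy of the corresponding ciphertext on the immutable ledger
    permanently undecryptable, without modifying the ledger itself."""
    def __init__(self, dsdb: dict):
        self.dsdb = dsdb  # DSID -> record dict containing "dsp" and "dsk"

    def destroy(self, dsid=None, dsp=None):
        if dsid is None:  # resolve the data subject via the DSP when no DSID is given
            dsid = next((i for i, r in self.dsdb.items() if r["dsp"] == dsp), None)
        return self.dsdb.pop(dsid, None)  # may be handed to the data subject as the restoration secret

    def restore(self, dsid, record):
        self.dsdb[dsid] = record  # re-inserting the DSR makes the data decodable again

dsdb = {"dsid-1": {"dsp": {"email": "ben@example.com"}, "dsk": b"\x00" * 32}}
engine = DestructionEngine(dsdb)
secret = engine.destroy(dsp={"email": "ben@example.com"})  # data is now effectively destroyed
engine.restore("dsid-1", secret)                           # later restoration from the saved secret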
The restoration process inserts the previously deleted DSR330a,330bback into the DSDB107. In some embodiments, the DSDB107is stored within a side database317and the restoration process is executed by a restoration smart contract311. The Destruction Engine217sends the DSR330a,330bto the restoration smart contract311by including the DSR330a,330bas a transient data field of a transaction. The restoration process inserts the previously deleted DSR330a,330bback into the DSDB107. FIG.3. illustrates an example distributed ledger node108, or node, and its components. The node108consists of at least a processor301, memory302, and private keys stored in a wallet303. The node108communicates with zero or more other nodes108and one or more servers109to operate, maintain, and execute blockchain network services and functions. The node108maintains at least a state database316, and possibly a side database317. The node108executes one or more smart contract workflows311, possibly including the processes illustrated inFIG.5.,FIG.6., andFIG.8., and a restoration and destruction process. In some embodiments, the distributed ledger node108consists of a number of services that communicate over a network. In some embodiments, the distributed ledger node108is managed and deployed using container orchestration software such as Kubernetes. In some embodiments, the distributed ledger node108runs the Hyperledger Fabric peer software. The Transaction Engine310receives transactions that have been issued by a server109. The Transaction Engine310validates that the transaction was issued by an authorized server109and determines the transaction's destination smart contract311. The Transaction Engine310passes the transaction to the Smart Contract Engine311for execution by the destination smart contract. The Transaction Engine310receives a transaction execution response from the Smart Contract Engine311, and forwards this response back to the original server109that issued the transaction. In a preferred embodiment, to validate a transaction the Transaction Engine310inspects a digital signature included in the transaction metadata and determines whether the corresponding signing certificate was signed by a pre-configured and approved certificate authority. The Smart Contract Engine311receives a transaction from the Transaction Engine310and processes the transaction by executing the destination smart contract workflow, where the transaction payload is an input parameter to the workflow. As part of the execution of the smart contract, the Smart Contract Engine311reads from, and writes to, data contained within the State Database316, and possibly a Side Database317. The result of the execution of a smart contract with the transaction payload as input is an execution response that is passed back to the Transaction Engine310. The transaction response includes a flag that indicates whether the smart contract311determined that the transaction is valid. In a preferred embodiment, the Smart Contract Engine311executes Smart Contracts that perform the steps illustrated inFIG.5.,FIG.6., andFIG.8. In this case, the Smart Contract Engine311reads from, and writes to, data stored in a State Database316. The smart contract311executes application specific business logic810that has previously been installed on the blockchain network. Additionally, the Smart Contract Engine311executes a smart contract that performs data destruction steps, where this process deletes DSRs330a,330bstored in the DSDB107.
In a preferred embodiment, the result of the Smart Contract Engine311execution of a transaction is a response message that includes metadata about the data that is read from, and written to, the state database316. This metadata is also known as a read-write set. The Smart Contract Engine311does not immediately update, or commit, the changes to the State Database316and Side Database317. Instead, the Smart Contract Engine311passes the read-write set to the Transaction Engine310which sends the transaction response to a server109. The transaction response includes a digital signature over the transaction payload and is signed by the node108. The transaction response is known as a transaction endorsement. The server109subsequently inspects the transaction response to make a determination of whether the transaction updates to the State Database316and Side Database317should be committed. The Consensus Engine312receives transactions323a,323bfrom other nodes108and servers109that require ordering and commitment to the distributed ledger315. The Consensus Engine312communicates with zero or more other nodes108to determine whether a transaction323a,323bis valid, and to generate a block320a,320bthat includes the transaction323a,323b, possibly along with other transactions. This block320a,320bis validated by the Consensus Engine312and, if it is valid, the Consensus Engine312appends the block to the distributed ledger315. The Consensus Engine312updates the State Database316and possibly the Side Database317upon appending a block to the distributed ledger315. For each of the valid transactions specified in the block, the Consensus Engine312applies the resulting State Database316and Side Database317updates specified in the transaction execution responses, where each response is generated by the Smart Contract Engine311. In a preferred embodiment, the Consensus Engine312inspects a transaction response generated by the Smart Contract Engine311to determine whether the transaction is valid. As part of the validation, the Consensus Engine312inspects the digital signatures included in the transaction response, and consults an endorsement policy to determine if the transaction has the necessary digital signatures as required by the policy. In a preferred embodiment, the Consensus Engine312generates a block by triggering a consensus protocol that is executed by an ordering service. Each distributed ledger node108that is connected to the network106also connects to the ordering service in order for all of the nodes to reach agreement on the next block to be added to the distributed ledger315, and consequently reach agreement on the distributed ledger315. The ordering service is possibly executed by the Consensus Engine312, or by one or more processes running on separate servers. In a preferred embodiment, the ordering service executes a crash fault tolerant consensus protocol using the Apache Kafka and Zookeeper software suite. In some embodiments, the ordering service executes a Byzantine Fault Tolerant consensus protocol, for example the PBFT protocol. In some embodiments, the Consensus Engine312implements the ordering service directly by communicating with Consensus Engines312on other nodes108. The Data Subject Engine313provides DSR330a,330blookup504,604, creation505, and deletion functions to processes executed by the Smart Contract Engine311.
In some embodiments, the Data Subject Engine313normalizes a DSP221a,221bbefore querying504,604the DSDB107contained within and managed by a Side Database317, for DSRs330a,330bwith matching223a,223bDSPs332a,332b. For example, the DSP221a,221bincludes a username and the normalization process converts the username to all lower case. In some embodiments, the Data Subject Engine313performs a DSR330a,330blookup by including DSPs221a,221bwithin a search query issued to a search database that generates a ranked list of results that includes DSPs332a,332bthat are most similar to221a,221b. The Data Subject Engine313subsequently excludes results that do not meet a minimum relevance threshold, and selects the closest matching DSP332a,332bto lookup the corresponding DSR330a,330bin the DSDB107. In some embodiments, the Data Subject Engine313communicates with an Elasticsearch database to perform this search operation. In a preferred embodiment, the Data Subject Engine313creates505new DSRs330a,330band inserts them into the DSDB107contained within and managed by a Side Database317. Specifically, the Data Subject Engine313: 1) creates a new DSP332a,332bby copying a DSP221a,221bspecified in a request message220a,220b, 2) generates a new unique DSID331a,331b, 3) generates a new DSK333a,333b, 4) places these fields into a new DSR330a,330b, and 5) inserts the new DSR into the DSDB107. In this case, the DSK333a,333bis generated by referencing transient data specified within a transaction. In some embodiments, the Data Subject Engine313generates DSIDs331a,331bby appending a per-transaction counter to a transaction ID. In some embodiments, the DSDB107is not managed by or stored within a side database317. In this case, when the Data Subject Engine313references504,604a DSR330a,330bit must use DSRs330a,330bincluded within the transaction's transient data field. The server109must include the necessary requested DSRs330a,330bwhen the Transaction Engine213constructs the transaction. Specifically, the server109must connect to the DSDB107, lookup the necessary DSRs330a,330beither using a DSID331a,331bor DSP221a,221b, and include the necessary DSRs330a,330bwithin the transaction's transient data field. In some embodiments, the Transaction Engine213does not know the necessary DSRs330a,330breferenced during the execution of a transaction by a smart contract311at the time the transaction is constructed. In this case, in step504,604the Data Subject Engine313will pass an error message to the Smart Contract Engine311that indicates the DSID331a,331bor DSP221a,221bfor the DSR330a,330bmissing in the transient data field of the transaction. The Smart Contract Engine311passes this error message to the Transaction Engine310which generates a transaction execution response that marks the transaction as failed and includes the error message generated by the Data Subject Engine313. The Transaction Engine213on the server109receives the failed transaction execution response that includes the error message generated by the Data Subject Engine313. The Transaction Engine213does not submit the failed transaction for commitment and ordering. The Transaction Engine213inspects the error message and performs the DSR330a,330blookup in the DSDB107using the DSID331a,331bor DSP221a,221bincluded in the error message. The Transaction Engine213then resubmits the failed transaction, but includes the corresponding missing DSR330a,330b, or indicates that the DSR330a,330bis missing from the DSDB107(to perform step605).
The Transaction Engine310on the node108then continues to process the transaction, as before, but with the necessary DSR330a,330b. This fail-retry process between the node108and server109continues until either the Smart Contract Engine311successfully completes processing the transaction, or a non-recoverable error is raised. In some embodiments, the DSDB107is not managed by or stored within a side database317. In this case, when the Data Subject Engine313creates505a DSR330a,330bit must use DSRs330a,330bincluded within the transaction's transient data field. The server109must create and include DSRs330a,330bcreated in step505when the Transaction Engine213constructs the transaction. Specifically, the server109must connect to the DSDB107, create a new DSR330a,330bincluding the DSID331a,331b, DSP332a,332b, and DSK333a,333b, and include the created DSR330a,330bwithin the transaction's transient data field. In some embodiments, the Transaction Engine213does not know the necessary DSRs330a,330bcreated during the execution of a transaction by a smart contract311at the time the transaction is constructed. In this case, in step505the Data Subject Engine313will pass an error message to the Smart Contract Engine311that indicates the DSP332a,332bfor the created DSR330a,330bthat is missing in the transient data field of the transaction. The Smart Contract Engine311passes this error message to the Transaction Engine310which generates a transaction execution response that marks the transaction as failed and includes the error message generated by the Data Subject Engine313. The Transaction Engine213on the server109receives the failed transaction execution response that includes the error message generated by the Data Subject Engine313. The Transaction Engine213does not submit the failed transaction for commitment and ordering. The Transaction Engine213inspects the error message and performs the DSR330a,330bcreation in the DSDB107using the DSP332a,332bincluded in the error message. The Transaction Engine213then resubmits the failed transaction, but includes the corresponding missing DSR330a,330b. The Transaction Engine310on the node108then continues to process the transaction, as before, but with the now created DSR330a,330b. This fail-retry process between the node108and server109continues until either the Smart Contract Engine311successfully completes processing the transaction, or a non-recoverable error is raised. The Encryption Engine314provides encryption and decryption functions to processes executed by the Smart Contract Engine311. Specifically, the Encryption Engine314performs encryption507and decryption607operations using DSKs333a,333bprovided by the Data Subject Engine313as part of the execution of a smart contract by the Smart Contract Engine311. In a preferred embodiment, the Encryption Engine314uses the AES-256 encryption algorithm to construct the ciphertext that is included in the encrypted message. For each encryption application, the Encryption Engine314uses a unique IV by referencing random data included within a transaction's transient data field. In some embodiments, the encrypted message includes a Hash-based Message Authentication Code over the ciphertext (HMAC-SHA256). In this case, the DSK333a,333bis used as a master key to derive two separate keys using a Key Derivation Function (KDF), one for encryption to generate the ciphertext, and the other for generating the HMAC over that ciphertext.
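A minimal sketch of this encrypt-then-MAC construction follows, assuming Python with the third-party cryptography package; the simple hash-based derivation below stands in for a proper KDF such as HKDF, and the zlib compression mirrors the compression step applied before encryption in the encode process ofFIG.5.:

import hashlib
import hmac
import os
import zlib
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def derive_keys(dsk: bytes):
    # Derive two separate keys from the master DSK: one for AES-256, one for the HMAC.
    return hashlib.sha256(dsk + b"enc").digest(), hashlib.sha256(dsk + b"mac").digest()

def encrypt(dsk: bytes, plaintext: bytes, iv: bytes) -> bytes:
    enc_key, mac_key = derive_keys(dsk)
    padder = padding.PKCS7(128).padder()                        # CBC requires block alignment
    padded = padder.update(zlib.compress(plaintext)) + padder.finalize()
    enc = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
    ct = enc.update(padded) + enc.finalize()
    tag = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()   # encrypt-then-MAC
    return iv + ct + tag                                        # the IV travels with the message

def decrypt(dsk: bytes, blob: bytes) -> bytes:
    enc_key, mac_key = derive_keys(dsk)
    iv, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, iv + ct, hashlib.sha256).digest()):
        raise ValueError("HMAC verification failed")            # reject tampered ciphertext
    dec = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).decryptor()
    unpadder = padding.PKCS7(128).unpadder()
    padded = dec.update(ct) + dec.finalize()
    return zlib.decompress(unpadder.update(padded) + unpadder.finalize())

dsk = os.urandom(32)                                            # the data subject's DSK
blob = encrypt(dsk, b'["Ben Franklin", "17Jan1706"]', os.urandom(16))
assert decrypt(dsk, blob) == b'["Ben Franklin", "17Jan1706"]'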
In some embodiments, the encrypted message is computed using an Authenticated Encryption with Associated Data (AEAD) algorithm to provide confidentiality, integrity, and authenticity of the encrypted message, for example using the interface and algorithms specified in IETF RFC 5116. In some embodiments, the DSK333a,333bis stored on a hardware security module which performs encryption and decryption functions within that module. In this case, the Smart Contract Engine311does not pass the DSK333a,333bdirectly to the Encryption Engine314, but instead the Smart Contract Engine311passes a unique DSK333a,333bidentifier which the Encryption Engine314passes to the hardware security module to identify the encryption key. The Distributed Ledger315consists of an append only data structure illustrated inFIG.4. that maintains an ordered list of blocks. Each block320a,320bincludes metadata321a,321bwith at least a timestamp322a,322bthat denotes when the block was generated. Each block320a,320bcontains transactions323a,323b, where a transaction323a,323bincludes a payload that may include a message that contains a DSID324a,324band encrypted data subject data325a,325b. The Consensus Engine312places transactions within new blocks, and receives new blocks to be appended to the distributed ledger315. The Distributed Ledger315consists of the entire transaction and processing history of the blockchain network, and the smart contract execution311of transactions determines the current state of the State Database316, and when available the Side Database317. In a preferred embodiment, the block metadata321a,321bincludes a block hash which is a cryptographic hash over all of the contents of the block including the block hash of the immediately preceding block. This chain of hashes that links each block to the immediately preceding block forms a blockchain data structure. The State Database316is a database that stores the most recent state that is a result of committing the execution results of the Smart Contract Engine311executing all of the valid transactions stored in the Distributed Ledger315. This state is accessible by processes executed by the Smart Contract Engine311, which read and write to the State Database316. In a preferred embodiment, the State Database316consists of a LevelDB key-value store. In some embodiments, the State Database316consists of a CouchDB key-value database that stores messages in JSON format. The Side Database317is an optional database that stores the most recent state that is a result of committing the execution results of the Smart Contract Engine311executing all of the valid transactions stored in the Distributed Ledger315. Unlike the State Database316, values read and written to the Side Database317are not stored in the Distributed Ledger315data structure. Processes executed by the Smart Contract Engine311can read and write data to the Side Database317, but this data is not stored in the Distributed Ledger315, the State Database316, or in any append only or immutable data structure. In a preferred embodiment, the Side Database317consists of a LevelDB key-value store. In some embodiments, the Side Database317consists of a CouchDB key-value database that stores messages in JSON format. In a preferred embodiment, the Side Database317stores and maintains the DSDB107. In this case, DSRs330a,330bare records in the Side Database317. In some embodiments, there does not exist a Side Database317in the environment100.
In this case, the DSDB107is stored and maintained by a separate database that is not directly managed by the node108. FIG.4. illustrates records, and their arrangement after sensitive data eligible for deletion has been processed by the system. Within an environment100there exists one or more distributed ledger nodes108that store and maintain a distributed ledger315. The nodes108communicate using distributed ledger protocols312to replicate, verify, and maintain the distributed ledger315. The ledger315is a data structure that includes a list of blocks320a,320bthat are ordered in time. Blocks320a,320binclude metadata, with at least a timestamp322a,322bthat denotes when the block was generated. Blocks320a,320binclude transactions323a,323bthat were previously generated by the transaction engine216. Transactions323a,323bmay include DSIDs324a,324band encrypted data subject data325a,325b, where each DSID324a,324bis associated340a,340bwith a DSK333a,333bthat the Encryption Engine314used to generate the encrypted DSD325a,325b. The DSDB107contains DSRs330a,330bfor data subjects, including a DSK333a,333bused to encrypt sensitive data belonging to the data subject, and a DSP332a,332bthat includes information used to identify the data subject. The server109processes request messages220a,220bwhich include fields that compose a DSP221a,221bwhich the Data Subject Engine313uses to match (223a,223b) against DSPs332a,332bstored in the DSDB107, and sensitive data that belong to a data subject222a,222b. FIG.5. illustrates an encode process that converts a message into an encoded message. Specifically, the process receives an encode request501that includes the message to be encoded. The encode process examines the message to determine the transformation settings, which are a description of which message fields are sensitive and which fields belong to which data subjects. The encode process uses the transformation settings to determine if the message contains sensitive data for a data subject502. If there is no sensitive data, then the encode process returns the processed message as the encode response to the original caller that issued the request510. Otherwise, the encode process extracts503a data subject profile221a,221bfrom the message, using the message transformation settings, to determine which message fields compose a data subject's DSP221a,221b. The encode process uses the extracted DSP221a,221bto lookup504a corresponding DSR330a,330bin the DSDB107. If there is no matching DSR330a,330bin the DSDB107, then the encode process creates505a new DSR330a,330bwithin the DSDB107. The encode process extracts506the DSK333a,333bfrom the DSR330a,330b. The encode process uses the extracted DSK333a,333bto encrypt507the sensitive data belonging to the data subject. The encode process removes the sensitive data for the data subject from the message508. The encode process then adds509the encrypted sensitive data generated in step507to the message, and the DSID331a,331bof the respective data subject in an unencrypted form, to later facilitate the decode process illustrated inFIG.6. step603. The encode process repeats steps502-509until all of the sensitive data in the message are removed and the encrypted data is added to the message, to construct an encoded message. The encode process then returns510the encoded message to the original caller that issued the request. In a preferred embodiment, the encode process is defined in a smart contract that is executed by the Smart Contract Engine311.
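For purposes of illustration only, the encode loop of steps502-510may be rendered as the following Python sketch; the DSDB stub is hypothetical, and the XOR stand-in cipher is used only so the sketch runs, where a real implementation would apply the AES-256 encryption illustrated inFIG.12.:

import json
import os
import uuid

class DsdbStub:
    """Minimal in-memory stand-in for the DSDB107."""
    def __init__(self):
        self.records = {}

    def lookup_by_dsp(self, dsp):
        return next((r for r in self.records.values() if r["dsp"] == dsp), None)

    def create(self, dsp):
        rec = {"dsid": str(uuid.uuid4()), "dsp": dsp, "dsk": os.urandom(32)}
        self.records[rec["dsid"]] = rec
        return rec

def encrypt_with(dsk, obj):
    # XOR stand-in so the sketch runs; a real implementation would use AES-256 (FIG.12.).
    data = json.dumps(obj).encode()
    return bytes(b ^ dsk[i % len(dsk)] for i, b in enumerate(data)).hex()

def encode(message, settings, dsdb):
    encoded = {k: v for k, v in message.items()}
    encoded["transforms"] = []
    for subject in settings:                                           # sensitive data present? (502)
        dsp = {p: message[p] for p in subject["profile_paths"]}        # extract the DSP (503)
        rec = dsdb.lookup_by_dsp(dsp) or dsdb.create(dsp)              # lookup or create the DSR (504, 505)
        sensitive = {p: encoded.pop(p) for p in subject["private_paths"]}  # remove plaintext (508)
        encoded["transforms"].append({"dsid": rec["dsid"],             # unencrypted DSID (509)
                                      "data": encrypt_with(rec["dsk"], sensitive)})  # (506, 507)
    return encoded                                                     # the encoded message (510)

dsdb = DsdbStub()
msg = {"Name": "Ben Franklin", "Birthday": "17Jan1706", "Claim": "42"}
settings = [{"profile_paths": ["Name"], "private_paths": ["Name", "Birthday"]}]
encoded = encode(msg, settings, dsdb)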
In this case, the encode process uses the Data Subject Engine313to lookup the DSR in step504and create the DSR in step505. The encode process uses the Encryption Engine314to perform step507. In some embodiments, the encode process is executed on a server109which directly accesses a DSDB107that is not stored within a Side Database317. In a preferred embodiment, the encode process examines the message type to lookup the message transformation settings in an application specific predetermined table of transformation settings. An example message transformation setting is illustrated inFIG.10. In some embodiments, predetermined transformation settings configured in a lookup table change over time. For example, a network administrator adds a new transformation setting to the lookup table so that an additional field is included as sensitive and included in the DSP221a,221b. In some embodiments, a transformation settings lookup table is stored in the State Database316. In some embodiments, a transformation settings lookup table is included within the transient data field of a transaction. In some embodiments, the message is self-descriptive in that it directly includes the transformation settings and the encode process does not require a predetermined lookup table. In some embodiments, the transformation settings are inferred from the message and from previous messages, using a machine learning algorithm. In a preferred embodiment, the encode process executes a compression step immediately before encrypting the sensitive data in step507. FIG.6. illustrates a decode process that converts an encoded message into its decoded form. The encoded message was previously generated by the process illustrated inFIG.5., and the decode process attempts to reconstruct the original message prior to the application of the encode process. The decode process receives a decode request601from a caller, where the request includes an encoded message. The decode process examines the encoded message602to determine if there is encrypted data for a data subject602, where this encrypted data was previously generated in step507. If there is no encrypted data within the message, then the processed message is returned as the decode response610to the process caller. Otherwise, the decode process extracts603the DSID331a,331bcorresponding to the encrypted data. The decode process then uses the extracted DSID331a,331bto look up604a corresponding DSR330a,330bwithin the DSDB107. If there is no corresponding DSR330a,330bfor the DSID331a,331bthen the decode process removes605the encrypted data from the message. Otherwise, the decode process extracts606the DSK333a,333bfrom the corresponding DSR330a,330bfor the extracted DSID331a,331b. The decode process uses the extracted DSK333a,333bto decrypt607the data that was encrypted in step507. The decode process removes the encrypted data from the message608, and adds the decrypted data to the message609. The decode process repeats steps602-609until there is no more encrypted data contained within the message. If the DSR330a,330bdoes not exist for the encrypted data605, then the decoded message omits this data. In a preferred embodiment, the decode process is defined in a smart contract that is executed by the Smart Contract Engine311. In this case, the decode process uses the Data Subject Engine313to lookup the DSR in step604. The decode process uses the Encryption Engine314to perform step607.
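Continuing the stand-in cipher and DSDB stub from the encode sketch above, the decode loop of steps601-610may be rendered as follows; note how a destroyed DSR silently drops the corresponding data at step605:

import json

def decrypt_with(dsk, hexdata):
    # Inverse of the XOR stand-in used in the encode sketch above.
    data = bytes.fromhex(hexdata)
    return json.loads(bytes(b ^ dsk[i % len(dsk)] for i, b in enumerate(data)).decode())

def decode(encoded, dsdb):
    message = {k: v for k, v in encoded.items() if k != "transforms"}
    for transform in encoded["transforms"]:                  # encrypted data present? (602)
        rec = dsdb.records.get(transform["dsid"])            # extract DSID, lookup DSR (603, 604)
        if rec is None:
            continue                                         # DSR destroyed: omit the data (605)
        message.update(decrypt_with(rec["dsk"], transform["data"]))  # decrypt and re-add (606-609)
    return message                                           # the decoded response (610)

decoded = decode(encoded, dsdb)                              # reconstructs the original message
assert decoded["Name"] == "Ben Franklin"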
In some embodiments, the decode process is executed on a server109which directly accesses a DSDB107that is not maintained within a Side Database317. In a preferred embodiment, the decode process operates on an encoded message format that is self-descriptive. In other words, a predetermined table of transformation settings is not necessary to perform the decode process. An example self-descriptive encoded message is illustrated inFIG.11. In a preferred embodiment, the decode process may execute a decompression step immediately after decrypting the data in step607. FIG.7. illustrates an example workflow executed by a Distributed Ledger Oracle Server109, or server. The API Engine210of the server109receives701formatted request messages originally issued by one or more devices102. The server109checks that the message is authorized702via the Authorization Engine211. If the request message is not authorized, then the Authorization Engine211returns an authorization error to the API Engine210, and the API Engine210forwards the error to the original issuer device102. Otherwise, the server109encodes703the request via the Request Encode Engine212which triggers the encoding process illustrated inFIG.5. The server109submits704the encoded request to the blockchain network via the Transaction Engine213to subsequently be processed by a smart contract that executes the steps illustrated inFIG.8. The server109receives705an encoded response that contains the smart contract execution results generated in step811. The server109decodes706the encoded response via the Response Decode Engine214which triggers the decoding process illustrated inFIG.6. to construct a decoded response. The server109returns707the decoded response message back to the original issuer device102via the API Engine210. In a preferred embodiment, the DSDB107is contained within, and managed by, a Side Database317. In this case, both the encoding process triggered in step703and illustrated inFIG.5., and the decoding process triggered in step706and illustrated inFIG.6., are executed by one or more smart contracts311running on one or more nodes108. In some embodiments, the DSDB107is not contained within, or managed by, a Side Database317. In this case, both the encoding process triggered in step703and illustrated inFIG.5., and the decoding process triggered in step706and illustrated inFIG.6. are executed by one or more servers109that directly access the DSDB107. FIG.8. illustrates an example workflow executed by a Distributed Ledger Node108, or node. Specifically, the workflow steps illustrated inFIG.8. are executed by the Smart Contract Engine311. The Smart Contract Engine311receives801an encoded request from the Transaction Engine310, which was issued by the Transaction Engine213on a server109in step704. The Smart Contract Engine311decodes802the request by executing the decode process illustrated inFIG.6. The Smart Contract Engine311uses the Data Subject Engine313to perform step604. The Smart Contract Engine311uses the Encryption Engine314to perform step607. The Smart Contract Engine311executes810application specific business logic to process the decoded request and generate a response. The Smart Contract Engine311encodes811the response by executing the encode process illustrated inFIG.5. The Smart Contract Engine311uses the Data Subject Engine313to perform step504. The Smart Contract Engine311uses the Encryption Engine314to perform step507.
The Smart Contract Engine311then returns812the encoded response back to the Transaction Engine310as the execution response. As part of the execution of application specific business logic810, this logic may read803or write807sensitive data pertaining to a data subject to the State Database316. In the write case807, the Smart Contract Engine311executes808the encode process illustrated inFIG.5. to construct an encoded message, writes809this encoded message to the State Database316, and continues processing the application specific business logic810. In the read case803, the Smart Contract Engine311reads804an encoded message from the State Database316, decodes805the message by executing the decode process illustrated inFIG.6., and continues processing the decoded message using the application specific business logic810. FIG.9. illustrates an example message that contains sensitive data (name and birthday) for two different users, Ben and Tom. For example, this message is a request received by the API Engine210in step701, or is a message within a write request in step807. This message is not self-descriptive, in that a separate transformation settings message illustrated inFIG.10. is necessary in order for the encode process illustrated inFIG.5. to generate an encoded message. FIG.10. illustrates example transformation settings for messages in the format illustrated inFIG.9., which are used by the encoding process illustrated inFIG.5. For example, the encode process illustrated inFIG.5. when applied to the example message inFIG.9. with transformation settings illustrated inFIG.10. results in an example encoded message illustrated inFIG.11. The transformation settings inFIG.10. define sensitive data for two data subjects using the “private paths” fields. Specifically, the “Name” and “Birthday” field in the first request object belong to the first data subject, and the “Name” and “Birthday” field in the second request object belong to the second data subject. The transformation settings reference fields within the message (e.g., “.Requests[0]”) using a path notation, for example JSON Path. The transformation settings inFIG.10. use the “encryptor” field to specify that the encode process must use AES-256 encryption for sensitive data contained within the first and second request objects. Similarly, the transformation settings inFIG.10. use the “compressor” field to specify that the encode process must use the “zlib” compression algorithm to compress the sensitive data in the first and second request objects prior to encrypting the sensitive data. The “profile paths” fields define which fields in the message are used to extract the DSP in step503. In this case the “Name” field in the first request is used to construct the DSP221afor the first data subject, and the “Name” field in the second request is used to construct the DSP221bfor the second data subject. Specifically, when the encoding process illustrated inFIG.5. is applied to the example message inFIG.9., the encoding process uses “Ben Franklin” as the DSP221ato look up the first DSR330a, and “Tom Jefferson” as the DSP221bto look up the second DSR330b. FIG.11. illustrates an example encoded message that is a result of the encoding process illustrated inFIG.5. when applied to an example message illustrated inFIG.9., using the example transformation settings illustrated inFIG.10. The encoded message is self-descriptive, in that it includes the information necessary for the decode process illustrated inFIG.6. to decode the encoded message.
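In the spirit ofFIG.11., such a self-descriptive encoded message might take the following hypothetical form; the field names beyond “mxf”, “message”, “transforms”, and “dsid” are illustrative, and the values are fabricated placeholders rather than the contents of the actual figure:

encoded_message = {
    "mxf": "1.0",                                    # header identifying the encoded-message format
    "message": {                                     # original message with sensitive fields removed
        "Requests": [{"Claim": "1001"}, {"Claim": "1002"}],
    },
    "transforms": [                                  # one entry per data subject
        {"paths": [".Requests[0].Name", ".Requests[0].Birthday"],
         "encryptor": "aes-256", "compressor": "zlib",
         "dsid": "0b51c2d4-0000-4000-8000-000000000001",  # unencrypted DSID read at step603
         "data": "9f3a41d0"},                             # ciphertext of the removed fields (325a)
        {"paths": [".Requests[1].Name", ".Requests[1].Birthday"],
         "encryptor": "aes-256", "compressor": "zlib",
         "dsid": "7c2e9a10-0000-4000-8000-000000000002",
         "data": "41d09f3a"},                             # ciphertext of the removed fields (325b)
    ],
}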
Specifically, the decode process constructs the original message illustrated inFIG.9. where the encrypted data325a,325bhas corresponding DSRs330a,330b(and DSKs333a,333b) available within the DSDB107. In other words, if the DSRs330a,330bhave been deleted via the destruction process, then the decode process does not include the sensitive data corresponding to those DSRs330a,330bwithin the decoded message. The example encoded message includes a header field “mxf” to indicate to the decode process the format of the encoded message. The “message” field specifies the original message illustrated inFIG.9., but with the sensitive data (names and birthdays) removed. The “transforms” field specifies transformation settings, DSIDs324a,324b, and encrypted data325a,325bcorresponding to those DSIDs. The decode process illustrated inFIG.6. uses the DSID specified in the “dsid” field to extract the DSID in step603. FIG.12. illustrates pseudo-code to demonstrate how the sensitive details ([“Ben Franklin”, “17Jan1706”]) for the Ben Franklin data subject are compressed using zlib compression and encrypted using AES-256 encryption. Specifically, the encode process illustrated inFIG.5. looks up a DSR330afor Ben Franklin containing a DSK333awith value “Here I stand, I can do no other”. The encode process includes an initialization vector (IV) as part of the encryption process that is included directly in the encrypted message.FIG.12. also illustrates pseudo-code to demonstrate how the encrypted sensitive details for the Ben Franklin data subject are decrypted using AES-256 decryption and decompressed using zlib decompression, to reconstruct the sensitive details ([“Ben Franklin”, “17Jan1706”]). The foregoing description, for purposes of explanation, has been presented with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the disclosure and its practical applications, to thereby enable others skilled in the art to best utilize the disclosure and various embodiments with various modifications as are suited to the particular use contemplated. The system and method disclosed herein may be implemented via one or more components, systems, servers, appliances, other subcomponents, or distributed between such elements. When implemented as a system, such systems may include and/or involve, inter alia, components such as software modules, general-purpose CPU, RAM, etc. found in general-purpose computers. In implementations where the innovations reside on a server, such a server may include or involve components such as CPU, RAM, etc., such as those found in general-purpose computers. Additionally, the system and method herein may be achieved via implementations with disparate or entirely different software, hardware and/or firmware components, beyond that set forth above. With regard to such other components (e.g., software, processing components, etc.) and/or computer-readable media associated with or embodying the present inventions, for example, aspects of the innovations herein may be implemented consistent with numerous general purpose or special purpose computing systems or configurations.
Various exemplary computing systems, environments, and/or configurations that may be suitable for use with the innovations herein may include, but are not limited to: software or other components within or embodied on personal computers, servers or server computing devices such as routing/connectivity components, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, consumer electronic devices, network PCs, other existing computer platforms, distributed computing environments that include one or more of the above systems or devices, etc. In some instances, aspects of the system and method may be achieved via or performed by logic and/or logic instructions including program modules, executed in association with such components or circuitry, for example. In general, program modules may include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular instructions herein. The inventions may also be practiced in the context of distributed software, computer, or circuit settings where circuitry is connected via communication buses, circuitry or links. In distributed settings, control/instructions may occur from both local and remote computer storage media including memory storage devices. The software, circuitry and components herein may also include and/or utilize one or more types of computer readable media. Computer readable media can be any available media that is resident on, associable with, or accessible by such circuits and/or computing components. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and can be accessed by a computing component. Communication media may comprise computer readable instructions, data structures, program modules and/or other components. Further, communication media may include wired media such as a wired network or direct-wired connection, however no media of any such type herein includes transitory media. Combinations of any of the above are also included within the scope of computer readable media. In the present description, the terms component, module, device, etc. may refer to any type of logical or functional software elements, circuits, blocks and/or processes that may be implemented in a variety of ways. For example, the functions of various circuits and/or blocks can be combined with one another into any other number of modules. Each module may even be implemented as a software program stored on a tangible memory (e.g., random access memory, read only memory, CD-ROM memory, hard disk drive, etc.) to be read by a central processing unit to implement the functions of the innovations herein. Or, the modules can comprise programming instructions transmitted to a general purpose computer or to processing/graphics hardware via a transmission carrier wave.
Also, the modules can be implemented as hardware logic circuitry implementing the functions encompassed by the innovations herein. Finally, the modules can be implemented using special purpose instructions (SIMD instructions), field programmable logic arrays, or any mix thereof which provides the desired level of performance and cost. As disclosed herein, features consistent with the disclosure may be implemented via computer-hardware, software and/or firmware. For example, the systems and methods disclosed herein may be embodied in various forms including, for example, a data processor, such as a computer that also includes a database, digital electronic circuitry, firmware, software, or in combinations of them. Further, while some of the disclosed implementations describe specific hardware components, systems and methods consistent with the innovations herein may be implemented with any combination of hardware, software and/or firmware. Moreover, the above-noted features and other aspects and principles of the innovations herein may be implemented in various environments. Such environments and related applications may be specially constructed for performing the various routines, processes and/or operations according to the invention, or they may include a general-purpose computer or computing platform selectively activated or reconfigured by code to provide the necessary functionality. The processes disclosed herein are not inherently related to any particular computer, network, architecture, environment, or other apparatus, and may be implemented by a suitable combination of hardware, software, and/or firmware. For example, various general-purpose machines may be used with programs written in accordance with teachings of the invention, or it may be more convenient to construct a specialized apparatus or system to perform the required methods and techniques. Aspects of the method and system described herein, such as the logic, may also be implemented as functionality programmed into any of a variety of circuitry, including programmable logic devices (“PLDs”), such as field programmable gate arrays (“FPGAs”), programmable array logic (“PAL”) devices, electrically programmable logic and memory devices and standard cell-based devices, as well as application specific integrated circuits. Some other possibilities for implementing aspects include: memory devices, microcontrollers with memory (such as EEPROM), embedded microprocessors, firmware, software, etc. Furthermore, aspects may be embodied in microprocessors having software-based circuit emulation, discrete logic (sequential and combinatorial), custom devices, fuzzy (neural) logic, quantum devices, and hybrids of any of the above device types. The underlying device technologies may be provided in a variety of component types, e.g., metal-oxide semiconductor field-effect transistor (“MOSFET”) technologies like complementary metal-oxide semiconductor (“CMOS”), bipolar technologies like emitter-coupled logic (“ECL”), polymer technologies (e.g., silicon-conjugated polymer and metal-conjugated polymer-metal structures), mixed analog and digital, and so on. It should also be noted that the various logic and/or functions disclosed herein may be enabled using any number of combinations of hardware, firmware, and/or as data and/or instructions embodied in various machine-readable or computer-readable media, in terms of their behavioral, register transfer, logic component, and/or other characteristics.
Computer-readable media in which such formatted data and/or instructions may be embodied include, but are not limited to, non-volatile storage media in various forms (e.g., optical, magnetic or semiconductor storage media), though, again, this does not include transitory media. Unless the context clearly requires otherwise, throughout the description, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in a sense of “including, but not limited to.” Words using the singular or plural number also include the plural or singular number respectively. Additionally, the words “herein,” “hereunder,” “above,” “below,” and words of similar import refer to this application as a whole and not to any particular portions of this application. When the word “or” is used in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list. Although certain presently preferred implementations of the invention have been specifically described herein, it will be apparent to those skilled in the art to which the invention pertains that variations and modifications of the various implementations shown and described herein may be made without departing from the spirit and scope of the invention. Accordingly, it is intended that the invention be limited only to the extent required by the applicable rules of law. While the foregoing has been with reference to a particular embodiment of the disclosure, it will be appreciated by those skilled in the art that changes in this embodiment may be made without departing from the principles and spirit of the disclosure, the scope of which is defined by the appended claims.
11860823
DETAILED DESCRIPTION Various examples of the present technology are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the present technology. The disclosed technology addresses the need in the art for a file explorer application that is optimized to account for modern day realities regarding where users store and access files, and regarding how users engage with content items. Additionally, the disclosed technology addresses the need to view types of information that have not been exposed by a file explorer application in the past, even though these types of information are important in helping a user identify relevant content items or content items that might need the user's attention. Identifying content items is more important than ever given the ever-increasing amount of stored data that is accessible to users. Since many users store content items in content management systems and cloud-hosted document services, as well as on local storage of client devices and network accessible drives, an improved file explorer should be able to show information regarding all content items regardless of where they are stored. In some embodiments, an improved file explorer should be able to display a listing of content items that includes content items from different sources at the same time. Since some sources in which a content item may be stored may include different features, or may store different metadata associated with content items, an improved file explorer should display new types of information available from the various sources. Since many content items are now shared with other users, or may even be collaborative content items that allow multiple users to edit and comment on a content item, an improved file explorer should present information regarding recent activities pertinent to the content items. Since some users need to access many files to make minor changes or just to view a minor update or comment made in a content item, a file explorer should provide enhanced previews that make some actions possible from the enhanced preview and without having to fully open the document. The present technology provides a file explorer that addresses one or more of the stated deficiencies of traditional file explorers. By meeting the above stated needs, the present technology not only solves problems associated with displaying more useful information regarding content items from a plurality of diverse sources, but also provides several efficiencies. Since more useful information is in one place, a user does not need to operate a computing device to navigate through many additional screens to be exposed to all the data that the present technology can present in a file explorer. Further, since some actions can be performed without fully opening a content item, and without opening default applications utilized in opening content items, these actions can also save computing resources. Furthermore, users themselves are also made more efficient since finding a relevant file, learning relevant information, and performing some tasks can all be done in the file explorer. In some embodiments, the disclosed technology is deployed in the context of a content management system having content item synchronization capabilities and collaboration features, among others.
An example system configuration100is shown inFIG.1, which depicts content management system110interacting with client device150. Accounts Content management system110can store content items in association with accounts, as well as perform a variety of content item management tasks, such as retrieving, modifying, browsing, and/or sharing the content item(s). Furthermore, content management system110can enable an account to access content item(s) from multiple client devices. Content management system110supports a plurality of accounts. An entity (user, group of users, team, company, etc.) can create an account with content management system110, and account details can be stored in account database140. Account database140can store profile information for registered entities. In some cases, profile information for registered entities includes a username and/or email address. Account database140can include account management information, such as account type (e.g. various tiers of free or paid accounts), storage space allocated, storage space used, client devices150having a registered content management client application152resident thereon, security settings, personal configuration settings, etc. Account database140can store groups of accounts associated with an entity. Groups can have permissions based on group policies and/or access control lists, and members of the groups can inherit the permissions. For example, a marketing group can have access to one set of content items while an engineering group can have access to another set of content items. An administrator group can modify groups, modify user accounts, etc. Content Item Storage A feature of content management system110is the storage of content items, which can be stored in content storage142. Content items can be any digital data such as documents, collaboration content items, text files, audio files, image files, video files, webpages, executable files, binary files, etc. A content item can also include collections or other mechanisms for grouping content items together with different behaviors, such as folders, zip files, playlists, albums, etc. A collection can refer to a folder, or a plurality of content items that are related or grouped by a common attribute. In some embodiments, content storage142is combined with other types of storage or databases to handle specific functions. Content storage142can store content items, while metadata regarding the content items can be stored in metadata database146. Likewise, data regarding where a content item is stored in content storage142can be stored in content directory144. Additionally, data regarding changes, access, etc. can be stored in server file journal148. Each of the various storages/databases such as content storage142, content directory144, server file journal148, and metadata database146can comprise more than one such storage or database and can be distributed over many devices and locations. Other configurations are also possible. For example, data from content storage142, content directory144, server file journal148, and/or metadata database146may be combined into one or more content storages or databases or further segmented into additional content storages or databases. Thus, content management system110may include more or fewer storages and/or databases than shown inFIG.1.
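To make this division of responsibilities concrete, information about a single content item might be split across the stores as in the following minimal Python sketch. The record shapes and field names are illustrative assumptions, not the schema used by content management system110.

from dataclasses import dataclass
from typing import List

@dataclass
class ContentEntry:            # content directory144
    unique_id: str             # derived from a deterministic hash, as described next
    chunk_locations: List[str] # where each chunk lives in content storage142

@dataclass
class ContentMetadata:         # metadata database146
    unique_id: str
    content_path: str          # name plus folder hierarchy, e.g. "/ns/docs/a.txt"

@dataclass
class JournalRecord:           # server file journal148
    unique_id: str
    action: str                # e.g. "add", "edit", "delete", "share"
    timestamp: float
    version: int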
In some embodiments, content storage142is associated with at least one content storage service116, which includes software or other processor executable instructions for managing the storage of content items including, but not limited to, receiving content items for storage, preparing content items for storage, selecting a storage location for the content item, retrieving content items from storage, etc. In some embodiments, content storage service116can divide a content item into smaller chunks for storage at content storage142. The location of each chunk making up a content item can be recorded in content directory144. Content directory144can include a content entry for each content item stored in content storage142. The content entry can be associated with a unique ID, which identifies a content item. In some embodiments, the unique ID, which identifies a content item in content directory144, can be derived from a deterministic hash function. This method of deriving a unique ID for a content item can ensure that content item duplicates are recognized as such, since the deterministic hash function will output the same identifier for every copy of the same content item, but will output a different identifier for a different content item. Using this methodology, content storage service116can output a unique ID for each content item. Content storage service116can also designate or record a content path for a content item in metadata database146. The content path can include the name of the content item and/or folder hierarchy associated with the content item. For example, the content path can include a folder or path of folders in which the content item is stored in a local file system on a client device. While content items are stored in content storage142in blocks and may not be stored under a tree-like directory structure, such directory structure is a comfortable navigation structure for users. Content storage service116can define or record a content path for a content item wherein the “root” node of a directory structure can be a namespace for each account. Within the namespace can be a directory structure defined by a user of an account and/or content storage service116. Metadata database146can store the content path for each content item as part of a content entry. In some embodiments, the namespace can include additional namespaces nested in the directory structure as if they are stored within the root node. This can occur when an account has access to a shared collection. Shared collections can be assigned their own namespace within content management system110. While some shared collections are actually a root node for the shared collection, they are located subordinate to the account namespace in the directory structure, and can appear as a folder within a folder for the account. As addressed above, the directory structure is merely a comfortable navigation structure for users, but does not correlate to storage locations of content items in content storage142. While the directory structure in which an account views content items does not correlate to storage locations at content management system110, the directory structure can correlate to storage locations on local content storage154of client device150depending on the file system used by client device150. As addressed above, a content entry in content directory144can also include the location of each chunk making up a content item.
More specifically, the content entry can include content pointers that identify the location in content storage142of the chunks that make up the content item. In addition to a content path and content pointer, a content entry in content directory144can also include a user account identifier that identifies the user account that has access to the content item and/or a group identifier that identifies a group with access to the content item and/or a namespace to which the content entry belongs. Content storage service116can decrease the amount of storage space required by identifying duplicate content items or duplicate blocks that make up a content item or versions of a content item. Instead of storing multiple copies, content storage142can store a single copy of the content item or block of the content item, and content directory144can include a pointer or other mechanism to link the duplicates to the single copy. Content storage service116can also store metadata describing content items, content item types, folders, file path, and/or the relationship of content items to various accounts, collections, or groups in metadata database146, in association with the unique ID of the content item. Content storage service116can also store a log of data regarding changes, access, etc. in server file journal148. Server file journal148can include the unique ID of the content item and a description of the change or access action along with a time stamp or version number and any other relevant data. Server file journal148can also include pointers to blocks affected by the change or content item access. Content storage service116can provide the ability to undo operations by using content item version control that tracks changes to content items, different versions of content items (including diverging version trees), and a change history that can be acquired from the server file journal148. Content Item Synchronization Another feature of content management system110is synchronization of content items with at least one client device150. Client device(s) can take different forms and have different capabilities. For example, client device1501is a computing device having a local file system accessible by multiple applications resident thereon. Client device1502is a computing device wherein content items are only accessible to a specific application or by permission given by the specific application, and the content items are typically stored either in an application specific space or in the cloud. Client device1503is any client device accessing content management system110via a web browser and accessing content items via a web interface. While example client devices1501,1502, and1503are depicted in form factors such as a laptop, mobile device, or web browser, it should be understood that the descriptions thereof are not limited to devices of these example form factors. For example, a mobile device such as client1502might have a local file system in local content storage154accessible by multiple applications resident thereon, or client1502might access content management system110via a web browser. As such, the form factor should not be considered limiting when considering client device150's capabilities. One or more functions described herein with respect to client device150may or may not be available on every client device depending on the specific capabilities of the device—the file access model being one such capability.
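Returning to the storage scheme described above, the chunking and deduplication behavior can be sketched as follows. This is a minimal Python sketch: the 4 MB chunk size and the SHA-256 hash are assumptions for illustration, not values fixed by the description.

import hashlib

CHUNK_SIZE = 4 * 1024 * 1024

class ChunkStore:
    def __init__(self):
        self.blocks = {}   # content storage142: digest -> chunk bytes
        self.links = {}    # links duplicate references to the single copy

    def put_chunk(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blocks:    # every copy of the same chunk hashes
            self.blocks[digest] = data   # identically, so it is stored once
        self.links[digest] = self.links.get(digest, 0) + 1
        return digest

    def store_content_item(self, payload: bytes) -> list:
        # Divide the item into chunks; the returned digests serve as the
        # content pointers recorded in the content entry, so the item can be
        # reassembled later.
        return [self.put_chunk(payload[i:i + CHUNK_SIZE])
                for i in range(0, len(payload), CHUNK_SIZE)]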
In many embodiments, client devices are associated with an account of content management system110, but in some embodiments, client devices can access content using shared links and do not require an account. As noted above, some client devices can access content management system110using a web browser. However, client devices can also access content management system110using client application152stored and running on client device150. Client application152can include a client synchronization service156. Client synchronization service156can be in communication with server synchronization service112to synchronize changes to content items between client device150and content management system110. Client device150can synchronize content with content management system110via client synchronization service156. The synchronization can be platform agnostic. That is, content can be synchronized across multiple client devices of varying type, capabilities, operating systems, etc. Client synchronization service156can synchronize any changes (new, deleted, modified, copied, or moved content items) to content items in a designated location of a file system of client device150. Content items can be synchronized from client device150to content management system110, and vice versa. In embodiments where synchronization is from client device150to content management system110, a user can manipulate content items directly from the file system of client device150, while client synchronization service156can monitor a directory on client device150for changes to files within the monitored folders. When client synchronization service156detects a write, move, copy, or delete of content in a directory that it monitors, client synchronization service156can synchronize the changes to content storage service116. In some embodiments, client synchronization service156can perform some functions of content storage service116including functions addressed above such as dividing the content item into blocks, hashing the content item to generate a unique identifier, etc. Client synchronization service156can index content within client storage and save the result in storage index164. Indexing can include storing paths plus a unique server identifier and a unique client identifier for each content item. In some embodiments, client synchronization service156learns the unique server identifier from server synchronization service112, and learns the unique client identifier from the operating system of client device150. Client synchronization service156can use storage index164to facilitate the synchronization of at least a portion of the content within client storage with content associated with a user account on content management system110. For example, client synchronization service156can compare storage index164with content management system110and detect differences between content on client storage and content associated with a user account on content management system110. Client synchronization service156can then attempt to reconcile differences by uploading, downloading, modifying, and deleting content on client storage as appropriate. Content storage service116can store the changed or new block for the content item and update server file journal148, metadata database146, content directory144, content storage142, account database140, etc. as appropriate.
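The reconciliation step just described can be sketched compactly. In this minimal Python sketch the index maps path to content hash; all names are illustrative assumptions, and real conflict and deletion handling is considerably richer than this.

def reconcile(local_index: dict, remote_index: dict, client, server):
    for path, local_hash in local_index.items():
        remote_hash = remote_index.get(path)
        if remote_hash is None:
            server.upload(path)       # item exists only locally
        elif remote_hash != local_hash:
            server.upload(path)       # local modification to push
    for path in remote_index.keys() - local_index.keys():
        client.download(path)         # item exists only on the server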
When synchronizing from content management system110to client device150, a mount, modification, addition, deletion, or move of a content item recorded in server file journal148can trigger a notification to be sent to client device150using notification service117. When client device150is informed of the change, it requests the changes listed in server file journal148since the last synchronization point known to the client device. When client device150determines that it is out of synchronization with content management system110, client synchronization service156requests content item blocks including the changes, and updates its local copy of the changed content items. In some embodiments, storage index164stores tree data structures wherein one tree reflects the latest representation of a directory according to server synchronization service112, while another tree reflects the latest representation of the directory according to client synchronization service156. Client synchronization service156can work to ensure that the tree structures match by requesting data from server synchronization service112or committing changes on client device150to content management system110. Sometimes client device150might not have a network connection available. In this scenario, client synchronization service156can monitor the linked collection for content item changes and queue those changes for later synchronization to content management system110when a network connection is available. Similarly, a user can manually start, stop, pause, or resume synchronization with content management system110. Client synchronization service156can synchronize all content associated with a particular user account on content management system110. Alternatively, client synchronization service156can selectively synchronize a portion of the total content associated with the particular user account on content management system110. Selectively synchronizing only a portion of the content can preserve space on client device150and save bandwidth. In some embodiments, client synchronization service156selectively stores a portion of the content associated with the particular user account and stores placeholder content items in client storage for the remainder portion of the content. For example, client synchronization service156can store a placeholder content item that has the same filename, path, extension, and metadata as its respective complete content item on content management system110, but lacks the data of the complete content item. The placeholder content item can be a few bytes or less in size while the respective complete content item might be significantly larger. When client device150attempts to access the content item, client synchronization service156can retrieve the data of the content item from content management system110and provide the complete content item to the accessing client device150. This approach can provide significant space and bandwidth savings while still providing full access to a user's content on content management system110. Collaboration Features Another feature of content management system110is to facilitate collaboration between users. Collaboration features include content item sharing, commenting on content items, co-working on content items, instant messaging, providing presence and seen state information regarding content items, etc. Sharing Content management system110can manage sharing content via sharing service128.
Sharing content by providing a link to the content can include making the content item accessible from any computing device in network communication with content management system110. However, in some embodiments, a link can be associated with access restrictions enforced by content management system110and access control list145. Sharing content can also include linking content using sharing service128to share content within content management system110with at least one additional user account (in addition to the original user account associated with the content item) so that each user account has access to the content item. The additional user account can gain access to the content by accepting the content, which will then be accessible through either web interface service124or directly from within the directory structure associated with their account on client device150. The sharing can be performed in a platform agnostic manner. That is, the content can be shared across multiple client devices150of varying type, capabilities, operating systems, etc. The content can also be shared across varying types of user accounts. To share a content item within content management system110, sharing service128can add a user account identifier or multiple user account identifiers to a content entry in access control list database145associated with the content item, thus granting the added user account access to the content item. Sharing service128can also remove user account identifiers from a content entry to restrict a user account's access to the content item. Sharing service128can record content item identifiers, user account identifiers given access to a content item, and access levels in access control list database145. For example, in some embodiments, user account identifiers associated with a single content entry can specify different permissions for respective user account identifiers with respect to the associated content item. To share content items outside of content management system110, sharing service128can generate a custom network address, such as a uniform resource locator (URL), which allows any web browser to access the content item or collection in content management system110without any authentication. To accomplish this, sharing service128can include content identification data in the generated URL, which can later be used to properly identify and return the requested content item. For example, sharing service128can include the account identifier and the content path or a content item identifying code in the generated URL. Upon selection of the URL, the content identification data included in the URL can be transmitted to content management system110, which can use the received content identification data to identify the appropriate content item and return the content item. In addition to generating the URL, sharing service128can also be configured to record in access control list database145that a URL to the content item has been created. In some embodiments, the content entry associated with a content item can include a URL flag indicating whether a URL to the content item has been created. For example, the URL flag can be a Boolean value initially set to 0 or false to indicate that a URL to the content item has not been created. Sharing service128can change the value of the flag to 1 or true after generating a URL to the content item. In some embodiments, sharing service128can associate a set of permissions with a URL for a content item.
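A minimal sketch of link generation consistent with the description above follows; the URL scheme, host, and method names are invented for illustration and are not part of the specification.

import secrets
from urllib.parse import urlencode

def create_shared_link(content_entry, account_id, acl_db):
    # Embed content identification data (account identifier and content item
    # identifying code) directly in the generated URL.
    query = urlencode({"account": account_id, "item": content_entry.unique_id})
    url = "https://cms.example.com/s/" + secrets.token_urlsafe(8) + "?" + query
    content_entry.url_flag = True    # record that a URL to this item exists
    acl_db.record_url(content_entry.unique_id, url,
                      permissions={"download": False, "modify": False})
    return url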
For example, if a user attempts to access the content item via the URL, sharing service128can provide a limited set of permissions for the content item. Examples of limited permissions include restrictions that the user cannot download the content item, save the content item, copy the content item, modify the content item, etc. In some embodiments, limited permissions include restrictions that only permit a content item to be accessed from within a specified domain, i.e., from within a corporate network domain, or by accounts associated with a specified domain, e.g., accounts associated with a company account (e.g., @acme.com). In some embodiments, sharing service128can also be configured to deactivate a generated URL. For example, each content entry can also include a URL active flag indicating whether the content should be returned in response to a request from the generated URL. For example, sharing service128can return a content item requested by a generated link only if the URL active flag is set to 1 or true. Thus, access to a content item for which a URL has been generated can be easily restricted by changing the value of the URL active flag. This allows a user to restrict access to the shared content item without having to move the content item or delete the generated URL. Likewise, sharing service128can reactivate the URL by again changing the value of the URL active flag to 1 or true. A user can thus easily restore access to the content item without the need to generate a new URL. In some embodiments, content management system110can designate a URL for uploading a content item. For example, a first user with a user account can request such a URL, provide the URL to a contributing user, and the contributing user can upload a content item to the first user's user account using the URL. Team Service In some embodiments, content management system110includes team service130. Team service130can provide functionality for creating and managing defined teams of user accounts. Teams can be created for a company, with sub-teams (e.g., business units, project teams, etc.), and user accounts assigned to teams and sub-teams, or teams can be created for any defined group of user accounts. Team service130can provide a common shared space for the team, private user account folders, and access limited shared folders. Team service130can also provide a management interface for an administrator to manage collections and content items within the team, and to manage user accounts that are associated with the team. Authorization Service In some embodiments, content management system110includes authorization service132. Authorization service132ensures that a user account attempting to access a namespace has appropriate rights to access the namespace. Authorization service132can receive a token from client application152that accompanies a request to access a namespace and can return the capabilities permitted to the user account. For user accounts with multiple levels of access (e.g., a user account with user rights and administrator rights), authorization service132can also require explicit privilege escalation to avoid unintentional actions by administrators. Presence and Seen State In some embodiments, content management system110can provide information about how users with which a content item is shared are interacting or have interacted with the content item. In some embodiments, content management system110can report that a user with which a content item is shared is currently viewing the content item.
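One plausible shape for that presence fan-out is sketched below; the engine and method names are assumptions for illustration only.

def report_presence(notification_service, acl_db, content_id, viewing_account):
    # Tell every other account with access that this account is currently
    # viewing the content item.
    for account in acl_db.accounts_with_access(content_id):
        if account != viewing_account:
            notification_service.notify(
                account,
                event="presence",
                content_id=content_id,
                viewer=viewing_account,
            )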
For example, client collaboration service160can notify notifications service117when client device150is accessing the content item. Notifications service117can then notify all client devices of other users having access to the same content item of the presence of the user of client device150with respect to the content item. In some embodiments, content management system110can report a history of user interaction with a shared content item. Collaboration service126can query data sources such as metadata database146and server file journal148to determine that a user has saved the content item, that a user has yet to view the content item, etc., and disseminate this status information using notification service117to other users so that they can know who is currently viewing the content item or who has viewed or modified it. Collaboration service126can facilitate comments associated with content, even if a content item does not natively support commenting functionality. Such comments can be stored in metadata database146. Collaboration service126can originate and transmit notifications for users. For example, a user can mention another user in a comment, and collaboration service126can send a notification to that user that they have been mentioned in the comment. Various other content item events can trigger notifications, including deleting a content item, sharing a content item, etc. Collaboration service126can provide a messaging platform whereby users can send and receive instant messages, voice calls, emails, etc. Collaboration Content Items In some embodiments, content management system110can also include collaborative document service134, which can provide an interactive content item collaboration platform whereby users can simultaneously create collaboration content items, comment in the collaboration content items, and manage tasks within the collaboration content items. Collaboration content items can be files that users can create and edit using a collaboration content item editor, and can contain collaboration content item elements. Collaboration content item elements may include a collaboration content item identifier, one or more author identifiers, collaboration content item text, collaboration content item attributes, interaction information, comments, sharing users, etc. Collaboration content item elements can be stored as database entities, which allows for searching and retrieving the collaboration content items. Multiple users may access, view, edit, and collaborate on collaboration content items at the same time or at different times. In some embodiments, this can be managed by requiring that users access the content item through a web interface, where they can work on the same copy of the content item at the same time. Collaboration Companion Interface In some embodiments, client collaboration service160can provide a native application companion interface for the purpose of displaying information relevant to a content item being presented on client device150. In embodiments wherein a content item is accessed by a native application stored and executed on client device150, where the content item is in a designated location of the file system of client device150such that the content item is managed by client application152, the native application may not provide any native way to display the above addressed collaboration data.
In such embodiments, client collaboration service160can detect that a user has opened a content item, and can provide an overlay with additional information for the content item, such as collaboration data. For example, the additional information can include comments for the content item, the status of the content item, and the activity of other users previously or currently viewing the content item. Such an overlay can warn a user that changes might be lost because another user is currently editing the content item. In some embodiments, one or more of the services or storages/databases discussed above can be accessed using public or private application programming interfaces. Certain software applications can access content storage142via an API on behalf of a user. For example, a software package, such as an application running on client device150, can programmatically make API calls directly to content management system110when a user provides authentication credentials, to read, write, create, delete, share, or otherwise manipulate content. A user can view or manipulate content stored in a user account via a web interface generated and served by web interface service124. For example, the user can navigate in a web browser to a web address provided by content management system110. Changes or updates to content in the content storage142made through the web interface, such as uploading a new version of a content item, can be propagated back to other client devices associated with the user's account. For example, multiple client devices, each with their own client software, can be associated with a single account, and content items in the account can be synchronized between each of the multiple client devices. Client device150can connect to content management system110on behalf of a user. A user can directly interact with client device150, for example when client device150is a desktop or laptop computer, phone, television, internet-of-things device, etc. Alternatively or additionally, client device150can act on behalf of the user without the user having physical access to client device150, for example when client device150is a server. Some features of client device150are enabled by an application installed on client device150. In some embodiments, the application can include a content management system specific component. For example, the content management system specific component can be a stand-alone application152, one or more application plug-ins, and/or a browser extension. However, the user can also interact with content management system110via a third-party application, such as a web browser, that resides on client device150and is configured to communicate with content management system110. In various implementations, the client-side application152can present a user interface (UI) for a user to interact with content management system110. For example, the user can interact with the content management system110via a file system explorer integrated with the file system or via a webpage displayed using a web browser application. In some embodiments, client application152can be configured to manage and synchronize content for more than one account of content management system110. In such embodiments, client application152can remain logged into multiple accounts and provide normal services for the multiple accounts. In some embodiments, each account can appear as a folder in a file system, and all content items within that folder can be synchronized with content management system110.
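A compact sketch of this multi-account bookkeeping follows; the structure and names are illustrative only, and the primary-account selector it models is described next.

class AccountManager:
    def __init__(self):
        self.accounts = {}   # account_id -> local mount folder
        self.primary = None

    def add_account(self, account_id, mount_folder):
        # The account appears as a folder; everything under it is synchronized.
        self.accounts[account_id] = mount_folder

    def set_primary(self, account_id):
        # Designate the default account for new operations.
        if account_id not in self.accounts:
            raise KeyError(account_id)
        self.primary = account_id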
In some embodiments, client application152can include a selector to choose one of the multiple accounts to be the primary account or default account. While content management system110is presented with specific components, it should be understood by one skilled in the art that the architectural configuration of system100is simply one possible configuration and that other configurations with more or fewer components are possible. Further, a service can have more or less functionality, even including functionality described as being within another service. Moreover, features described herein with respect to an embodiment can be combined with features described with respect to another embodiment. FIG.2illustrates file browser interface200of client application152in accordance with some aspects of the present technology. File browser interface200includes content items listing section204and details pane202. In some embodiments, file browser interface200can be presented by an application stored and executing on client device150that has access to local content storage154and services and databases of content management system110. In some embodiments, file browser interface200can be presented by a web browser that interprets code downloaded from content management system110or stored on client device150to render the features of file browser interfaces such as content items listing section204and details pane202, among other features described herein. Regardless of the source of the code interpreted by a web browser rendering file browser interface200, file browser interface200has access to local content storage154and services and databases of content management system110in accordance with the features described herein. Details pane202can be populated and maintained by details pane service162. As described further herein, details pane service162is responsible for requesting and populating data to be displayed in details pane202. In some embodiments, details pane service162may also be responsible for displaying details pane202in combination with the client application152or web browser on client device150. Content items listing section204is configured to list all content items contained within a grouping regardless of the storage location of those content items. For example, as illustrated inFIG.2, content items listing section204shows all content items contained within grouping205“Dropbox.” As is known in the art, Dropbox is a content management service (such as the service managed by content management system110described herein) that stores content items in cloud storage, and, for many types of files, can maintain synchronized copies of the content items stored in the cloud with copies stored in local storage on a client device. However, Dropbox can also maintain content items stored only in the cloud. Content items listing section204can display representations of all content items stored by content management system110, including both content items stored in local content storage154of client device150and content items stored in content storage142of content management system110.
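A sketch of how such a unified listing might be assembled across sources follows; each source object is assumed to expose a name and a list_items() method, neither of which comes from the specification.

def unified_listing(sources):
    items = []
    for source in sources:
        for item in source.list_items():
            items.append({
                "name": item.name,
                "source": source.name,   # e.g. "local", "Dropbox", ...
                "stored_locally": getattr(item, "local_copy", False),
            })
    # One listing, regardless of where each content item is stored.
    return sorted(items, key=lambda entry: entry["name"].lower())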
Representations of content items having copies stored in both local content storage154and content storage142can be used to open such content items directly from local content storage154. A grouping such as grouping205does not need to be limited to a single source. For example, a grouping could be linked to multiple different sources such as different content management system services (e.g., Dropbox, Box, SugarSync, iCloud, OneDrive, etc.), different online document editing services (e.g., Google Docs, Dropbox Paper, Microsoft Office Online, iCloud, etc.), and local content storage154. In some embodiments, such as illustrated inFIG.1, file browser interface200and details pane202are part of client application152associated with content management system110. In such embodiments, content management system110would be a primary service, and any other service, such as any of the representative services listed above, would be considered a secondary service170. In some embodiments, a secondary service can be provided by a third party service—e.g., a service that is not the provider of file browser interface200and details pane202. Details pane202is configured to display information such as comments, activity, previews, etc. pertaining to content items shown in content items listing section204. FIG.2illustrates an embodiment of details pane202when no specific content item has been selected in content items listing section204. The information displayed in details pane202when no specific content item has been selected pertains to any of, or multiple of, the content items displayed in content items listing section204. The information displayed in details pane202can be sourced from any service storing one of the content items represented in content items listing section204. Thus, the information displayed in details pane202can come from multiple sources. FIG.2illustrates an activity section206shown in details pane202. Activity section206lists information pertaining to activity occurring with respect to content items in content item listing section204. Activity occurring with respect to a content item can include information that a content item was opened, edited, shared, commented on, etc. In some embodiments, the activities occurring with respect to a particular content item may be aggregated, as will be described further herein. Activities in the activity section can be sent to client application152by notification service117. In some embodiments, content items listed in content item listing section204are shared content items209, and file browser interface200can display icons representing user accounts208to which content items in a grouping are shared. In some embodiments, icons representing user accounts208reflect user accounts currently viewing the content item or that have previously viewed the content item. In some embodiments, icons representing user accounts208can be arranged from most recent viewing of the content item to least recent, such that the user account that has most recently viewed the content item is displayed to the left of the icons, and icons representing user accounts with previous views of the content item can be displayed to the right. In some embodiments, only user accounts that have viewed the content item within a particular time period, or since a content item was last revised, are displayed in icons representing user accounts208. FIG.3illustrates another view of file browser interface200.
InFIG.3, file browser interface200displays grouping215“design assets,” which is a subfolder in a larger collection. As illustrated, the title of grouping215is accompanied by brief description201of the contents of grouping215. In some embodiments, brief description201can be edited by a user with sufficient privileges for folder215. Just as inFIG.2, file browser interface200inFIG.3also includes content item listing section204, details pane202, and icons representing user accounts208.FIG.3illustrates details pane202including comments section210along with activity section206. Comments section210reflects that no comments have been provided for subfolder “design assets”215, but, as reflected in activity section206, at least one content item within the subfolder “design assets”215has received comments (e.g., activity207reflects that comments have been made regarding content item203). The other activities listed in activity section206reflect added, edited, or shared content items within the subfolder “design assets”215. Comments section210can be presented in any details pane view. In some embodiments, comments section210can display comments for an entire collection of content items displayed in content items listing section204(such as illustrated inFIG.3), or can display comments for a specific, selected content item (such as illustrated inFIG.4). Comments section210is configured to display comments pertaining to any content items listed in content item listing section204. In some embodiments, comments section210can also include replies to a previous comment and can form a conversation thread. Comments section210also includes an interface to receive a comment. In some embodiments, the comment can pertain generally to grouping215. In some embodiments, the comment can be in reply to a previous comment, and thereby create a conversation thread of linked comments (as illustrated inFIG.9andFIG.15A). Comments can be sent to client application152by notification service117. In some embodiments, such as illustrated inFIG.4, when a particular content item is selected in content items view204, details pane202can display details (e.g., activities, comments, previews, etc.) that are specific to the selected content item.FIG.4illustrates another view of file browser interface200, which again illustrates grouping215. However, inFIG.4, a particular content item, content item203, is selected, causing details pane202to reflect only information relevant to the selected content item203. Details pane202now includes preview section216showing preview213of the selected content item203. Comments section210illustrates comment219given on the selected content item203and provides an interface223to provide new comments regarding the selected content item. Activity section206shows only activity pertinent to the selected content item. FIG.5Aillustrates an example method for populating details pane202with information relevant to content items displayed in content item listing section204. A file browser application can include a content items listing section204showing content items locally stored on client device150in local content storage and/or showing content items stored in an online service such as content management system110or secondary service170. The file browser application also includes details pane202that is populated with data by details pane service162.
Details pane service162is responsible for populating details pane202with information relevant to content items in content items listing section204, and is responsible for reacting to inputs received within details pane202. Details pane service162can identify (302) content items displayed in content item listing section204(or all content items in a directory that is open in the file browser application) and can retrieve (304) existing details for content items in content item listing section204from a cache on client device150. In some embodiments, the cache can be part of storage index164. The cache is a collection of previously received information regarding content items previously displayed or that may be displayed in file browser interface200. In some embodiments, the cache may be populated when details pane service162requests information regarding a content item from one or more sources. In some embodiments, the cache may be populated when notification service117sends notifications regarding a content item to client application152. As introduced above, notification service117can receive information regarding details for a content item and notify client devices150. In some embodiments, notification service117is configured to receive raw event data and translate the raw event data into details that are more meaningful to users, or to aggregate the details to provide a better user experience. As addressed herein, notification service117can collect information regarding any file-level event regarding a content item. Notification service117can also access information regarding actions taken by collaborators with whom a content item is shared with respect to the content item. Notification service117can also access information regarding comments made with respect to a content item, and information regarding timestamps pertaining to when a content item was last opened, created, modified, etc. In addition to accessing this information, notification service117can also translate notifications and/or aggregate notifications pertaining to a content item or a collection of content items to provide more meaning or a better user experience to users. For example, a poor user experience might result from sending too many notifications to client devices150, such as might happen if a collection of content items was newly shared. In such an example, and in some implementations, notifications could be sent regarding every content item in the collection, which may result in many notifications. In other examples, notification service117can send a single notification regarding the collection that was shared, or send a notification stating that many content items had been shared. While notification service117can aggregate notifications prior to sending the notifications to client device150, in some embodiments, details pane service162can request raw data (not aggregated) for storage in cache so that details pane service162can make its own aggregation decisions based on the raw data. In addition to retrieving (304) details from the cache, details pane service162can request new details (306) for content items in content item listing section204. These details may be for activities and comments that have occurred after the last recorded activity details stored in the cache. As noted above, notification service117can perform an aggregating function.
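Before turning to that aggregation, a minimal sketch of the cache-then-fetch retrieval in steps 302, 304, and 306 might look as follows; the class and helper names (Detail, DetailsCache, fetch_details_since) are illustrative assumptions rather than names used in this description.

```python
from dataclasses import dataclass, field

@dataclass
class Detail:
    content_item_id: str
    kind: str        # e.g., "add", "edit", "comment", "share"
    actor: str
    timestamp: float

@dataclass
class DetailsCache:
    """Client-side store of previously received details (illustrative)."""
    entries: dict = field(default_factory=dict)  # item id -> list[Detail]

    def latest_timestamp(self, item_id):
        return max((d.timestamp for d in self.entries.get(item_id, [])),
                   default=0.0)

def populate_details(listed_item_ids, cache, fetch_details_since):
    """Identify (302) listed items, read (304) cached details, and
    request (306) only details newer than the last cached activity."""
    details = []
    for item_id in listed_item_ids:
        details.extend(cache.entries.get(item_id, []))
        new_details = fetch_details_since(item_id,
                                          cache.latest_timestamp(item_id))
        cache.entries.setdefault(item_id, []).extend(new_details)
        details.extend(new_details)
    return details
```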
In some embodiments, details pane service162can additionally aggregate (308) details by further combining details stored in cache, and/or details received in response to the request (306) for new details. Notification service117only aggregates the details included in notifications at the time the notifications are sent to client device150, but details pane service162can aggregate all details received within a particular time period regardless of whether they were received in a single notification or multiple notifications received at different times. For example, details pane service162can aggregate details received within a period of time, e.g., the last week, last three days, today, etc. The details to be aggregated might be stored in cache or received from content management system110in response to a request. The details may already be in partially aggregated form. For example, the details might be in the form of two previously received notifications where the first notification from a first time says that “you added 5 items” and a second notification from a second time says that “you added 2 items.” These partially aggregated notifications can be fully aggregated by details pane service162into a detail that says “You added 7 items.” Details pane service162can aggregate details based on any common criteria. For example, details can be aggregated based on an actor, i.e., the person performing an action. Such a detail would be “You commented on 9 items in Dropbox”244(FIG.6A). Details can be aggregated based on a common folder or content item, such as “Assets (Folder with 292 items was added . . . ”246(FIG.2), or “DCS17043_JPG was commented on by you, Tom, & 3 others”207(FIG.3). Details can be aggregated by time, such as “312 items were added to Dropbox since Monday . . . ”246(FIG.6A). Details pane service162can perform several types of aggregation. A first type of aggregation is a true aggregation, where details all pertaining to the same criteria can be grouped together. An example of true aggregation is “You commented on 9 items in Dropbox”244(FIG.6A), where the number of files “you commented on” has been summed. A second type of aggregation is debouncing, where the same or similar action has been performed repetitively. For example, when a user has made multiple edits and saves to the same content item, the detail can be listed as a single activity detail such as “Weekly To-do List . . . was edited by you”248(FIG.2). The user (“you”) may have made several edits, but the activity detail just states that the file was edited. A third type of aggregation is reclassification, where raw events pertaining to a content item do not reflect a user's impression of the action. A common type of reclassification pertains to a delete of a content item, closely followed by an add of a content item. Given the sequence of delete then add of the same content item, and that the two events happened close in time, it is highly likely that the user moved the content item. Therefore, in reclassification, details pane service162can reclassify delete-then-add events as move events. In some embodiments, details pane service162can also choose an order (308) in which to display (310) details in details pane202. Details pane service162can order the details for display according to a relevance function, as described further below.
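A minimal sketch of these three aggregation types, reusing the illustrative Detail records from the earlier sketch; the five-second move window is an assumed threshold, not one given in this description.

```python
MOVE_WINDOW = 5.0  # seconds; assumed threshold for delete-then-add pairs

def true_aggregate(details, actor, kind="comment"):
    """True aggregation: sum details sharing the same criteria,
    e.g. 'You commented on 9 items in Dropbox'."""
    items = {d.content_item_id for d in details
             if d.actor == actor and d.kind == kind}
    return f"{actor} commented on {len(items)} items" if items else None

def debounce(details):
    """Debouncing: collapse repeated edits of one item into one detail."""
    seen, collapsed = set(), []
    for d in sorted(details, key=lambda d: d.timestamp):
        key = (d.content_item_id, d.kind, d.actor)
        if d.kind == "edit" and key in seen:
            continue  # same action repeated; report the edit only once
        seen.add(key)
        collapsed.append(d)
    return collapsed

def reclassify_moves(details):
    """Reclassification: a delete closely followed by an add of the
    same item is rewritten as a single move event."""
    out, last = [], {}
    for d in sorted(details, key=lambda d: d.timestamp):
        prev = last.get(d.content_item_id)
        if (d.kind == "add" and prev is not None and prev.kind == "delete"
                and d.timestamp - prev.timestamp <= MOVE_WINDOW):
            out.remove(prev)
            d = Detail(d.content_item_id, "move", d.actor, d.timestamp)
        last[d.content_item_id] = d
        out.append(d)
    return out
```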
For example, activities or comments can be ordered based on any one or more factors such as when the detail including the activity or comment was received, when the content item was last interacted with (viewed, edited, commented on, etc.), volume of activity or comments for the content item, number of collaborators, explicit user input indicating interest in a content item, etc. After details pane service162has aggregated and ordered (308) cached and received details, details pane service162can cause display (310) of the details in details pane202. In some embodiments, details pertaining to the same event can be aggregated based on a context of a folder that is selected. For exampleFIG.5Billustrates an example method for presenting activity details according to the relevant context of the folder that is selected. The method begins when details pane service162determines that a folder is presented (314), and presents (316) aggregated details for content items in the presented folder and subfolders (as described in steps304,306,308, and310ofFIG.5A). For example,FIG.6Aillustrates an embodiment showing a root folder205in file browser interface200. Details pane service162has populated details pane202with aggregated details including details244and246to accompany the content items displayed in content items section204. (WhileFIG.6Aonly shows aggregated details in details pane202, this is to provide a simplified example only, and it should be appreciated that details pane202can include non-aggregated details, such as details specific to a particular event on a particular content item.) SinceFIG.6Aillustrates file browser interface200showing root folder205, details pane202includes aggregated details that pertain to the entire directory. The details can be grouped according to any common characteristic. For example, aggregated detail244is grouped based on content items that have been commented on by “you,” and aggregated detail246is grouped based on content items that have been added to root folder205(or its subfolders) since Monday. Aggregated details in details pane202can be selected. As shown inFIG.5B, in some embodiments, details pane service162can determine that an aggregated detail has been selected (315). After details pane service162has determined that an aggregated detail has been selected (315), details pane service162can highlight (317) any content items that are relevant to the selected aggregated detail. For example,FIG.6Bshows an example file browser interface200showing aggregated detail244as having been selected. In accordance with step315inFIG.5B, content items229and209are highlighted (seeFIG.6B) to reflect that the content items referred to by aggregated detail244are located within the highlighted folders. In some embodiments, if a content item that was referred to by aggregated detail244was stored directly in root directory205, the content item itself would also be highlighted. As noted above, aggregated details can be contextualized according to the folder that is presented in file browser interface200. InFIG.5B, details pane service162can determine (318) that another folder has been selected, and can present (320) details that are contextualized for the selected folder. This contextualization is illustrated by comparingFIG.6AwithFIG.6C.FIG.6Apresents Dropbox root folder205, whileFIG.6Cpresents folder Design Library209, which is a subfolder of Dropbox root folder205. 
Accordingly,FIG.6Aincludes aggregated details pertaining to all content items in Dropbox root folder205, whileFIG.6Cincludes aggregated details pertaining to content items in Design Library209or its subfolders. For example, inFIG.6A, aggregated detail244pertains to 9 content items that “you” have commented on, but inFIG.6C, aggregated detail249pertains to only 3 items that “you” have commented on in “Design Library.” The context of the aggregated detail that pertains to content items that “you” have commented on has changed fromFIG.6AtoFIG.6Cto pertain to only those content items that are in Design Library209. Specifically, aggregated detail244refers to 9 content items, while aggregated detail249refers to 3 of those 9 content items. Aggregated detail249refers to 3 content items when only two content items are shown (241,243) because some or all of the 3 content items are located within folder “Digital”241. Likewise, aggregated detail251has been contextualized for presentation of Design Library209inFIG.6Cby referring to 294 items (compared to 312 items in aggregated detail246inFIG.6A), and by referring directly to subfolder “Digital”241(compared to the path “Design Library→Digital” in aggregated detail246inFIG.6A). Details pane202can determine contextualization of the aggregated details on demand (e.g., after a folder has been selected), or can pre-process the contextualizations by creating and storing contextualizations for each (sub)directory and storing this table in cache. In addition to contextualizing aggregated details according to the folder displayed in file browser interface200, in some embodiments, details pane service162can present different details, alternate groupings of details, and alternate sections for organizing and interacting with details, etc., based on the folder displayed in file browser interface200. Details pane service162can further determine whether content items listing section204is displaying a root folder or a subfolder, or whether a particular content item is selected. When content items listing section204is displaying a root folder, details pane service162might display only activity details pertaining to content items in the root folder. For example, as seen inFIG.2, file browser interface200is displaying root folder205, and details pane202is only displaying activity section206. WhileFIG.2does not display a separate comments section210(as seen inFIG.3) or preview section216(as seen inFIG.4), activity section206does include comments, and displays thumbnail images next to activities. By displaying fewer detail sections, details pane202can avoid becoming overcrowded. In some embodiments, details pane service162can determine that a directory displayed in content items listing section204includes too many content items, and display only certain detail sections (e.g., activities, comments, previews, etc.). As when a root folder is displayed, showing fewer detail sections keeps details pane202from becoming overcrowded. In some embodiments, when content items listing section204is displaying a subfolder, is displaying fewer than a threshold number of content items, or there are fewer than a threshold number of total activities to be displayed, details pane service162may display additional types of information. For example, details pane service162can display comments section210or previews section216for content items or collections of content items displayed in content items listing section204.
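Returning to the contextualization of aggregated details described above, a minimal sketch of recounting one such detail for the folder currently displayed; it assumes each detail record carries the path of the content item it describes, an illustrative field rather than one named in this description.

```python
def contextualize_comment_detail(details, current_folder, actor="You"):
    """Recount an aggregated detail for the folder being displayed, so
    '9 items' at the root becomes '3 items' inside a subfolder."""
    prefix = current_folder.rstrip("/") + "/"
    in_scope = {d.path for d in details
                if d.kind == "comment" and d.path.startswith(prefix)}
    folder_name = current_folder.rstrip("/").rsplit("/", 1)[-1]
    return f"{actor} commented on {len(in_scope)} items in {folder_name}"
```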
For example,FIG.7illustrates an example method for receiving a selection of a content item, or of a detail pertaining to the content item, and updating details pane202with details specific to the content item. Details pane service162can determine (322) that an individual content item within content items listing section204or a detail in details section202pertaining to the content item has been selected. Either selection of the content item or selection of the detail for the content item can be treated as a selection of the content item. When an individual content item has been selected (such as, e.g., content item203inFIG.4), details pane service162can take actions to display a preview of the selected individual content item (such as preview213of content item203) along with comments and activity for the selected individual content item. Thus, details pane service162can request (324) a content item preview for the selected individual content item, and display (326) a content item preview in preview section216(as seen, e.g., inFIG.4). In some embodiments, the content item preview can be available from a local cache (as explained in step304ofFIG.5A); in some embodiments, the preview can be available from content management system110(as explained in step306ofFIG.5A); and in some embodiments, the preview can be dynamically rendered by details pane service162(as explained further with respect toFIG.10). Additionally, details pane service162can present (328) other sections in details pane202that are specific to the content item, such as comments section210and activity section206, as seen, e.g., inFIG.4, which includes comment219and activities that pertain to selected individual content item203. FIG.8illustrates another example of file browser interface200. InFIG.8, file browser interface200displays the contents of subfolder “fall 2017 issue” as seen in directory path224.FIG.8further shows two additional subfolders within content items listing section204. The first of those subfolders is subfolder220“work in progress,” and the second subfolder is subfolder225“resources.” Within each of subfolder220and subfolder225, additional content items are listed. One such additional content item is folder211“fonts,” which has been selected. Responsive to the selection of folder211, details pane service162presents details pane202with preview section216showing preview227of folder211. Details pane202also includes comments section210and activity section206listing any comments and activity for folder211. FIG.9illustrates another example of file browser interface200. InFIG.9, file browser interface200displays the contents of subfolder230“fall 2017 issue” and further shows two additional subfolders within content items listing section204. The first of these subfolders is subfolder235“latest,” and the second is subfolder220“work in progress.” InFIG.9, content item217has been selected, and details pane service162displays details pane202showing preview section216with a preview of content item217. This preview is a dynamic preview wherein preview section216has rendered editable contents of content item217. For example, as illustrated inFIG.9, preview section216shows a to-do list that can be interacted with directly from details pane202without opening content item217in a default application typically used to open content item217. Interactions with content displayed in preview section216of details pane202will be discussed in further detail with respect toFIG.11. FIG.10AandFIG.10Billustrate examples of detached details pane222.
In these embodiments, details pane222is displayed as its own separate window, as opposed to details pane202, which is displayed as a portion of file browser interface200. While detached details pane222is displayed as its own separate window, it can function in the same manner as details pane202and function in coordination with file browser interface200. Likewise, any features illustrated inFIG.10AandFIG.10Bcan also be present in any of the attached views of details pane202illustrated in other figures. FIG.10Aillustrates preview section216showing a preview of a content item. Preview section216also shows metadata associated with the content item. Specifically, preview section216shows when the content item was last edited228next to information regarding the number of times the content item was opened226. Displaying this information side-by-side is unique and useful. Showing information regarding how recently a content item has been edited228is useful on its own, but when combined with information regarding how often the file has been opened226, it provides important additional context. Together, this information shows not only that a content item is currently relevant, but also that it has been repeatedly relevant, because it has been opened repeatedly. Also, a high count for the number of times the content item has been opened is an indication that the content item is likely shared. This can be confirmed by viewing activity section206, which gives more context regarding who has opened the content item. One unique aspect of displaying last-edited information228next to content item open count226is that this data may be derived from diverse sources. For example, last-edited information228may be derived from an operating system of client device150, while content item open count226may be sourced from content management system110or another platform that manages shared content items. Another feature illustrated inFIG.10Ais version information displayed within activity section206. For example, in some embodiments, activity with respect to a content item can be divided into activity pertaining to different versions of the content item. As illustrated inFIG.10A, activity section206displays activity for a first version234and activity for a second version236. In some embodiments, the content item may be associated with an explicit input declaring that it should be designated a new version. In some embodiments, the content item may be designated a new version after the content item has been edited and saved. FIG.10Balso illustrates an example of detached details pane222. InFIG.10B, comments section210reflects comments on the content item shown in preview section216. The comments shown in comment section210are shown collapsed so that a comment thread does not overtake the entire detached details pane222, or other comment threads. In some embodiments, when a comments thread is collapsed, details pane service162can display a subset of the comments, such as a first and last comment in the comments thread, and can provide information regarding the existence of additional comments242. In some embodiments, the information regarding the existence of additional comments242can be actionable to receive an input and expand to show one or more of the collapsed comments. FIG.11illustrates an example method pertaining to displaying a preview of a content item by details pane service162when file browser interface200receives (350) a selection of a comment that is specific to a particular content item.
In some embodiments, when details pane202is displaying a view including comments section210, details pane202is displaying a view specific to a particular content item, whether that content item is a subfolder or a file. In some embodiments, when details pane202is displaying a view that is specific to a particular content item, details pane202also includes a preview of the particular content item. A preview can include a preview representing a content item or a preview of a portion of a content item. In some examples, when selection of a specific comment or content item is received (350), details pane service162can present a preview showing the location of the comment in the content item. In some embodiments, the preview showing the location to which the comment is anchored within the content item can replace the preview representing the entire content item in preview section216. In some embodiments, the preview showing the location to which the comment is anchored within the content item can be shown in comments section210, or in a separate pop out box or window (as illustrated inFIG.12A,FIG.12B, andFIG.13). In some embodiments, a comment anchor can be highlighted text, or a selected point, or a selected area within the content item. Details pane service162can determine (356) whether the content item is of a type that is dynamically renderable as addressed in greater detail below. As such, when details pane service162determines (356) that the preview is dynamically renderable, details pane service162dynamically renders (361) a preview from a copy of the content item (whether stored locally or in cloud storage) that shows the location of the selected comment in the content item. A dynamic preview can be rendered by a content authoring applet that is part of, or associated with details pane service162. In some embodiments, the content authoring applet can read contents of content items, locate a comment within the content item, and render the appropriate portion of the content item as the preview. In some embodiments, the content authoring applet can provide a web view through which the content authoring applet can render a preview from a copy of the content item stored at content management system110. Note, while a local copy of the content item may be available on client device150, the web view can still render the preview based on content management system copy of the content item. In some embodiments, a dynamic preview can receive edits directly in the preview. In some embodiments, the dynamic preview can receive and respond to inputs to navigate within the content item. In some embodiments, details pane service162might not be able to render dynamic previews of some content items. When details pane service162determines (356) that it is not able to dynamically render a preview, details pane service162can request (358) a static preview showing a portion of the content item and display (360) a preview of the content item showing a location within the content item to which the selected comment is anchored. Dynamic previews are further addressed with respect toFIG.16, below. The static previews can be received from content management system110which can provide a service that opens content items and creates previews of the content item surrounding a selected comment within the content item. In some embodiments, this function can be performed in advance and pre-processed previews can be sent to client device150to be cached. 
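A minimal sketch of the dynamic-versus-static decision in steps 356 through 361 above; the renderable-type check and helper callables are illustrative assumptions, since the description defines this behavior only functionally.

```python
import os

DYNAMIC_TYPES = {".md", ".paper", ".gdoc"}  # assumed renderable types

def preview_for_comment(item_path, anchor, render_dynamic, request_static):
    """Display the location within a content item to which a selected
    comment is anchored.

    If the item is dynamically renderable (356), render the anchored
    portion from a copy of the item (361); otherwise request (358) and
    display (360) a static preview of the surrounding portion from the
    content management system.
    """
    _, ext = os.path.splitext(item_path)
    if ext.lower() in DYNAMIC_TYPES:
        return render_dynamic(item_path, anchor)
    return request_static(item_path, anchor)
```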
In some embodiments, the previews can be created and/or sent to client device150on demand and at the request of details pane service162. In some embodiments, previews showing a comment in a content item might be automatically included in comments section210. In some embodiments, while a preview of a content item showing the location of a comment within the content item is being displayed, details pane service162can receive an input selecting (350) another comment, resulting in presentation of another preview (360,361) showing the location of that other comment within the content item. FIG.12AandFIG.12Billustrate two examples of details pane222illustrating example previews showing the location of comments within a content item. WhileFIG.12AandFIG.12Billustrate detached details pane222, it should be appreciated that all of the features illustrated therein apply equally to the details pane in other form factors, such as details pane202. As illustrated inFIG.12AandFIG.12B, detached details pane222includes preview section216showing a general preview applicable to content item217. Comments section210presents a preview showing the location of a comment with respect to content item217. FIG.12AandFIG.12Balso illustrate an expandable preview section216. User interface control232can be activated to expand preview section216as shown inFIG.12Ato reveal metadata regarding content item217. When preview section216is expanded, user interface control232can be selected to collapse preview section216to hide the additional metadata details. FIG.13illustrates file browser interface200including details pane202showing details pertaining to content item221. Preview pane216displays a preview of content item221, and comments section210shows comments pertaining to content item221. InFIG.13, an individual comment, comment256, has been selected, and in response, details pane service162has displayed popout window262to display a preview of the portion258of content item221to which comment256is anchored. FIG.14illustrates an example of content item221opened in a native or default application. As inFIG.13, comment256is shown along with the portion258of the document to which it is anchored. FIG.15illustrates an example of file browser interface200with details pane202showing activity for subfolder230“Fall 2017 Issue”. As illustrated inFIG.15, activity section206includes comments grouped into categories based on the day the activity was recorded: specifically, grouping264for comments recorded “today” and grouping266for comments recorded “yesterday.” Additionally,FIG.15illustrates activity233pertaining to content item217having been selected. Details pane service162can receive the selection of activity233and display details pane popout window268showing details specific to content item217, including a content item preview in preview section216and preview252showing the location at which comments in comments section210are anchored within content item217. As introduced above, some content items may be conducive to presenting dynamic previews. For example, content items written in a markup language, or cloud content items such as a content item that is stored in an online document service (e.g., a Paper document, by Dropbox Inc., or a Google Doc, by Google Inc.), may be rendered directly by details pane service162. Other content items may be dynamically renderable through the assistance of a service accessible via an API.
Some content items can be dynamically renderable by providing a webview that displays a portion of the content item opened on a content management system. Dynamically renderable previews may be interacted with directly in the preview pane such that edits or additional comments can be made without opening the content item in a default application used to edit the content item. FIG.16illustrates an example method for rendering and interacting with dynamically rendered previews. When details pane service162determines that a preview should be presented for a content item for which a preview is dynamically renderable, details pane service162can dynamically render (362) the preview from a copy of the content item. As noted above, dynamically renderable previews can receive interactions and edits directly in the preview. As such, details pane service162can receive (364) an interaction with the dynamically rendered preview by way of a user input into the dynamically rendered preview displayed in details pane202(or popout window262or268). Details pane service162can update (366) the preview and the content item itself in response to the received (364) interactions. In this way, quick changes can be made to the content item without having to open the content item in a default application. The changes can be reflected directly in the preview, which remains available for continued interaction until the user opens the content item in a native application, selects another content item, or closes file browser interface200. FIG.17AandFIG.17Bshow example details panes222with dynamically rendered content item previews shown in preview section216. In the example inFIG.17A, the dynamically rendered document preview is shown in a collapsed preview section216, and inFIG.17Bthe dynamically rendered preview is shown in an expanded preview section216. As addressed with respect toFIG.12AandFIG.12B, preview section216can be switched from collapsed to expanded views through selection of user interface control232. The dynamically rendered preview shows a checklist that can be interacted with to provide updates to both the preview and the underlying content item, as explained with respect toFIG.16. In some embodiments, the preview can be taken from metadata stored with the content item. For example, some content items can be stored with workflow information, such as a to-do list. In such embodiments, a possible preview would include displaying the to-do list. In some embodiments, the to-do list is a portion of the content item itself. In some embodiments, the to-do list can be stored in a companion metadata file. In some embodiments, where the preview is a dynamically rendered preview, it can be possible not only to make changes to the portion of the content item shown in preview section216, but also to navigate to other sections of the document. If the content item is a web document, such as a web page, it may be possible to navigate Internet content by clicking on links in the content item. Dynamically rendered previews can be provided by including one or more content authoring applets within or associated with details pane service162. The content authoring applets can include enough code to render specific document types, or at least portions of specific document types, within preview section216of details pane202. In some embodiments, the dynamically rendered preview can be displayed within a frame under the control of the content authoring applet.
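A minimal sketch of the render-interact-update loop of FIG.16 (steps 362 through 366); the applet interface is an illustrative assumption.

```python
def dynamic_preview_loop(content_item, applet, next_user_input):
    """Render (362) a dynamic preview, then apply interactions (364)
    so that both the preview and the underlying item are updated (366)."""
    preview = applet.render(content_item)              # step 362
    while (edit := next_user_input(preview)) is not None:
        content_item.apply(edit)                       # update the item (366)
        preview = applet.render(content_item)          # refresh preview (366)
    return content_item
```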
In some embodiments, the content authoring applet can include code to render the entire content item, and when rendering the content item, can format the contents of the content item for display within preview section216. Additionally, the applet can permit scrolling through the contents of the content item. In some embodiments, the content authoring applet can include only limited features as compared to a default application typically used to view and edit the type of content item. For example, only basic editing and commenting features may be available. In some embodiments, the content authoring applet can be configured to only render limited portions of the content item. For example, only portions of the content item to which comments are attached, or only portions containing a checklist, might be renderable. In some embodiments, portions of the content item can be tagged to be available to be displayed as a dynamically renderable preview. In some embodiments, the content authoring applet can be configured to render a web view through communication with web interface service124. Such embodiments can be useful because they rely on content management system110having the capability to render diverse content items in a web view. For example, content management system110can have the capability to open files specific to a particular document editor, or a particular image editor, or a particular spreadsheet application, and can provide a web view of the content item using web interface service124. In such embodiments, even though a content item has been selected on client device150, and a version may be stored locally, it will be a version of the content item stored at content management system110that is opened for the purposes of generating and interacting with the dynamic preview. Any changes made to the content item can be saved to content management system110and synchronized to the copy stored on client device150. Similarly, in embodiments wherein the content item is a collaborative content item, the content authoring applet can be configured to render a web view of the online service that supports the content item. For example, a collaborative content item can be rendered by the content authoring applet communicating with collaborative document service134. While rendering dynamic content item previews by a content authoring applet hosting a web view of the content item can be useful in many circumstances, it is less useful when client device150is not connected to the Internet. In such embodiments, even if the default behavior of the content authoring applet is to host a web view in coordination with a web server, the content authoring applet can contain a local resource library that is effective to permit local content items to be rendered without coordination with a web server. In some embodiments, when documents are rendered in this fashion, the formatting of the document or the features available may be limited. When a dynamically renderable preview receives an edit, the edit can be saved in the content item itself (a locally stored version if the dynamic preview is rendered from the locally stored version, or a content management system stored version if the dynamic preview is provided in a web view).
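A minimal sketch of routing a preview edit to the copy from which the preview was rendered, consistent with the parenthetical above; the names are illustrative.

```python
def save_preview_edit(edit, rendered_from_web_view, local_copy, cms_client):
    """Persist an edit made in a dynamically renderable preview.

    Edits made in a web-view preview go to the content management
    system copy; edits made in a locally rendered preview go to the
    local copy. Synchronization, described next, reconciles the rest.
    """
    if rendered_from_web_view:
        cms_client.save(edit)    # server copy backs the web view
    else:
        local_copy.apply(edit)   # local copy backs the local render
```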
In some embodiments, the content items are managed by content management system110, and when changes to a version of a content item are saved as described above, these changes can be synchronized with other copies saved at content management system110or other client devices150. In some embodiments, a better user experience can be provided if activities and comments to be provided in details pane202are cached before they are required, or pre-fetched, allowing the activities and comments to be provided more quickly.FIG.18illustrates an example method for prefetching activities and comments to be displayed in details pane202. Details pane service162displays (402) details pane202for a folder or content item within a folder. Details pane service162can pre-fetch (404) updates to activities and/or comments for content items that are stored in a lower-level subfolder than the folder for which information is currently displayed in details pane202. Once details pane service162receives an input (406) within file browser interface200to navigate to a different subfolder, details pane service162can display (408) activities and/or comments previously stored in a cache. Details pane service162can continue to request and receive (410) updates to activities and/or comments to update details pane202and the cache. In some embodiments, notification service117can push new activities and new comments to client device150, and these can also be stored in cache. In embodiments wherein activities and comments are pre-fetched as described above, it may be beneficial to utilize a more intelligent system than one that simply requests and downloads all activities and comments for all content items in the next lower level of subfolders. Some folders may still have many content items in the next level of subfolders, and thus downloading activities and comments for these content items may be time-consuming and require a lot of bandwidth. Furthermore, users may navigate through a directory structure more quickly than the comments and activities can be pre-fetched. Accordingly, in some embodiments, the method illustrated inFIG.18can be modified to pre-fetch activities and comments according to a pre-fetching priority score. A pre-fetching priority score can be determined for each content item to ascertain a relative priority for pre-fetching comments and activities for content items. In some embodiments, the pre-fetching priority score can be dynamically updated as factors relevant to determining the pre-fetching priority score change. The pre-fetching priority score can be an estimate of the probability that the user will want to access each particular content item on their client device150. In some embodiments, the pre-fetching priority score is based on recency criteria, such as a last-opened date on the client device, a last-opened date on another device, a last-modified metadata value, a last-shared date, how recently the content item was opened or edited by another user with whom the content item is shared, etc. In some embodiments, the pre-fetching priority score is based on a value representing whether, or the degree to which, a user explicitly marks a content item as subjectively important, as a favorite item, etc. In some embodiments, the pre-fetching priority score is based on a value representing how frequently a content item is changed or accessed in the content management system.
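A minimal sketch of combining such criteria into a pre-fetching priority score; the weights and field names are illustrative assumptions, and further factors, including the one described next, could enter as additional terms.

```python
import time

def prefetch_priority(item, now=None):
    """Estimate how likely a user is to want a content item soon
    (illustrative); higher-scoring items are pre-fetched first."""
    now = now if now is not None else time.time()
    days_idle = (now - item.last_opened) / 86400.0
    recency = 1.0 / (1.0 + days_idle)                 # recency criteria
    favorite = 1.0 if item.marked_favorite else 0.0   # explicit interest
    churn = min(item.changes_last_week / 10.0, 1.0)   # change frequency
    return 0.5 * recency + 0.3 * favorite + 0.2 * churn
```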
In some embodiments, the pre-fetching priority score is based on a value representing how many user accounts are interacting with a content item over a period of time. The pre-fetching priority score can be based on any combination of the above or other factors. FIG.19shows an example of computing system500, which can be for example any computing device making up client device150, content management system110or any component thereof in which the components of the system are in communication with each other using connection505. Connection505can be a physical connection via a bus, or a direct connection into processor510, such as in a chipset architecture. Connection505can also be a virtual connection, networked connection, or logical connection. In some embodiments, computing system500is a distributed system in which the functions described in this disclosure can be distributed within a datacenter, multiple datacenters, a peer network, etc. In some embodiments, one or more of the described system components represents many such components each performing some or all of the function for which the component is described. In some embodiments, the components can be physical or virtual devices. Example system500includes at least one processing unit (CPU or processor)510and connection505that couples various system components including system memory515, such as read only memory (ROM)520and random access memory (RAM)525to processor510. Computing system500can include a cache of high-speed memory512connected directly with, in close proximity to, or integrated as part of processor510. Processor510can include any general purpose processor and a hardware service or software service, such as services532,534, and536stored in storage device530, configured to control processor510as well as a special-purpose processor where software instructions are incorporated into the actual processor design. Processor510may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric. To enable user interaction, computing system500includes an input device545, which can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech, etc. Computing system500can also include output device535, which can be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input/output to communicate with computing system500. Computing system500can include communications interface540, which can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed. Storage device530can be a non-volatile memory device and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs), read only memory (ROM), and/or some combination of these devices. 
The storage device530can include software services, servers, services, etc. When the code that defines such software is executed by the processor510, it causes the system to perform a function. In some embodiments, a hardware service that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as processor510, connection505, output device535, etc., to carry out the function. For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software. Any of the steps, operations, functions, or processes described herein may be performed or implemented by a combination of hardware and software services, alone or in combination with other devices. In some embodiments, a service can be software that resides in memory of a client device and/or one or more servers of a content management system and performs one or more functions when a processor executes the software associated with the service. In some embodiments, a service is a program, or a collection of programs, that carries out a specific function. In some embodiments, a service can be considered a server. The memory can be a non-transitory computer-readable medium. In some embodiments, the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se. Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer-readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of the computer resources used can be accessible over a network. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, solid state memory devices, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on. Devices implementing methods according to these disclosures can comprise hardware, firmware, and/or software, and can take any of a variety of form factors. Typical examples of such form factors include servers, laptops, smart phones, small form factor personal computers, personal digital assistants, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips, or in different processes executing in a single device, by way of further example.
The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures. Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further, although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims. For example, such functionality can be distributed differently or performed in components other than those identified herein.
90,734
11860824
DETAILED DESCRIPTION The present invention will now be described more fully hereinafter with reference to the accompanying drawings in which example embodiments of the invention are shown. However, the invention may be embodied in many different forms and should not be construed as limited to the representative embodiments set forth herein. The exemplary embodiments are provided so that this disclosure will be both thorough and complete and will fully convey the scope of the invention and enable one of ordinary skill in the art to make, use, and practice the invention. Unless described or implied as exclusive alternatives, features throughout the drawings and descriptions should be taken as cumulative, such that features expressly associated with some particular embodiments can be combined with other embodiments. Unless defined otherwise, technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the presently disclosed subject matter pertains. It will be understood that relative terms are intended to encompass different orientations or sequences in addition to the orientations and sequences depicted in the drawings and described herein. Relative terminology, such as “substantially” or “about,” describes the specified devices, materials, transmissions, steps, parameters, or ranges, as well as those that do not materially affect the basic and novel characteristics of the claimed inventions as a whole (as would be appreciated by one of ordinary skill in the art). The terms “coupled,” “fixed,” “attached to,” “communicatively coupled to,” “operatively coupled to,” and the like refer to both: (i) direct connecting, coupling, fixing, attaching, or communicatively coupling; and (ii) indirect connecting, coupling, fixing, attaching, or communicatively coupling via one or more intermediate components or features, unless otherwise specified herein. “Communicatively coupled to” and “operatively coupled to” can refer to physically and/or electrically related components. As used herein, the terms “enterprise” or “provider” generally describe a person or business enterprise that hosts, maintains, or uses the disclosed systems and methods. The term “feedback” is used to generally refer to alphanumeric text in digital form and can be used interchangeably with the terms alphanumeric feedback data, alphanumeric text feedback, alphanumeric textual feedback data, feedback data, textual feedback data, textual data, and text feedback data. The term “users” is at times used interchangeably with the term “feedback sources” and refers to humans that generate linguistic expressions of ideas included in the feedback data that can be processed using artificial intelligence and natural language processing technologies. Embodiments are described with reference to flowchart illustrations or block diagrams of methods or apparatuses where each block, or combinations of blocks, can be implemented by computer-readable instructions (i.e., software). The term apparatus includes systems and computer program products. The referenced computer-readable software instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a particular machine. The instructions, which execute via the processor of the computer or other programmable data processing apparatus, create mechanisms for implementing the functions specified in this specification and attached figures.
The computer-readable instructions are loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions, which execute on the computer or other programmable apparatus, provide steps for implementing the functions specified in the attached flowchart(s) or block diagram(s). Alternatively, computer software implemented steps or acts may be combined with operator or human implemented steps or acts in order to carry out an embodiment of the disclosed systems and methods. The computer-readable software instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner. In this manner, the instructions stored in the computer-readable memory produce an article of manufacture that includes the instructions, which implement the functions described and illustrated herein. Disclosed are systems and methods that automatically classify, filter, and reduce large volumes of feedback data as a function of time using artificial intelligence technology. The aggregated feedback data is reduced by representing the feedback data as sets of descriptors corresponding to one or more time periods that are displayed on a graphical user interface. The descriptors effectively reduce, or summarize, the feedback data as a function of time. Thus, a provider can expediently review feedback data and identify trends or changes over time. Such functionality in turn allows providers to proactively address problems and develop solutions rather than reactively addressing problems after they arise. System Level Description As shown inFIG.1, a hardware system100configuration according to one embodiment generally includes a user110that benefits through use of services and products offered by a provider through an enterprise system200. The user110accesses services and products by use of one or more user computing devices104&106. The user computing device can be a larger device, such as a laptop or desktop computer104, or a mobile computing device106, such as smart phone or tablet device with processing and communication capabilities. The user computing device104&106includes integrated software applications that manage device resources, generate user interfaces, accept user inputs, and facilitate communications with other devices, among other functions. The integrated software applications can include an operating system, such as Linux®, UNIX®, Windows®, macOS®, iOS®, Android®, or other operating system compatible with personal computing devices. The user110can be an individual, a group, or an entity having access to the user computing device104&106. Although the user110is singly represented in some figures, at least in some embodiments, the user110is one of many, such as a market or community of users, consumers, customers, business entities, government entities, and groups of any size. The user computing device includes subsystems and components, such as a processor120, a memory device122, a storage device124, or power system128. The memory device122can be transitory random access memory (“RAM”) or read-only memory (“ROM”). The storage device124includes at least one of a non-transitory storage medium for long-term, intermediate-term, and short-term storage of computer-readable instructions126for execution by the processor120. 
For example, the instructions126can include instructions for an operating system and various integrated applications or programs130&132. The storage device124can store various other data items134, including, without limitation, cached data, user files, pictures, audio and/or video recordings, files downloaded or received from other devices, and other data items preferred by the user or related to any or all of the applications or programs. The memory device122and storage device124are operatively coupled to the processor120and are configured to store a plurality of integrated software applications that comprise computer-executable instructions and code executed by the processing device120to implement the functions of the user computing device104&106described herein. Example applications include a conventional Internet browser software application and a mobile software application created by the provider to facilitate interaction with the provider system200. The integrated software applications also typically provide a graphical user interface (“GUI”) on the user computing device display screen140that allows the user110to utilize and interact with the user computing device. Example GUI display screens are depicted in the attached figures. The GUI display screens may include features for displaying information and accepting inputs from users, such as text boxes, data fields, hyperlinks, pull-down menus, check boxes, radio buttons, and the like. One of ordinary skill in the art will appreciate that the exemplary functions and user-interface display screens shown in the attached figures are not intended to be limiting, and an integrated software application may include other display screens and functions. The processing device120performs calculations, processes instructions for execution, and manipulates information. The processing device120executes machine-readable instructions stored in the storage device124and/or memory device122to perform methods and functions as described or implied herein. The processing device120can be implemented as a central processing unit (“CPU”), a microprocessor, a graphics processing unit (“GPU”), a microcontroller, an application-specific integrated circuit (“ASIC”), a programmable logic device (“PLD”), a digital signal processor (“DSP”), a field programmable gate array (“FPGA”), a state machine, a controller, gated or transistor logic, discrete physical hardware components, and combinations thereof. In some embodiments, particular portions or steps of methods and functions described herein are performed in whole or in part by way of the processing device120. In other embodiments, the methods and functions described herein include cloud-based computing such that the processing device120facilitates local operations, such as communication functions, data transfer, and user inputs and outputs. The user computing device104&106incorporates an input and output system136operatively coupled to the processor device120. Output devices include a display140, which can be, without limitation, a touch screen of the mobile device106that serves as both an output device and an input device. The touch-screen display provides graphical and text outputs for viewing by one or more users110while also functioning as an input device, by providing virtual buttons, selectable options, a virtual keyboard, and other functions that, when touched, control the user computing device. The user output devices can further include an audio device, like a speaker144.
The user computing device104&106may also include a positioning device108, such as a global positioning system device (“GPS”) that determines a location of the user computing device. In other embodiments, the positioning device108includes a proximity sensor or transmitter, such as an RFID tag, that can sense or be sensed by devices proximal to the user computing device104&106. A system intraconnect138, such as a bus system, connects various components of the mobile device106. The user computing device104&106further includes a communication interface150. The communication interface150facilitates transactions with other devices and systems to provide two-way communications and data exchanges through a wireless communication device152or wired connection154. Communications may be conducted via various modes or protocols, such as through a cellular network or wireless communication protocols using IEEE 802.11 standards. Communications can also include short-range protocols, such as Bluetooth or near-field communication protocols. Communications may also or alternatively be conducted via the connector154for wired connections, such as by USB, Ethernet, and other physically connected modes of data transfer. To provide access to, or information regarding, some or all the services and products of the enterprise system200, automated assistance may be provided by the enterprise system200. For example, automated access to user accounts and replies to inquiries may be provided by enterprise-side automated voice, text, and graphical display communications and interactions. In at least some examples, any number of human agents210act on behalf of the provider, such as customer service representatives, advisors, managers, and sales team members. Human agents210utilize agent computing devices212to interface with the provider system200. The agent computing devices212can be, as non-limiting examples, computing devices, kiosks, terminals, smart devices such as phones, and devices and tools at customer service counters and windows at POS locations. In at least one example, the diagrammatic representation and above description of the components of the user computing device104&106inFIG.1apply as well to the agent computing devices212. As used herein, the general term “end user computing device” can be used to refer to either the agent computing device212or the user computing device104&106, depending on whether the agent (as an employee or affiliate of the provider) or the user (as a customer or consumer) is utilizing the disclosed systems and methods to segment, parse, filter, analyze, and display feedback data. Human agents210interact with users110or other agents212by phone, via an instant messaging software application, or by email. In other examples, a user is first assisted by a virtual agent214of the enterprise system200, which may satisfy user requests or prompts by voice, text, or online functions, and may refer users to one or more human agents210once preliminary determinations or conditions are made or met. A computing system206of the enterprise system200may include components, such as a processor device220, an input-output system236, an intraconnect bus system238, a communication interface250, a wireless device252, a hardwire connection device254, a transitory memory device222, and a non-transitory storage device224for long-term, intermediate-term, and short-term storage of computer-readable instructions226for execution by the processor device220.
The instructions226can include instructions for an operating system and various software applications or programs230&232. The storage device224can store various other data234, such as cached data, files for user accounts, user profiles, account balances, and transaction histories, files downloaded or received from other devices, and other data items required by or related to the applications or programs230&232.

The network258provides wireless or wired communications among the components of the system100and the environment thereof, including other devices local or remote to those illustrated, such as additional mobile devices, servers, and other devices communicatively coupled to the network258, including those not illustrated inFIG.1. The network258is singly depicted for illustrative convenience, but may include more than one network without departing from the scope of these descriptions. In some embodiments, the network258may be or provide one or more cloud-based services or operations. The network258may be or include an enterprise or secured network, or may be implemented, at least in part, through one or more connections to the Internet. A portion of the network258may be a virtual private network ("VPN") or an Intranet. The network258can include wired and wireless links, including, as non-limiting examples, 802.11a/b/g/n/ac, 802.20, WiMax, LTE, and/or any other wireless link. The network258may include any internal or external network, networks, sub-networks, and combinations of such operable to implement communications between various computing components within and beyond the illustrated environment100.

External systems270and272represent any number and variety of data sources, users, consumers, customers, enterprises, and groups of any size. In at least one example, the external systems270and272represent remote terminals utilized by the enterprise system200in serving users110. In another example, the external systems270and272represent electronic systems for processing payment transactions. The system may also utilize software applications that function using external resources270and272available through a third-party provider, such as a Software as a Service ("SaaS"), Platform as a Service ("PaaS"), or Infrastructure as a Service ("IaaS") provider running on a third-party cloud service computing device. For instance, a cloud computing device may function as a resource provider by providing remote data storage capabilities or running software applications utilized by remote devices.

The embodiment shown inFIG.1is not intended to be limiting, and one of ordinary skill in the art will appreciate that the system and methods of the present invention may be implemented using other suitable hardware or software configurations. For example, the system may utilize only a single computing system206implemented by one or more physical or virtual computing devices, or a single computing device may implement one or more of the computing system206, agent computing device212, or user computing device104&106.

Capturing Feedback Data

Feedback data is generated by users of the provider's system, who are also known as feedback data sources. Feedback data can include alphanumeric text data or "content data," such as a written narrative, that is optionally accompanied by numeric rating scores input by consumers in response to defined queries from a provider (e.g., a request to rate a service on a scale of 1 to 5).
The feedback data is received by a provider system in discrete feedback data packets, where each packet represents, for example, a distinct user review, comment, or message. Inclusive of the alphanumeric content data and a rating score, feedback data can further include, without limitation: (i) feedback content data (e.g., alphanumeric text, such as a written narrative); (ii) rating data (e.g., a numeric score on a discrete scale); (iii) feedback source identifier information (e.g., a user identification number, screen name, or email address); (iv) feedback source location data (i.e., a city, state, or county where the feedback source is located or is domiciled); (v) feedback source attribute data indicating other attributes of the feedback source (e.g., age, gender, etc.); (vi) temporal data representing when the feedback data was created; (vii) a product identifier that indicates a product or service that is the subject of the feedback data; (viii) provider location data that identifies a provider location that is subject to the feedback (i.e., a particular store front or other location for the provider); (ix) provider agent data that identifies a provider agent that may be the subject of the feedback data; (x) provider category data that designates another subject of the feedback data (e.g., an identifier indicating the feedback relates to a provider promotion); (xi) a feedback packet identifier, such as a unique code that identifies the particular review, comment, or communication that comprises the feedback; (xii) a feedback channel identifier that indicates the channel through which the feedback data was received (e.g., a provider website, email, customer service telephone call, etc.); or (xiii) any other information useful for a provider in classifying feedback data.

The feedback data can be captured by the provider system through a variety of channels, such as: (i) input to a provider website; (ii) a written electronic message sent to a provider using an email or instant messaging software application; (iii) a message posted, published, or transmitted through a third-party messaging service, such as a comment posted to a social media platform that is sent directly to, or that "tags," a provider; (iv) telephone calls or voice messages to a provider that are subsequently transcribed using a speech-to-text software application; (v) a message sent through "SMS" or "MMS" text messaging; or (vi) other means of transmitting electronic alphanumeric messages and data to a provider.

A provider may receive numerous feedback data packets from numerous discrete feedback data sources. In one embodiment, the disclosed system and methods were used to capture between 20,000 and 30,000 feedback data packets representing separate voice telephone calls made to a customer complaint telephone line. In another example, the systems and methods were used to automatically process 10,000 reviews concerning a provider software application where such reviews were transmitted through a third-party software application repository (e.g., an "app store").

The feedback data packets can be stored directly to a provider system or stored to a third-party database, such as a cloud service storage or software-as-a-service provider. The feedback data packets are stored to a relational database that maintains the feedback data packets in a manner that permits the feedback data packets to be associated with certain information, such as one or more subject identifiers or sentiment identifiers.
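For illustration only, a feedback data packet of the kind enumerated above might be represented as a simple record, as in the following minimal sketch. The field names, sample values, and the date-range check are assumptions made for the sketch, not a prescribed schema.

    # A sketch of a feedback data packet with a few of the enumerated fields;
    # field names and values are illustrative assumptions, not a schema.
    from datetime import date

    feedback_packet = {
        "packet_id": "fb-000123",          # (xi) feedback packet identifier
        "content": "Locked out of the mobile app again.",  # (i) content data
        "rating": 2,                       # (ii) rating on a 1-5 scale
        "source_id": "user-4411",          # (iii) feedback source identifier
        "created": date(2021, 8, 3),       # (vi) temporal data
        "channel": "app store",            # (xii) feedback channel identifier
    }

    def in_date_range(packet, start, end):
        # Supports date-range retrieval from the relational database.
        return start <= packet["created"] <= end

    print(in_date_range(feedback_packet, date(2021, 8, 1), date(2021, 8, 31)))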
Storage to a relational database further facilitates expedient sorting of the data, such as retrieving feedback data packets having temporal data within a predefined range of dates. As discussed below, a feedback reduction service processes the feedback data using a subject classification analysis to determine one or more subject identifiers that represent topics addressed within the feedback data. Non-limiting examples include subject identifiers relating to a particular provider product or service. The feedback reduction service further performs a sentiment analysis that generates sentiment data classifying the feedback data according to one or more sentiment categories.

Natural Language Processing

The feedback reduction service processes the feedback data using natural language processing technology that is implemented by one or more artificial intelligence software applications and systems. The artificial intelligence software and systems are in turn implemented using neural networks. Iterative training techniques and training data instill neural networks with an understanding of individual words, phrases, subjects, sentiments, and parts of speech. As an example, training data is utilized to train a neural network to recognize that phrases like "locked out," "change password," or "forgot login" all relate to the same general subject matter when the words are observed in proximity to one another at a significant frequency of occurrence.

The feedback reduction service utilizes one or more known techniques to perform a subject classification analysis that identifies subject classification data. Suitable known techniques include Latent Semantic Analysis ("LSA"), Probabilistic Latent Semantic Analysis ("PLSA"), Latent Dirichlet Allocation ("LDA"), or a Correlated Topic Model ("CTM").

The feedback data is first pre-processed using a reduction analysis to create reduced feedback data, which is streamlined by performing one or more of the following operations: (i) tokenization to transform the feedback data into a collection of words or key phrases having punctuation and capitalization removed; (ii) stop word removal, where short, common words or phrases such as "the" or "is" are removed; (iii) lemmatization, where words are transformed into a base form, like changing third-person words to first person and changing past-tense words to present tense; (iv) stemming to reduce words to a root form, such as changing plural to singular; and (v) hyponymy and hypernymy replacement, where certain words are replaced with words having a similar meaning so as to reduce the variation of words within the feedback data.

In one embodiment, the feedback reduction service processes the reduced feedback data packets by performing a Latent Dirichlet Allocation ("LDA") analysis to identify subject classification data that includes one or more subject identifiers (e.g., topics addressed in the underlying feedback data). Performing the LDA analysis on the reduced feedback data may include transforming the feedback data into an array of text data representing key words or phrases that represent a subject (e.g., a bag-of-words array) and determining the one or more subjects through analysis of the array. Each cell in the array can represent the probability that given text data relates to a subject.
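By way of illustration, the reduction analysis followed by an LDA subject classification can be sketched in a few lines. The sketch assumes the scikit-learn library is available; the sample feedback strings, the abbreviated stop-word list, and the choice of two subjects are assumptions made for the example, not part of the disclosed system.

    # A minimal sketch of the reduction analysis followed by LDA subject
    # classification; sample data, stop words, and topic count are assumptions.
    import re
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    feedback = [
        "Locked out of the mobile app again, forgot my login.",
        "The mobile app makes balance transfers fast and easy.",
        "Forgot my password and the mobile app locked me out.",
    ]

    def reduce_text(text):
        # Tokenization: lowercase and strip punctuation.
        tokens = re.findall(r"[a-z']+", text.lower())
        # Stop word removal (abbreviated list for illustration).
        stop = {"the", "of", "my", "and", "me"}
        return " ".join(t for t in tokens if t not in stop)

    # Bag-of-words array: each cell counts a term's occurrences per packet.
    vectorizer = CountVectorizer()
    bow = vectorizer.fit_transform(reduce_text(t) for t in feedback)

    # LDA estimates per-subject word distributions from the array.
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(bow)
    terms = vectorizer.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[::-1][:5]]
        print(f"subject {k}: {top}")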
A subject is then represented by a specified number of words or phrases having the highest probabilities (e.g., the words with the five highest probabilities), or the subject is represented by text data having probabilities above a predetermined subject probability threshold. In other embodiments, subjects may each include one or more subject vectors, where each subject vector includes one or more identified keywords within the reduced feedback data as well as a frequency of the one or more keywords within the reduced textual data. The subject vectors are analyzed to identify words or phrases that are included in a number of subject vectors having a frequency below a specified threshold level, and such words or phrases are removed from the subject vectors. In this manner, the subject vectors are refined to exclude text data less likely to be related to a given subject. To reduce the effect of spam, the subject vectors may be analyzed such that if one subject vector is determined to use text data that is rarely used in other subject vectors, then the text data is marked as having a poor subject assignment and is removed from the subject vector.

Further, in one embodiment, any unclassified feedback data is processed to produce reduced feedback data. Words within the reduced feedback data are then mapped to integer values, and the feedback data is turned into a bag-of-words that includes the integer values and the number of times the integers occur in the feedback data. The bag-of-words is turned into a unit vector, where all the occurrences are normalized to the overall length. The unit vector may be compared to other subject vectors produced from an analysis of feedback data by taking the dot product of the two unit vectors. All the dot products for all vectors in a given subject are added together to provide a strength score for the given subject, which is taken as subject weighting data.

To illustrate generating subject weighting data, for any given subject there may be numerous subject vectors. Assume that for most of the subject vectors, the dot product will be close to zero, even if the given feedback data addresses the subject at issue. Since there are some subjects with numerous subject vectors, there may be numerous small dot products that are added together to provide a significant score. Put another way, the particular subject is addressed consistently through several documents, instances, or sessions of the feedback data, and the recurrence of the subject carries significant weight. In another embodiment, a predetermined threshold may be applied where any dot product that has a value less than the threshold is ignored and only stronger dot products above the threshold are summed for the score. In another embodiment, this threshold may be empirically verified against a training data set to provide a more accurate subject analysis.

In another example, the number of subject vectors per subject may vary widely, with some subjects having orders of magnitude fewer subject vectors than others. Given the differences in the number of subject vectors, the weight scoring may significantly favor relatively unimportant subjects that occur frequently in the feedback data. To address this problem, a linear scaling of the dot-product scoring based on the number of subject vectors may be applied. The result provides a correction to the score so that important but less common subjects are weighed more heavily. Once all scores are calculated for all subjects, the subjects may be sorted, and the most probable subjects are returned.
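A condensed sketch of the unit-vector scoring described above follows. For simplicity, the sketch keys vectors by token rather than by mapped integer id; the sample subject vectors, the threshold, and the scaling factor are illustrative assumptions, as in practice the subject vectors would be produced by the subject classification analysis.

    # A sketch of dot-product scoring of a feedback unit vector against a
    # subject's vectors; data, threshold, and scaling are assumptions.
    import math
    from collections import Counter

    def unit_vector(tokens):
        counts = Counter(tokens)
        norm = math.sqrt(sum(c * c for c in counts.values()))
        return {t: c / norm for t, c in counts.items()}

    def subject_strength(vec, subject_vectors, threshold=0.0, scale=1.0):
        # Sum the dot products against every vector for the subject,
        # ignoring weak matches below the threshold; the linear scale
        # corrects for subjects that have far fewer subject vectors.
        score = 0.0
        for sv in subject_vectors:
            dot = sum(w * sv.get(t, 0.0) for t, w in vec.items())
            if dot >= threshold:
                score += dot
        return score * scale

    subject_vectors = [unit_vector(["locked", "out", "login"]),
                       unit_vector(["forgot", "password", "login"])]
    vec = unit_vector(["forgot", "login", "mobile", "app"])
    print(subject_strength(vec, subject_vectors, threshold=0.1))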
The resulting output provides an array of subjects and strengths. In another embodiment, hashes may be used to store the subject vectors to provide a simple lookup of text data (e.g., words and phrases) and strengths. The one or more subject vectors can be represented by hashes of words and strengths, or alternatively by an ordered byte stream (e.g., an ordered byte stream of 4-byte integers, etc.) with another array of strengths (e.g., 4-byte floating-point strengths, etc.).

The feedback reduction service can also use term frequency-inverse document frequency ("tf-idf") software processing techniques to generate weighting data that weights words or particular subjects. The tf-idf is represented by a statistical value that increases proportionally to the number of times a word appears in the feedback data. This frequency is offset by the number of separate feedback data instances that contain the word, which adjusts for the fact that some words appear more frequently in general across multiple discussions or documents. The result is a weight in favor of words or terms more likely to be important within the feedback data, which in turn can be used to weigh some subjects more heavily in importance than others. To illustrate with a simplified example, the tf-idf might indicate that the term "employment" carries significant weight within feedback data. To the extent any of the subjects identified by an LDA analysis include the term "employment," that subject can be assigned more weight by the feedback reduction service.

The feedback reduction service analyzes the feedback data through, for example, semantic segmentation to identify attributes of the feedback data. Attributes include, for instance, parts of speech, such as the presence of particular interrogative words, such as who, whom, where, which, how, or what. In another example, the feedback data is analyzed to identify the location in a sentence of interrogative words and the surrounding context. For instance, sentences that start with the words "what" or "where" are more likely to be questions than sentences having these words placed in the middle (e.g., "I don't know what to do," as opposed to "What should I do?" or "Where is the word?" as opposed to "Locate where in the sentence the word appears."). In that case, the closer the interrogative word is to the beginning of a sentence, the more weight is given to the probability that it is a question word when applying neural networking techniques.

The feedback reduction service can also incorporate Part of Speech ("POS") tagging software code that assigns words a part of speech depending upon the neighboring words, such as tagging words as a noun, pronoun, verb, adverb, adjective, conjunction, preposition, or other relevant part of speech. The feedback reduction service can utilize the POS-tagged words to help identify questions and subjects according to pre-defined rules, such as recognizing that the word "what" followed by a verb is also more likely to be a question than the word "what" followed by a preposition or pronoun (e.g., "What is this?" versus "What he wants is an answer."). POS tagging in conjunction with Named Entity Recognition ("NER") software processing techniques can also be used by the feedback reduction service to identify various feedback sources within the feedback data. NER techniques are utilized to classify a given word into a category, such as a person, product, organization, or location.
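The interrogative-word rules just described lend themselves to a compact sketch before turning to how POS and NER outputs are applied. The toy POS lookup and the numeric weights below are illustrative assumptions; a production system would use a trained POS tagger and neural network weighting.

    # A sketch of the rule-based question heuristics described above: an
    # interrogative word near the start of a sentence, or followed by a
    # verb, raises the probability the sentence is a question.
    INTERROGATIVES = {"who", "whom", "where", "which", "how", "what"}
    TOY_POS = {"is": "verb", "should": "verb", "he": "pronoun",
               "wants": "verb", "to": "preposition"}

    def question_score(sentence):
        tokens = sentence.lower().strip("?.").split()
        score = 0.0
        for i, tok in enumerate(tokens):
            if tok in INTERROGATIVES:
                score += max(0.0, 1.0 - i / len(tokens))  # earlier -> higher
                nxt = TOY_POS.get(tokens[i + 1]) if i + 1 < len(tokens) else None
                if nxt == "verb":
                    score += 0.5          # the "what" + verb pattern
        return score

    print(question_score("What should I do?"))          # high score
    print(question_score("I don't know what to do."))   # low score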
Using POS and NER techniques to process the feedback data allows the feedback reduction service to identify particular words and text as a noun and as representing a person participating in the discussion (i.e., a feedback source).

The feedback reduction service can also perform a sentiment analysis to determine sentiment from the feedback data. Sentiment can indicate a view or attitude toward a situation or an event. Further, identifying sentiment in data can be used to determine a feeling, an emotion, or an opinion. The sentiment analysis can apply rule-based software applications or neural networking software applications, such as convolutional neural networks (discussed below), a lexical co-occurrence network, and bigram word vectors, to improve the accuracy of the sentiment analysis. Sentiment analysis can determine the polarity of feedback data according to a scale defined by the provider, such as classifying feedback data as being very positive, somewhat positive, neutral, somewhat negative, or very negative. The sentiment analysis can also determine a particular emotion associated with the feedback data, such as optimistic, excited, frustrated, or a range of other emotions. Prior to performing a sentiment analysis, the feedback data is subject to the reduction analysis, which can include tokenization, lemmatization, and stemming.

Polarity-type sentiment analysis can apply a rule-based software approach that relies on lexicons, or lists of positive and negative words and phrases that are assigned a sentiment score. For instance, words such as "growth," "great," or "achieve" are assigned a sentiment score of a certain positive value while negative words and phrases such as "failed," "missed," or "under performed" are assigned a negative score. The scores for each word within the tokenized, reduced feedback data are aggregated to determine an overall sentiment score. To illustrate with a simplified example, the words "great" and "growth" might be assigned a positive score of five (+5) while the word "failed" is assigned a score of negative ten (−10). The sentence "Growth failed to make targeted projection" could then be scored as a negative five (−5), reflecting an overall negative sentiment polarity. Similarly, the sentence "This product was a great big failure" might also be scored as a negative five, thereby reflecting a negative sentiment.

The feedback reduction service can also apply machine learning software to determine sentiment, including use of such techniques to determine both polarity and emotional sentiment. Machine learning techniques also start with a reduction analysis. Words are then transformed into numeric values using vectorization that is accomplished through a "bag-of-words" model, Word2Vec techniques, or other techniques known to those of skill in the art. Word2Vec, for example, can receive a text input (e.g., a text corpus from a large data source) and generate a data structure (e.g., a vector representation) for each input word in a set of words. The data structure may be referred to herein as a "model" or "Word2Vec model." Each word in the set of words is associated with a plurality of attributes. The attributes may also be referred to as features, vectors, components, or feature vectors. For example, the data structure may include features associated with each word in the set of words. Features can include, for example, gender, nationality, etc. that describe the words.
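Returning to the rule-based polarity scoring worked through above, the following minimal sketch expresses it as a short routine. The lexicon values simply mirror the worked example (+5 for "great" and "growth," −10 for "failed") and are assumptions, not a disclosed lexicon.

    # A sketch of lexicon-based polarity scoring; the lexicon mirrors the
    # worked example above and is an illustrative assumption.
    lexicon = {"growth": 5, "great": 5, "achieve": 5,
               "failed": -10, "missed": -10, "failure": -10}

    def polarity_score(text):
        tokens = text.lower().replace(".", "").split()
        return sum(lexicon.get(t, 0) for t in tokens)

    print(polarity_score("Growth failed to make targeted projection"))  # -5
    print(polarity_score("This product was a great big failure"))       # -5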
Each of the features may be determined based on machine learning techniques (e.g., supervised machine learning) trained based on association with sentiment. Training the neural networks is particularly important for sentiment analysis to ensure parts of speech such as subjectivity, industry-specific terms, context, idiomatic language, or negation are appropriately processed. For instance, the phrase "Our rates are lower than competitors" could be a favorable or unfavorable comparison depending on the particular context, which should be refined through neural network training.

Machine learning techniques for sentiment analysis can utilize classification neural networking techniques where a corpus of feedback data is, for example, classified according to polarity (e.g., positive, neutral, or negative) or classified according to emotion (e.g., satisfied, contentious, etc.). Suitable neural networks can include, without limitation, Naive Bayes, Support Vector Machines, Logistic Regression, convolutional neural networks, a lexical co-occurrence network, bigram word vectors, and Long Short-Term Memory networks.

Neural networks are trained using training set feedback data that comprise sample tokens, phrases, sentences, paragraphs, or documents for which desired subjects, feedback sources, interrogatories, or sentiment values are known. A labeling analysis is performed on the training set feedback data to annotate the data with known subject labels, interrogatory labels, feedback source labels, or sentiment labels, thereby generating annotated training set feedback data. For example, a person can utilize a labeling software application to review training set feedback data to identify and tag or "annotate" various parts of speech, subjects, interrogatories, feedback sources, and sentiments. The training set feedback data is then fed to the feedback reduction service neural networks to identify subjects, interrogatories, feedback sources, or sentiments and the corresponding probabilities. For example, the analysis might identify that particular text represents a question with a 35% probability. If the annotations indicate the text is, in fact, a question, an error rate can be taken to be 65%, or the difference between the calculated probability and the known certainty. The parameters of the neural network (i.e., the constants and formulas that implement the nodes and connections between nodes) are then adjusted to increase the probability from 35%, thereby ensuring the neural network produces more accurate results and reducing the error rate. The process is run iteratively on different sets of training set feedback data to continue to increase the accuracy of the neural network.

For some embodiments, the feedback reduction service can be configured to determine relationships between and among subject identifiers and sentiment identifiers. Determining relationships among identifiers can be accomplished through techniques such as determining how often two identifier terms appear within a certain number of words of each other in a set of feedback data packets. The higher the frequency of such appearances, the more closely the identifiers would be said to be related. A useful metric for degree of relatedness that relies on the vectors in the data set, as opposed to the words, is cosine similarity. Cosine similarity is a technique for measuring the degree of separation between any two vectors by measuring the cosine of the vectors' angle of separation.
If the vectors are pointing in exactly the same direction, the angle between them is zero, and the cosine of that angle will be one (1), whereas if they are pointing in opposite directions, the angle between them is pi radians, and the cosine of that angle will be negative one (−1). If the angle is greater than pi radians, the cosine is the same as it is for the opposite angle; thus, the cosine of the angle between the vectors varies inversely with the minimum angle between the vectors, and the larger the cosine is, the closer the vectors are to pointing in the same direction.

Artificial Intelligence

A machine learning program may be configured to implement stored processing, such as decision tree learning, association rule learning, artificial neural networks, recurrent artificial neural networks, long short-term memory networks, inductive logic programming, support vector machines, clustering, Bayesian networks, reinforcement learning, representation learning, similarity and metric learning, sparse dictionary learning, genetic algorithms, k-nearest neighbor ("KNN"), and the like. Additionally or alternatively, the machine learning algorithm may include one or more regression algorithms configured to output a numerical value in response to a given input. Further, the machine learning may include one or more pattern recognition algorithms, e.g., a module, subroutine, or the like capable of translating text or string characters and/or a speech recognition module or subroutine. The machine learning modules may include machine learning acceleration logic (e.g., a fixed-function matrix multiplication logic) that implements the stored processes or optimizes the machine learning logic for training and inference.

The machine learning modules utilized by the present systems and methods can be implemented with neural networking techniques. Neural networks learn to perform tasks by processing examples, without being programmed with any task-specific rules. A neural network generally includes connected units, neurons, or nodes (e.g., connected by synapses) and may allow for the machine learning program to improve performance. A neural network may define a network of functions, which have a graphical relationship. As an example, a feedforward network may be utilized, such as an acyclic graph with nodes arranged in layers.

A feedforward network260(as depicted inFIG.2A) may include a topography with a hidden layer264between an input layer262and an output layer266. The input layer262includes input nodes272that communicate input data, variables, matrices, or the like to the hidden layer264that is implemented with hidden layer nodes274. The hidden layer264generates a representation and/or transformation of the input data into a form that is suitable for generating output data. Adjacent layers of the topography are connected at the edges of the nodes of the respective layers, but nodes within a layer typically are not separated by an edge. In at least one embodiment of such a feedforward network, data is communicated to the nodes272of the input layer, which then communicates the data to the hidden layer264. The hidden layer264may be configured to determine the state of the nodes in the respective layers and assign weight coefficients or parameters of the nodes based on the edges separating each of the layers. That is, the hidden layer264implements activation functions between the input data communicated from the input layer262and the output data communicated to the nodes276of the output layer266.
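A compact sketch of the forward pass just described (input layer to hidden layer to output layer, with weighted edges and an activation function at each node) follows. The weights, the inputs, and the choice of a sigmoid activation are illustrative assumptions.

    # A sketch of a forward pass through a feedforward topography like that
    # ofFIG.2A; weights, inputs, and the sigmoid activation are assumptions.
    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def layer(inputs, weight_rows):
        # Each node computes a weighted sum of its inputs and applies
        # the activation function.
        return [sigmoid(sum(w * x for w, x in zip(row, inputs)))
                for row in weight_rows]

    inputs = [0.2, 0.9]                          # input layer262
    hidden_weights = [[0.1, 0.8], [-0.5, 0.3]]   # edges into hidden layer264
    output_weights = [[0.7, -0.2]]               # edges into output layer266

    hidden = layer(inputs, hidden_weights)
    output = layer(hidden, output_weights)
    print(output)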
It should be appreciated that the form of the output from the neural network may generally depend on the type of model represented by the algorithm. Although the feedforward network260ofFIG.2Aexpressly includes a single hidden layer264, other embodiments of feedforward networks within the scope of the descriptions can include any number of hidden layers. The hidden layers are intermediate the input and output layers and are generally where all or most of the computation is done.

Neural networks may perform a supervised learning process where known inputs and known outputs are utilized to categorize, classify, or predict a quality of a future input. However, additional or alternative embodiments of the machine learning program may be trained utilizing unsupervised or semi-supervised training, where none or only some of the outputs, respectively, are known. Typically, a machine learning algorithm is trained (e.g., utilizing a training data set) prior to modeling the problem with which the algorithm is associated. Supervised training of the neural network may include choosing a network topology suitable for the problem being modeled by the network and providing a set of training data representative of the problem. Generally, the machine learning algorithm may adjust the weight coefficients until any error in the output data generated by the algorithm is less than a predetermined, acceptable level. For instance, the training process may include comparing the generated output produced by the network in response to the training data with a desired or correct output. An associated error amount may then be determined for the generated output data, such as for each output data point generated in the output layer. The associated error amount may be communicated back through the system as an error signal, where the weight coefficients assigned in the hidden layer are adjusted based on the error signal. For instance, the associated error amount (e.g., a value between −1 and 1) may be used to modify the previous coefficient (e.g., a propagated value). The machine learning algorithm may be considered sufficiently trained when the associated error amount for the output data is less than the predetermined, acceptable level (e.g., each data point within the output layer includes an error amount less than the predetermined, acceptable level). Thus, the parameters determined from the training process can be utilized with new input data to categorize, classify, and/or predict other values based on the new input data.

An additional or alternative type of neural network suitable for use in the machine learning program and/or module is a Convolutional Neural Network ("CNN"). A CNN is a type of feedforward neural network that may be utilized to model data associated with input data having a grid-like topology. In some embodiments, at least one layer of a CNN may include a sparsely connected layer, in which each output of a first hidden layer does not interact with each input of the next hidden layer. For example, the output of the convolution in the first hidden layer may be an input of the next hidden layer, rather than a respective state of each node of the first layer. CNNs are typically trained for pattern recognition, such as speech processing, language processing, and visual processing. As such, CNNs may be particularly useful for implementing optical and pattern recognition programs required from the machine learning program.
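Returning to the supervised training process described above, the following minimal sketch adjusts weight coefficients from an error signal until the error falls below an acceptable level. The single-layer model, learning rate, and training pairs are illustrative assumptions.

    # A sketch of supervised training: weights are adjusted from the error
    # signal until output error is acceptably small.
    samples = [([0.0, 1.0], 1.0), ([1.0, 0.0], 0.0)]   # (inputs, target)
    weights, rate, acceptable = [0.5, 0.5], 0.1, 0.01

    for _ in range(1000):
        worst = 0.0
        for inputs, target in samples:
            output = sum(w * x for w, x in zip(weights, inputs))
            error = target - output              # the error signal
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            worst = max(worst, abs(error))
        if worst < acceptable:                   # sufficiently trained
            break

    print(weights)   # approaches [0.0, 1.0] for this training data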
A CNN includes an input layer, a hidden layer, and an output layer, typical of feedforward networks, but the nodes of a CNN input layer are generally organized into a set of categories via feature detectors and based on the receptive fields of the sensor, retina, input layer, etc. Each filter may then output data from its respective nodes to corresponding nodes of a subsequent layer of the network. A CNN may be configured to apply the convolution mathematical operation to the respective nodes of each filter and communicate the same to the corresponding node of the next subsequent layer. As an example, the input to the convolution layer may be a multidimensional array of data. The convolution layer, or hidden layer, may be a multidimensional array of parameters determined while training the model.

An example convolutional neural network CNN is depicted and referenced as280inFIG.2B. As in the basic feedforward network260ofFIG.2A, the illustrated example ofFIG.2Bhas an input layer282and an output layer286. However, where a single hidden layer264is represented inFIG.2A, multiple consecutive hidden layers284A,284B, and284C are represented inFIG.2B. The edge neurons represented by white-filled arrows highlight that hidden layer nodes can be connected locally, such that not all nodes of succeeding layers are connected by neurons.FIG.2C, representing a portion of the convolutional neural network280ofFIG.2B, specifically portions of the input layer282and the first hidden layer284A, illustrates that connections can be weighted. In the illustrated example, labels W1and W2refer to respective assigned weights for the referenced connections. Two hidden nodes283and285share the same set of weights W1and W2when connecting to two local patches.

Weight defines the impact a node in any given layer has on computations by a connected node in the next layer.FIG.3represents a particular node300in a hidden layer. The node300is connected to several nodes in the previous layer representing inputs to the node300. The input nodes301,302,303and304are each assigned a respective weight W01, W02, W03, and W04in the computation at the node300, which in this example is a weighted sum.

An additional or alternative type of feedforward neural network suitable for use in the machine learning program and/or module is a Recurrent Neural Network ("RNN"). An RNN may allow for analysis of sequences of inputs rather than only considering the current input data set. RNNs typically include feedback loops or connections between layers of the topography, thus allowing parameter data to be communicated between different parts of the neural network. RNNs typically have an architecture including cycles, where past values of a parameter influence the current calculation of the parameter. That is, at least a portion of the output data from the RNN may be used as feedback or input in calculating subsequent output data. In some embodiments, the machine learning module may include an RNN configured for language processing (e.g., an RNN configured to perform statistical language modeling to predict the next word in a string based on the previous words). The RNN(s) of the machine learning program may include a feedback system suitable to provide the connection(s) between subsequent and previous layers of the network.

An example RNN is referenced as400inFIG.4. As in the basic feedforward network260ofFIG.2A, the illustrated example ofFIG.4has an input layer410(with nodes412) and an output layer440(with nodes442).
However, where a single hidden layer264is represented inFIG.2A, multiple consecutive hidden layers420and430are represented inFIG.4(with nodes422and nodes432, respectively). As shown, the RNN400includes a feedback connector404configured to communicate parameter data from at least one node432from the second hidden layer430to at least one node422of the first hidden layer420. It should be appreciated that two or more nodes of a subsequent layer may provide or communicate a parameter or other data to a previous layer of the RNN400. Moreover, in some embodiments, the RNN400may include multiple feedback connectors404(e.g., connectors404suitable to communicatively couple pairs of nodes and/or connector systems404configured to provide communication between three or more nodes). Additionally or alternatively, the feedback connector404may communicatively couple two or more nodes having at least one hidden layer between them (i.e., nodes of nonsequential layers of the RNN400).

In an additional or alternative embodiment, the machine learning program may include one or more support vector machines. A support vector machine may be configured to determine a category to which input data belongs. For example, the machine learning program may be configured to define a margin using a combination of two or more of the input variables and/or data points as support vectors to maximize the determined margin. Such a margin may generally correspond to a distance between the closest vectors that are classified differently. The machine learning program may be configured to utilize a plurality of support vector machines to perform a single classification. For example, the machine learning program may determine the category to which input data belongs using a first support vector determined from first and second data points/variables, and the machine learning program may independently categorize the input data using a second support vector determined from third and fourth data points/variables. The support vector machine(s) may be trained similarly to the training of neural networks (e.g., by providing a known input vector, including values for the input variables, and a known output classification). The support vector machine is trained by selecting the support vectors and/or a portion of the input vectors that maximize the determined margin.

As depicted, and in some embodiments, the machine learning program may include a neural network topography having more than one hidden layer. In such embodiments, one or more of the hidden layers may have a different number of nodes and/or different connections defined between layers. In some embodiments, each hidden layer may be configured to perform a different function. As an example, a first layer of the neural network may be configured to reduce a dimensionality of the input data, and a second layer of the neural network may be configured to perform statistical programs on the data communicated from the first layer. In various embodiments, each node of the previous layer of the network may be connected to an associated node of the subsequent layer (dense layers). Generally, the neural network(s) of the machine learning program may include a relatively large number of layers (e.g., three or more layers) and be referred to as deep neural networks. For example, the node of each hidden layer of a neural network may be associated with an activation function utilized by the machine learning program to generate an output received by a corresponding node in the subsequent layer.
The last hidden layer of the neural network communicates a data set (e.g., the result of data processed within the respective layer) to the output layer. Deep neural networks may require more computational time and power to train, but the additional hidden layers provide multistep pattern recognition capability and/or reduced output error relative to simple or shallow machine learning architectures (e.g., including only one or two hidden layers).

Referring now toFIG.5and some embodiments, an artificial intelligence program502may include a front-end algorithm504and a back-end algorithm506. The artificial intelligence program502may be implemented on an AI processor520. The instructions associated with the front-end algorithm504and the back-end algorithm506may be stored in an associated memory device and/or storage device of the system (e.g., memory device124and/or memory device224) communicatively coupled to the AI processor520, as shown. Additionally or alternatively, the system may include one or more memory devices and/or storage devices (represented by memory524inFIG.5) for processing use and/or including one or more instructions necessary for operation of the AI program502. In some embodiments, the AI program502may include a deep neural network (e.g., a front-end network504configured to perform pre-processing, such as feature recognition, and a back-end network506configured to perform an operation on the data set communicated directly or indirectly to the back-end network506). For instance, the front-end program504can include at least one CNN508communicatively coupled to send output data to the back-end network506.

Additionally or alternatively, the front-end program504can include one or more AI algorithms510,512(e.g., statistical models or machine learning programs such as decision tree learning, association rule learning, recurrent artificial neural networks, support vector machines, and the like). In various embodiments, the front-end program504may be configured to include built-in training and inference logic or suitable software to train the neural network prior to use (e.g., machine learning logic including, but not limited to, image recognition, mapping and localization, autonomous navigation, speech synthesis, document imaging, or language translation). For example, a CNN508and/or AI algorithm510may be used for image recognition, input categorization, and/or support vector training. In some embodiments and within the front-end program504, an output from an AI algorithm510may be communicated to a CNN508or509, which processes the data before communicating an output from the CNN508,509and/or the front-end program504to the back-end program506.

In various embodiments, the back-end network506may be configured to implement input and/or model classification, speech recognition, translation, and the like. For instance, the back-end network506may include one or more CNNs (e.g., CNN514) or dense networks (e.g., dense networks516), as described herein.

For instance, and in some embodiments of the AI program502, the program may be configured to perform unsupervised learning, in which the machine learning program performs the training process using unlabeled data (e.g., without known output data with which to compare). During such unsupervised learning, the neural network may be configured to generate groupings of the input data and/or determine how individual input data points are related to the complete input data set (e.g., via the front-end program504).
For example, unsupervised training may be used to configure a neural network to generate a self-organizing map, reduce the dimensionality of the input data set, and/or perform outlier/anomaly determinations to identify data points in the data set that fall outside the normal pattern of the data. In some embodiments, the AI program502may be trained using a semi-supervised learning process in which some but not all of the output data is known (e.g., a mix of labeled and unlabeled data having the same distribution).

In some embodiments, the AI program502may be accelerated via a machine learning framework520(e.g., hardware). The machine learning framework may include an index of basic operations, subroutines, and the like (primitives) typically implemented by AI and/or machine learning algorithms. Thus, the AI program502may be configured to utilize the primitives of the framework520to perform some or all of the calculations required by the AI program502. Primitives suitable for inclusion in the machine learning framework520include operations associated with training a convolutional neural network (e.g., pools), tensor convolutions, activation functions, basic algebraic subroutines and programs (e.g., matrix operations, vector operations), numerical method subroutines and programs, and the like.

It should be appreciated that the machine learning program may include variations, adaptations, and alternatives suitable to perform the operations necessary for the system, and the present disclosure is equally applicable to such suitably configured machine learning and/or artificial intelligence programs, modules, etc. For instance, the machine learning program may include one or more long short-term memory ("LSTM") RNNs, convolutional deep belief networks, deep belief networks ("DBNs"), and the like. DBNs, for instance, may be utilized to pre-train the weighted characteristics and/or parameters using an unsupervised learning process. Further, the machine learning module may include one or more other machine learning tools (e.g., Logistic Regression ("LR"), Naive Bayes, Random Forest ("RF"), matrix factorization, and support vector machines) in addition to, or as an alternative to, one or more neural networks, as described herein.

Those of skill in the art will also appreciate that other types of neural networks may be used to implement the systems and methods disclosed herein, including, without limitation, radial basis networks, deep feedforward networks, gated recurrent unit networks, autoencoder networks, variational autoencoder networks, Markov chain networks, Hopfield networks, Boltzmann machine networks, deep belief networks, deep convolutional networks, deconvolutional networks, deep convolutional inverse graphics networks, generative adversarial networks, liquid state machines, extreme learning machines, echo state networks, deep residual networks, Kohonen networks, and neural Turing machine networks, as well as other types of neural networks known to those of skill in the art.

FIG.6is a flow chart representing a method600, according to at least one embodiment, of model development and deployment by machine learning. The method600represents at least one example of a machine learning workflow in which steps are implemented in a machine learning project. In step602, a user authorizes, requests, manages, or initiates the machine-learning workflow.
This may represent a user, such as a human agent or customer, requesting machine-learning assistance or AI functionality to simulate intelligent behavior (such as a virtual agent) or other machine-assisted or computerized tasks that may, for example, entail visual perception, speech recognition, decision-making, translation, forecasting, predictive modelling, and/or suggestions as non-limiting examples. In a first iteration from the user perspective, step602can represent a starting point. However, with regard to continuing or improving an ongoing machine learning workflow, step602can represent an opportunity for further user input or oversight via a feedback loop.

In step604, user evaluation data is received, collected, accessed, or otherwise acquired and entered, in what can be termed data ingestion. In step606, the data ingested in step604is pre-processed, for example, by cleaning and/or transformation, such as into a format that the following components can digest. The incoming data may be versioned to connect a data snapshot with the particular resulting trained model. As newly trained models are tied to a set of versioned data, preprocessing steps are tied to the developed model. If new data is subsequently collected and entered, a new model will be generated. If the preprocessing step606is updated with newly ingested data, an updated model will be generated. Step606can include data validation, which focuses on confirming that the statistics of the ingested data are as expected, such as that data values are within expected numerical ranges, that data sets are within any expected or required categories, and that data comply with any needed distributions such as within those categories. Step606can proceed to step608to automatically alert the initiating user, other human or virtual agents, and/or other systems, if any anomalies are detected in the data, thereby pausing or terminating the process flow until corrective action is taken.

In step610, training test data, such as a target variable value, is inserted into an iterative training and testing loop. In step612, model training, a core step of the machine learning workflow, is implemented. A model architecture is trained in the iterative training and testing loop. For example, features in the training test data are used to train the model based on weights and iterative calculations in which the target variable may be incorrectly predicted in an early iteration, as determined by comparison in step614, where the model is tested. Subsequent iterations of the model training, in step612, may be conducted with updated weights in the calculations. When compliance and/or success in the model testing in step614is achieved, process flow proceeds to step616, where model deployment is triggered. The model may be utilized in AI functions and programming, for example, to simulate intelligent behavior, to perform machine-assisted or computerized tasks, of which visual perception, speech recognition, decision-making, translation, forecasting, predictive modelling, and/or automated suggestion generation serve as non-limiting examples.

Feedback Data Processing, Filtering, and Segmenting

One example process for analyzing feedback data is shown inFIG.7. The outputs of the process are displayed on a graphical user interface, such as the example Feedback Explorer GUI shown inFIG.8.
Large volumes of feedback data are aggregated and filtered according to provider-determined categories, such as filtering the feedback data according to subject (e.g., feedback relating to a provider mobile application or product). The feedback data is also parsed according to time periods and reduced for display on the Feedback Explorer GUI. The feedback data is reduced in that the feedback data is depicted as descriptor sets806that summarize the feedback data over time. The summarized, graphical representation of the feedback data substantially enhances access to, and understanding of, feedback data that otherwise could not be expediently reviewed or analyzed as a function of time. The result is that providers are able to proactively recognize trends in feedback data and develop solutions to address problems or implement improvements.

For instance, if the Feedback Explorer GUI indicates users are optimistic or satisfied with using a provider's mobile software application to transfer user account balances, the descriptor sets might yield descriptors that include "satisfied," "mobile app," or "balance transfers." In that case, the provider can proactively develop other features that utilize the mobile application in an effort to achieve positive user results for other provider products and services. As another example, if the Feedback Explorer GUI indicates that users are increasingly or routinely dissatisfied with being "locked out" of the mobile software application, the descriptor sets for a series of time period identifiers might include descriptors for "upset," "mobile app," and "locked out." In that case, a provider can investigate the mobile software application login or authentication process to determine possible solutions for improving the mobile application. On the other hand, if the Feedback Explorer GUI indicates that users were dissatisfied with a provider's mobile software application for only a single time period, the provider might be able to determine that the dissatisfaction is attributable to an isolated event, such as a system outage. In that case, further action might not be necessary with regard to the mobile software application.

Turning again toFIG.7, feedback data packets and provider feedback parameters are passed to a feedback reduction software service that performs one or more of the temporal segmentation, subject classification analysis, sentiment analysis, or descriptor analysis. The temporal segmentation utilizes the temporal data from the feedback data packets that indicates when each feedback data packet was generated. The temporal segmentation analysis can also utilize provider feedback parameters, such as time period identifiers that are each associated with a time period range according to provider settings, such as a given day, month, year, or other suitable time period. The temporal segmentation analysis processes the feedback data packets to determine whether the temporal data for each of the feedback data packets falls within a time period range. The feedback data packets that fall within a time period range are optionally stored to a database as a production set of feedback data packets that are available for further processing as a set. Feedback data packets falling within a time period range are also associated with a time period identifier corresponding to that time period range.
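The temporal segmentation analysis can be sketched as grouping packets by the time period range their temporal data falls within. In the following minimal example, monthly bucketing and the sample packets are illustrative assumptions; the grouped subsets correspond to the production subsets described in the next paragraph.

    # A sketch of temporal segmentation: feedback data packets are grouped
    # under time period identifiers by their temporal data.
    from collections import defaultdict
    from datetime import date

    packets = [
        {"id": "fb-001", "created": date(2021, 8, 3)},
        {"id": "fb-002", "created": date(2021, 8, 21)},
        {"id": "fb-003", "created": date(2021, 9, 2)},
    ]

    subsets = defaultdict(list)
    for packet in packets:
        # Time period identifier, e.g. "2021-08-01" for August 2021.
        period_id = packet["created"].replace(day=1).isoformat()
        subsets[period_id].append(packet)

    for period_id, group in sorted(subsets.items()):
        print(period_id, [p["id"] for p in group])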
For example, the Feedback Explorer GUI ofFIG.8includes a time period identifier of "2021-08-01" that corresponds to a time period range of the entire month of August 2021. The feedback data packets associated with a given time period identifier can be stored to a database as a production subset that is available for further processing as a group of feedback data packets. The feedback data packets are associated with a time period identifier by storing the feedback data packets and the time period identifiers to a relational database that maintains a relationship between the types of data. The feedback data packets can also be associated with a time period identifier by appending the time period identifiers to the data that comprises the feedback data packet.

The system also performs a subject classification analysis using the content data within the feedback data packets to identify one or more subject identifiers, or topics, included within the underlying feedback data. To illustrate subject classification, the feedback reduction service might perform the subject classification analysis for all feedback data packets for a given year to determine that the relevant subject identifiers include "mobile software application" and "balance transfers." These subject identifiers might be derived if the feedback data packets include content data describing user experiences with operating a provider's mobile software application to perform functions that include transferring account balances. The subject classification analysis can be performed using neural networking technology alone or in combination with a rule-based software engine. The number of subject identifiers output by the subject classification analysis can be determined by the provider feedback parameters. For example, the subject classification analysis may use neural networking technology to generate an output of ten (10) possible subject identifiers that are each associated with a probability of being a subject addressed in the underlying feedback data. The provider feedback parameters (i.e., provider settings) can be processed by a rule-based software application that accepts the subject identifiers with the five (5) highest probabilities. Alternatively, the feedback parameters can include a subject probability threshold such that all subject identifiers having a probability above the threshold are outputs of the subject classification analysis.

The subject identifiers can be displayed on a Feedback Explorer GUI such as that ofFIG.8, where subject identifiers are displayed as text on Filter Selection input functions804. The Filter Selection input functions804can be configured to define sets of feedback data for analysis. In the embodiment shown inFIG.8, the Filter Selection input functions804correspond to subject identifiers. Selecting a Filter Selection input function transmits a filter command from an end user device displaying the Feedback Explorer GUI to the provider system. The filter command includes a subject identifier that corresponds to the Filter Selection input function selected by the end user. The provider system responds to the end user computing device by transmitting only those descriptor sets for display that are associated with the selected subject identifier.
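Handling of the filter command can be sketched as a simple lookup over stored descriptor sets. In the following example, the in-memory list stands in for the relational database, and the sample descriptor sets and identifiers are illustrative assumptions.

    # A sketch of filter-command handling: return only descriptor sets
    # associated with the selected subject identifier.
    descriptor_sets = [
        {"period": "2021-08-01", "subject": "provider website",
         "descriptors": ["upset", "login", "timeout"]},
        {"period": "2021-08-01", "subject": "mobile app",
         "descriptors": ["satisfied", "balance transfers"]},
        {"period": "2021-09-01", "subject": "provider website",
         "descriptors": ["satisfied", "redesign"]},
    ]

    def handle_filter_command(subject_identifier):
        return [ds for ds in descriptor_sets
                if ds["subject"] == subject_identifier]

    for ds in handle_filter_command("provider website"):
        print(ds["period"], ds["descriptors"])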
For instance, an end user might select a Filter Selection input function associated with a subject identifier of "provider website." The provider system receives a filter command and responds by returning descriptor sets relating to the provider website for display on the Feedback Explorer GUI. In this manner, end users can filter the feedback data displayed on the Feedback Explorer GUI. For other embodiments, additional filter criteria can be utilized, such as product identifiers (i.e., feedback relating to a particular product), provider location identifiers (i.e., feedback data relating to a provider storefront location), or feedback channel identifiers (i.e., the medium through which the feedback data was received). In those cases, the filter command will include a product, location, or channel identifier. The provider system analyzes the feedback data packets to determine the presence of the selected identifiers and returns descriptor sets relating to the selected filters, such as provider products, locations, or feedback channels.

With reference toFIG.7, the system optionally performs a sentiment analysis utilizing the feedback data packets and the provider feedback parameters. The sentiment analysis generates one or more sentiment identifiers and a sentiment rating score for each sentiment identifier. Each sentiment identifier corresponds to a qualitative emotive descriptor, such as "satisfied," "optimistic," or "upset." The sentiment identifiers can be defined by the provider feedback parameters and used as inputs to neural network software applications. The neural network software applications process the feedback data packets to determine probabilities that the particular sentiment identifiers are represented within the underlying feedback data. A rule-based software application and the provider feedback parameters are used to select sentiment identifiers that are outputs of the sentiment analysis, such as selecting the three (3) highest probabilities or selecting sentiment identifiers having probabilities above a predetermined sentiment probability threshold.

The sentiment analysis further determines a rating score that can be a numeric value representing the relative significance of the corresponding sentiment identifier within the underlying feedback data. The sentiment rating score can be determined using probability outputs from neural networks, analyses of the frequency of particular terms within the feedback data, or a combination of techniques known to those of ordinary skill in the art. The sentiment rating score is used to compare sentiment identifiers, such as determining that one sentiment identifier is more prevalent in the underlying feedback data than a second sentiment identifier. In some embodiments, the sentiment analysis may determine a sentiment polarity, such as a score representing a positive or negative sentiment, that can be displayed as an icon, image, or other graphical indicator of sentiment polarity on the Feedback Explorer GUI.

The provider system also performs a descriptor analysis to determine descriptor sets806that are displayed as clusters of terms or phrases on the Feedback Explorer GUI. The descriptor sets can be determined using one or a combination of techniques from one or more sources of data.
In one embodiment, the descriptor sets are determined using subject vectors created as part of a subject classification analysis through Latent Semantic Analysis, Probabilistic Latent Semantic Analysis, Latent Dirichlet Allocation, Correlated Topic Modeling, or other suitable subject classification methods known to one of skill in the art. The descriptor analysis is performed on the feedback data packets associated with one or more time period identifiers. With reference to FIG. 8, a descriptor set is determined for each time period identifier representing a single month, and a primary descriptor set 808 is determined using the collective feedback data packets for each of the time period identifiers shown in FIG. 8 (i.e., July 2021 to March 2022).

The subject vectors include one or more terms associated with a particular subject identifier. The subject vectors can be generated as part of the subject classification analysis. The system determines the frequency of the subject vector terms within the feedback data packets corresponding to a particular time period identifier using analysis techniques such as term frequency-inverse document frequency. Terms with higher frequencies of appearance are included as descriptors within a descriptor set according to the provider feedback parameters. Alternatively, the subject vector is generated as part of the descriptor analysis, and neural networking technology can be used to determine the probabilities that particular terms correspond to a subject identifier. The frequency of the particular terms within the content data of the feedback data packets is determined, and the subject vector terms are selected for inclusion in a descriptor set based on the results of the frequency analysis. The probability outputs from the neural networks, the frequency determinations, or the dot-product techniques described above are used to determine descriptor weighting data indicating the relative significance of particular terms from the subject vectors within the feedback data packets. Descriptors are selected from the subject vectors according to the provider feedback parameters. For instance, the descriptor analysis can utilize a rule-based software approach to select terms having the five (5) highest probabilities or frequency ratings within the feedback data. Alternatively, the descriptor analysis can select terms having a probability or frequency above a predetermined descriptor threshold. The selected terms are output as descriptors forming the descriptor sets.

The descriptors can also be determined from the sentiment identifiers according to the provider feedback parameters. For instance, the descriptor analysis can utilize the sentiment identifiers having the two (2) highest rating scores as outputs to include within the descriptor set. The sentiment analysis is performed on the feedback data packets corresponding to a particular time period identifier or time period range for display on the Feedback Explorer GUI. Descriptor weighting data is generated using, for example, the sentiment rating score, to indicate the relative significance of the sentiment identifier as a descriptor within the descriptor set. For other embodiments, neural networking technology can be used to determine probabilities that the sentiment identifiers relate to a particular subject identifier or filter setting.
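The frequency analysis described above can be made concrete with a small term frequency-inverse document frequency computation. In this sketch the tokenization, the subject vector, and the top-k cutoff are all simplified assumptions; a production system would draw the subject vector from the subject classification analysis and read the provider feedback parameters for the cutoff.

```python
# A minimal TF-IDF sketch: score subject-vector terms across the feedback
# data packets for one time period, then keep the top-k as descriptors.

import math
from collections import Counter

def tf_idf_scores(packets, subject_vector):
    """packets: list of token lists, one per feedback data packet."""
    n = len(packets)
    scores = {}
    for term in subject_vector:
        doc_freq = sum(1 for tokens in packets if term in tokens)
        if doc_freq == 0:
            continue  # term never appears; it cannot become a descriptor
        idf = math.log(n / doc_freq)
        tf = sum(Counter(tokens)[term] for tokens in packets)
        scores[term] = tf * idf
    return scores

packets = [
    "the mobile app crashed during a balance transfer".split(),
    "balance transfer worked but the app is slow".split(),
    "love the new statement layout".split(),
]
scores = tf_idf_scores(packets, ["app", "balance", "transfer", "statement"])
descriptors = sorted(scores, key=scores.get, reverse=True)[:3]  # top-3 rule
print(descriptors)
```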
Given such subject-specific probabilities, when an end user selects a Filter Selection input function corresponding to a subject identifier, sentiments relating to the particular subject identifier are displayed as part of the descriptor set for each time period identifier. To illustrate, an end user might select a Filter Selection input function having a subject identifier of "provider website." The sentiment analysis determines that the feedback data packets for a given time period identifier reflect sentiment identifiers of "upset" and "satisfied." A neural network software application determines that the sentiment identifier of "satisfied" has a substantially higher probability of relating to the subject identifier of "provider website." Thus, the sentiment identifier of "satisfied" is included with the descriptor set relating to the selected subject identifier of "provider website" and displayed on the Feedback Explorer GUI.

In some embodiments, the descriptor analysis further determines the relatedness of descriptors included within a descriptor set. The relatedness of two words can be determined by, among other means, determining the frequency with which two words appear proximal to one another within the content data of the feedback data packets. The higher the frequency of such appearances, the more closely the words would be said to be related. Persons of ordinary skill in the art will be aware of other ways to gauge relatedness, such as calculating the cosine similarity between each pair of descriptors in a descriptor set. The relatedness determination generates a relatedness rating score between and among each of the descriptors within a descriptor set. The relationship between descriptors can be depicted in the Feedback Explorer GUI in a variety of ways, such as (i) drawing a line between descriptors when the relatedness score is above a predetermined threshold set within the provider feedback parameters, or (ii) positioning descriptors within a descriptor set as proximal (horizontally or vertically) or overlapping. In other embodiments, relatedness is shown through color, where color data, such as numeric "Red Green Blue" (RGB) values, is calculated from the relatedness rating score. Thus, two words that are closely related could be shown as blue and light blue within a descriptor set.

The descriptor sets are an efficient mechanism for summarizing feedback data and depicting changes in feedback data over time. The descriptor sets are weighted clusters of descriptors where the display and arrangement of the descriptors represent certain attributes of the underlying feedback data. The descriptor weighting data can be displayed in a variety of ways, such as displaying descriptors having a higher weight with larger sized fonts, different color fonts, or with a higher relative position within the arrangement of the descriptor set (i.e., descriptors with higher weighting data are shown above descriptors with lower weighting data). Descriptors representing different categories of data can likewise be displayed with different size fonts, different color fonts, or with different arrangements within the descriptor set. For instance, descriptors based on sentiment identifiers can be shown in a different color or position than descriptors generated through analysis of the frequency with which the descriptor appears in the underlying content data. The descriptor sets can include other graphical elements that represent attributes of the underlying feedback data, such as the circular icon shown within the descriptor sets of FIG. 8.
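The relatedness computation and its color mapping might look like the sketch below. The descriptor vectors are hypothetical co-occurrence counts, and mapping the relatedness rating score to a shade of blue is one arbitrary way to derive "Red Green Blue" color data; neither detail comes from the disclosure.

```python
# A sketch of pairwise descriptor relatedness via cosine similarity, with
# the score mapped to an RGB shade. Vectors and mapping are illustrative.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def relatedness_to_rgb(score):
    """Closely related terms get a deeper blue; loosely related, a lighter one."""
    light = int(255 * (1 - score))
    return (light, light, 255)

vec_app = [4, 1, 0, 2]    # hypothetical co-occurrence counts for "app"
vec_crash = [3, 1, 1, 2]  # hypothetical co-occurrence counts for "crashed"
score = cosine_similarity(vec_app, vec_crash)
print(round(score, 2), relatedness_to_rgb(score))
```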
The circular icon, for example, could represent an overall volume of feedback data received for a given time period identifier, where the circular element is displayed in a higher position with increasing volumes of feedback data. In other embodiments, the circular element could represent a sentiment polarity where a higher position represents a more positive sentiment. In some embodiments, end users can be provided with the option to graphically rearrange descriptors within a descriptor set. In yet other embodiments, the Feedback Explorer GUI can include a toolbar 812 that allows end users to annotate the descriptor set, transmit one or more descriptor sets to other end users, or zoom in or out on portions of the Feedback Explorer GUI.

Once again referring to FIG. 7, the Feedback Explorer GUI can include a View Content input function that allows end users to display the underlying feedback data used to generate a descriptor set. The View Content input function can be shown as an icon, button, or image, or the descriptor set itself can be formatted as a hyperlink that serves as an input function. Selection of the View Content input function causes the graphical user interface to display content data from the feedback data packets that represents text from the underlying reviews, comments, or communications comprising the feedback data. Thus, end users can effectively and expediently sort, navigate, and view feedback data relating only to certain subject identifiers and time periods. Selecting the View Content input function transmits a content command to the provider system that includes a subject identifier, filter, and/or time period identifier corresponding to a descriptor set. The provider system responds by returning content data from the feedback data packets associated with the received subject identifier, filter, or time period identifier.

The outputs of the temporal segmentation, subject classification, sentiment, and descriptor analyses can be stored to a relational database in a manner that maintains correlation between the various data sets. Thus, descriptor sets are stored and correlated with one or more subject identifiers, filter selections, or time period identifiers. The feedback data packets can also be stored in a manner that correlates the feedback data packets to particular time period identifiers or subject identifiers, such as where feedback data packets are stored as a production set or production subset of feedback data.

Skilled artisans will appreciate that the example process shown in FIG. 7 is not intended to be limiting, and other arrangements of process steps can be used. As an example, the system can first perform a subject classification analysis before performing a temporal segmentation analysis. Skilled artisans will also recognize that the above examples for filtering or sorting feedback data are not intended to be limiting with respect to system configurations. That is, the examples describe a system where an end user computing device accesses a provider system through, for example, a web-based portal using an Internet browser software application or a provider software application integrated with the end user computing device (e.g., a provider mobile app). The Feedback Explorer GUI can be generated by a software process integrated with the end user computing device, such as a feedback interface software service.
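The filter and content commands described above amount to parameterized lookups against the relational store. The sketch below uses an in-memory list as a stand-in for that database and assumes a simple dictionary shape for the command; both are illustrative, not the disclosed implementation.

```python
# A sketch of handling a filter or content command: match every identifier
# present in the command and return the associated content data.

FEEDBACK_DB = [  # stand-in for the relational database of feedback packets
    {"content": "website login keeps failing", "subject": "provider website",
     "period": "2021-08"},
    {"content": "transfer screen is confusing",
     "subject": "mobile software application", "period": "2021-08"},
    {"content": "site redesign looks great", "subject": "provider website",
     "period": "2021-09"},
]

def handle_content_command(command):
    results = FEEDBACK_DB
    for key in ("subject", "period"):  # assumed command fields
        if key in command:
            results = [p for p in results if p[key] == command[key]]
    return [p["content"] for p in results]

print(handle_content_command({"subject": "provider website"}))
print(handle_content_command({"subject": "provider website", "period": "2021-08"}))
```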
Selecting a Filter Selection or View Content input function causes the feedback interface service to generate the filter or content command that is transmitted to the provider system, which returns descriptor sets or content data. The provider system can also return display data required for display of the Feedback Explorer GUI. Display data received by an end user computing device includes instructions compatible with, and readable by, the particular Internet browser or software application for rendering a user interface, including graphical elements (e.g., icons, frames, etc.), digital images, text, numbers, colors, fonts, or layout data representing the orientation and arrangement of graphical elements and alphanumeric data on a user interface screen.

In other embodiments, the descriptor sets and/or feedback data packets are stored to transitory memory or non-transitory storage devices that are integrated with the end user computing device. Selecting the Filter Selection or View Content input functions causes the feedback interface service to retrieve the descriptor sets or content data from memory or storage for display on the Feedback Explorer GUI. Alternatively, the feedback interface service may pass a filter or content command to another software process integrated with the end user computing device, such as a local feedback reduction service. The local feedback reduction service either retrieves the relevant descriptor sets or content data from memory or processes feedback data packets to generate the descriptor sets or content data, which are then passed to the feedback interface service for display on the Feedback Explorer GUI.

Although the foregoing description provides embodiments of the invention by way of example, it is envisioned that other embodiments may perform similar functions and/or achieve similar results. Any and all such equivalent embodiments and examples are within the scope of the present invention.
DETAILED DESCRIPTION Collaborative document systems may allow an electronic document owner to invite other users to join as collaborators with respect to an electronic document stored in a cloud-based environment. An electronic document refers to media content in electronic form. Media content may include text, tables, videos, images, graphs, slides, charts, software programming code, designs, lists, plans, blueprints, maps, etc. An electronic document to which users have been granted permission to access and/or edit concurrently may be referred to as a collaborative document herein. The collaborative document may be provided to user devices of the collaborators by one or more servers in a cloud-based environment. Each collaborator may be associated with a user type (e.g., editor, reviewer, viewer, etc.). Different views and capabilities may be provided to the collaborators based on their user type to enable editing, commenting on, reviewing, or simply viewing the collaborative document. Once granted permission to access the collaborative document, the collaborators may access the collaborative document to perform operations allowed for their user type.

Conventionally, the user access history of a collaborative document is not recorded and displayed. Additionally, some systems may use separate user interfaces (UIs) for displaying an electronic document and certain recorded data, which may degrade performance of the system by switching between UIs. Systems may also use third-party trackers (e.g., a file placed on a user's computer (in a browser) by a server of a system when the user accesses the system, where the file has a domain of the third party that records use of the system) to record desired data, thereby adding a separate connection point from which to retrieve data in a network and degrading performance of the system.

Aspects and implementations of the present disclosure are directed to a collaborative document system that addresses at least these deficiencies, among others, by recording and managing user access history of collaborative documents. User access history may provide an author and/or collaborators of a collaborative document with various insights by showing when others have accessed the collaborative document. For example, a senior employee in an organization may be waiting for a junior employee to access the collaborative document, or vice versa, before accessing the document. Displaying the user access history to the employees may aid in reducing instances of desired changes being undone and/or change conflicts between versions of the collaborative document. Additionally, providing the user access history may result in a more streamlined process for collaborators to develop, edit, review, and/or view a collaborative document.

In one implementation, a collaborative document may be shared with one or more users (e.g., collaborators). As noted above, the users may have various user types, such as editor, reviewer, or viewer. Editors may access the collaborative document to make changes to the collaborative document, reviewers may access the document to suggest changes or make comments in the collaborative document, and viewers may access the document to view the collaborative document. These accesses may be collected and recorded as user access data by servers in a cloud-based environment providing the collaborative document, without the use of a third-party tracker.
User access data may include the users that access (e.g., view) the collaborative document and the times at which the users access the collaborative document. The user access history may be created based on the user access data and may be provided for display in a user interface that may also be presenting the collaborative document on a user device. The user access history may be displayed in a consolidated view that shows the users that have permission to access the collaborative document, an indication of whether the users have accessed or have not accessed the collaborative document, and times of user accesses. Also, the consolidated view may include available actions (e.g., sending a message, requesting review, alerting to an update) corresponding to the users in the user access history. In an implementation, users having a certain user type (e.g., editor) may be allowed to view the user access history for users with permissions to access the collaborative document, while users without that certain user type cannot view the user access history. In another implementation, a user that has permission to access a collaborative document or that uses the collaborative document system can manage their privacy settings by disabling the recording of their user access history at an individual collaborative document level and/or at a global collaborative document level. Further, the users in the user access history may be grouped based on group metadata to organize the information presented in the consolidated view. For example, users of certain teams or departments in an organization may be grouped into different groups.

In one implementation, the consolidated view may be displayed as an overlay on a portion of the collaborative document also being presented in the UI. Displaying the user access history in the same UI as the collaborative document may improve processing of the collaborative document system due to fewer transitions between a UI solely displaying the collaborative document and a UI solely displaying the user access history. Further, network traffic may be reduced because repeated requests and/or responses may be eliminated for transitioning between UIs displaying the collaborative document and the user access history individually. The consolidated view of the user access history may also enhance the UI by conveniently providing useful information and actionable options in the UI without transitioning to a separate UI. As a result, user experience with the UI may be improved.

FIG. 1 illustrates an example of a system architecture 100 for implementations of the present disclosure. The system architecture 100 includes a cloud-based environment 110 connected to user devices 120A-120Z via a network 130. A cloud-based environment 110 refers to a collection of physical machines that host applications providing one or more services (e.g., collaborative document access) to multiple user devices 120 via a network 130. The network 130 may be a public network (e.g., the Internet), a private network (e.g., a local area network (LAN) or a wide area network (WAN)), or a combination thereof. Network 130 may include a wireless infrastructure, which may be provided by one or more wireless communications systems, such as a wireless fidelity (WiFi) hotspot connected with the network 130 and/or a wireless carrier system that can be implemented using various data processing equipment, communication towers, etc. Additionally or alternatively, network 130 may include a wired infrastructure (e.g., Ethernet).
The cloud-based environment 110 may include one or more servers 112A-112Z and a data store 114. The data store 114 may be separate from the servers 112A-112Z and communicatively coupled to the servers 112A-112Z, or the data store 114 may be part of one or more of the servers 112A-112Z. The data store 114 may store a collaborative document 116. The collaborative document 116 may be a spreadsheet document, a slideshow document, a word processing document, or any suitable electronic document (e.g., an electronic document including text, tables, videos, images, graphs, slides, charts, software programming code, designs, lists, plans, blueprints, maps, etc.) that can be shared with users. The collaborative document 116 may be created by an author, and the author may share the collaborative document 116 with other users (e.g., collaborators). Sharing the collaborative document 116 may refer to granting permission to the other users to access the collaborative document 116. Sharing the collaborative document 116 may include informing the other users of the collaborative document 116 via a message (e.g., email) including a link to the collaborative document 116. The level of permissions that each user is granted may be based on the user type of each particular user. For example, a user with an editor user type may be able to open the collaborative document 116 and make changes directly to the collaborative document 116, whereas a user with a reviewer user type may make comments to suggest changes in the collaborative document 116.

The servers 112A-112Z may be physical machines (e.g., server machines, desktop computers, etc.) that each include one or more processing devices communicatively coupled to memory devices and input/output (I/O) devices. One or more of the servers 112A-112Z may provide a collaborative document environment 122A-122Z to the user devices 120A-120Z. The server 112A-112Z selected to provide the collaborative document environment 122A-122Z may be chosen based on certain load-balancing techniques, service level agreements, performance indicators, or the like. The collaborative document environment 122A-122Z may enable users using different user devices 120A-120Z to simultaneously access the collaborative document 116 to review, edit, view, and/or propose changes to the collaborative document 116 in a respective user interface 124A-124Z that presents the collaborative document 116. In an implementation, the user interfaces 124A-124Z may be web pages rendered by a web browser and displayed on the user device 120A-120Z in a web browser window. In another implementation, the user interfaces 124A-124Z may be included in a stand-alone application downloaded to the user device 120A-120Z. The user devices 120A-120Z may include one or more processing devices communicatively coupled to memory devices and I/O devices. The user devices 120A-120Z may be desktop computers, laptop computers, tablet computers, mobile phones (e.g., smartphones), or any suitable computing device.

A user that is invited and becomes a collaborator of the collaborative document 116 may request to access the collaborative document 116. As such, the user device 120A associated with the user may request the collaborative document 116 from the cloud-based environment 110. In one implementation, the request may include user access data 117, such as the user that accessed the collaborative document 116 and the time of the access. The user access data 117 may be stored in the data store 114.
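The request-plus-access-data flow can be sketched briefly. The record fields, the in-memory store, and the handler signature below are assumptions made for illustration; the disclosure specifies only that the user access data 117 captures the user and the time of access.

```python
# A sketch of recording user access data 117 when a collaborative document
# is requested. Field names and the in-memory store are illustrative.

import datetime

DATA_STORE = []  # stand-in for data store 114

def handle_document_request(user_id, document_id):
    """Record the access, then return a stubbed document payload."""
    DATA_STORE.append({
        "user": user_id,          # who accessed the document
        "document": document_id,  # which collaborative document
        "accessed_at": datetime.datetime.now(datetime.timezone.utc),
    })
    return {"id": document_id, "content": "..."}  # stubbed document

handle_document_request("ted.house@domain1.com", "doc-116")
print(len(DATA_STORE), "access record(s) stored")
```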
One of the servers 112A-112Z may provide the collaborative document 116 to the requesting user device 120A for display in the user interface 124A. In one implementation, when the user device 120A accesses the collaborative document 116 (e.g., by viewing), the user device 120A may transmit a file read receipt to the cloud-based environment 110. The file read receipt may include the user access data 117, and the user access data 117 may be stored in the data store 114. Further, the collaborative document environment 122A-122Z may provide users with certain privacy settings for controlling whether their access history may be viewed for the collaborative document 116 and/or other collaborative documents. For example, a user may explicitly decline having their access history displayed for the collaborative document 116 and/or any collaborative documents. The user device 120A may transmit one or more settings 119 (e.g., global access history setting, document level access history setting) to the cloud-based environment 110 for storage in the data store 114.

Each of the servers 112A-112Z may host a user access history module (118A-118Z). The user access history modules 118A-118Z may be implemented as computer instructions that are executable by one or more processing devices on each of the servers 112A-112Z. The user access history modules 118A-118Z may receive a request for the user access history for the collaborative document 116. The user access history modules 118A-118Z may create the user access history using the user access data 117 by identifying the users with permission to access the collaborative document 116 and determining when the users accessed the collaborative document 116. The user access history modules 118A-118Z may exclude, based on the settings 119, user access data 117 for users that have not consented to having their access history shown. The user access history may be provided to the requesting user device 120A for display in the user interface 124A presenting the collaborative document 116. In an implementation, the user access history may be displayed over a portion of the collaborative document 116 in the user interface 124A. The user access history may be displayed in a consolidated view that specifies the users with permission to access the collaborative document 116, an indication of when each user accessed the collaborative document 116 or that the user has not accessed the collaborative document 116, and one or more actions corresponding to the user. For example, one action may include sending a message to the user, and the viewer can initiate this action from the consolidated view in the user interface 124A. Also, as discussed below, the users may be grouped into groups in the consolidated view based on group metadata included in the user access history. The user access history may be provided to users with certain user types (e.g., editors) and may be blocked from presentation to users having other user types (e.g., non-editors). Additionally, users may be eligible to see the user access history based on a type of account that is associated with the collaborative document environment 122A. For example, one type of account may include a premium account that is eligible for viewing the user access history. Other types of accounts may include a basic account and a consumer account that are not eligible for viewing the user access history.

FIG. 2 illustrates an example user interface presenting a collaborative document 116, in accordance with one implementation of the disclosure.
Although the collaborative document 116 includes a slideshow document as an example, it should be understood that the user access history techniques of the disclosure may be applied to a spreadsheet document, word-processing document, or any suitable collaborative document. The collaborative document 116 is displayed in user interface 124A of a collaborative document environment 122A on a user device 120A. The collaborative document 116 may have been created by an author/owner who shared the collaborative document 116 with other users. Alternatively, an author may share the collaborative document 116 with a domain or a group of users, and any users in that domain or group may be granted permission to access the collaborative document 116. The users that are granted permission to access the collaborative document 116 may each request and access the collaborative document 116 on respective user devices 120A-120Z. For example, the user, "Ted House," may have accepted an invitation to join as a collaborator of the collaborative document 116. The user may then request to access the collaborative document 116 via an input selection in the collaborative document environment 122A, which is further described with respect to FIG. 3.

FIG. 3 depicts a flow diagram of aspects of a method 300 for storing user access data 117 in the cloud-based environment 110, in accordance with one implementation of the disclosure. Method 300 and each of its individual functions, routines, subroutines, or operations may be performed by one or more processing devices of the computer device executing the method. In certain implementations, method 300 may be performed by a single processing thread. Alternatively, method 300 may be performed by two or more processing threads, each thread executing one or more individual functions, routines, subroutines, or operations of the method. In an illustrative example, the processing threads implementing method 300 may be synchronized (e.g., using semaphores, critical sections, and/or other thread synchronization mechanisms). Alternatively, the processes implementing method 300 may be executed asynchronously with respect to each other.

For simplicity of explanation, the methods of this disclosure are depicted and described as a series of acts. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein. Furthermore, not all illustrated acts may be required to implement the methods in accordance with the disclosed subject matter. In addition, those skilled in the art will understand and appreciate that the methods could alternatively be represented as a series of interrelated states via a state diagram or events. Additionally, it should be appreciated that the methods disclosed in this specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methods to computing devices. The term "article of manufacture," as used herein, is intended to encompass a computer program accessible from any computer-readable device or storage media. In one implementation, method 300 may be performed by one or more user access history modules 118A-118Z executed by one or more processing devices of the servers 112A-112Z in the cloud-based environment 110.

Method 300 may begin at block 302. At block 302, the processing device may receive one or more access requests to the collaborative document 116.
The access requests may be received from one or more user devices 120A-120Z and may each include certain access data 117, such as an identity of the user associated with a respective access request and a timestamp (date/time) at which the respective access request was made. In an implementation, the user devices 120A-120Z that access (e.g., open for viewing) the collaborative document 116 may transmit a file read receipt to the cloud-based environment, and the file read receipt may include the access data 117. At block 304, the processing device may store the access data 117 for each of the one or more access requests or for the file read receipt in the data store 114. Further, the processing device may provide the collaborative document 116 to be presented in the one or more user interfaces 124A-124Z of the requesting user devices 120A-120Z.

FIG. 4 illustrates an example of an informational view 400 displayed when an editor accesses the collaborative document 116, in accordance with one implementation of the disclosure. The informational view 400 may include a promotional message 402 that appears in the user interface 124A with the collaborative document 116 when the editor first accesses the collaborative document 116. Since editors may be allowed to view the user access history for the collaborative document 116, the promotional message 402 may inform the editor that they can see which collaborators have accessed their collaborative document 116 and which collaborators have not yet accessed the collaborative document 116. Further, the promotional message 402 may inform the editor that they can hide their access history if they so prefer. In an example, a link may be provided to the various privacy settings views described below with respect to FIGS. 17A-20. If the user desires to close the informational view 400, the user may select the "OK" button 404, and the user interface 124A may remove the informational view 400 and display just the collaborative document 116. If the user desires to view the user access history for the collaborative document 116, the user may select the "OPEN" button 406, and the user interface 124A may display the user access history, as described herein. In an implementation, the "OPEN" button 406 may be referred to as a visual indicator.

FIG. 5 illustrates another example of an informational view 500 displayed when a non-editor (e.g., viewer, reviewer) accesses the collaborative document 116, in accordance with one implementation of the disclosure. The informational view 500 may include a promotional message 502 that appears when the non-editor first accesses the collaborative document 116. Since non-editors may not be allowed to view the user access history for the collaborative document 116, the promotional message 502 may inform the non-editor that certain users can see the user access history of the collaborators for the collaborative document 116. Further, the promotional message 502 may inform the non-editor that they can hide their access history if they so prefer. In an example, a link may be provided to the various privacy settings views described below with respect to FIGS. 17A-20. If the user desires to close the informational view 500, the user may select the "OK" button 504, and the user interface 124A may remove the informational view 500 and display just the collaborative document 116.

FIG. 6 illustrates an example of user access history 600 displayed in a user interface 124A presenting the collaborative document 116, in accordance with one implementation of the disclosure.
As depicted, the user access history 600 may be displayed in a consolidated view 602 in the user interface 124A. In one example, the consolidated view 602 may be overlaid over a portion of the user interface 124A displaying the collaborative document 116. The user interface 124A may use certain techniques, such as blurring or masking, that can be applied to the underlying collaborative document 116 to draw attention to the consolidated view 602, which is in focus. As should be noted, the collaborative document environment 122A may not transition to a different UI to display the user access history 600, which may enhance performance of the collaborative document environment 122A and network 130. Further, displaying the user access history 600 in the consolidated view 602 together with the collaborative document 116 in the same user interface 124A may enhance the user experience with the collaborative document environment 122A by conveniently placing desired information and providing a more seamless experience.

The consolidated view 602 may include headers for the collaborators (604) granted permission to access the collaborative document 116, when the collaborative document 116 was last viewed (606), and an action (e.g., send message) (608) corresponding to the users. The user may sort (ascending/descending) the user access history by any of the headers. For example, the user access history 600 is sorted in descending order by the header associated with when the collaborative document 116 was last viewed (606). Additionally, there may be different domains that each include users or groups of users with permissions to access the collaborative document 116. As such, the user may select the domain for which the user access history 600 is displayed. In the depicted example, the user access history 600 is displayed for collaborators in the "domain1.com" domain. In another example, the user access history 600 may be displayed for any user, either within a domain or outside a domain, that has permission to access the collaborative document 116. The user access history 600 may display access history for users that have not explicitly declined to allow their access history for the collaborative document 116 or for any collaborative document to be shown. If a user chooses to hide their user access history at the document level for the collaborative document 116 or at the global document level for any collaborative documents, then the user access history 600 may exclude user access history pertaining to that user for the collaborative document 116.

Each line item in the user access history 600 may include a user icon 610 selected by the particular user or a default icon, the identity 612 (e.g., name) of the user, an identifier 614 indicating whether the listed name (e.g., "You") is associated with the user viewing the user access history 600, an indication 616 of when the user accessed the collaborative document 116, and a selectable option 618 to enable performing an action for the corresponding user. The selectable option 618 may be any suitable UI element that enables selection (e.g., checkbox, radio button, slider, drop down list, etc.). In one implementation, the indication 616 may represent when the user most recently accessed the collaborative document 116. In an implementation, the indication 616 may include a generalized time period of when the user accessed the collaborative document 116.
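One way to produce such a generalized indication is to bucket the exact timestamp before display. The bucket boundaries and labels in this sketch are assumptions chosen to resemble the examples discussed next; the disclosure does not prescribe them.

```python
# A sketch of generalizing an exact access time into a coarse label such as
# "Within the hour" or "Nov. 3". Bucket boundaries are illustrative.

import datetime

def generalize(accessed_at, now):
    if accessed_at is None:
        return "Never"
    delta = now - accessed_at
    if delta <= datetime.timedelta(hours=1):
        return "Within the hour"
    if accessed_at.date() == now.date():
        # "Today" is an assumed catch-all; the figure shows "This morning".
        return "This morning" if accessed_at.hour < 12 else "Today"
    if delta < datetime.timedelta(days=7):
        return accessed_at.strftime("%A")                      # e.g., "Monday"
    return f"{accessed_at.strftime('%b')}. {accessed_at.day}"  # e.g., "Nov. 3"

now = datetime.datetime(2021, 11, 10, 14, 0, tzinfo=datetime.timezone.utc)
print(generalize(datetime.datetime(2021, 11, 10, 13, 30, tzinfo=datetime.timezone.utc), now))
print(generalize(datetime.datetime(2021, 11, 3, 9, 0, tzinfo=datetime.timezone.utc), now))
print(generalize(None, now))
```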
A generalized time period may provide comfort to certain users who dislike the idea of the exact time at which they accessed the collaborative document 116 being visible to other collaborators. For example, the indications 616 with generalized time periods depicted in the user access history 600 include "Within the hour," "This morning," "Monday," and "Nov. 3." Also, the indication 616 may indicate if users have not yet accessed the collaborative document 116 ("Never"). In certain instances, the user access history 600 may provide insights to a user to determine who has viewed the collaborative document 116 and when, and who has not viewed the collaborative document 116. The user may use the selectable options 618 to take action (described in more detail below with respect to FIGS. 11A-13) by sending a message, for example, to a user, such as "Bill Webber", who has not yet accessed the collaborative document 116. Other actions that may be performed using the consolidated view 602 may include, for example, requesting feedback, alerting selected users of a change to the collaborative document 116, etc. The user may update their privacy settings by clicking a "PRIVACY SETTINGS" button 620 provided in the consolidated view 602.

FIGS. 7A-7D illustrate examples of using a visual indicator 700 representing user access history on the user interface 124A, in accordance with one implementation of the disclosure. The visual indicator 700 may be a graphic (e.g., a graphical representation of an eye), text (e.g., "Your access history is shown" or "Your access history is not shown"), image, or the like. The visual indicator 700 may be located anywhere on the canvas of the user interface 124A and is not limited to the particular depicted location. In other implementations, a dropdown menu may provide a link to display the user access history 600 or privacy settings associated with the user access history 600.

FIG. 7A illustrates the visual indicator 700 as a graphical representation of an eye, which may indicate that the access history for the user associated with the collaborative document 116 or with any collaborative document is enabled. Thus, other collaborators may be able to see when the user accesses the collaborative document 116. In one implementation, selecting the visual indicator 700 may open the consolidated view 602 including the user access history 600 in the user interface 124A displaying the collaborative document 116. In another implementation, selecting or hovering over the visual indicator 700 may cause a menu to appear that provides a link 702 to the user access history 600, as shown in FIG. 7B. The menu may be displayed in the user interface 124A presenting the collaborative document 116. If the user selects the link, the user access history 600 may be displayed. In an implementation, if the user selects or hovers over the visual indicator 700, a menu may appear that provides a link 702 to user access privacy settings. If the user selects the link 702, a view including privacy settings may be displayed within the user interface 124A presenting the collaborative document 116. In yet another implementation, depicted in FIG. 7C, selecting or hovering over the visual indicator 700 may cause a menu to appear that provides a selectable option 704 to change the setting 119 associated with access history for the collaborative document 116 or for any collaborative document.
When the user changes the selectable option 704 to disable showing user access history for the collaborative document, as shown in FIG. 7D, the visual indicator 700 may change (e.g., the eye has a strikethrough) to indicate that access history viewing is disabled.

FIG. 8 depicts a flow diagram of aspects of a method 800 for a server providing the user access history 600 for the collaborative document 116 for display in the user interface 124A presenting the collaborative document 116, in accordance with one implementation of the disclosure. Although the user interface 124A is used for discussion of method 800, it should be understood that any other user interface 124B-124Z may be used, instead of or in addition to the user interface 124A, to display the user access history 600 and the collaborative document 116. For example, multiple user interfaces 124A-124Z may simultaneously display the user access history 600 and the collaborative document 116. Method 800 may be performed in the same or a similar manner as described above with regard to method 300. Also, method 800 may be performed by processing devices of one or more of the servers 112A-112Z executing the user access history modules 118A-118Z in the cloud-based environment 110.

Method 800 may begin at block 802. At block 802, the processing device may receive a request for the collaborative document 116. The request may be received from a user device 120A executing the collaborative document environment 122A. The collaborative document 116 may be open in one or more of the collaborative document environments 122B-122Z on other user devices 120B-120Z when the request is received. In one implementation, the user access data 117 included in the access request may be stored by the processing device in the data store 114. At block 804, the processing device may provide the collaborative document 116 for presentation to the user in the user interface 124A. For example, the processing device may retrieve the collaborative document 116 from the data store 114 and transmit the collaborative document 116 to the collaborative document environment 122A executing on the user device 120A. The collaborative document environment 122A may present the collaborative document 116 in the user interface 124A. In an implementation, the user device 120A may transmit a file read receipt to the cloud-based environment 110 when the user accesses the collaborative document 116, such that the user access data 117 may be stored in the data store 114.

At block 806, the processing device may receive a request for the user access history 600 for the collaborative document 116 being presented in the user interface 124A. The user may select a visual indicator of the user access history on the canvas of the user interface 124A (e.g., visual indicator 700), from an informational view 400 (e.g., "OPEN" button 406), or from a drop-down menu (e.g., a link). At block 808, the processing device may create the user access history 600 for the collaborative document 116 based on the user access data 117, including accesses of the collaborative document 116 by one or more of the users. The processing device may retrieve the user access data 117 from the data store 114 and identify user accesses for the collaborative document 116. Also, the processing device may retrieve the settings 119 and determine which users have explicitly declined to allow their user access history to be shown. The processing device may exclude the user access data 117 associated with users that have explicitly declined from the user access history 600.
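Block 808's create-and-exclude step might be condensed as follows. The record and settings shapes are assumptions; what the sketch preserves from the description is the join against permitted users and the exclusion of anyone who has explicitly declined.

```python
# A sketch of creating user access history 600: attach each permitted
# user's most recent access and skip users whose settings 119 decline
# display. Data shapes are illustrative assumptions.

import datetime

EPOCH = datetime.datetime.min.replace(tzinfo=datetime.timezone.utc)

def create_access_history(permitted_users, access_data, settings):
    history = []
    for user in permitted_users:
        prefs = settings.get(user, {})
        if not (prefs.get("global_history", True)
                and prefs.get("document_history", True)):
            continue  # user explicitly declined; exclude their history
        times = [r["accessed_at"] for r in access_data if r["user"] == user]
        history.append({"user": user, "last_access": max(times, default=None)})
    # Most recent first; users who never accessed ("Never") sort last.
    history.sort(key=lambda h: h["last_access"] or EPOCH, reverse=True)
    return history

records = [{"user": "ted", "accessed_at":
            datetime.datetime(2021, 11, 8, 9, 0, tzinfo=datetime.timezone.utc)}]
settings = {"bill": {"document_history": False}}
print(create_access_history(["ted", "jane", "bill"], records, settings))
# -> ted (with a timestamp) first, jane ("Never") next; bill is excluded.
```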
The created user access history 600 may include the identity of the users with permission to access the collaborative document 116 and an indicator of when the users have accessed the collaborative document 116 or whether the users have not yet accessed the collaborative document 116. In an implementation, the user access history 600 may also include group metadata that specifies an identity of a group including users with permission to access the collaborative document 116. At block 810, the processing device may provide the user access history 600 for the collaborative document 116 for display in the user interface 124A presenting the collaborative document 116. The user access history 600 may be displayed in the consolidated view 602 as depicted in FIG. 6. There may be selectable options to perform one or more actions corresponding to each of the users in the user access history 600. For example, if a certain user has not accessed the collaborative document 116 since a substantial change was made, then the viewing user may send a message to that user to encourage them to access (e.g., review, edit, view) the collaborative document 116.

FIG. 9 depicts a flow diagram of aspects of a method 900 for a user device 120A updating the user interface 124A to present the user access history 600 together with the collaborative document 116, in accordance with one implementation of the disclosure. Although the user interface 124A is used for discussion of method 900, it should be understood that any other user interface 124B-124Z may be used, instead of or in addition to the user interface 124A, to display the user access history 600 and the collaborative document 116. For example, multiple user interfaces 124A-124Z may simultaneously display the user access history 600 and the collaborative document 116. Method 900 may be performed in the same or a similar manner as described above with regard to method 300. Also, method 900 may be performed by processing devices of one or more of the user devices 120A-120Z executing the collaborative document environments 122A-122Z. For purposes of clarity, the user device 120A is referenced throughout the discussion of method 900.

Method 900 may begin at block 902. At block 902, the processing device may present the user interface 124A displaying the collaborative document 116. The user interface 124A may include a visual indicator representing the user access history 600. In some examples, the visual indicator may be a graphic (e.g., the visual indicator 700 is an eye in FIGS. 7A-7D), image, text, button (e.g., "OPEN" button 406 in informational view 400 of FIG. 4 or any other button in the user interface 124A), link (e.g., in a drop-down menu or on the canvas of the user interface 124A), or any other suitable visual indicator that represents the user access history 600. At block 904, the processing device may detect a selection of the visual indicator representing the user access history 600. In an implementation, the user may use a cursor to select the visual indicator, or when the user device 120A implements a touchscreen, the user may tap the visual indicator on the touchscreen. At block 906, the processing device may request the user access history 600 for the collaborative document 116 from the server 112A. For example, responsive to the selection of the visual indicator, the processing device may send a request for the user access history 600 to the server 112A (e.g., any of servers 112A-112Z in the cloud-based environment 110).
At block 908, the processing device may, in response to receiving the user access history 600 from the server, update the user interface 124A to present the user access history 600 together with the collaborative document 116, as depicted in FIG. 6. As shown in the consolidated view 602, the user may perform one or more actions corresponding to the users in the user access history 600. As such, FIGS. 10A-10B illustrate examples of performing an action corresponding to the users in the user access history 600, in accordance with one implementation of the disclosure. Although FIGS. 10A-10B depict an example of the action being sending a message, it should be understood that other actions may be performed similarly to as shown.

In particular, FIG. 10A depicts a user selecting several of the selectable options 618 in the consolidated view 602 displaying the user access history 600. The user access history 600 remains displayed together with the collaborative document 116 in the user interface 124A. The user has selected the selectable options 618 for "Joe Smith," "Jane Doe," and "Bill Webber." Selecting one or more of the selectable options may cause a bar 1000 to appear in the header of the consolidated view 602. The bar may provide an indication 1002 of how many items (users) are selected via the selectable options 618, a "CANCEL" button 1004 to cancel the selected action, and a "SEND MESSAGE" button 1006. When the user is ready to send a message to the desired users, the user may select (e.g., via selection circle 1008) the "SEND MESSAGE" button 1006. The selection may be made via any suitable input apparatus (e.g., mouse, touchscreen, voice command via microphone).

Selecting the "SEND MESSAGE" button 1006 may cause a message view 1010 to appear, as shown in FIG. 10B. Although the message view 1010 is shown for purposes of explanation, it should be understood that any suitable action configuration view may be shown together with the collaborative document 116 in the user interface 124A. The message view 1010 may be displayed in the user interface 124A together with the collaborative document 116. The selected users ("Joe Smith," "Jane Doe," and "Bill Webber") are displayed in a recipient portion 1012. In an implementation, the user may add additional recipients of the message as desired. A subject portion 1014 may include the subject of the message and may be modifiable by the user. In an implementation, the subject may default to the title of the collaborative document 116. A message portion 1016 may enable a user to enter the text of a message they desire to send to the selected users. In an implementation, the user may be able to upload attachments to the message, set deadlines (e.g., dates when feedback is desired, dates when access is desired, etc.), or the like. When the user is satisfied with the message, the user may click a "SEND MESSAGE" button 1018, which may cause the message to be sent to one or more servers 112A-112Z of the cloud-based environment 110, as described below.

FIG. 11 depicts a flow diagram of aspects of a method 1100 for a server 112A (e.g., any of servers 112A-112Z in the cloud-based environment 110) receiving a message request from a user device 120A displaying the collaborative document 116 and transmitting a message, in accordance with one implementation of the disclosure. Method 1100 may be performed in the same or a similar manner as described above with regard to method 300. Also, method 1100 may be performed by processing devices of one or more of the servers 112A-112Z executing the user access history modules 118A-118Z in the cloud-based environment 110.
Method 1100 may begin at block 1102. At block 1102, the processing device may receive a message request including a message to be sent to one or more users displayed in the user access history 600 in the user interface 124A. As described above, the user viewing the user access history 600 may select one or more selectable options 618 corresponding to the users to which the user desires to send a message. The user may complete the message to the selected users in the message view 1010 and may send the message request from the user device 120A by clicking the "SEND MESSAGE" button 1018. At block 1104, the processing device may transmit the message to one or more user devices (e.g., 120B-120Z) associated with the one or more users designated as recipients in the message.

FIG. 12 depicts a flow diagram of aspects of a method 1200 for a user device 120A performing an action based on selectable options 618 presented in the consolidated view 602 including the user access history 600, in accordance with one implementation of the disclosure. Although the user interface 124A is used for discussion of method 1200, it should be understood that any other user interface 124B-124Z may be used, instead of or in addition to the user interface 124A, to display the user access history 600 and the collaborative document 116. For example, multiple user interfaces 124A-124Z may simultaneously display the user access history 600 and the collaborative document 116. Method 1200 may be performed in the same or a similar manner as described above with regard to method 300. Also, method 1200 may be performed by processing devices of one or more of the user devices 120A-120Z executing the collaborative document environments 122A-122Z. For purposes of clarity, the user device 120A is referenced throughout the discussion of method 1200.

Method 1200 may begin at block 1202. At block 1202, the processing device may present the user access history 600 in the consolidated view 602. The user access history 600 may specify one or more users that have access to the collaborative document 116 and an indication of when the one or more users most recently accessed the collaborative document 116. Also, the indication may be a generalized time period, as discussed above. At block 1204, the processing device may present, within the consolidated view 602, one or more selectable options 618 to perform an action for each of the one or more users. At block 1206, the processing device may receive a selection of one or more selectable options 618 associated with the one or more users. For example, the user may select one or more selectable options 618 in the consolidated view 602, and the bar 1000 may appear in the consolidated view 602. The user may select a perform action button (e.g., "SEND MESSAGE" button 1006). At block 1208, the processing device may display an action configuration view (e.g., message view 1010) with the collaborative document 116 in the user interface 124A in response to the user selecting a perform action button (e.g., "SEND MESSAGE" button 1006). In some implementations, selecting the perform action button (e.g., "SEND MESSAGE" button 1006) may cause the action to be performed without displaying an action configuration view. The user may customize the action (e.g., message) in the action configuration view when displayed. From the action configuration view, the user may select the perform action button (e.g., "SEND MESSAGE" button 1018).
At block 1210, the processing device may perform the action in response to the user selecting either the perform action button in the consolidated view 602 or the action configuration view. In some examples, the action may include sending the message, requesting feedback, setting a deadline, uploading an attachment to aid in reviewing the collaborative document 116, alerting about a change in the collaborative document 116, and the like.

FIGS. 13A-13E illustrate examples of using groups of users in the user access history 600, in accordance with one implementation of the disclosure. In an implementation, the user may organize users with access to the collaborative document 116 into groups. Additionally or alternatively, an administrator may organize the users into groups (e.g., based on departments or teams within an organization). The groups may be stored in the data store 114 in the cloud-based environment 110. Group metadata may include membership information of the users that are part of a certain group. The group metadata may be included in the user access history 600 such that group information may be displayed.

FIG. 13A illustrates one such group 1300 in the user access history 600. As depicted, the group 1300 includes an identity of "sales-group" and is collapsed to display just the identity of the group 1300 without showing the individual members of the group 1300. When the list of information in the user access history 600 is sorted by indication 616 of when the users last viewed the collaborative document 116, the indication 616 for the group 1300 is represented by the most recent access of a user within the group. For example, the indication 616 for the group is depicted as "This morning." Since "This morning" is less recent than "Within the hour" but more recent than "Monday," the group is placed between the users ("Ted House" and "John Doe") associated with those indications 616. When the user selects to expand the group 1300, the user device 120A may fetch the individual user members from one or more servers 112A-112Z of the cloud-based environment 110. In some instances, there may be a rule that specifies a threshold number of users that can be shown for the group 1300. If the number of users returned from the servers exceeds the threshold, a message may be displayed underneath the identity of the group 1300 in the user access history 600 that the group has too many members to display. Further, if the request fails, a generic error message may be displayed underneath the identity of the group 1300 in the user access history 600 indicating, for example, that the group members cannot be displayed at that time. It should be noted that these messages may be displayed at any suitable location in the user interface 124A.

If the users of the group 1300 load successfully in the user access history 600, the users may be displayed at a location indicative of being part of the group 1300. For example, in FIG. 13B, users ("Brad Smithy" and "Sam Tarley") are displayed below the group 1300 and indented to indicate that the users are part of the group 1300. As previously mentioned, the list in the user access history 600 is sorted by indication 616 of when the users most recently accessed the collaborative document 116. As such, the user ("Brad Smithy") who accessed the collaborative document 116 "This morning" is displayed above the user ("Sam Tarley") who accessed the collaborative document 116 "Yesterday". Also, in FIG. 13A the indication 616 ("This morning") for the group corresponds to the indication 616 for "Brad Smithy" since his access indication 616 is the most recent.
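The group roll-up just described, in which a collapsed group sorts by its most recently active member, reduces to a small helper. The data shapes below are assumed for illustration.

```python
# A sketch of rolling a group's indication 616 up to its most recently
# active member so the collapsed group sorts among individual users.

def group_indication(member_accesses):
    """Return the most recent access among group members, or None."""
    times = [t for t in member_accesses.values() if t is not None]
    return max(times, default=None)

# ISO-8601 strings compare chronologically, so max() suffices here.
members = {"Brad Smithy": "2021-11-10T09:00", "Sam Tarley": "2021-11-09T16:00"}
print(group_indication(members))  # Brad's access; group shows "This morning"
```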
In some implementations, external group members may not be shown in the expanded view of the group 1300. As depicted, a message 1304 may be displayed for the group indicating that group members outside of the domain being shown (e.g., "domain1.com") are not shown. Further, each user in the group 1300 may also include the selectable options 618 to perform an action corresponding to the users in the group 1300.

In an implementation, the user viewing the user access history 600 may not have access to view the membership details of the group 1300. For example, FIG. 14 illustrates that the user access history 600 includes the group 1300 ("sales-group") but the user viewing the user access history 600 does not have access to view membership details related to the group 1300. As such, there is no indication 616 for when the group 1300 last viewed the collaborative document 116, and the group 1300 is placed at the bottom of the list in the user access history 600. When the user selects to expand the group 1300, a message 1500 is displayed indicating that the user does not have access to the membership details of the group 1300, as illustrated in FIG. 15.

FIGS. 16A-16E illustrate examples of using a user access privacy setting view 1600 to control settings associated with the user access history 600, in accordance with one implementation of the disclosure. The user access privacy setting view 1600 may be displayed in the user interface 124A together with the collaborative document 116. To open the user access privacy setting view 1600, the user may select visual indicators (e.g., link, button, graphic, image, etc.) representing the user access privacy setting view 1600 from any suitable location (e.g., any view described herein, a drop-down menu, the canvas of the user interface 124A, etc.) in the user interface 124A displaying the collaborative document 116.

FIG. 16A illustrates an example of a user access privacy setting view 1600 displayed together with the collaborative document 116. This particular example of the user access privacy setting view 1600 may be displayed to users that have a certain user type (e.g., editor) that allows the viewing of the user access history 600. The user access privacy setting view 1600 may include two selectable options: a first selectable option 1602 for a global account setting and a second selectable option 1604 for a document setting. The first selectable option 1602 for the global account setting may control whether the user access history is shown for any document that the user has permission to access (global access history setting). The second selectable option 1604 for the document setting may control the user access history for the specific collaborative document 116 currently accessed (document level access history setting). The global access history setting and the document level access history setting may be considered settings 119 and may be referred to as privacy settings herein. If the user modifies the first selectable option 1602 and/or the second selectable option 1604, the user may click a "DONE" button 1605 to send the settings 119 to the cloud-based environment 110 to be stored in the data store 114. As depicted, the user has enabled both the first selectable option 1602 and the second selectable option 1604, and thus the user has allowed their user access history to be viewed for the particular document and for any document that the user has permission to access.
If the user disables the first selectable option 1602 to explicitly decline their user access history from being shown for any collaborative document, then the second selectable option 1604 may automatically be disabled for the collaborative document 116 as well. In some instances, the second selectable option 1604 may become hidden from view when the first selectable option 1602 is disabled.

FIG. 16B illustrates another example of the user access privacy setting view 1600 displayed together with the collaborative document 116. This particular example of the user access privacy setting view 1600 may be displayed to users that have a certain user type (e.g., non-editor) and are prevented from viewing the user access history 600. For example, a message 1606 may be displayed in the user access privacy setting view 1600 indicating that editors or owners of the collaborative document 116 can see who has accessed their collaborative documents, and that the user may manage how their access history is shown to those users using the first selectable option 1602 and the second selectable option 1604.

FIG. 16C illustrates another example of the user access privacy setting view 1600 where an administrator has disabled the collection of user access data 117 for the collaborative document 116, thereby preventing the user from modifying their settings 119. In such an instance, the user access privacy setting view 1600 may display a message 1608 indicating that the administrator does not currently allow their access history to be shown. While in this state, the user may not edit their user access privacy settings, in this implementation. For example, the first selectable option 1602 and the second selectable option 1604 may be hidden so the user cannot modify their settings.

FIG. 16D illustrates another example of the user access privacy setting view 1600 where an administrator has disabled the collection of user access data 117 for the collaborative document 116, but the user is still allowed to modify their settings 119. In such an instance, the user access privacy setting view 1600 may display a message 1610 indicating that the administrator does not currently allow their access history to be shown and may display another message 1612 indicating that the user may manage how their access history should be shown if the administrator changes the setting to collect user access data 117 again. As depicted, the first selectable option 1602 and the second selectable option 1604 remain visible in the user access privacy setting view 1600 so the user can modify their settings 119.

FIGS. 17A-17E illustrate examples of using a general settings view 1700 to control settings associated with the user access history 600, in accordance with one implementation of the disclosure. The general settings view 1700 may be displayed in the user interface 124A together with a home page 1702 displaying the collaborative documents that the user has permission to access. To open the general settings view 1700, the user may select visual indicators (e.g., link, button, graphic, image, etc.) representing the general settings view from any suitable location (e.g., any view described herein, a drop-down menu, the canvas of the user interface 124A, etc.) in the user interface 124A displaying the home page 1702. FIG. 17A illustrates the general settings view 1700 including a selectable option 1704 for a global account setting (e.g., setting 119).
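Before turning to the general settings view 1700, the privacy rules of FIGS. 16A-16D can be modeled as two booleans plus an administrator override. The following Python sketch is a simplified illustration, not the disclosed implementation.

def disable_global(settings: dict[str, bool]) -> None:
    # Disabling the first selectable option 1602 also disables the
    # second selectable option 1604 for the document.
    settings["global"] = False
    settings["document"] = False

def history_visible(admin_collects: bool, settings: dict[str, bool]) -> bool:
    # A user's access history may be shown only when the administrator
    # allows collection of user access data 117 and the user has left
    # both privacy settings 119 enabled.
    return admin_collects and settings["global"] and settings["document"]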
The selectable option 1704 for the global account setting may control whether the user access history is shown for any document that the user has permission to access (global access history setting). In an implementation, the general settings view 1700 may include a message 1705 indicating that the user can hide their user access history for specific documents by accessing the user access privacy setting view 1600. If the user modifies the selectable option 1704, the user may click a "DONE" button 1706 to send the global account setting to the cloud-based environment 110 to be stored in the database 114.

FIGS. 17B-17C illustrate various examples for modifying the selectable option 1704 in the general settings view 1700. It should be understood that the techniques described below for FIGS. 17B-17C may be applicable to any similar selectable option described herein. In one example, a user may disable the user access history using the selectable option 1704 by sliding the selectable option 1704 from right to left. FIG. 17B illustrates the selectable option 1704 at a middle toggle state until the setting 119 is updated. The selectable option 1704 may remain in this middle toggle state until the setting 119 is accepted by one or more servers 112A-112Z in the cloud-based environment 110 and stored in the database 114. The middle toggle state may provide an indication to the user that the setting 119 is in the process of being updated. While the selectable option 1704 is in the middle toggle state, a message 1708 may be displayed indicating that the global access history setting is being updated.

FIG. 17C illustrates an example of the general settings view 1700 when the global access history setting is successfully updated. As depicted, the selectable option 1704 is disabled and a message 1710 is displayed indicating that the global access history setting is updated. Once the global access history setting is updated, other collaborators cannot see the user access history for any collaborative documents associated with the user.

FIG. 17D illustrates an example of the general settings view 1700 when the global access history setting cannot be updated. As depicted, the selectable option 1704 has returned to enabled and a message 1712 is displayed indicating that the global access history setting could not be updated. While the global access history setting is enabled, the certain users allowed to view the user access history 600 can see the user access history associated with the user for any collaborative document the user has permission to access.

FIG. 17E illustrates an example of the general settings view 1700 when the administrator has disabled the collection of user access data 117. In such a scenario, a message 1714 may be displayed in the general settings view 1700 indicating that the administrator does not currently allow their access history to be shown. The selectable option 1704 may be hidden such that the user cannot modify their global access setting. In another implementation, the general settings view 1700 may allow the user to update the global access setting by modifying a visible selectable option 1704, and a message may be displayed indicating that the user can set the global access setting to how they desire their access history to be shown when the administrator changes settings.
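The three-state toggle of FIGS. 17B-17D behaves like a pending update that settles only after the server answers. A minimal sketch, assuming the server reply is a boolean, might look as follows; all names are illustrative.

import enum
from typing import Callable

class ToggleState(enum.Enum):
    ENABLED = "enabled"
    UPDATING = "updating"   # the middle toggle state of FIG. 17B
    DISABLED = "disabled"

def disable_global_setting(server_accepts_change: Callable[[], bool]) -> ToggleState:
    # The control first enters the middle state while servers 112A-112Z
    # are consulted; message 1708 is shown during this period.
    state = ToggleState.UPDATING
    if server_accepts_change():
        state = ToggleState.DISABLED   # FIG. 17C: setting 119 stored in database 114
    else:
        state = ToggleState.ENABLED    # FIG. 17D: revert and display message 1712
    return state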
FIG. 18 depicts a flow diagram of aspects of a method 1800 for a server (e.g., one or more servers 112A-112Z of the cloud-based environment 110) to filter out users from the user access history 600 based on the privacy settings 119, in accordance with one implementation of the disclosure. Method 1800 may be performed in the same or a similar manner as described above with regard to method 300. Also, method 1800 may be performed by processing devices of one or more of the servers 112A-112Z executing the user access history modules 118A-118Z in the cloud-based environment 110.

Method 1800 may begin at block 1802. At block 1802, the processing device may receive a request to disable showing the user access history associated with a first user (e.g., user A) of the user device 120A for the collaborative document 116. The request may include a document level access history setting (setting 119) indicating that user A explicitly declines to allow their user access history for the collaborative document 116 to be shown. The server (112A-112Z) may store the setting 119 in the data store 114. At block 1804, the processing device may receive, from a second user (e.g., user B) of user device 120B, a request for the user access history 600 for the collaborative document 116 being presented in the user interface 124B.

At block 1806, the processing device may create the user access history 600 for the collaborative document 116 based on the user access data 117 and the document level access history setting 119. For example, the processing device may exclude the user access history for user A based on the document level access history setting 119 indicating that user A explicitly declined to allow their user access history to be shown for the collaborative document 116. The processing device may exclude any other user access data 117 from the user access history 600 for other users that explicitly declined, based on the document level access history setting 119, to allow their user access history to be shown for the collaborative document 116. The processing device may include only user access history 600 for users that allow their user access history to be shown for the collaborative document 116 (based on the document level access history setting 119).

In an implementation, the processing device may use the global access history setting 119 instead of or in addition to the document level access history setting 119 when creating the user access history 600. For example, if a user set the global access history setting 119 to disabled, then the processing device may exclude the user access history for that user for the collaborative document 116. Additionally, if the user access history 600 is requested for any other collaborative document, the user that set the global access history setting 119 to disabled may also be excluded from the user access history 600 for those collaborative documents. At block 1808, the processing device may provide the user access history 600 for the collaborative document 116 for display in the user interface 124B presenting the collaborative document 116. It should be noted that the user access history 600 does not include access history for any users that set the document level access history setting 119 to disabled or the global access history setting 119 to disabled.
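Blocks 1806-1808 amount to filtering the user access data 117 through the privacy settings 119. The following Python sketch shows one plausible shape of that filter; the record and setting structures are assumptions made for illustration.

def build_user_access_history(access_records: list[dict],
                              privacy_settings: dict[str, dict]) -> list[dict]:
    # Include a record only if the user has left both the global and the
    # document level access history settings 119 enabled.
    visible = []
    for record in access_records:
        setting = privacy_settings.get(record["user"],
                                       {"global": True, "document": True})
        if setting["global"] and setting["document"]:
            visible.append(record)
    return visible

With this filter, a user such as user A who disabled the document level setting simply never appears in the history returned to user B.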
FIG. 19 depicts a flow diagram of aspects of a method 1900 for a user device 120A to allow a user to disable user access history viewing for the collaborative document 116, in accordance with one implementation of the disclosure. Method 1900 may be performed in the same or a similar manner as described above with regard to method 300. Also, method 1900 may be performed by processing devices of one or more of the user devices 120A-120Z executing the collaborative document environments 122A-122Z. For purposes of clarity, the user device 120A is referenced throughout the discussion of method 1900.

Method 1900 may begin at block 1902. At block 1902, the processing device may receive a selection to view user access history settings. The selection may be received via a visual indicator (e.g., button, link, graphic, image, etc.) located at any suitable location (e.g., on the canvas of the UI 124A, in a drop-down menu of the UI 124A, on any view presented in the UI 124A, etc.) of the UI 124A presenting the collaborative document 116. At block 1904, the processing device may present the user access privacy setting view 1600 including at least a selectable option (e.g., the second selectable option 1604) to enable or disable user access history viewing associated with the user for the collaborative document 116 (document level access history setting 119).

At block 1906, the processing device may receive a selection, via the second selectable option 1604, to disable the user access history viewing associated with the user for the collaborative document 116. The user device 120A may transmit a request to the server (one or more of servers 112A-112Z) to set the document level access history setting 119 to disabled. If the server accepts the change to the document level access history setting 119, the document level access history setting 119 may be saved in the database 114. As a result, the server may exclude the access history for the user for the collaborative document 116 based on the document level access history setting 119. At block 1908, the processing device may, in response to receiving from the server the user access history 600 that excludes the user access history associated with the user for the collaborative document 116, display an updated user interface 124A to present the user access history 600 together with the collaborative document 116.

FIG. 20 depicts a flow diagram of aspects of a method 2000 for a user device 120A to allow a user to disable user access history viewing for any collaborative document associated with the user, in accordance with one implementation of the disclosure. Method 2000 may be performed in the same or a similar manner as described above with regard to method 300. Also, method 2000 may be performed by processing devices of one or more of the user devices 120A-120Z executing the collaborative document environments 122A-122Z. For purposes of clarity, the user device 120A is referenced throughout the discussion of method 2000.

Method 2000 may begin at block 2002. At block 2002, the processing device may receive a selection to view user access history settings. The selection may be received via a visual indicator (e.g., button, link, graphic, image, etc.) located at any suitable location (e.g., on the canvas of the UI 124A, in a drop-down menu of the UI 124A, on any view presented in the UI 124A, etc.) of the UI 124A presenting the collaborative document 116. At block 2004, the processing device may present the user access privacy setting view 1600 including at least a selectable option (e.g., the first selectable option 1602) to enable or disable the user access history for any collaborative document that the user has permission to access (global access history setting).
At block 2006, the processing device may receive a selection, via the first selectable option 1602, to disable the user access history viewing associated with the user for each collaborative document to which the user has access. The user device 120A may transmit a request to the server (one or more of servers 112A-112Z) to set the global access history setting 119 to disabled. If the server accepts the change to the global access history setting 119, the global access history setting 119 may be saved in the database 114. As a result, the server may exclude the access history for the user for the collaborative document 116 based on the global access history setting 119. Likewise, if the user access history is requested for any other collaborative document that the user has permission to access, the server may exclude the access history for the user from those documents as well based on the disabled global access history setting 119. At block 2008, the processing device may, in response to receiving from the server the user access history 600 that excludes at least the user access history associated with the user for the collaborative document 116, display an updated user interface 124A to present the user access history 600 together with the collaborative document 116.

FIG. 21 illustrates an example of a view 2100 indicating that non-editors cannot view the user access history 600 for the collaborative document 116, in accordance with one implementation of the disclosure. The view 2100 may be displayed in the UI 124A presenting the collaborative document 116. The view 2100 may include a message 2102 indicating that the user cannot view the user access history 600 unless the user has a user type of editor. The view 2100 may be displayed as a result of the user selecting a visual indicator representing the user access history 600 in the UI 124A. In an implementation, the view 2100 may also include a link 2104 that enables the user to request the certain user type (e.g., editor) to be able to view the user access history 600.

FIG. 22 depicts a flow diagram of aspects of a method 2200 for a server determining whether to provide the user access history 600 to a user, in accordance with one implementation of the disclosure. Method 2200 may be performed in the same or a similar manner as described above with regard to method 300. Also, method 2200 may be performed by processing devices of one or more of the servers 112A-112Z executing the user access history modules 118A-118Z in the cloud-based environment 110.

Method 2200 may begin at block 2202. At block 2202, the processing device may receive a request for the user access history 600. The request may be received from the user device 120A. At block 2204, the processing device may determine whether the user has a certain user type (e.g., editor with editing permissions). If the user has the certain user type, the processing device may provide (block 2206) the user access history 600. If the user does not have the certain user type (e.g., is a non-editor), the processing device may block (block 2208) the user access history 600 from being sent.
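Method 2200 reduces to a permission check ahead of returning the history. A minimal sketch, assuming the user type is a plain string:

def request_access_history(user_type: str, history: list[dict]) -> list[dict]:
    # Blocks 2204-2208: only the certain user type (e.g., editor) receives
    # the user access history 600; other user types are blocked.
    if user_type != "editor":
        raise PermissionError("non-editors cannot view the user access history")
    return history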
For simplicity of explanation, the methods of this disclosure are depicted and described as a series of acts. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein. Furthermore, not all illustrated acts may be required to implement the methods in accordance with the disclosed subject matter. In addition, those skilled in the art will understand and appreciate that the methods could alternatively be represented as a series of interrelated states via a state diagram or events. Additionally, it should be appreciated that the methods disclosed in this specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methods to computing devices. The term "article of manufacture," as used herein, is intended to encompass a computer program accessible from any computer-readable device or storage media.

FIG. 23 depicts a block diagram of an example computing system operating in accordance with one or more aspects of the present disclosure. In various illustrative examples, computer system 2300 may correspond to any of the computing devices within system architecture 100 of FIG. 1. In one implementation, the computer system 2300 may be each of the servers 112A-112Z. In another implementation, the computer system 2300 may be each of the user devices 120A-120Z. In certain implementations, computer system 2300 may be connected (e.g., via a network, such as a Local Area Network (LAN), an intranet, an extranet, or the Internet) to other computer systems. Computer system 2300 may operate in the capacity of a server or a client computer in a client-server environment, or as a peer computer in a peer-to-peer or distributed network environment. Computer system 2300 may be provided by a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any device capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that device. Further, the term "computer" shall include any collection of computers that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods described herein.

In a further aspect, the computer system 2300 may include a processing device 2302, a volatile memory 2304 (e.g., random access memory (RAM)), a non-volatile memory 2306 (e.g., read-only memory (ROM) or electrically-erasable programmable ROM (EEPROM)), and a data storage device 2316, which may communicate with each other via a bus 2308. Processing device 2302 may be provided by one or more processors such as a general purpose processor (such as, for example, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a microprocessor implementing other types of instruction sets, or a microprocessor implementing a combination of types of instruction sets) or a specialized processor (such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), or a network processor). Computer system 2300 may further include a network interface device 2322. Computer system 2300 also may include a video display unit 2310 (e.g., an LCD), an alphanumeric input device 2312 (e.g., a keyboard), a cursor control device 2314 (e.g., a mouse), and a signal generation device 2320.
Data storage device 2316 may include a non-transitory computer-readable storage medium 2324 on which may be stored instructions 2326 encoding any one or more of the methods or functions described herein, including instructions implementing the user access history module 118 (118A-118Z) of FIG. 1 for implementing methods 300, 800, 1100, 1400, 1800, and 2200, or instructions implementing the collaborative document environment 122 (122A-122Z) of FIG. 1 for implementing methods 900, 1200, 1500, 1900, and 2000. Instructions 2326 may also reside, completely or partially, within volatile memory 2304 and/or within processing device 2302 during execution thereof by computer system 2300; hence, volatile memory 2304 and processing device 2302 may also constitute machine-readable storage media.

While computer-readable storage medium 2324 is shown in the illustrative examples as a single medium, the term "computer-readable storage medium" shall include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of executable instructions. The term "computer-readable storage medium" shall also include any tangible medium that is capable of storing or encoding a set of instructions for execution by a computer that cause the computer to perform any one or more of the methods described herein. The term "computer-readable storage medium" shall include, but not be limited to, solid-state memories, optical media, and magnetic media.

In the foregoing description, numerous details are set forth. It will be apparent, however, to one of ordinary skill in the art having the benefit of this disclosure, that the present disclosure can be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present disclosure.

Some portions of the detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "receiving", "displaying", "moving", "adjusting", "replacing", "determining", "playing", or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.

For simplicity of explanation, the methods are depicted and described herein as a series of acts. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein. Furthermore, not all illustrated acts can be required to implement the methods in accordance with the disclosed subject matter. In addition, those skilled in the art will understand and appreciate that the methods could alternatively be represented as a series of interrelated states via a state diagram or events. Additionally, it should be appreciated that the methods disclosed in this specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methods to computing devices. The term "article of manufacture," as used herein, is intended to encompass a computer program accessible from any computer-readable device or storage media.

Certain implementations of the present disclosure also relate to an apparatus for performing the operations herein. This apparatus can be constructed for the intended purposes, or it can comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions.

Reference throughout this specification to "one implementation" or "an implementation" means that a particular feature, structure, or characteristic described in connection with the implementation is included in at least one implementation. Thus, the appearances of the phrase "in one implementation" or "in an implementation" in various places throughout this specification are not necessarily all referring to the same implementation. In addition, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or." Moreover, the words "example" or "exemplary" are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words "example" or "exemplary" is intended to present concepts in a concrete fashion.

It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other implementations will be apparent to those of skill in the art upon reading and understanding the above description.
The scope of the disclosure should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Further to the descriptions above, a user may be provided with controls allowing the user to make an election as to both if and when the systems, programs, or features described herein may enable collection of user information (e.g., information about a user's social network, social actions or activities, profession, a user's preferences, or a user's current location), and whether the user is sent content or communications from a server. In addition, certain data may be treated in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, a user's identity may be treated so that no personally identifiable information can be determined for the user, or a user's geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. Thus, the user may have control over what information is collected about the user, how that information is used, and what information is provided to the user.
DETAILED DESCRIPTION

In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention.

General Overview

Presented herein are high availability techniques that combine synchronization between multiple filesystems with hierarchical sharing of data blocks between clone files in a standby filesystem that are replicas of clone files in a source filesystem. This approach provides space and time efficient filesystem replication that complements or supplants traditional replication technologies with special treatment of file clones that share data blocks, which vastly reduces the replica size of the cloned files and consequently decreases consumption of network bandwidth and storage space.

File cloning herein may include generation of thousands of clone files from a same base file, as well as clones of clones, limited only by implementation constraints such as a filesystem's index node (inode) structure and available storage space. Techniques herein may be based on application program interfaces (APIs) of state-of-the-art filesystems. In a Linux embodiment, file input/output (I/O) clone (FICLONE) is an I/O control (ioctl) API that generates a clone file from a base file with sharing of the same underlying stored content (an illustrative sketch of this call appears at the end of this overview). FICLONE is supported by a broad range of popular enterprise filesystems, such as btrfs, XFS, ZFS, OCFS2, and Oracle ACFS. Likewise, techniques herein may instead use a proprietary cloning function of some filesystems, such as fshare operations in Oracle ACFS. Data blocks (e.g., disk blocks) of base and clone files remain shared until modified. In an embodiment, modification of a shared data block causes a copy-on-write (COW) operation that allocates new data block(s) for the modified data.

Replication herein is incremental to minimize the data to be synchronized. Only modified data is transferred and applied to a standby filesystem, which avoids redundant replication of data shared between base and clone files. Data blocks are the units of fine-grained sharing within a filesystem and of synchronization between filesystems. This approach identifies data shared between file clones and significantly decreases the cost of synchronizing the shared data. This approach is compatible with incremental replication in which only a delta between two consecutive synchronization intervals is synchronized to a standby filesystem. The following novel features are incorporated: avoiding redundant replication of data shared between file clones; replicating only changed data of a file clone; and portability to filesystems that provide a file clone operation, such as the standard Linux ioctl.

For each file, this clone-aware replication classifies content into two categories: data shared with a base file, and unique data introduced by changes to a clone file or a base file after the clone is created from the base file. Data shared with other clones in a source filesystem remains shared in a standby filesystem. Clones of clones may compose chains of clones and trees of chains of clones.
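As a concrete illustration of the cloning primitive discussed in this overview, the FICLONE ioctl can be invoked from Python roughly as sketched below. The request number is an assumption copied from <linux/fs.h>, since the Python standard library does not export it, and the call succeeds only on filesystems that support block-sharing clones (e.g., btrfs, XFS with reflink enabled, OCFS2).

import fcntl

FICLONE = 0x40049409  # assumed value of the FICLONE request from <linux/fs.h>

def clone_file(base_path: str, clone_path: str) -> None:
    # Create clone_path as a clone of base_path: both files share the same
    # data blocks until either side is modified (copy-on-write).
    with open(base_path, "rb") as base, open(clone_path, "wb") as clone:
        fcntl.ioctl(clone.fileno(), FICLONE, base.fileno())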
A replication process is decomposed into two phases. In a first phase, all files of the source filesystem are inspected and, in the source filesystem, all clones with common ancestors (i.e., files that are roots for all their descendant subtrees of clones) are identified and aggregated. A set of clones that are directly or indirectly based on a same file, including the file itself, is referred to herein as a relative clone set (RCS). Also in the first phase, files that are not clones are synchronized to the standby filesystem. In a second phase, all clone files in the RCSs are synchronized in a particular sequence that entails clone tree traversal by tree level. Level ordering ensures that, by the time a clone is to be synchronized, the file that the clone is based on in a higher tree level has already been synchronized, which other approaches do not do.

In an embodiment, a computer stores source files and source clone files in the source filesystem; these files are made up of data blocks. The source clone files are copies of the source files and initially share the same blocks as the source files. Thereafter, either the source files or the source clone files may be modified, in which case some of the shared blocks are replaced by modified blocks. In the first stage of replication, the source files that are the very first bases for other clones (i.e., they are the bases of other clones and are not themselves cloned from others) are replicated to the standby filesystem. This replication includes a full replication of all their blocks. In the second stage, all the descendant clones of these base files are cloned on the standby from their base files, based on their clone relationships on the source filesystem. This clone operation ensures that the replicas of blocks shared on the source filesystem are also shared on the standby. These clone operations are followed by a comparison of each descendant on the source filesystem with its base file to produce a list of differing blocks. These differing blocks are caused by modification of either the base file or the clone file after the clone file was cloned from the base file. As a final step of the second stage, these differing blocks are sent to the standby and applied to the corresponding descendant.

1.0 Example Storage System

FIG. 1 is a block diagram that depicts an example storage system 100, in an embodiment. Storage system 100 provides high availability for a filesystem based on techniques that combine synchronization between source filesystem 110A and standby filesystem 110B with hierarchical sharing of replica data blocks RB1-RB2 between replica files RF, RF1-RF2, and RF1A in standby filesystem 110B. Although not shown, storage system 100 contains one or more processors that manage one or more storage devices such as network attached storage (NAS), magnetic or optical disk drives, or volatile or nonvolatile storage devices such as solid state drives (SSDs) or persistent memory (PMEM).

1.1 Example Filesystems

The one or more processors may reside in the storage devices or in one or more computers. In an embodiment, storage system 100 contains a communication network. One example embodiment consists of a computer that operates filesystems 110A-110B in respective storage devices. Another example embodiment consists of two computers that respectively operate filesystems 110A-110B in respective storage devices. In an embodiment, filesystems 110A-110B reside in a same storage device.
In an embodiment, a storage device is just a bunch of disks (JBOD) such as a redundant array of inexpensive disks (RAID). Depending on the embodiment, data content of a storage device may be byte addressable or block addressable. Depending on the embodiment, data blocks such as SB1-SB2 and RB1-RB2 may have variable sizes or a same fixed size. In various embodiments, a data block is a disk block or a page or segment of virtual memory in primary or secondary storage in volatile or nonvolatile storage. In any case: a) each replica data block in standby filesystem 110B is a copy of a corresponding source data block in source filesystem 110A, and b) a source data block and its corresponding replica data block have a same size. As shown, replica data blocks RB1-RB2 are respective copies of source data blocks SB1-SB2.

A filesystem is a logical container of files that contain data blocks. A filesystem may be manually or automatically controlled by an application program interface (API) and/or a shell command interface that provides operations such as create, read, update, and delete (CRUD) for data block(s) or whole file(s). For example, filesystems 110A-110B may be POSIX interface compliant, such as when a storage device cooperates with a POSIX device driver.

1.2 High Availability

Source filesystem 110A operates as a primary filesystem whose contents are dynamically synchronized to standby filesystem 110B, which may be passive until filesystem 110A fails, which causes a failover that activates filesystem 110B as a replacement for filesystem 110A. Failover occurs when failure of filesystem 110A is detected by a foreground mechanism, such as timeout or failure of a data access, or by a background mechanism, such as heartbeat, watchdog, or performance monitoring. In an embodiment, either or both of filesystems 110A-110B are append only, such as with a write-once storage device.

Standby filesystem 110B contains replica files RF, RF1-RF2, and RF1A that are respective synchronized copies of source files SF, SF1-SF2, and SF1A that source filesystem 110A contains. For example, the contents of files SF and RF should be identical so that no data is lost by filesystem failover. Synchronization strategies and mechanisms are discussed later herein. In an embodiment after failover, the roles of filesystems 110A-110B are reversed so that source filesystem 110A may, by synchronization in the opposite direction, become a standby for now-primary filesystem 110B. In an embodiment, after such recovery, storage system 100 may revert to the original cooperation, with source filesystem 110A again as primary.

1.3 Data Block Configuration

Data blocks may be arranged in one or both of two orthogonal ways that are referred to herein as sharing and synchronizing. Sharing occurs when a same data block is effectively contained in multiple files of a same filesystem. For example as shown, source files SF and SF1 may contain respective index nodes (inodes) (not shown) that reference same source data block SB1. For example, both inodes may contain a same logical block address (LBA) that identifies source data block SB1. Source data block SB1 may be simultaneously accessed in both of source files SF and SF1 as if there were two respective data blocks even though source data block SB1 is physically only one data block. Sharing data blocks saves storage space by avoiding duplicate data.
In an embodiment with virtual memory and/or memory-mapped input/output (I/O), sharing accelerates data access by decreasing storage device I/O and/or decreasing thrashing of virtual memory or hardware caches (e.g., L1-L3). Although not shown, a file may contain data blocks that are not shared with other files.

Synchronization provides ongoing mirroring between multiple filesystems. For example, source data block SB1 in source filesystem 110A is synchronized with corresponding replica data block RB1, which entails initial copying of the content of source data block SB1 to replica data block RB1 and repeated copying if the content of source data block SB1 is subsequently modified. In various embodiments, copying involves complete or partial replacement of the content of replica data block RB1.

1.4 File Configuration

Files may be arranged in one or both of two orthogonal ways that are known herein as cloning and synchronizing. In various scenarios discussed later herein, synchronizing source file SF with its corresponding replica file RF entails synchronizing some data blocks in source file SF with corresponding data blocks in replica file RF as discussed above. Thus: a) any data block synchronization should occur during synchronization of files, and b) high availability of files is based on high availability of filesystems and data blocks as discussed above.

File cloning entails shallow copying based on sharing data blocks. For example as shown, source data block SB1 is shared by source files SF and SF1 because source clone file SF1 is a clone of source file SF. Initial cloning entails: a) generating source clone file SF1 such that b) source clone file SF1 consists of data blocks shared with source file SF, and c) all of source file SF's data blocks are shared with source clone file SF1. Initially, no new data blocks are allocated for source clone file SF1, which may be a sparse file that is thinly provisioned.

File cloning is mirrored between filesystems 110A-110B such that source clone file SF1, which is a shallow copy of source file SF, corresponds to replica clone file RF1, which is a shallow copy of replica file RF. Because replica file RF mirrors source file SF, which contains source data block SB1 that is shared with source clone file SF1 as shown, corresponding data block sharing occurs in standby filesystem 110B. That is, source data block SB1 corresponds to replica data block RB1, which is shared by replica files RF and RF1 as shown. Replica files sharing data blocks provides efficiencies, including: a) disk space is saved in standby filesystem 110B in the same way as discussed above for source filesystem 110A, and b) I/O of storage devices and/or a communication network is decreased as follows.

1.5 Synchronization

Modification of source file SF may entail modification of source data block SB1. Although the modification may be expressly applied only to source file SF, the modification of shared source data block SB1 may be treated as a modification to both of source files SF and SF1. Thus, other approaches may wrongly decide that synchronization of both source files SF and SF1 is needed, which may cause source data block SB1 to be unnecessarily synchronized twice and, even worse, may cause replica files RF and RF1 to stop sharing replica data block RB1 and instead unnecessarily materialize separate respective data blocks in standby filesystem 110B. In other words, synchronization of shared source data blocks by other approaches may destroy sharing of replica data blocks.
Instead, storage system 100 perfectly maintains filesystem mirroring during synchronization, including preserving continued sharing of replica data blocks in standby filesystem 110B. In an embodiment during filesystem synchronization, storage system 100: a) detects that synchronizing source file SF causes synchronization of source data block SB1, and b) reacts by not repeating synchronization of source data block SB1 when synchronizing source clone file SF1. Such avoidance of redundant synchronization of a shared source data block decreases I/O of storage devices and/or a communication network and preserves sharing of replica data blocks. In that way, filesystems 110A-110B will remain identical.

1.6 File Clones

Multiple clones may be made from a same file. For example as shown, source clone files SF1-SF2 are both clones of source file SF. As shown, source data block SB1 is shared by all of source files SF and SF1-SF2. In an embodiment, there is no logical limit to how many files may share a same data block nor to how many clones a base file such as source file SF may have, although a filesystem may impose a practical limit on the amount of sharing and/or cloning. When all of source clone files SF1-SF2 and source file SF are identical, those three source files each consist of a same set of shared data blocks. Divergence of any of those three files by separate modification is discussed later herein.

A clone file may itself be cloned. For example as shown, source clone file SF1A is a shallow copy of source clone file SF1, which is a shallow copy of source file SF. Thus, cloning may establish a linear chain of clone files that are directly or indirectly based on a base file. In an embodiment, there is no logical limit to how long a chain of clones may be, although a filesystem may impose a practical limit on chain length. Although not shown, when all files in a chain are identical, those files consist of a same set of shared data blocks.

In a chain of files that begins at source file SF and ends at source clone file SF1A, each of the chained files may individually be a base file, a clone file, or both. The root of the chain is only a base file, such as source file SF. The end of the chain is only a clone file, such as source clone file SF1A. Any other files in the chain are simultaneously both a base file and a clone file. For example, source clone file SF1 is a clone of source file SF and a base for source clone file SF1A. A base file may be a direct base and/or an indirect base. For example, source file SF is a direct base of source clone file SF1 and an indirect base of source clone file SF1A.

As shown and by separate modifications as discussed later herein, two or three of the chained source files SF, SF1, and SF1A have diverged, which means that, by separate modification, their contents have diverged such that SF, SF1, and SF1A are no longer identical. For example as shown in the chain, source data block SB1 is not shared with source clone file SF1A, and source data block SB2 is not shared with source file SF. Likewise, although not shown, source data block SB1 may cease to be shared with, for example, source file SF or source clone file SF1.

2.0 Non-Identical Clones

FIGS. 1-2 are discussed together as follows. FIG. 2 is a block diagram that depicts an example filesystem 200, in an embodiment of storage system 100 of FIG. 1. To demonstrate divergence, filesystem 200 is a legend that depicts a generalization of filesystems 110A-110B.
Storage system 100 does not actually contain filesystem 200 as a third filesystem, which is why filesystem 200 is drawn with dashed lines. In other words, filesystem 200 may be either of filesystems 110A-B. Features shown in filesystem 200 occur in source filesystem 110A and then, by synchronization, also occur in standby filesystem 110B. For example as shown, filesystem 200 contains a partial chain of files F and F1 that may actually be: a) source files SF and SF1, b) source files SF1 and SF1A, c) replica files RF and RF1, or d) replica files RF1 and RF1A.

2.1 Copy on Write

As explained earlier herein, files F and F1 in the chain in filesystem 200 initially were identical and consisted of a same set of shared data blocks OB1-2, although they are shown as having since diverged. A shared data block may or may not have copy-on-write semantics. Without copy-on-write, in FIG. 1, either of source clone files SF1 and SF1A may be used to modify shared source data block SB2, and the modification is effective in both source clone files SF1 and SF1A because sharing of source data block SB2 continues after the modification.

With copy-on-write, modification of a data block may instead cause sharing of the data block to cease. When one of source clone files SF1 and SF1A is used to modify source data block SB2, a new data block is allocated to store the modified content of source data block SB2. In other words, copy-on-write causes two versions of source data block SB2 that are respectively stored in source clone files SF1 and SF1A. For example, although not shown in FIG. 2, files F and F1 initially shared original data blocks OB1-2. If file F is used to modify original data block OB1, then file F subsequently contains the modified version in newly allocated data block MB1 as shown. Likewise, file F1 continues to contain unmodified original data block OB1 as shown. In that example, file F operates as a base file that is modified and responsively contains a newly allocated data block, e.g., MB1. In another example, clone file F1, instead of the base file, is modified and responsively contains newly allocated data block MB2. Likewise, file F continues to contain unmodified original data block OB2 as shown.

2.3 Tree of Clones

As shown in FIG. 1, source file SF is a base file that is a root file of a logical tree that also includes source clone files SF1-SF2 and SF1A. This logical tree of clone files is not the same as a directory tree in a filesystem. For example, source files SF, SF1-SF2, and SF1A may be in the same or different directories. A filesystem may contain many such logical trees that each contain a different root file and a different set of clones. These logical trees are disjoint such that they do not overlap, intersect, nor have any file in common. As discussed later herein, each disjoint tree may contain a separately discoverable set of files and may be separately synchronized.

In any case, the scope of copy-on-write may depend on where in a tree an involved data block resides. When a modified version of a data block arises in one branch of the tree, other branches will continue to contain an unmodified version. For example, if source clone file SF2 is used to modify source data block SB1, then source clone files SF1 and SF1A will not contain the modified version. Likewise, a modification will not propagate up a chain toward the root. For example, if source clone file SF1A is used to modify a data block that is shared with an entire chain, then neither source file SF nor SF1 will contain the modified version.
Likewise, a modification will not propagate down a chain toward the leaves. For example, if instead source clone file SF1 is used to modify the data block that is shared with the entire chain, then SF1A will not share the modified version. In an embodiment, source clone files SF1-SF2 may continue to share an unmodified version of source data block SB1 after source file SF is used to modify source data block SB1.

3.0 Synchronization Process

FIG. 3 is a flow diagram that depicts an example computer process to provide high availability for a filesystem based on techniques that combine synchronization between source filesystem 110A and standby filesystem 110B with hierarchical sharing of replica data blocks RB1-RB2 between replica files RF, RF1-RF2, and RF1A in standby filesystem 110B, in an embodiment. FIG. 3 is discussed with reference to FIGS. 1-2.

Step 301 stores source data blocks SB1-SB2 in source filesystem 110A and, in standby filesystem 110B, stores replica data blocks RB1-RB2 that are copies of respective source data blocks SB1-SB2. In source file SF and source clone file SF1, which is a copy of source file SF, step 302 includes same source data block SB1 in source filesystem 110A. Additionally or instead, in source clone file SF1 and source clone file SF1A, which is a copy of source clone file SF1, step 302 may include same source data block SB2 in source filesystem 110A.

In standby filesystem 110B, step 303 replicates what step 302 did in source filesystem 110A. In standby filesystem 110B, in replica file RF and replica clone file RF1, which is a copy of replica file RF, step 303 includes same replica data block RB1, which is a copy of source data block SB1 in source filesystem 110A. Additionally or instead, in standby filesystem 110B, in replica clone file RF1 and replica clone file RF1A, which is a copy of replica clone file RF1, step 303 may include same replica data block RB2, which is a copy of source data block SB2 in source filesystem 110A.

Step 304 modifies a modified source file that may be either source file SF or source clone file SF1. In various embodiments, modification by step 304 entails replacement of source content that variously is source data block SB1 itself or only the contents of source data block SB1, as explained earlier herein. In the modified source file, step 304 replaces that source content with a modified copy of the source content without modifying the source content in an unmodified source file that is the other of source file SF or source clone file SF1. Additionally or instead, step 304 is performed for source data block SB2 and source clone files SF1 and SF1A.

In standby filesystem 110B, step 305 replicates what step 304 did in source filesystem 110A. Specifically, step 305 modifies a modified replica file that may be either replica file RF or replica clone file RF1. In various embodiments, modification by step 305 entails replacement of replica content that variously is replica data block RB1 itself or only the contents of replica data block RB1. In the modified replica file, step 305 replaces that replica content with a modified copy of the source content without modifying the replica content in an unmodified replica file that is the other of replica file RF or replica clone file RF1; the modification is received from source filesystem 110A and applied on standby filesystem 110B. Additionally or instead, step 305 is performed for replica data block RB2 and replica clone files RF1 and RF1A.
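Steps 304-305 hinge on the copy-on-write divergence described in section 2.1. The following self-contained Python model illustrates that behavior; the BlockStore class and all structures are invented for illustration and are not the disclosed design.

class BlockStore:
    """A toy block table that tracks reference counts for shared blocks."""

    def __init__(self) -> None:
        self.blocks: dict[int, bytes] = {}
        self.refs: dict[int, int] = {}
        self._next = 0

    def allocate(self, data: bytes) -> int:
        bid = self._next
        self._next += 1
        self.blocks[bid] = data
        self.refs[bid] = 1
        return bid

    def share(self, bid: int) -> int:
        # Cloning a file shares its blocks instead of copying them.
        self.refs[bid] += 1
        return bid

def cow_write(file_blocks: list[int], index: int, data: bytes,
              store: BlockStore) -> None:
    bid = file_blocks[index]
    if store.refs[bid] > 1:
        # Shared block: diverge by allocating a new block (like MB1),
        # leaving the other sharer with the unmodified original (like OB1).
        store.refs[bid] -= 1
        file_blocks[index] = store.allocate(data)
    else:
        # Unshared block: overwrite in place.
        store.blocks[bid] = data

Cloning a file here is f1_blocks = [store.share(b) for b in f_blocks]; a later cow_write on either list diverges only the touched block, which mirrors how file F gains MB1 while file F1 keeps OB1.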
4.0 Synchronization of Tree of Clones

As explained earlier herein for FIG. 1, in source filesystem 110A, source file SF operates as: a) a base file at the start of two chains of clone files that respectively end at source clone files SF1A and SF2 as shown, and b) a root file for a tree of clone files that contains both chains as branches. Likewise as shown, standby filesystem 110B contains a tree of clone files that is a replica of the tree of clone files in source filesystem 110A. Either of those two trees of clone files may be referred to herein as a relative clone set (RCS).

FIG. 4 is a flow diagram that depicts an example computer process to synchronize the tree of clone files in source filesystem 110A with the tree of replica clone files in standby filesystem 110B, in an embodiment. FIG. 4 is discussed with reference to FIGS. 1-2. The process of FIG. 4 occurs in two sequential phases that may be temporally separated. The first phase generates the two similar trees of clone files in their respective filesystems 110A-B. Between the first phase and the second phase, and although not shown, the source tree in source filesystem 110A may accumulate modified data blocks at different levels of the source tree and in both branches of the source tree. In other words, before the second phase, the two trees differ because the source tree contains modified data blocks and the replica tree does not. Thus, synchronization is needed, which the second phase performs as follows.

In the second phase, the modified data blocks in source filesystem 110A are eventually synchronized with (i.e., replicated to) standby filesystem 110B, such as by periodic schedule or upon some condition such as a threshold count of modified data blocks, modified files, or multiblock transaction commits. This approach synchronizes files while traversing the source tree in a particular ordering to preserve data block sharing throughout the replica tree. Other approaches do not use the particular ordering, which may wrongly cease data block sharing in the replica tree. Periodic or otherwise, repeated synchronization of a same source tree may be needed because contents in the source tree may be modified at different times. For example, a same source data block may sequentially be: a) modified before synchronization, and b) modified again after synchronization, thereby necessitating another synchronization. When and how frequently synchronization occurs may be configurable.

As explained above, one of the branches of the source tree is a source chain that ends at source clone file SF1A (e.g., SF=>SF1=>SF1A). Mirroring the source chain in source filesystem 110A is a replica chain that ends at replica clone file RF1A in standby filesystem 110B (e.g., RF=>RF1=>RF1A). Although not shown, the first phase of FIG. 4 includes: a) generating the source chain in source filesystem 110A and b) mirroring the source chain by generating the replica chain in standby filesystem 110B. Part of doing (a)-(b) entails steps 301-303 of FIG. 3. As further discussed below, the process of FIG. 4 includes the process of FIG. 3.

The first phase includes steps 401-405 that generate the two similar trees of clone files in respective filesystems 110A-B as follows. As shown, source clone files SF1-2 are both directly based on source file SF. In source clone file SF2, step 401 includes source data block SB1 in source filesystem 110A. Thus, the source tree has two branches, and all three of source files SF and SF1-2 share source data block SB1 as shown.
As shown, replica clone files RF1-2are both directly based on replica file RF. In replica clone file RF2, step402includes replica data block RB1in standby filesystem110B that is a copy of source data block SB1in source filesystem110A. Thus: the replica tree mirrors the source tree; the replica tree has two branches; and all three of replica files RF and RF1-2share replica data block RB1as shown. In other words, a same data block may be shared in different branches, which may cause other synchronization approaches to malfunction such as wrongly ceasing sharing of replica data block RB1between multiple branches in the replica tree. Steps403-405operate solely on one respective branch of the source tree and replica tree that respectively are the source chain and the replica chain as follows. Step403stores source data block SB2and its corresponding replica data block RB2. As shown, source clone file SF1contains both source data blocks SB1-2although neither of source data blocks SB1-2is shared across the entire source chain. Step403stores: a) source data block SB2in source filesystem110A and b) replica data block RB2in standby filesystem110B that is a copy of source data block SB2in source filesystem110A. Steps404-405respectively perform inclusion of source data block SB2and its corresponding replica data block RB2. In source clone file SF1and its clone, source clone file SF1A, step404includes same source data block SB2in source filesystem110A. As shown, source file SF does not include source data block SB2, which means that source files SF and SF1previously diverged and, as explained earlier herein, the source chain remains intact despite such divergence. In replica clone file RF1and its clone, replica clone file RF1A, step405includes same replica data block RB2in standby filesystem110B that is a copy of source data block SB2in source filesystem110A. As shown, replica file RF does not include replica data block RB2, and the replica chain remains intact despite such divergence within the replica chain to mirror divergence within the source chain as discussed above. After the first phase, the source tree in source filesystem110A accumulates modified data blocks at different levels of the source tree and in both branches of the source tree. Eventually as discussed above, the second phase synchronizes the modified data blocks in source filesystem110A into standby filesystem110B according to steps406-409as follows. As discussed above, synchronization of modified files should occur in a particular ordering that, in an embodiment, is based on multiple conditions that are detected by steps406-408as follows. Those detections may be based on inspection and analysis of metadata stored in or available to storage system100that describes the topology of the source tree in source filesystem110A, including: a) which clone file is directly based on which other file and b) which data blocks are included in which file(s). Based on that metadata, storage system100can infer: a) which files are in which chain, b) which chains are branches in the source tree, and c) which files share which data block. For example, the metadata may include: a) file identifiers such as paths and/or index node (inode) identifiers and/or b) data block identifiers such as logical block addresses (LBAs). Same or different metadata may indicate replication details such as: a) which filesystem is a standby and b) which replica file mirrors which source file. Step406detects that source clone file SF2is based on source file SF.
Step407detects that source clone file SF1A is based on both of source files SF and SF1. With steps406-407, storage system100has begun analysis of metadata for the whole source tree. Also with steps406-407, storage system100has detected that both of source clone files SF2and SF1A are in a same source tree because both of source clone files SF2and SF1A are directly or indirectly based on same source file SF even though source clone files SF2and SF1A do not occur at a same level in the source tree. Step408detects that source clone file SF1A, but not source clone file SF2, is based on source clone file SF1. In other words, step408detects that source clone file SF2is based on a subset of files that source clone file SF1A is based on. Thus, step408detects that source clone files SF2and SF1A are in different levels of the source tree. Data blocks not shared by multiple files may be synchronized in any ordering. Data blocks shared by files in different levels of the source tree should be synchronized in a relative ordering based on increasing tree level. In other words, in the same source tree or, depending on the embodiment, in the same branch of that tree, shared data blocks in a file that is based on fewer files should be synchronized before shared data blocks in a file that is based on more files. Thus, shared source data block SB1in source file SF should be synchronized before synchronizing shared source data block SB2in source clone file SF1because source clone file SF1is based on one file and source file SF is based on zero files. In a single threaded embodiment, the shared and unshared modified data blocks of the files of the source tree are synchronized in breadth first order or depth first order of the files in the source tree. In an embodiment where an asynchronous queue decouples two pipeline stages for pipeline parallelism: a) the shared and unshared modified data blocks of the files of the source tree are enqueued in breadth first order or depth first order of the files in the source tree by the first stage, and simultaneously b) the second stage synchronizes the modified data blocks from the queue to standby filesystem110B. The second stage may synchronize data blocks individually, in a batch per file, or in a batch of a fixed count of data blocks. The queue may maintain metadata such as which source file each modified data block came from and/or which other source files further down the source tree share that same data block. If filesystems110A-B are managed by respective computers, synchronization may entail sending metadata with a data block or a batch to facilitate mirroring when the modified data block or batch is applied in standby filesystem110B. In an embodiment, all unshared data blocks are synchronized before any shared data block or vice versa. In an embodiment, source filesystem110A contains multiple source trees that are disjoint as discussed earlier herein, and the multiple source trees are concurrently synchronized such as with a separate thread or a separate pipeline per source tree. In an embodiment, multiple branches of a same source tree are concurrently synchronized such as with a separate thread or a separate pipeline per tree branch. Although not shown, additional branching may occur at different levels in the source tree such that the source tree contains multiple subtrees. In an embodiment, multiple subtrees of a same source tree are concurrently synchronized such as with a separate thread or a separate pipeline per subtree.
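For illustration, the following sketch shows one way the two-stage pipeline could be arranged, assuming an in-memory queue and a simplified tree model; the names and the print stand-in for standby I/O are illustrative assumptions, not the disclosed implementation.

```python
# Sketch of the two-stage pipeline for level-ordered synchronization:
# stage 1 enqueues modified blocks in breadth-first order, stage 2 applies
# them to the standby; dedup ensures each block is synchronized once.

import queue
import threading

class Node:
    def __init__(self, name, modified_blocks=(), clones=()):
        self.name = name
        self.modified_blocks = list(modified_blocks)
        self.clones = list(clones)

def producer(root, q):
    # Stage 1: breadth-first traversal, so a shared block is emitted for
    # the shallowest file (based on fewer files) that contains it.
    seen, frontier = set(), [root]
    while frontier:
        nxt = []
        for node in frontier:
            for blk in node.modified_blocks:
                if blk not in seen:          # synchronize each block once
                    seen.add(blk)
                    q.put((node.name, blk))
            nxt.extend(node.clones)
        frontier = nxt
    q.put(None)                              # sentinel ends the stream

def consumer(q):
    # Stage 2: apply each modified block to the standby filesystem.
    while (item := q.get()) is not None:
        name, blk = item
        print(f"sync block {blk} for file {name}")  # stand-in for I/O

sf1a = Node("SF1A")
sf1 = Node("SF1", ["SB2"], [sf1a])
sf2 = Node("SF2")
sf = Node("SF", ["SB1"], [sf1, sf2])

q = queue.Queue()
threading.Thread(target=producer, args=(sf, q)).start()
consumer(q)   # prints SB1 (level 1) before SB2 (level 2)
```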
In any of those various embodiments, shared source data block SB1should be synchronized before shared source data block SB2according to the heuristics and reasons discussed above. Disjoint trees, tree levels, subtrees, metadata for tree analysis, and inodes are further discussed later herein forFIG.5. In some cases, ordering by tree level may be relaxed, such as with depth first traversal, such that level ordering of synchronization is imposed only within a same chain (i.e. tree branch). For example, step409as discussed above need not be a strict requirement because step409imposes a synchronization ordering that encompasses different tree branches (i.e. chains). In all embodiments, for a given synchronization of a given source tree, each modified data block is synchronized exactly once. For example even though a same tree traversal visits source files SF and SF1at separate respective times, and even though same source data block SB1is shared by both source files SF and SF1, shared source data block SB1is only expressly synchronized for source file SF but not again for source clone file SF1. Likewise in all embodiments, unmodified source data blocks, whether shared or not, are not synchronized after initial replication. Thus, techniques herein guarantee: a) preservation of replica data block sharing and b) synchronization of a minimized count of data blocks. Such minimal synchronization per (b) decreases network input/output (I/O), which accelerates synchronization. Thus as a synchronization computer, storage system100itself is accelerated. Likewise, by preventing wrongly ceasing sharing of replica data blocks per (a), the reliability of storage system100itself is increased. 5.0 Discovery of Trees of Clones FIG.5is a block diagram that depicts an example source filesystem500that is represented by incrementally growing merge-find set520based on metadata510that is persistent, in an embodiment. As explained earlier herein, a tree of clone files may be orthogonal to a directory tree of files. For example, a directory may contain files from different clone trees. Likewise, a clone tree may contain files from different directories in source filesystem500. A consequence of this orthogonality is that a natural and orderly traversal of a tree of directories may visit files in a seemingly arbitrary ordering that does not reflect the existence and organization of multiple source trees. For example, trees531-535may each be a source tree or a subtree of a source tree. Tree531contains levels 1-3 that contain files as shown. Even though clone files A and C are in a lower clone tree level than is root file E, either or both of clone files A and C may occur in a higher directory tree level than root file E. In that case, clone file A or C would be visited before visiting root file E during a breadth first traversal of a tree of directories in source filesystem500. Likewise, even if clone file A or C occurs in a lower level of a tree of directories than root file E, clone file A or C could be visited before visiting root file E during a depth first traversal of a tree of directories in source filesystem500, so long as clone file A or C occurs in a different branch of the tree of directories than does root file E. Those examples of arbitrary visitation ordering may complicate discovery of the existence and configuration of clone trees. To solve that technical problem, merge-find set520may be grown and used as a representation of source filesystem500based on metadata510. 
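Before the detailed walkthrough in the following sections, a minimal sketch of how such a merge-find (union-find style) set could be grown from per-file base metadata is given below; the row values mirror metadata510as detailed below (e.g., file A at inode I1 based on inode I5), but the data structures and names are illustrative assumptions.

```python
# Sketch of growing the merge-find set from metadata rows of
# (file, inode, base_inode), where base_inode is None for a root file.
# The key mechanism is deferring the merge when a base file has not yet
# been discovered, then merging once it appears.

rows = [                       # discovery order, as in metadata 510
    ("A", "I1", "I5"),
    ("B", "I2", None),
    ("C", "I3", "I5"),
    ("D", "I4", "I3"),
    ("E", "I5", None),
]

parent = {}                    # inode -> inode of direct base file
by_inode = {}                  # inode -> file name
pending = {}                   # base inode -> clones waiting for it

for name, inode, base in rows:
    by_inode[inode] = name
    parent[inode] = base if base in by_inode else None
    if base is not None and base not in by_inode:
        pending.setdefault(base, []).append(inode)  # base undiscovered
    for waiting in pending.pop(inode, []):
        parent[waiting] = inode  # merge temporary trees under new base

def root(inode):
    # Find the root file of the source tree containing this inode.
    while parent[inode] is not None:
        inode = parent[inode]
    return by_inode[inode]

assert root("I4") == "E"       # D -> C -> E, so D is in tree 531
assert root("I2") == "B"       # B is its own root
```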
Merge-find set520may be a data structure in memory of a computer and incrementally grown as follows. 5.1 Example Persistent Metadata Metadata510is persisted in source filesystem500. Although metadata510is demonstratively shown as tabular, each row of metadata510may instead be stored within source filesystem500in separate respective one or more inodes as follows. In an embodiment, each row of metadata510is stored in a same or different respective inode of a same or different respective directory. In metadata510, the shown columns file, inode, and extended attribute are stored in a directory inode for one or more of files A-E that reside in that directory. The file column identifies each file such as by name. The inode column identifies (e.g. by inode number or by LBA) a first inode of each file that effectively locates and/or identifies the file within source filesystem500. The extended attribute column stores attributes that are ignored by source filesystem500but that have semantics to replication logic of the storage system. In this approach, the extended attribute column identifies the first inode of a direct base file for a clone file. For example as shown in the first row of metadata510, file A starts at inode I1and is based on a file that starts at inode I5. Thus, file A is a clone file. Likewise as shown in the second row of metadata510, file B starts at inode I2but has no base file. Thus, file B is a root of a source tree. 5.2 Example Volatile Merge-Find Set As explained earlier herein, a computer may discover files A-E by traversing a tree of directories in source filesystem500that may be orthogonal to the source trees of clones in source filesystem500. For example as demonstratively shown by the processing column of metadata510, the computer discovers one file at a time by sequentially processing one metadata row at a time in a downwards ordering of rows as shown by the arrow. In other words, the computer discovers file A first and file E last, which may be a technical problem because the computer would not discover the root file of tree531until last. Another technical problem is that clone files A and C are discovered before discovering their base file. Yet another technical problem is that file B is discovered between files A and C even though file B belongs in a different source tree than files A and C. All of those technical problems of discovery ordering are solved with merge-find set520as follows. Initially, merge-find set520is empty and discovery of files A-E begins at the top row of metadata510that is file A. For example, in the directory inode of an initial directory, such as a root directory of source filesystem500or a current working directory (CWD) of a storage system driver, are directory entries such as for subdirectories and/or at least file A of files A-E. Thus, file A is discovered first. In merge-find set520, a potential source tree is generated that is only a potential source tree that is later discovered to actually be a subtree in yet undiscovered tree531. The directory entry that declares file A specifies, as shown, that file A begins at inode I1and is based, as shown according to the extended attribute of the directory entry, on whichever file begins at inode I5. However, the base file at inode I5has not yet been discovered. Thus in merge-find set520, file A cannot yet join the source tree that would contain the base file at inode I5because that source tree has not yet been generated in merge-find set520.
Thus temporarily, file A by itself has its own tree in merge-find set520. 5.3 Source Tree Discovery Example Processing of metadata510proceeds to the next directory entry or the first directory entry of the next subdirectory. In other words, the next row of metadata510is processed, which is file B that has an empty extended attribute. Thus, file B has no base file, which means file B is the root of a source tree that is generated as any of trees532-535. At this point, files A-B are alone in separate respective trees in merge-find set520. Next in metadata510is file C that is processed in the same way as file A. In other words, both of files A and C are alone in their own respective trees. At this point, files A-C are alone in separate respective trees in merge-find set520. Next in metadata510, file D is discovered that specifies inode I3that is the inode of file C that merge-find set520already contains. Thus, file D is not processed in the same way as files A-C. Instead, file D is added to merge-find set520as a clone that is based on file C. In other words, files C-D are in the same potential source tree. Next in metadata510, file E is discovered whose extended attribute is empty. Thus similar to file B, file E is a root of a source tree. However, unlike file B for which no clones were discovered, because file E starts at inode I5, file E is the direct base file of files A and C that are alone in their own respective trees in merge-find set520. Thus as shown: a) source tree531is generated that has file E as its root, and b) the potential trees of files A and C become subtrees in tree531. Assuming metadata510has more rows for more files than shown: a) trees532-538are eventually added to merge-find set520, and those trees may independently grow, and b) some of those trees may become subtrees in each other or in tree531. Thus, merge-find set520grows by: a) generating small new trees, b) independently growing trees larger by incrementally adding files to the trees, and c) merging some trees as subtrees into other trees. When all of metadata510has been processed, populating of merge-find set520ceases. Merge-find set520fully specifies all source clone trees in source filesystem500, including: a) which file is a root of which source tree, and b) which clone files are directly based on which files. As discussed earlier herein, replication may occur by descending by level into each source tree. For example as shown, tree531has three levels 1-3. A first level contains only a root file of the source tree. As shown in metadata510, file D is based on file C that is in level 2 of tree531. Thus although not shown, file D is in level 3 of tree531. 6.0 Lifecycle of Multiple Trees of Clones FIG.6is a flow diagram that depicts an example computer process to discover and synchronize multiple trees of clone files in source filesystem500, in an embodiment.FIG.6is discussed with reference toFIG.5. An initialization phase is performed only once and includes step601that replicates files A-E from source filesystem500to a standby filesystem. Thus, the standby filesystem mirrors source filesystem500. After the initialization phase, a synchronization phase may be repeated at various times to synchronize modifications in source filesystem500into the standby filesystem. In an embodiment, before step602, the synchronization phase makes a point-in-time readonly snapshot (i.e. copy) of the files in source filesystem500.
In an embodiment, source filesystem500is temporarily made readonly while the snapshot is being made within source filesystem500. In an embodiment, the synchronization phase includes step602that is repeated for each of files A-E and may be combined with previous step601. In an extended attribute of a source clone file that is ignored by source filesystem500, step602stores an identifier of a source base file. For example, file D is based on file C that is identified by inode I3. Thus, step602stores an identifier (e.g. LBA) of inode I3in the extended attribute of file D. Step602leaves the extended attribute empty for files B and E that are root files of respective source trees. In an embodiment, step602uses the point-in-time readonly snapshot of the source files in source filesystem500so that the original source files may remain in service and be modified without affecting a simultaneously ongoing synchronization. Step602traverses the files of source trees531-535or, in an embodiment, the files of the snapshot, and populates metadata510based on clone relationships extracted from each source clone file's extended attribute provided by step601. During the traversal, step602replicates the files without a clone relationship to the standby file system, such as files B and E, where file B is a standalone file without a clone and file E is a tree root. After step602is exhaustively repeated, metadata510is fully persisted in a non-tabular format in source filesystem500within various index entries of various inodes of various directories in source filesystem500. Between steps602-603, computer(s) in the storage system may reboot, including forgetting all data stored in volatile memory. For example, rebooting may cause merge-find set520to be forgotten, in which case step603should regenerate merge-find set520from metadata persisted by step602. Step603populates merge-find set520to identify logical trees that are source trees of files that include: a) a tree root file that is not based on other files and b) clone files that are directly or indirectly based on the tree root file. For example by analyzing metadata510, step603discovers source tree531that contains root file E and clone files such as files A and C that are arranged into tree levels 1-3. Step603coalesces multiple trees. For example as shown, step603merges subtrees536-538into tree532. Between steps603-604, some shared and unshared data blocks of some of files A-E may be or, in an embodiment, not be modified in source filesystem500. Step604synchronizes those modifications from source filesystem500to the standby filesystem. In an embodiment, step604simultaneously synchronizes remaining files (i.e. non-root source clone files) in a particular level in a sequence of logical levels 2-3 of tree531of files in source filesystem500with the standby filesystem. For example, step604may detect that files A and C are both in level 2 of tree531, in which case step604synchronizes files A and C by two respective concurrent execution contexts. In particular, the synchronization operation is not an exact replication of all data blocks of all source clone files to the standby filesystem. Instead, the source filesystem only sends control information to the standby computer whose logic directs the standby filesystem to make a clone of file C and a clone of file A from the already replicated file E, and then make a clone of file D from the clone of file C.
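For illustration, the following sketch shows the clone-by-control-information idea of step 604, assuming a simple command format; the message layout and names are illustrative assumptions rather than the disclosed wire format.

```python
# Sketch of step 604: instead of shipping every data block, the source
# emits small "make clone" commands that the standby applies against
# already-replicated files, level by level.

def clone_commands(tree_levels):
    # tree_levels: list of levels, each a list of (clone, base) pairs,
    # ordered from level 2 downward (level 1 is the replicated root).
    for level in tree_levels:
        for clone_name, base_name in level:
            yield {"op": "clone", "source": base_name, "target": clone_name}

levels = [
    [("A", "E"), ("C", "E")],   # level 2: clones of the replicated root E
    [("D", "C")],               # level 3: clone of the level-2 clone C
]

for cmd in clone_commands(levels):
    # A real system would transmit cmd to the standby computer, whose
    # logic performs the clone locally so block sharing is preserved.
    print(cmd)
```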
After synchronization of data blocks by step604, step605further detects the differing blocks introduced by modifications to either the source files or the source clone files after the clone operation and before step602. Step605compares each source clone file with its base file (e.g., compare file C with file E), and detects which of their data blocks differ. Then, in an embodiment, a previously modified data block that was synchronized by step604is again modified and again needs synchronization. In an embodiment, step605sends these differing blocks of each source clone file to the standby computer, and logic of the standby computer replaces the old blocks of standby clone files (e.g., A, C and D) with the differing blocks. In an embodiment, step605performs the second modification of the same source data block by replacing the previously modified version of the source data block in previously modified file A or C-E with a further modified version of the previously modified version of the source data block. For example in an embodiment, files A and C-E are in the same source tree531and may share that same source data block that is again modified through any one of files A or C-E. Inspection of merge-find set520reveals that the modified data block is shared in all of levels 1-3 because files A and C-E span levels 1-3. Thus as explained earlier herein, the modified data block should be synchronized as part of file C in level 2, which is the highest of levels 2-3 that share the data block with root file E at level 1, and file C is also the base file of source clone file D. Thus: a) even though file C may be discovered last in metadata510, file C is synchronized before file D, and b) sharing of the modified data block by replica files of source files C and D in the standby system is preserved despite repeated modification and synchronization of the data block and regardless of which of files C and D was used for making the modification. Simultaneously, file A is cloned from file E and has no clone of its own, so its differing blocks relative to file E can be replicated to the standby without any ordering dependency. Hardware Overview According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices. The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques. The special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques. For example,FIG.7is a block diagram that illustrates a computer system700upon which an embodiment of the invention may be implemented. Computer system700includes a bus702or other communication mechanism for communicating information, and a hardware processor704coupled with bus702for processing information. Hardware processor704may be, for example, a general purpose microprocessor.
Computer system700also includes a main memory706, such as a random access memory (RAM) or other dynamic storage device, coupled to bus702for storing information and instructions to be executed by processor704. Main memory706also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor704. Such instructions, when stored in non-transitory storage media accessible to processor704, render computer system700into a special-purpose machine that is customized to perform the operations specified in the instructions. Computer system700further includes a read only memory (ROM)708or other static storage device coupled to bus702for storing static information and instructions for processor704. A storage device710, such as a magnetic disk or optical disk, is provided and coupled to bus702for storing information and instructions. Computer system700may be coupled via bus702to a display712, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device714, including alphanumeric and other keys, is coupled to bus702for communicating information and command selections to processor704. Another type of user input device is cursor control716, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor704and for controlling cursor movement on display712. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. Computer system700may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system700to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system700in response to processor704executing one or more sequences of one or more instructions contained in main memory706. Such instructions may be read into main memory706from another storage medium, such as storage device710. Execution of the sequences of instructions contained in main memory706causes processor704to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device710. Volatile media includes dynamic memory, such as main memory706. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge. Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus702.
Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor704for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system700can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus702. Bus702carries the data to main memory706, from which processor704retrieves and executes the instructions. The instructions received by main memory706may optionally be stored on storage device710either before or after execution by processor704. Computer system700also includes a communication interface718coupled to bus702. Communication interface718provides a two-way data communication coupling to a network link720that is connected to a local network722. For example, communication interface718may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface718may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface718sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. Network link720typically provides data communication through one or more networks to other data devices. For example, network link720may provide a connection through local network722to a host computer724or to data equipment operated by an Internet Service Provider (ISP)726. ISP726in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet”728. Local network722and Internet728both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link720and through communication interface718, which carry the digital data to and from computer system700, are example forms of transmission media. Computer system700can send messages and receive data, including program code, through the network(s), network link720and communication interface718. In the Internet example, a server730might transmit a requested code for an application program through Internet728, ISP726, local network722and communication interface718. The received code may be executed by processor704as it is received, and/or stored in storage device710, or other non-volatile storage for later execution. Software Overview FIG.8is a block diagram of a basic software system800that may be employed for controlling the operation of computing system700. Software system800and its components, including their connections, relationships, and functions, are meant to be exemplary only, and not meant to limit implementations of the example embodiment(s).
Other software systems suitable for implementing the example embodiment(s) may have different components, including components with different connections, relationships, and functions. Software system800is provided for directing the operation of computing system700. Software system800, which may be stored in system memory (RAM)706and on fixed storage (e.g., hard disk or flash memory)710, includes a kernel or operating system (OS)810. The OS810manages low-level aspects of computer operation, including managing execution of processes, memory allocation, file input and output (I/O), and device I/O. One or more application programs, represented as802A,802B,802C . . .802N, may be “loaded” (e.g., transferred from fixed storage710into memory706) for execution by the system800. The applications or other software intended for use on computer system700may also be stored as a set of downloadable computer-executable instructions, for example, for downloading and installation from an Internet location (e.g., a Web server, an app store, or other online service). Software system800includes a graphical user interface (GUI)815, for receiving user commands and data in a graphical (e.g., “point-and-click” or “touch gesture”) fashion. These inputs, in turn, may be acted upon by the system800in accordance with instructions from operating system810and/or application(s)802. The GUI815also serves to display the results of operation from the OS810and application(s)802, whereupon the user may supply additional inputs or terminate the session (e.g., log off). OS810can execute directly on the bare hardware820(e.g., processor(s)704) of computer system700. Alternatively, a hypervisor or virtual machine monitor (VMM)830may be interposed between the bare hardware820and the OS810. In this configuration, VMM830acts as a software “cushion” or virtualization layer between the OS810and the bare hardware820of the computer system700. VMM830instantiates and runs one or more virtual machine instances (“guest machines”). Each guest machine comprises a “guest” operating system, such as OS810, and one or more applications, such as application(s)802, designed to execute on the guest operating system. The VMM830presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. In some instances, the VMM830may allow a guest operating system to run as if it is running on the bare hardware820of computer system700directly. In these instances, the same version of the guest operating system configured to execute on the bare hardware820directly may also execute on VMM830without modification or reconfiguration. In other words, VMM830may provide full hardware and CPU virtualization to a guest operating system in some instances. In other instances, a guest operating system may be specially designed or configured to execute on VMM830for efficiency. In these instances, the guest operating system is “aware” that it executes on a virtual machine monitor. In other words, VMM830may provide para-virtualization to a guest operating system in some instances. A computer system process comprises an allotment of hardware processor time, and an allotment of memory (physical and/or virtual), the allotment of memory being for storing instructions executed by the hardware processor, for storing data generated by the hardware processor executing the instructions, and/or for storing the hardware processor state (e.g.
content of registers) between allotments of the hardware processor time when the computer system process is not running. Computer system processes run under the control of an operating system, and may run under the control of other programs being executed on the computer system. Cloud Computing The term “cloud computing” is generally used herein to describe a computing model which enables on-demand access to a shared pool of computing resources, such as computer networks, servers, software applications, and services, and which allows for rapid provisioning and release of resources with minimal management effort or service provider interaction. A cloud computing environment (sometimes referred to as a cloud environment, or a cloud) can be implemented in a variety of different ways to best suit different requirements. For example, in a public cloud environment, the underlying computing infrastructure is owned by an organization that makes its cloud services available to other organizations or to the general public. In contrast, a private cloud environment is generally intended solely for use by, or within, a single organization. A community cloud is intended to be shared by several organizations within a community; while a hybrid cloud comprises two or more types of cloud (e.g., private, community, or public) that are bound together by data and application portability. Generally, a cloud computing model enables some of those responsibilities which previously may have been provided by an organization's own information technology department, to instead be delivered as service layers within a cloud environment, for use by consumers (either within or external to the organization, according to the cloud's public/private nature). Depending on the particular implementation, the precise definition of components or features provided by or within each cloud service layer can vary, but common examples include: Software as a Service (SaaS), in which consumers use software applications that are running upon a cloud infrastructure, while a SaaS provider manages or controls the underlying cloud infrastructure and applications. Platform as a Service (PaaS), in which consumers can use software programming languages and development tools supported by a PaaS provider to develop, deploy, and otherwise control their own applications, while the PaaS provider manages or controls other aspects of the cloud environment (i.e., everything below the run-time execution environment). Infrastructure as a Service (IaaS), in which consumers can deploy and run arbitrary software applications, and/or provision processing, storage, networks, and other fundamental computing resources, while an IaaS provider manages or controls the underlying physical cloud infrastructure (i.e., everything below the operating system layer). Database as a Service (DBaaS), in which consumers use a database server or Database Management System that is running upon a cloud infrastructure, while a DBaaS provider manages or controls the underlying cloud infrastructure and applications. The above-described basic computer hardware and software and cloud computing environment are presented for purposes of illustrating the basic underlying computer components that may be employed for implementing the example embodiment(s). The example embodiment(s), however, are not necessarily limited to any particular computing environment or computing device configuration.
Instead, the example embodiment(s) may be implemented in any type of system architecture or processing environment that one skilled in the art, in light of this disclosure, would understand as capable of supporting the features and functions of the example embodiment(s) presented herein. In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The sole and exclusive indicator of the scope of the invention, and what is intended by the applicants to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction.
11860827
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT(S) The following description is intended to convey an understanding of the present invention by providing specific embodiments and details. It is understood, however, that the present invention is not limited to these specific embodiments and details, which are exemplary only. It is further understood that one possessing ordinary skill in the art, in light of known systems and methods, would appreciate the use of the invention for its intended purposes and benefits in any number of alternative embodiments, depending upon specific design and other needs. An embodiment of the present invention is directed to defining business logic and lineage based on data patterns from legacy systems to target systems. An embodiment of the present invention may receive inputs from a source system and identify corresponding business logic for a target system that is disparate from the source system. The innovation analyzes data patterns of systems of record (SORs) as well as consumption attributes to define the business logic. In the example concerning CATEGORY_CODE, when provided with thousands of SOR attributes as an input, the innovative system may identify a subset of relevant SOR attributes and then generate the business logic to derive the consumption attribute. Based on the attributes and/or types of attributes, an algorithm may be applied to generate business logic. In an illustrative example involving mortgage loans, an exemplary attribute may represent “loan status.” The exemplary attribute may include a plurality of values, represented by A, B, C, D and E. The system may recognize that loan status may be represented in a number of different ways in various different legacy systems, applications and channels. In the first legacy system, the attribute values may be represented by 1, 2, 3, 4, 5, 6 . . . 20. In a second legacy system, the attribute values may be represented in a different manner, such as A1, A2, A3, B1, B2, B3 . . . E3. The legacy systems may also implement various communication channels. An embodiment of the present invention may analyze the target attribute values (A, B, C, D, and E) with the legacy attribute values and identify a corresponding business logic. When applied to a large entity, such as a financial institution, there may be millions and millions of mortgage loans over the past several decades. An embodiment of the present invention is directed to implementing machine learning algorithms to infer relevant lineage as well as business logic, resulting in significant efficiency gains. Also, decision tree algorithms may be used for discrete data attributes and multiple interaction regression algorithms may be used for continuous data attributes. An embodiment of the present invention may be applied to large data sets in a manner that enables various users, even users without an understanding of machine learning concepts, to interact with the innovative system. For example, an interactive user interface may be provided that enables a user to identify an attribute used in a legacy system and automatically generate corresponding business logic that may be used in an implementation for another target system. The interactive user interface may also provide reports, analysis, queries and outputs in various formats. FIG.1is an exemplary system diagram that identifies data lineage, according to an embodiment of the present invention. As shown inFIG.1, Data Sources110may represent systems of record.
In this example, legacy systems may be represented by database systems. For example, database systems may represent an Integrated Consumer Data Warehouse (ICDW). Server120may execute a machine learning application at122that communicates with Data Files124. For example, Data Files124may represent comma separated values (CSV) files with columns as well as other file formats. Server120may generate a target mapping model at126. Platform130may represent a cloud or other platform that communicates with users, such as business analyst users140. Platform130may provide a portal or other user interface132that communicates with ML Application122via an API, such as a RESTful API. In addition, user interface132may communicate with users via a communication interface or network represented by136. Platform130may support various data sources, represented by Data Store134. According to an exemplary embodiment, a user may utilize User Interface (UI)132to provide driving information for a data lineage process. This may include providing or otherwise identifying data relating to a source, data set and/or hyper-parameters. Hyper parameters may represent options given to a decision tree model. For example, hyper parameters may represent how many nodes (branches) a tree may have, how many leaf nodes at each branch and the depth of the tree. Data may be extracted from legacy systems, represented by112,114and pre-formatted for a Machine Learning Application, represented by122. Machine Learning Model, represented by126, may be used to determine highly correlated factors. An embodiment of the present invention may then generate recommended factors and engage the user through a notification via communication network136. This may occur via an email notification or other mode of communication. The user may review and modify recommendations to align with a current interrogation of the data set. For example, recommendations may represent possible input parameters for a given output variable. In this scenario, a user may add new parameters to the model as input parameters. This may occur if the user thinks there are some input parameters missing in an algorithm recommendation. Machine Learning Model126may run against the data set with the hyper-parameters provided to assist in the determination of SOR column correlations with dependent features. Output of Machine Learning Model126may then be sent to User140through communication network136. This process can be repeated multiple times until the data set is fully interrogated. FIG.2is an exemplary flowchart that illustrates a data lineage process, according to an embodiment of the present invention. At step210, source, dataset, hyper parameters and algorithm may be identified. At step212, data may be extracted. At step214, the extracted data may be preformatted. At step216, highly correlated factors may be identified. At step218, recommended factors may be generated. At step220, a model may be generated using the algorithm. At step222, correlations may be determined with dependent features. At step224, pseudo code may be generated. The order illustrated inFIG.2is merely exemplary. While the process ofFIG.2illustrates certain steps performed in a particular order, it should be understood that the embodiments of the present invention may be practiced by adding one or more steps to the processes, omitting steps within the processes and/or altering the order in which one or more steps are performed. At step210, source, dataset and hyper parameters may be identified.
In addition, an algorithm may be selected. The source may be identified by a link or other location of a file. The algorithm may be selected as a decision tree, regression, Gaussian algorithm and/or other algorithm. A decision tree algorithm may be selected for discrete variables while a regression algorithm may be selected for continuous variables. Other algorithms may be available. In addition, an embodiment of the present invention may automatically apply an optimal algorithm to the datasets based on the various inputs and other considerations. Other inputs may also include feature count and/or other limits and boundaries. An embodiment of the present invention may be applied to files at various locations and systems, including SQL databases and/or other sources. In this example, the inputs may also include a query string, which may be selected from a table or other source. At step212, data may be extracted. Datasets may be extracted from the source location. The extracted data may include features, attribute inputs, etc. At step214, the extracted data may be preformatted. The datasets may be formatted for machine learning analysis. At step216, highly correlated factors may be identified. An embodiment of the present invention may determine a subset of highly relevant factors, features and/or variables. For example, a larger set of features may be received as an input. From this larger set of features, an embodiment of the present invention may identify a subset of features that are most impactful relative to the remaining features. For example, high correlation represents how strongly changes in the input values track changes in the output value. If an input variable's values do not change with the output, it may be considered a low-correlation value. The system may further generate possible features to be used in determining a dependent label. For example, a user may be requested to select continuous and discrete features from a set of available features.FIG.5below provides additional details. At step218, recommended factors may be generated. An embodiment of the present invention may present the highly correlated factors as recommended factors via a user interface to the user. The user may then confirm or reject the recommended factors. According to another example, an embodiment of the present invention may automatically apply the recommended factors. Other variations may be applied. In addition, an embodiment of the present invention may further categorize the recommended factors, source attributes, etc. At step220, a model may be generated using the algorithm. For example, the highly correlated factors may be applied to generate the model. The model may be executed on a dataset with hyper parameters. An embodiment of the present invention may apply machine learning to generate a model that applies and executes logic against the dataset. At step222, correlations may be determined with dependent features. In this step, SOR column correlations with dependent features may be determined. For example, correlated input values (e.g., SOR, Source) may be determined based on the output value. The process may be repeated and further refined. At step224, pseudo code may be generated. The pseudo code may be provided via an interactive user interface and may be implemented or executed on a target system. The pseudo code may include various formats, including IF/THEN statements.FIG.6below provides additional details.
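For illustration, the following is a minimal sketch of the FIG. 2 flow using scikit-learn, assuming a CSV with feature columns and a discrete target column named "RM_Category"; the library choice, file name, column names, and hyper parameter values are illustrative assumptions, not prescribed by this disclosure.

```python
# Minimal sketch of steps 212-224: extract, preformat, select recommended
# factors via recursive feature elimination, fit a decision tree, and
# emit its paths as IF/THEN-style pseudo code.

import pandas as pd
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("extracted_sor_data.csv")            # step 212: extract
X = pd.get_dummies(df.drop(columns=["RM_Category"]))  # step 214: preformat
y = df["RM_Category"]

# Steps 216-218: recursive feature elimination surfaces the most
# correlated/recommended factors from the larger attribute set.
selector = RFE(DecisionTreeClassifier(random_state=0),
               n_features_to_select=10).fit(X, y)
recommended = X.columns[selector.support_]

# Step 220: fit the model on the recommended factors with hyper
# parameters (tree depth, leaf counts) supplied by the user.
tree = DecisionTreeClassifier(max_depth=5, min_samples_leaf=50,
                              random_state=0)
tree.fit(X[recommended], y)

# Step 224: each root-to-leaf path reads as nested IF/THEN pseudo code.
print(export_text(tree, feature_names=list(recommended)))
```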
FIG.3is an exemplary flowchart illustrating a process flow that generates pseudo code, according to an embodiment of the present invention. For example, the process flow may generate pseudo code to decompose Relationship Manager (RM) Category Code (e.g., loan status, etc.) from a set of source attributes, e.g., over 200 source attributes. The process may involve preprocessing the source data using direct SQL and creating a comma separated values (CSV) file with header columns along with a target column. Other formats may be used. Next, CSV may be processed using dataframes, such as Pandas Dataframes. Pandas is an open source library providing high performance data structures and data analysis tools for the Python programming language. A set of best source feature attributes may be identified using an elimination method, such as the RFE (Recursive Feature Elimination) method in Machine Learning (ML). Recursive feature elimination may refer to repeatedly constructing a model (e.g., a support vector machine (SVM) or a regression model) and choosing either the best or worst performing feature (for example, based on coefficients), setting the feature aside and then repeating the process with the rest of the features. The attributes may be separated into continuous and categorical (e.g., code types) columns. The best selected features may be fed to a machine learning (ML) Decision Tree Algorithm. Decision tree learning represents a predictive modeling approach that may be used in machine learning. Decision tree learning uses a decision tree (as a predictive model) to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). A Descriptive Decision Tree Path may be generated in pseudo code. A Decision Tree Logic may be tested for accuracy (e.g., approximately >95%). The Tree may be pruned until the results are satisfactory (e.g., reach a threshold, achieve a desired accuracy rate, etc.). At step310, a dataset may be identified. At step312, data may be preprocessed with a label. At step314, a splitter may be applied to the data to produce X_Valid at316, Y_Train and Y_Valid at318, and X_Train at320. X_Train data may be divided into numerical data322and categorical data324. Categorical data324may be converted to labels at326and represented in binary form at328. Data may be fed into Stacker330. This may involve data collection of transformed features and/or variables. Recursive feature elimination may be applied at332. Feature selector may be applied at334and Hyper parameter selector may be applied at336, via subject matter expert (SME) input338. ML Model Selector340may be applied. Data may be received at Evaluator342where best models with optimized hyperparameters are identified at344. Pseudo code may be generated at346. The order illustrated inFIG.3is merely exemplary. While the process ofFIG.3illustrates certain steps performed in a particular order, it should be understood that the embodiments of the present invention may be practiced by adding one or more steps to the processes, omitting steps within the processes and/or altering the order in which one or more steps are performed. FIG.4is an exemplary illustration of a user interface, according to an embodiment of the present invention. A user may interact with an embodiment of the present invention through a user interface. The user interface may include an Input410, Correlating Factors412, Recommendation Factors414, Model Execution416and Code418.
At Input410, a user may provide a label to predict, a dataset or file location, and hyper parameters. The user may also identify an algorithm, such as a decision tree, regression, etc. At Correlating Factors412, the system may identify one or more correlating factors. At Recommendation Factors414, the system may identify additional factors. The user may then confirm or reject the recommended factors. At Model Execution416, the system may execute a model. At Code418, the system may provide pseudo-code that represents logic. The code may be in the form of IF and THEN statements. Other code formats may be provided. FIG.5is an exemplary user interface, according to an embodiment of the present invention. As shown inFIG.5, the system may identify a predicting label and an algorithm. In this example, the predicting label is “RM_Category” and the algorithm is a “Decision Tree,” as shown by510. For each available feature, the system may request additional input from the user. In this example, the user may select continuous features (as shown by520) and discrete features (as shown by522) from the available features panel at530. Continuous features may represent a variable with an infinite number of possible values. Discrete features may represent a variable with a finite number of possible values. Discrete features can take on only a certain number of values, such as quantitative values. The user may then confirm the features as input variables. An embodiment of the present invention may evaluate the features selected. The system may then identify additional features that have a larger impact relative to the remaining features. The system may identify features to the user, via an interactive interface, as continuous features and discrete features. FIG.6is an exemplary pseudocode, according to an embodiment of the present invention. The exemplary pseudocode illustrated inFIG.6represents business logic that may be applied to a first system to result in a target system. According to one example, the system may facilitate migration from a legacy system to a modern system. The exemplary logic may identify highly correlated variables and further provide pseudocode that may be implemented and/or executed in various target systems. For example, the logic may include a series of IF/THEN statements, as shown by610. The logic may also include nested and complex formats. Other formats may be generated and applied. The foregoing examples show the various embodiments of the invention in one physical configuration; however, it is to be appreciated that the various components may be located at distant portions of a distributed network, such as a local area network, a wide area network, a telecommunications network, an intranet and/or the Internet. Thus, it should be appreciated that the components of the various embodiments may be combined into one or more devices, collocated on a particular node of a distributed network, or distributed at various locations in a network, for example. As will be appreciated by those skilled in the art, the components of the various embodiments may be arranged at any location or locations within a distributed network without affecting the operation of the respective system. As described above, the various embodiments of the present invention support a number of communication devices and components, each of which may include at least one programmed processor and at least one memory or storage device. The memory may store a set of instructions.
The instructions may be either permanently or temporarily stored in the memory or memories of the processor. The set of instructions may include various instructions that perform a particular task or tasks, such as those tasks described above. Such a set of instructions for performing a particular task may be characterized as a program, software program, software application, app, or software. It is appreciated that in order to practice the methods of the embodiments as described above, it is not necessary that the processors and/or the memories be physically located in the same geographical place. That is, each of the processors and the memories used in exemplary embodiments of the invention may be located in geographically distinct locations and connected so as to communicate in any suitable manner. Additionally, it is appreciated that each of the processor and/or the memory may be composed of different physical pieces of equipment. Accordingly, it is not necessary that the processor be one single piece of equipment in one location and that the memory be another single piece of equipment in another location. That is, it is contemplated that the processor may be two or more pieces of equipment in two or more different physical locations. The two distinct pieces of equipment may be connected in any suitable manner. Additionally, the memory may include two or more portions of memory in two or more physical locations. As described above, a set of instructions is used in the processing of various embodiments of the invention. The servers may include software or computer programs stored in the memory (e.g., non-transitory computer readable medium containing program code instructions executed by the processor) for executing the methods described herein. The set of instructions may be in the form of a program or software or app. The software may be in the form of system software or application software, for example. The software might also be in the form of a collection of separate programs, a program module within a larger program, or a portion of a program module, for example. The software used might also include modular programming in the form of object oriented programming. The software tells the processor what to do with the data being processed. Further, it is appreciated that the instructions or set of instructions used in the implementation and operation of the invention may be in a suitable form such that the processor may read the instructions. For example, the instructions that form a program may be in the form of a suitable programming language, which is converted to machine language or object code to allow the processor or processors to read the instructions. That is, written lines of programming code or source code, in a particular programming language, are converted to machine language using a compiler, assembler or interpreter. The machine language is binary coded machine instructions that are specific to a particular type of processor, i.e., to a particular type of computer, for example. Any suitable programming language may be used in accordance with the various embodiments of the invention. For example, the programming language used may include assembly language, Ada, APL, Basic, C, C++, COBOL, dBase, Forth, Fortran, Java, Modula-2, Pascal, Prolog, REXX, Visual Basic, JavaScript and/or Python. Further, it is not necessary that a single type of instructions or single programming language be utilized in conjunction with the operation of the system and method of the invention. 
Rather, any number of different programming languages may be utilized as is necessary or desirable. Also, the instructions and/or data used in the practice of various embodiments of the invention may utilize any compression or encryption technique or algorithm, as may be desired. An encryption module might be used to encrypt data. Further, files or other data may be decrypted using a suitable decryption module, for example. In the system and method of exemplary embodiments of the invention, a variety of “user interfaces” may be utilized to allow a user to interface with the mobile devices or other personal computing device. As used herein, a user interface may include any hardware, software, or combination of hardware and software used by the processor that allows a user to interact with the processor of the communication device. A user interface may be in the form of a dialogue screen provided by an app, for example. A user interface may also include any of touch screen, keyboard, voice reader, voice recognizer, dialogue screen, menu box, list, checkbox, toggle switch, a pushbutton, a virtual environment (e.g., Virtual Machine (VM)/cloud), or any other device that allows a user to receive information regarding the operation of the processor as it processes a set of instructions and/or provide the processor with information. Accordingly, the user interface may be any system that provides communication between a user and a processor. The information provided by the user to the processor through the user interface may be in the form of a command, a selection of data, or some other input, for example. The software, hardware and services described herein may be provided utilizing one or more cloud service models, such as Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS), and/or using one or more deployment models such as public cloud, private cloud, hybrid cloud, and/or community cloud models. Although the embodiments of the present invention have been described herein in the context of a particular implementation in a particular environment for a particular purpose, those skilled in the art will recognize that its usefulness is not limited thereto and that the embodiments of the present invention can be beneficially implemented in other related environments for similar purposes.
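As referenced above in connection with FIG. 6, the following is a minimal, illustrative sketch of emitting IF/THEN pseudocode by walking a fitted decision tree. It assumes scikit-learn's DecisionTreeClassifier; the feature names and training data below are invented for illustration and are not prescribed by the embodiments described above.

from sklearn.tree import DecisionTreeClassifier

def tree_to_pseudocode(tree, feature_names):
    # Walk the fitted tree and emit one IF/THEN line per decision node.
    t = tree.tree_
    lines = []
    def recurse(node, indent):
        pad = "    " * indent
        if t.children_left[node] != t.children_right[node]:  # decision node
            name = feature_names[t.feature[node]]
            threshold = t.threshold[node]
            lines.append(f"{pad}IF {name} <= {threshold:.2f} THEN")
            recurse(t.children_left[node], indent + 1)
            lines.append(f"{pad}ELSE")
            recurse(t.children_right[node], indent + 1)
        else:  # leaf node: predict the majority class
            lines.append(f"{pad}PREDICT class {t.value[node].argmax()}")
    recurse(0, 0)
    return "\n".join(lines)

# Hypothetical training data; in practice the label and features would
# come from the user's selections at 510-530.
X = [[0.2, 1], [0.7, 0], [0.9, 1], [0.4, 0]]
y = [0, 1, 1, 0]
clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(tree_to_pseudocode(clf, ["score", "flag"]))

The emitted text mirrors the nested IF/THEN format shown at 610; a real migration tool would substitute the target system's statement syntax for the PREDICT lines.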
23,459
11860828
DETAILED DESCRIPTION Definitions Distributed system: A distributed system comprises a collection of distinct computing and/or storage processes and/or devices that may be spatially separated, and that may communicate with one another through the exchange of messages or events. Replicated State Machine: A replicated state machine approach is a method for implementing a fault-tolerant service by replicating servers and coordinating client interactions with server replicas. These state machines are “replicated” since the state of the state machine evolves in an ordered fashion identically at all learners. Replicas of a single server are executed on separate processors of a distributed system, and protocols are used to coordinate client interactions with these replicas. One example and implementation of a replicated state machine is a deterministic state machine (DSM) that advances its state in a deterministic manner. Proposers: According to one embodiment, proposers are processes that are configured and enabled to suggest proposals, some of which may be configured to mutate data. Acceptors: According to one embodiment, acceptors are processes that are configured to participate in deciding on the order of proposals made by proposers. According to one embodiment, only when a majority of acceptors have determined that a proposal takes a particular place in the global sequence of agreements (further described below) does it become an agreement (e.g., an agreed-upon proposal). Acceptors, according to one embodiment, may be configured to only participate in deciding on the order of agreements and do not reason/care about the underlying contents of the agreements (as described herein, the agreement's value is opaque to the Distributed Coordination Engine, also described below). Acceptors may be configured as application-independent entities. Learners: According to one embodiment, learners learn of agreements made between the proposers and acceptors and apply the agreements in a deterministic order to the application through their output proposal sequence. In one embodiment, an agreement identity is provided, as is a persistent store that, for each replicated state machine, allows a sequence of agreements to be persistently recorded. Each proposal is guaranteed to be delivered at least once to each learner in a particular membership. The Hadoop-Compatible File System (HCFS) namespace is a hierarchy of files and directories. Hadoop is an open-source, Java-based programming framework that supports the processing and storage of extremely large data sets in a distributed computing environment. It is part of the Apache project sponsored by the Apache Software Foundation. Files and directories are represented on the NameNode by inodes. Inodes record attributes or metadata such as permissions, modification and access times, namespace and disk space quotas. The file content is split into large data blocks (typically 128 MB), and each data block of the file is independently replicated at multiple DataNodes (typically three). One implementation of HCFS is the Hadoop Distributed File System (HDFS). The NameNode is the metadata service of HDFS, which is responsible for tracking changes in the namespace. The NameNode maintains the namespace tree and the mapping of blocks to DataNodes. That is, the NameNode tracks the location of data within a Hadoop cluster and coordinates client access thereto. Conventionally, each cluster has a single NameNode.
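As a small illustration of the acceptor rule defined above, under which a proposal takes its place in the global sequence of agreements only once a majority of acceptors have so determined, consider the following sketch (the function and its names are illustrative only):

def becomes_agreement(votes_for_slot, total_acceptors):
    # A proposal becomes an agreement for a given slot in the global
    # sequence only when more than half of the acceptors have voted
    # to place it there.
    return votes_for_slot > total_acceptors // 2

assert becomes_agreement(2, 3)        # 2 of 3 acceptors: agreement
assert not becomes_agreement(1, 3)    # no majority yet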
The cluster can have thousands of DataNodes and tens of thousands of HDFS clients per cluster, as each DataNode may execute multiple application tasks concurrently. The inodes and the list of data blocks that define the metadata of the name system are called the image. The NameNode keeps the entire namespace image in Random Access Memory (RAM). The roles of proposers (processes who make proposals to change the state of the namespace to the membership), acceptors (processes who vote on whether a proposal to change the state of the namespace should be agreed by the membership) and learners (processes in the membership who learn of agreements that have been made) are defined in, for example, the implementation of the Paxos algorithm described in Lamport, L.: The Part-Time Parliament, ACM Transactions on Computer Systems 16, 2 (May 1998), 133-169, which is incorporated herein in its entirety. According to one embodiment, multiple nodes may be configured to carry out each of the roles. A Distributed Coordination Engine (also referred to as DConE) may allow multiple learners to agree on the order of events submitted thereto by multiple proposers with the aid of multiple acceptors to achieve high availability. To achieve reliability, availability, and scalability, multiple simultaneously-active NameNodes (which may be generically referred to herein as MDSs) may be provided by replicating the state of the namespace on multiple nodes with the requirement that the state of the nodes on which the namespace is replicated is maintained consistent between such nodes. In one embodiment, however, only one MDS is present in each zone, as discussed hereunder. This consistency between NameNodes in different zones may be guaranteed by the DConE, which may be configured to accept proposals to update the namespace, streamline the proposals into an ordered global sequence of updates and only then allow the MDSs to learn and apply the updates to their individual states in the specified, agreed-upon order. Herein, “consistency” means One-Copy Equivalence, as detailed in Bernstein et al., “Concurrency Control & Recovery in Database Systems”, published by Addison Wesley, 1987, Chapters 6, 7 & 8, which is hereby incorporated herein in its entirety. Since the NameNodes start from the same state and apply the same deterministic updates in the same deterministic order, their respective states evolve identically over time to maintain consistency. According to one embodiment, therefore, the namespace may be replicated on multiple NameNodes (or, more generally, metadata servers or MDSs) provided that (a) each MDS is allowed to modify its namespace replica, and (b) updates to one namespace replica are propagated to the namespace replicas on other MDSs in other zones such that the namespace replicas are maintained consistent with one another, across MDSs and across zones. FIG. 1 shows a cluster running a single distributed file system 102 spanning different geographically (or otherwise) separated zones. The distributed file system may, for example, incorporate aspects of HDFS. Each of the DataNodes (shown as “DN” in FIG. 1) may be configured to communicate (through a DataNode-to-server remote procedure call (RPC) protocol) only within their own zone. That is, the DNs of zone 1 may only communicate with the nodes (servers) 110, 112, 114 ... of zone 1 (or nodes adjacent to zone 1) and the DNs 132, 134, 136, 138 ... of zone 2 may only communicate with nodes (servers) 116, 118, 120 ... of zone 2 (or servers adjacent to zone 2).
In one embodiment, only one metadata service (MDS) storing a replica of the namespace may be present in each zone, such as shown at MDS 103 for zone 1 and MDS 105 for zone 2. The nodes (servers) of each zone, in turn, communicate only with the MDS of their respective zone. In this manner, nodes 110, 112, 114 ... communicate with MDS 103, and nodes 116, 118, 120 ... of zone 2 communicate only with MDS 105. The MDSs of both zones 1 and 2 may coordinate with each other using one or more (e.g., an odd number such as 3, for High Availability (HA)) inter-zone servers 140, 142 to maintain the state of the namespace consistent throughout the different zones of the distributed filesystem 102 by streaming changes to the namespace across the WAN 108 between zones. Those changes may be received by a server (node) in the other zone, whereupon that server writes the changes locally to the backend storage of that zone, thereby enabling all reads and writes to be performed as local operations and eliminating the need for cross-zone security. The DConE process 122 may be configured to guarantee that the same deterministic updates to the state of the namespace are applied in the same deterministic order on all MDSs in all zones. In one embodiment, the DConE process 122 may be embedded in the MDS of each zone. That deterministic order may be defined by a Global Sequence Number (GSN). Therefore, a significant role of the DConE process 122 is to process agreed-upon proposals to modify or otherwise update the state of the namespace replicas according to commands received by the servers in each zone from HDFS clients and transform them into a globally-ordered sequence of agreements, indexed by the GSN. The servers (or nodes, as the terms may be used interchangeably) may then sequentially apply the agreements from that ordered sequence, which generates updates to the state of the replica of the namespace in their zone. The GSN may be configured as a unique monotonically increasing number. However, the GSN may be otherwise configured, as those of skill in this art may recognize. In this manner, through the sequential execution of the set of agreements ordered (through the GSN mechanism) and generated by the DConE process 122, and through the streaming of changes in each zone to all of the other zones, the state of the replica of the namespace stored in each of the zones is brought to or maintained in consistency. As the MDSs start from the same state, this ordered application of updates ensures consistency of the replicas across zones, in that snapshots thereof on MDSs having processed the agreements at the same GSN are identical, both within and across zones. The metadata in the replicas of the namespace maintained by the MDSs may be coordinated instantaneously (or nearly so, accounting for bandwidth and latencies inherent in the network), as the DConE process 122 delivers the agreements and as changes are streamed between zones. Likewise, all file system data is also automatically replicated across the distributed file system. In such a manner, consistent, continuous data replication takes place between file systems in (e.g., but not limited to, Hadoop) clusters. Client applications may be configured to interact with a virtual file system that integrates the underlying storage across multiple zones. When changes are made to files in one zone, those changes are replicated consistently to the other zones.
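A toy sketch of the consistency property just described, in which two zone replicas that apply the same GSN-ordered agreements arrive at identical namespace states, might look as follows (all names and operations are invented for illustration):

class NamespaceReplica:
    def __init__(self):
        self.state = {}
        self.gsn = 0   # last applied Global Sequence Number

    def apply(self, gsn, op, path, value=None):
        # Agreements must be applied sequentially, in GSN order.
        assert gsn == self.gsn + 1, "gap or reordering detected"
        if op == "set":
            self.state[path] = value
        elif op == "delete":
            self.state.pop(path, None)
        self.gsn = gsn

agreements = [(1, "set", "/a", 1), (2, "set", "/b", 2), (3, "delete", "/a", None)]
zone1, zone2 = NamespaceReplica(), NamespaceReplica()
for agreement in agreements:
    zone1.apply(*agreement)
    zone2.apply(*agreement)
# Snapshots at the same GSN are identical across zones.
assert zone1.state == zone2.state and zone1.gsn == zone2.gsn == 3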
One embodiment may comprise a software application that allows Hadoop deployments to replicate HCFS data between (e.g., Hadoop) clusters that are running different, even incompatible versions of Hadoop such as, for example, CDH, HDP, EMC Isilon, Amazon S3/EMRFS and MapR. It is also possible, according to one implementation, to replicate between different vendor distributions and versions of Hadoop. Advantageously, embodiments may provide a virtual file system for Hadoop, compatible with all Hadoop applications, a single, virtual namespace that integrates storage from different types of Hadoop, a globally-distributed storage mechanism, and WAN replication using active-active replication technology, delivering single-copy consistent HDFS data, replicated between far-flung data centers. According to one embodiment, some or all of the functionality described herein may be carried out within a server or servers adjacent to the MDS, at a higher level in the distributed filesystem stack. In this manner, rather than working deeply at the NameNode level, one embodiment may be configured to operate as a proxy application to the distributed file system. In the ordered global sequence of agreed-upon proposals (issued from a Deterministic State Machine (DSM) or from some other source), some commands in a sequence of commands might be dependent on others. Dependent commands must be executed in the correct order. For example, consider commands A, B and C, in which command B depends on command A and in which command C depends on commands B and A. For load balancing or other purposes, it may be desired to spread the execution load of such commands across several servers or nodes within a zone. For example, node 1 may be assigned execution of command A, node 2 may be assigned execution of command B and node 1 may be assigned to execute command C. In order for node 2 to execute command B, it must be made aware that node 1 has finished executing command A. Likewise, the same is true again for node 1 when executing command C, as it will need verification that node 2 has finished execution of command B. This approach risks the introduction of significant delays in processing client change requests (i.e., commands) when delays in inter-node communication occur. As set forth above, execution of dependent commands must be delayed until the command or commands on which they depend have been executed. Independent commands may be executed in parallel. In case of failure, a command might be re-executed, but under no conditions may a sequence of commands be re-executed. That is, each command is idempotent (it will produce the same result whether executed once or multiple times), but the execution of a sequence of more than one command is not idempotent, in that re-execution of the same sequence will not produce the same result. Therefore, although each command may be individually executed more than once, a sequence of such commands may not. For scalability and high availability, there may be multiple servers executing commands and each server within a zone may execute multiple independent commands in parallel. Even though each node receives the entire ordered global sequence of commands, each command in the global sequence should only be executed by a single node. Herein, nodes or servers are deemed unreliable (in that they are subject to failure and may or may not recover) and are configured to communicate with each other, as described above.
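The execution constraints just described, under which independent commands may run in parallel while a dependent command waits for its prerequisites, can be sketched as follows (the commands and the dependency map are invented for illustration):

from concurrent.futures import ThreadPoolExecutor

# Command C depends on A and B; B depends on A; D is independent.
deps = {"A": set(), "B": {"A"}, "C": {"A", "B"}, "D": set()}
done = set()

def execute(cmd):
    print(f"executing {cmd}")

with ThreadPoolExecutor() as pool:
    pending = dict(deps)
    while pending:
        # Commands whose prerequisites have all completed are ready.
        ready = [c for c, d in pending.items() if d <= done]
        list(pool.map(execute, ready))   # independent commands run in parallel
        done.update(ready)
        for c in ready:
            del pending[c]

In the first wave A and D execute concurrently, then B, then C, matching the ordering constraints in the example above.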
One embodiment is a computer-implemented method of writer pre-selection (i.e., the pre-selection of a (in one embodiment, preferred) node that is to execute commands configured to update the state of the namespace) that achieves scalability and high availability and ensures prompt and reliable execution of the constituent commands of the ordered global sequence, while maintaining the safety of preventing re-execution of sequences of commands. The Writers List According to one embodiment, and as shown in FIG. 2, prior to insertion into the ordered global sequence generated by the DConE 122, each command may be associated with a list of nodes (servers) to execute the command (in the consensus case, an agreement), ordered according to execution preference. In one embodiment, this list of nodes may be ordered such that the nodes appear in the list in the order of preference for execution. This ordered list of nodes is also referred to herein as the writers list 202. Each node, according to one embodiment, may be configured to only execute those commands for which it is the first node in the received ordered writers list. According to one embodiment, if the first node in the writers list becomes non-operational (e.g., an expected heartbeat signal is not timely received), the next operational node or server in the ordered writers list 202 may be designated as the node to execute the command. Thereafter, and in real-time or near real-time, the nodes disseminate, to all or a predetermined number of the nodes, information that includes which commands they executed, which, in time, enables all nodes (including the disseminating node itself) to be apprised of all already-executed commands. As shown in FIG. 2, command 1 is an independent command and is associated with writers list 202, in which server 112 is the first-listed node. Therefore, node 112 is the preferred server to execute command 1. If node 112 fails for any reason, before having executed command 1 or after execution but before it can disseminate the fact that it has executed command 1, then execution of command 1 falls back to node 110 (the next-listed node in the writers list 202) and thereafter to next-listed node 114 should server 110 fail. Any known failed nodes are relegated to the bottom of the writers lists that are generated (or otherwise accessed) for subsequent commands. Since command 2 is dependent upon the execution of command 1, writers list 204 is associated therewith, which writers list is identical to writers list 202. Command 3, being an independent command, may have a different writers list 206 associated therewith. Note that the same writers list as 202 and 204 may have been associated with command 3, but load balancing and/or other considerations may have recommended a different writers list for that command. In further detail, according to one embodiment, processes placing the commands into the ordered global sequence of agreements may learn of suspected failing nodes and failed nodes and may keep track of the operational status of all of the nodes. For each new execution of client commands, the writers list may, according to one embodiment, include an ordered sequence of preferred operational nodes that should execute the command followed by, in one embodiment, failed, failing or suspicious nodes. Therefore, when a node fails, or is suspected of failing, such node will be pushed to or toward the bottom of the writers list.
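A compact sketch of the rule described above, under which a node executes a command only when it is the first operational node on that command's writers list, might look as follows (node identifiers follow FIG. 2 and are illustrative):

def should_execute(my_id, writers_list, failed_nodes):
    # The first node on the list that is not known to have failed
    # is the one responsible for executing the command.
    for node in writers_list:
        if node not in failed_nodes:
            return node == my_id
    return False

writers = [112, 110, 114]                       # preference order
assert should_execute(112, writers, set())      # 112 is preferred
assert should_execute(110, writers, {112})      # fallback once 112 fails
assert not should_execute(114, writers, {112})  # 110 is next, not 114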
In one embodiment, the operational nodes in the writers list may be ordered such that dependent commands will preferably receive the same writers list (such as shown at 202 and 204 in FIG. 2) to reduce inter-node execution dependencies, and may be ordered such that all nodes have approximately the same distribution of positions. Indeed, dependent commands will preferably receive the same writers list to reduce inter-node execution dependencies and consequent latencies. This is because the node that is selected to execute an independent command is the node that is best positioned to execute the command(s) that are dependent thereon, which avoids the latency inherent in waiting for updates to propagate to other nodes for execution of dependent command(s). According to one embodiment, all nodes may have approximately the same distribution of positions within the generated writers lists, to evenly distribute the compute load across all available (that is, operational) servers or nodes. Other considerations may, in some situations, dictate a different ordering of the nodes within the writers list that modifies the above-listed preferences. Node Failure Servers are assumed to be failure-prone. A failure of a node means that it will stop executing commands, that it will stop informing others about its execution state, or both. Other nodes have no way to distinguish between these two cases (no way to distinguish between slow, non-reporting or failed nodes). According to one embodiment, when a node fails, its position within the writers list changes, at least for all new or subsequent commands. Indeed, a perceived node failure results, according to an embodiment, in the node being pushed back (i.e., away from the top, toward the bottom) in the writers list for new or subsequent commands, so eventually the system will stop producing compute loads for that perceived failed node until the server recovers and signals that it has returned to nominal operational status. In FIG. 3, node 112 has failed or is suspected of having failed. When a new command is issued, node 112 will be relegated to the bottom or near the bottom of the writers list 302 associated with the new command. The commands which were previously assigned to be executed by the failed node as the preferred writer, and which were not yet executed, will appear stuck from the other nodes' point of view (together with any commands depending on them). According to an embodiment, if an external entity reliably confirms that the suspected failed node is not operational and will not suddenly start operating again, these stuck commands may be handled in one of two ways: 1) Dead Node Declaration As shown at 402 in FIG. 4, upon failure of a node X, all other nodes may be notified (in one embodiment, by the DConE process embedded in the MDS) of the failure of node X, as shown at 404. Upon receiving such notification, the nodes, according to an embodiment and as shown at 406, may remove node X from the writers lists of all pending commands which had node X as the first, preferred node for execution purposes, as shown in FIG. 4, and will cause the command to be executed by another (in one embodiment, the next) node in the writers list, as shown at 408, now that node X has been removed from the top of the writers list.
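Under the dead node declaration just described, every node strips the confirmed-failed node from the writers lists of its pending commands, which promotes the next-listed node to preferred writer. A sketch, with invented command records:

def declare_dead(pending_commands, dead_node):
    # Remove the confirmed-dead node from every pending command's
    # writers list; the next-listed node becomes the preferred writer.
    for cmd in pending_commands:
        cmd["writers"] = [n for n in cmd["writers"] if n != dead_node]

pending = [{"id": 1, "writers": [112, 110, 114]},
           {"id": 2, "writers": [110, 112, 114]}]
declare_dead(pending, 112)
assert pending[0]["writers"] == [110, 114]   # node 110 now executes command 1
assert pending[1]["writers"] == [110, 114]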
2) Replacement Node Similarly, as shown in FIG. 5, upon failure of a node X as shown at 402, all other nodes may be notified (in one embodiment, by the DConE process embedded in the MDS of the zone that includes failed node X) of the failure of node X, as shown at 502. Upon receiving such notification, the nodes, according to an embodiment and as shown at 406, may remove node X from the writers lists of all pending commands which had node X as the first, preferred node for execution purposes. According to one embodiment, a single other, replacement operational node may be designated to take over the commands previously assigned to the failed node X for execution. This newly-designated replacement node could be a new node (server), in which case the new node may query the other nodes, as shown at 504, to determine what commands were assigned to node X for execution, so that it may execute them instead of the failed node X, as suggested at 506. Execution Status Dissemination Each node disseminates information about commands it executed for two reasons: (1) to unblock any dependent commands assigned to other MDSs, and (2) to prevent re-execution of a non-idempotent command in case of node failure. Note that, according to one embodiment, it is not necessary to prevent immediate repetition of the execution of a command, absent intervening dependent commands, if the command is idempotent. In particular, embodiments need not, and do not, defend against a server failing after executing a command but before dissemination of that information. Instead, commands that are not idempotent are, according to an embodiment, revised to be equivalent in outcome but idempotent. The manner in which such dissemination occurs need not be strictly circumscribed. However such dissemination occurs, it should satisfy the following constraints or properties: To satisfy reason (1), the dissemination should occur quickly, i.e., in near-real time. The act of disseminating information about an executed command should be persistent and guarantee delivery; in this manner, even nodes that have failed and that eventually return to operational status can and will eventually learn the disseminated information. The act of disseminating information about executed commands should remain operational as long as the disseminating node (the node that executed the command about which the information is being disseminated) is operational. Each node should be able to acknowledge the delivery of such dissemination information to other nodes, for the continuity guarantee. One embodiment includes a deterministic finite state machine with a single proposer, a single acceptor (the writing node itself) and many learners (other nodes) that learn of the outcome of the command. In such an embodiment, each executor server (node) has its own deterministic finite state machine. Such a finite state machine may then function as the mechanism by which inter-node information dissemination may take place, as discussed hereunder. Continuity Guarantee Even if a node fails, the execution status of any command must not become lost, except as allowed by idempotence. Suppose a command C is executed. Before this information is sufficiently disseminated, a command D that is dependent on C is executed. At this point, even if C is idempotent, re-executing it violates safety, but this can happen because of the insufficient dissemination. For example, a node may fail after having executed a sequence of commands but before it successfully disseminates the “command completed” updated status thereof.
Thereafter, another node may erroneously re-execute the sequence of commands, which would violate safety. Therefore, to tolerate failure of N nodes, a node must, according to one embodiment, confirm that dissemination to at least N+1 nodes (including itself) has succeeded before executing any command that is later in the sequence of commands and dependent on the results of that command. Different policies and risk tolerances will determine the number of nodes from which dissemination must be confirmed. The roles of participants in the Paxos consensus algorithm include proposers (processes who propose commands to change the state of the namespace to the membership), acceptors (processes who vote on whether the proposed command to change the state of the namespace should be agreed by the membership) and learners (processes in the membership who learn of agreements that have been made). According to one embodiment, the nodes may be configured to carry out any one of these roles at any given time. As noted above, a DConE process may allow multiple learners to agree on the order of events submitted to the engine by multiple proposers with the aid of multiple acceptors to achieve high availability. Therefore, under Paxos, in the case in which there are multiple acceptors, a command proposed by a proposer will advance as long as most (i.e., a majority) of the multiple acceptors do not fail. However, according to one embodiment, in the special case in which there is only one acceptor node, a proposed command may advance (i.e., take its place in the ordered sequence of agreements to be executed) without consulting the single acceptor node. A proposed command, therefore, may progress straight from a proposer in a deterministic state machine to an agreement to a learner, bypassing the (sole) acceptor. A Paxos-style consensus is, therefore, unnecessary in this case, as the proposer “knows” that it is not in conflict with itself. Information about any executed command must be disseminated to all learners. In the case in which there is only one proposer node, that node both executes the command and necessarily learns the information about, and the changed (e.g., from pending to executed) status of, the command. This single proposer, therefore, effectively acts as a persistent pathway (to itself) for the dissemination of information regarding the executed command. In one embodiment, a persistent pathway for the dissemination of information regarding executed commands may also be established to all learner nodes. The proposer, according to an embodiment, may have a deterministic state machine (for which it is the only proposer) associated therewith that allows all learners to learn of information concerning executed commands. Each of the other nodes also has a deterministic state machine (that is active when it is the sole proposer) that allows all other nodes, in their role as learners, to learn such information and to change their state accordingly. In one embodiment and as shown in FIG. 6, therefore, each node may have a deterministic state machine (DSM) for which it is the only proposer and for which every other node is a learner. As shown, node 602 has a DSM 601 for which it is the only proposer and for which all other nodes 604, 606, 608, 610 ... are learners. Likewise, node 604 has a DSM 603 for which it is the only proposer and for which all other nodes 602, 606, 608, 610 ... are learners.
Similarly, node 606 has a DSM 605 for which it is the only proposer and for which all other nodes 602, 604, 608, 610 ... are learners, and so on. In this case, the DSMs 601, 603, 605 ... act as pre-established and persistent message queues, which significantly reduces inter-node traffic and which reduces the latency inherent in point-to-point communications between nodes. Optional Writers List Optimization As noted above, for each new client command, the writers list may, according to one embodiment, include a preferred ordered sequence of operational nodes that should execute the command followed by, in one embodiment, failed, failing or suspicious nodes. In this manner, the preferred operational node will be at the top of the writers list, followed by an ordered list of fallback operational nodes that are to execute the command should the node at the top of the list turn out to be non-operational, followed by, at the end of the list, known currently non-operational nodes. When a node fails, or is suspected of failing, therefore, it will be pushed toward the bottom of the writers list. According to one embodiment, if the list of the available nodes is known to all participants in advance, then it becomes possible to generate, in advance, a separate and unique writers list for each possible permutation of nodes. For example, if there are n nodes, then n! distinct writers lists may be pre-generated and numbered (indexed). These pre-generated and indexed writers lists may then be pre-distributed (i.e., in advance of processing commands) to each of the nodes as a list of writers lists, as shown in FIG. 7 at 702. In the illustrative example shown in FIG. 7, there are five available or potentially available nodes to which commands may be assigned for preferred execution. There being five available nodes, there are 120 different orderings of these five nodes, as shown in FIG. 7, and each of these unique orderings of nodes may constitute a writers list, with each writers list being identified by an index k=1 through k=120. Even with a large number of nodes, such simple writers lists would not take up much storage space at each of the nodes, nor would the transmission of such writers lists be prohibitive in terms of bandwidth, especially since such dissemination would only have to be carried out one time. Thereafter, instead of sending, with each command to be executed, a writers list detailing the preferred node to execute the command (the first-listed node), followed by fallback nodes, with non-operational, failed or suspected-failing nodes at the bottom, a simple index k into the pre-generated and pre-distributed list 702 of writers lists may accompany the command to be executed. Sending an index k into the list 702 of writers lists, rather than a complete writers list, significantly reduces traffic overhead and provides a ready mechanism to specify the preferred node or server to execute any given command. Upon receiving the index k, only that node at the top of the writers list that corresponds to the received index will execute the command. The selection of the index k into the list of writers lists may, according to one embodiment, take into account a number of factors including, for example, load balancing, command dependencies, geographic proximity, network conditions and network latencies, as well as knowledge of which nodes are currently non-operative, failing or suspected of failing. Other factors may also be considered in selecting the index k.
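The pre-generation scheme of FIG. 7 is straightforward to render concretely; in the following sketch the node identifiers are illustrative and the policy for choosing k is omitted:

from itertools import permutations

nodes = [110, 112, 114, 116, 118]       # five available nodes
# One writers list per permutation, indexed k = 1 .. n!
writers_lists = {k: list(p)
                 for k, p in enumerate(permutations(nodes), start=1)}
assert len(writers_lists) == 120        # 5! orderings, as in FIG. 7

k = 7                                   # index sent along with a command
preferred = writers_lists[k][0]         # the first-listed node executes it

Because every node holds the same indexed table, only the small integer k needs to travel with each command rather than a full ordering of nodes.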
When the index k for the command to be executed is transmitted, the selected index k will correspond to a list in which, at the very least, non-operative, failing or suspected-of-failing nodes appear at the bottom. As suggested in FIG. 8, therefore, the DConE process 122 (which may be embedded in the MDS of each zone) may broadcast to all nodes, along with the GSN and the command to be executed, the index k of the writers list, which specifies a unique writers list which, in turn, specifies the preferred node (and fallback, lower-listed nodes) that are to execute the command, as suggested at 802. FIG. 9 is a flowchart of a computer-implemented method according to one embodiment. As shown therein, block B902 calls for receiving proposals to mutate data stored in a distributed and replicated file system coupled to a network, the distributed and replicated file system comprising a plurality of servers (also called nodes herein) and a metadata service that is configured to maintain and update a replica of a namespace of the distributed and replicated file system. Block B904 may then be executed, in which updates to the data may be coordinated by generating an ordered set of agreements corresponding to the received proposals, the ordered set of agreements specifying an order in which the nodes are to mutate the data stored in data nodes and cause corresponding changes to the state of the namespace. Each of the nodes may be configured to delay making changes to the data and causing changes to the state of the namespace until the ordered set of agreements is received. As shown at B906, for each agreement in the generated ordered set of agreements, a corresponding writers list may be provided or identified that comprises an ordered list of nodes to execute the agreement and cause corresponding changes to the namespace. As shown at B908, the ordered set of agreements may then be sent to the plurality of nodes along with, for each agreement in the ordered set of agreements, the corresponding writers list or a pre-generated index thereto. Each of the plurality of nodes may be configured, according to one embodiment, to only execute agreements for which it is the first-listed node on the writers list. According to a further embodiment, providing may comprise generating the writers list for at least some of the generated ordered set of agreements. Providing may also comprise selecting from among a plurality of pre-generated writers lists. The writers list may comprise an ordered list of preferred operational nodes toward the top of the writers list and may comprise a list of failed or suspected-failed nodes toward the bottom of the writers list. The act of providing may further comprise providing or identifying, for a second proposal that is dependent upon a prior execution of a first proposal, the same writers list as is provided for the first proposal. The computer-implemented method may further comprise enabling a next-listed node in the writers list to execute the agreement when the first-listed node in the writers list has failed or is suspected of having failed. A predetermined replacement node may also be enabled to execute the agreement when the first-listed node in the writers list has failed. Each node or server having executed an agreement may be further configured to disseminate information relating to the executed agreement to each of the plurality of nodes. Disseminating may further comprise guaranteeing delivery of the disseminated information.
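The guaranteed-delivery dissemination just recapped, together with the continuity guarantee discussed earlier (confirming delivery to at least N+1 nodes, itself included, to tolerate N failures), can be sketched as follows; the Peer class merely stands in for the per-node persistent message queues and is not part of any described implementation:

class Peer:
    def __init__(self, node_id):
        self.node_id = node_id
        self.log = []              # stands in for a persistent, ordered queue

    def deliver(self, message):
        self.log.append(message)   # delivery is assumed guaranteed
        return True                # acknowledgement of delivery

def disseminate(executed_cmd, my_id, peers, n_failures_tolerated):
    acks = {my_id}                 # the executor always knows its own state
    for peer in peers:
        if peer.deliver(executed_cmd):
            acks.add(peer.node_id)
    # Dependent commands may proceed only once N+1 nodes hold the status.
    return len(acks) >= n_failures_tolerated + 1

peers = [Peer(604), Peer(606), Peer(608)]
assert disseminate(("cmd", 42, "executed"), 602, peers, n_failures_tolerated=2)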
Upon a node executing an agreement, a deterministic state machine may be updated with information relating to the executed agreement, the deterministic state machine being coupled to each of the other nodes and serving as a persistent messaging service between the plurality of nodes. The computer-implemented method may further comprise learning of failing or failed nodes and placing the failed or failing node at the bottom of any generated writers list. In one embodiment, an indexed writers list may be pre-generated for each of all possible orderings of the plurality of nodes and distributed to each of the plurality of nodes. The method may then comprise selecting one of the indexed writers lists, and sending the ordered set of agreements to the plurality of nodes along with, for each agreement in the ordered set of agreements, an index to a selected one of the pre-generated indexed writers lists. Another embodiment is a cluster of nodes configured to implement a distributed file system. The cluster may comprise a plurality of data nodes, each configured to store data blocks of client files; a plurality of servers or nodes, each configured to read and/or mutate the data stored in the data nodes and cause corresponding updates to the state of a namespace of the cluster responsive to changes to the data blocks of client files; and a distributed coordination engine embedded in the metadata service that is configured to coordinate received proposals to mutate the data blocks by generating an ordered set of agreements corresponding to the received proposals, with the ordered set of agreements specifying the order in which the nodes are to make changes to the data stored in the data nodes and cause corresponding changes to the state of the namespace. The metadata service may be further configured, for each agreement in the generated ordered set of agreements, to provide a corresponding writers list that comprises an ordered list of nodes to execute the agreement and cause corresponding changes to the namespace, and to send the ordered set of agreements to the plurality of nodes along with, for each agreement in the ordered set of agreements, the corresponding writers list or a pre-generated index thereto. In this manner, each of the plurality of nodes may only be enabled to execute agreements for which it is the first-listed node on the writers list. Physical Hardware FIG. 10 illustrates a block diagram of a computing device with which embodiments may be implemented. The computing device of FIG. 10 may include a bus 1001 or other communication mechanism for communicating information, and one or more processors 1002 coupled with bus 1001 for processing information. The computing device may further comprise a random-access memory (RAM) or other dynamic storage device 1004 (referred to as main memory), coupled to bus 1001 for storing information and instructions to be executed by processor(s) 1002. Main memory (tangible and non-transitory, which terms, herein, exclude signals per se and waveforms) 1004 also may be used for storing temporary variables or other intermediate information during execution of instructions by processor 1002. The computing device of FIG. 10 may also include a read only memory (ROM) and/or other static storage device 1006 coupled to bus 1001 for storing static information and instructions for processor(s) 1002.
A data storage device 1007, such as a magnetic disk and/or solid-state data storage device, may be coupled to bus 1001 for storing information and instructions, such as would be required to carry out the functionality shown and disclosed relative to FIGS. 1-9. The computing device may also be coupled via the bus 1001 to a display device 1021 for displaying information to a computer user. An alphanumeric input device 1022, including alphanumeric and other keys, may be coupled to bus 1001 for communicating information and command selections to processor(s) 1002. Another type of user input device is cursor control 1023, such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to processor(s) 1002 and for controlling cursor movement on display 1021. The computing device of FIG. 10 may be coupled, via a communication interface (e.g., modem, network interface card or NIC) 1008, to the network 1026. As shown, the storage device 1007 may include direct access data storage devices such as magnetic disks 1030, non-volatile semiconductor memories (EEPROM, Flash, etc.) 1032, or a hybrid data storage device comprising both magnetic disks and non-volatile semiconductor memories, as suggested at 1031. References 1004, 1006 and 1007 are examples of tangible, non-transitory computer-readable media having data stored thereon representing sequences of instructions which, when executed by one or more computing devices, implement aspects of the distributed system and computer-implemented methods described and shown herein. Some of these instructions may be stored locally in a client computing device, while others of these instructions may be stored (and/or executed) remotely and communicated to the client computing device over the network 1026. In other embodiments, all of these instructions may be stored locally in the client or other standalone computing device, while in still other embodiments, all of these instructions are stored and executed remotely (e.g., in one or more remote servers) and the results communicated to the client computing device. In yet another embodiment, the instructions (processing logic) may be stored on another form of a tangible, non-transitory computer readable medium, such as shown at 1028. For example, reference 1028 may be implemented as an optical (or some other storage technology) disk, which may constitute a suitable data carrier to load the instructions stored thereon onto one or more computing devices, thereby re-configuring the computing device(s) to one or more of the embodiments described and shown herein. In other implementations, reference 1028 may be embodied as an encrypted solid-state drive. Other implementations are possible. Embodiments of the present invention are related to the use of computing devices to carry out the functionality disclosed herein. According to one embodiment, the methods, devices and systems described herein may be provided by one or more computing devices in response to processor(s) 1002 executing sequences of instructions, embodying aspects of the computer-implemented methods shown and described herein, contained in memory 1004. Such instructions may be read into memory 1004 from another computer-readable medium, such as data storage device 1007 or another (optical, magnetic, etc.) data carrier, such as shown at 1028. Execution of the sequences of instructions contained in memory 1004 causes processor(s) 1002 to perform the steps and have the functionality described herein.
In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the described embodiments. Thus, embodiments are not limited to any specific combination of hardware circuitry and software. Indeed, it should be understood by those skilled in the art that any suitable computer system may implement the functionality described herein. The computing devices may include one or a plurality of microprocessors working to perform the desired functions. In one embodiment, the instructions executed by the microprocessor or microprocessors are operable to cause the microprocessor(s) to perform the steps described herein. The instructions may be stored in any computer-readable medium. In one embodiment, they may be stored on a non-volatile semiconductor memory external to the microprocessor or integrated with the microprocessor. In another embodiment, the instructions may be stored on a disk and read into a volatile semiconductor memory before execution by the microprocessor. Portions of the detailed description above describe processes and symbolic representations of operations by computing devices that may include computer components, including a local processing unit, memory storage devices for the local processing unit, display devices, and input devices. A command, as the term is used in this disclosure, may correspond to a high-level directive from a client process and may result in one or more computers executing multiple operations. An operation may include a single machine instruction. Furthermore, such processes and operations may utilize computer components in a heterogeneous distributed computing environment including, for example, remote file servers, computer servers, and memory storage devices. These distributed computing components may be accessible to the local processing unit by a communication network. The processes and operations performed by the computer include the manipulation of data bits by a local processing unit and/or remote server and the maintenance of these bits within data structures resident in one or more of the local or remote memory storage devices. These data structures impose a physical organization upon the collection of data bits stored within a memory storage device and represent electromagnetic spectrum elements. A process, such as the computer-implemented methods described and shown herein, may generally be defined as a sequence of computer-executed steps leading to a desired result. These steps generally require physical manipulations of physical quantities. Usually, though not necessarily, these quantities may take the form of electrical, magnetic, or optical signals capable of being stored, transferred, combined, compared, or otherwise manipulated. It is conventional for those skilled in the art to refer to these signals as bits or bytes (when they have binary logic levels), pixel values, words, values, elements, symbols, characters, terms, numbers, points, records, objects, images, files, directories, subdirectories, or the like. It should be kept in mind, however, that these and similar terms should be associated with appropriate physical quantities for computer commands, and that these terms are merely conventional labels applied to physical quantities that exist within and during operation of the computer.
It should also be understood that manipulations within the computer are often referred to in terms such as adding, comparing, moving, positioning, placing, illuminating, removing, altering and the like. The commands described herein are machine operations performed in conjunction with various input provided by a human or artificial intelligence agent operator or user that interacts with the computer. The machines used for performing the operations described herein include local or remote general-purpose digital computers or other similar computing devices. In addition, it is to be noted that the programs, processes, methods, etc. described herein are not related or limited to any particular computer or apparatus nor are they related or limited to any particular communication network architecture. Rather, various types of general-purpose hardware machines may be used with program modules constructed in accordance with the teachings described herein. Similarly, it may prove advantageous to construct a specialized apparatus to perform the method steps described herein by way of dedicated computer systems in a specific network architecture with hard-wired logic or programs stored in nonvolatile memory, such as read only memory. While certain example embodiments have been described, these embodiments have been presented by way of example only and are not intended to limit the scope of the embodiments disclosed herein. Thus, nothing in the foregoing description is intended to imply that any particular feature, characteristic, step, module, or block is necessary or indispensable. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the embodiments disclosed herein.
46,739
11860829
The features and advantages of embodiments will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number. DETAILED DESCRIPTION I. Introduction The following detailed description discloses numerous embodiments. The scope of the present patent application is not limited to the disclosed embodiments, but also encompasses combinations of the disclosed embodiments, as well as modifications to the disclosed embodiments. References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. In the discussion, unless otherwise stated, adjectives such as “substantially,” “approximately,” and “about” modifying a condition or relationship characteristic of a feature or features of an embodiment of the disclosure, are understood to mean that the condition or characteristic is defined to be within tolerances that are acceptable for operation of the embodiment for an application for which it is intended. Furthermore, it should be understood that spatial descriptions (e.g., “above,” “below,” “up,” “left,” “right,” “down,” “top,” “bottom,” “vertical,” “horizontal,” etc.) used herein are for purposes of illustration only, and that practical implementations of the structures and drawings described herein can be spatially arranged in any orientation or manner. Additionally, the drawings may not be provided to scale, and orientations or organization of elements of the drawings may vary in embodiments. Numerous exemplary embodiments are described as follows. It is noted that any section/subsection headings provided herein are not intended to be limiting. Embodiments are described throughout this document, and any type of embodiment may be included under any section/subsection. Furthermore, embodiments disclosed in any section/subsection may be combined with any other embodiments described in the same section/subsection and/or a different section/subsection in any manner. Section II below describes example embodiments for page split detection and affinity in query processing pushdowns. Section III below describes example computing device embodiments that may be used to implement features of the embodiments described herein. Section IV below describes additional examples and advantages, and Section V provides some concluding remarks. II. Example Embodiments for Page Split Detection and Affinity in Query Processing Pushdowns Embodiments herein provide for page split detection and affinity in query processing pushdowns. 
One example implementation of these embodiments is a distributed processing system that performs query processing for large, scalable database operations. It should be noted, however, that these example implementations are not limiting, but rather, are illustrative in nature. In the context of distributed embodiments, a database/computing system includes several distributed components, including one or more compute nodes, multiple page servers, a log service, and storage. As an example, embodiments may be implemented in Azure® SQL Database from Microsoft Corporation of Redmond, WA. The distributed architectures under the embodiments herein enable databases of large sizes, such as those exceeding 100 TB, to perform fast database restores, to perform near-instantaneous backups, and to rapidly scale up and down. In existing distributed solutions, compute nodes handle all incoming user queries and query processing activities, while page servers provide a storage engine, with each page server maintaining a set or subset of data pages for a database. The role of a page server in existing solutions is limited to serving pages out to compute nodes and to keeping data pages (also "pages" herein) up-to-date based on ongoing transaction activity. However, in contrast to existing systems, embodiments herein enable page servers to perform pushdown operations for query processing of which page servers were not previously capable. The methods and systems described herein allow for online transaction processing (OLTP) and hybrid transaction and analytical processing (HTAP) workloads, enabling high-throughput transaction systems that also require real-time analytics. This is accomplished according to the embodiments herein by improving system efficiencies and the handling of processing operations via pushdowns to page servers, as will be described in detail below. For example, in OLTP-tuned systems, the embodiments herein are configured to operate in improved and efficient ways that match the performance characteristics of business-critical systems. For analytical workloads that are scan-intensive over very large data sets, the distributed nature and operation of the systems and methods herein do not disadvantage performance in comparison to existing business-critical, single-system implementations that use locally-associated solid-state drives for maintaining data, because embodiments provide for more efficient configurations and capabilities of the described distributed systems herein. That is, the described embodiments efficiently handle HTAP workloads by leveraging available page server compute resources and minimizing remote input/output (I/O) data page movement within the computing system, which current solutions cannot do. As an example, consider the following analytical query against a table of a database with 1.5B (billion) rows that cannot leverage existing indexes for a seek operation. This query is looking for the average stock sale for transaction commissions greater than $95.00:

SELECT AVG([T_TRADE_PRICE] * [T_QTY]) AS [Average Sale Amt]
FROM [dbo].[TRADE]
WHERE [T_COMM] > 95.00;

Assuming there are 245,598 rows in the table where the commission is higher than $95.00, this is a highly selective filter considering the overall size of the table. However, because T_COMM is not the leading column of an existing index in table "TRADE," the compute node of the system must scan each row in the table to perform the query, according to prior solutions.
For a large table, such as in this example, that requires scanning to process a query, the compute node must issue several requests for remote I/O data fulfillment from the page server(s) to the compute node. The required data pages are first loaded from the page server(s) into memory on the compute node, which then must process the filter on each row. This means that page servers of the computing system that are associated with the table must provide a very large amount of data over the network, which will consume correspondingly large amounts of memory at the compute node, as well as correspondingly large processing resources. In this example, for the 1.5B row table, approximately 30M (million) pages are retrieved by the page server(s), provided over the network to the compute node, and stored in memory of the compute node, which then scans/filters all 1.5B rows in the provided 30M pages to complete the query operation. In contrast to the performance of the query operation by prior solutions, the embodiments herein provide for moving, e.g., the scan/filter operations to the page server(s), thus achieving a "pushed" or "pushdown" filter that provides a number of system benefits, including but not limited to: moving fewer data pages to the compute node from the page servers, reducing network traffic from the page servers to the compute node, reducing I/O requirements on the compute node, reducing memory and RBPEX ("resilient buffer pool extension") pressure that occurs from flooding the compute node buffer cache, and improving the handling of concurrent OLTP workloads on the compute node by offloading or pushing down processing of scan-intensive HTAP queries to the page server(s). Thus, taking the example filter/query from above, but in the context of the described embodiments, the page server(s) retrieve, scan, and filter the 1.5B rows of data from the 30M data pages and, in turn, provide only the 245,598 qualifying rows in the table to the compute node, which can then simply aggregate the rows of data in cases where the rows are provided from different page servers. Simply put, embodiments herein leverage spare processing capacity of the allocated page servers to apply filters and perform other types of pushdown operations. In addition to the computing system performance improvements achieved, as noted above, the primary customer experience will also be greatly improved over prior solutions via query performance for analytical queries. While not so limited, examples of application for the described embodiments include workload types such as HTAP (e.g., a mix of OLTP and OLAP (online analytical processing)); large data sizes, such as very large tables, including tables that exceed the buffer cache maximum size for a compute node (which would otherwise result in many remote page server I/O operations); different issues related to page affinity for various data structures, such as pushed operations against heaps, B-trees, and column store indexes, as well as detecting page splits by page servers during data page reads; and eligible data operations, such as row mode filters for non-sargable predicates and row mode bitmap filters, SELECT operation list expressions and row mode scalar aggregation pushdowns, and batch mode filters and aggregations. Therefore, benefits from improved query performance for analytical queries against large data sets are realized by the described embodiments.
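The contrast just quantified can be sketched in a few lines; the page and row structures below are invented for illustration and are not the internals of any particular product:

def serve_pages(pages):
    # Without pushdown: the page server ships whole pages; the compute
    # node must buffer them all and apply the filter itself.
    return pages

def serve_filtered_rows(pages, predicate):
    # With pushdown: the page server applies the filter and ships only
    # qualifying rows, so far less data crosses the network.
    return [row for page in pages for row in page if predicate(row)]

pages = [
    [{"T_COMM": 96.50, "T_TRADE_PRICE": 10.0, "T_QTY": 5}],
    [{"T_COMM": 12.25, "T_TRADE_PRICE": 11.0, "T_QTY": 3}],
]
assert sum(len(p) for p in serve_pages(pages)) == 2      # every row shipped
rows = serve_filtered_rows(pages, lambda r: r["T_COMM"] > 95.00)
assert len(rows) == 1    # only qualifying rows reach the compute node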
Eligible queries return rows of data to the compute node(s) instead of full data pages, and thus reduce memory pressure (e.g., no data pages are pushed to the buffer pool, which also reduces evictions of existing pages). Several aspects of improvements provided by the embodiments herein, as noted above, are not to be considered limiting. Page split detection and affinity in query processing pushdowns are described below as comprising a portion of the overall processes and benefits provided by the described embodiments. Methods for page split detection and affinity in query processing pushdowns are performed by systems and devices. Page servers perform pushdown operations based on specific, and specifically formatted and/or generated, information, instructions, and data provided thereto from a compute node. Pushdown operations are processing operations that would normally be performed by a compute node. Page servers also determine that page splits have occurred during the reading of data pages maintained by the page servers during pushdown operations, and also during fulfillment of compute node data requests. To detect that a data page has split, page servers utilize information provided from a compute node that relates to an expected next data page, associated with the data page, which is compared to a next data page maintained in the page server page index. A mismatch in the comparison determined by the page servers indicates that a data page was split. The embodiments herein provide for a page server to be enabled to quickly, and accurately, determine that a page split of a data page maintained by the page server has occurred, and to extend fulfillment of a read request for the data of the data page to the new data page generated by the split without having to fail back to the compute node, or provide incomplete data for the request along with a notification for the compute node that there is data remaining to be read on a different page server. That is, because data page splits can be detected during reads by the page server, the page server will not simply stop after reading the page that was split, which would cause a return of incomplete data. Instead, the page server detects the split and continues to read data associated with the read operation from another data page. Compute nodes and page servers also store and maintain off-row data generated during data operations utilizing page affinity considerations, where the off-row data is stored at the same page server as the data in the operations, which allows a single page server to successfully read and/or provide data associated with an operation without failing back to the compute node. Embodiments herein are described in the context of query processing and query processing pushdowns as non-limiting and exemplarily illustrative examples, including various types of operations performed in association with query processing and query processing pushdowns, such as page split detection and page affinity for new data pages and off-row data. However, embodiments herein are not so limited, and their principles and functions are applicable to other types of processing tasks, applications, and/or services, in which offloading of operations from a primary computing system may be advantageously implemented. Accordingly, methods for page split detection and affinity in query processing pushdowns are performed by systems and devices.
The embodiments herein provide solutions that improve processing loads and efficiency in systems of compute nodes and page servers, reduce memory pressure at compute nodes, and greatly reduce network bandwidth usage between compute nodes and page servers. These and other embodiments for page split detection and affinity in query processing pushdowns will be described in further detail below in association with the Figures, and in the Sections/Subsections that follow. Systems, devices, and apparatuses may be configured in various ways for page split detection and affinity in query processing pushdowns. For instance,FIG.1AandFIG.1Bwill now be described.FIG.1Ashows a block diagram of a system100A, andFIG.1Bshows a block diagram of a cloud-based system100B, each configured for page split detection and affinity in query processing pushdowns, according to embodiments. As shown inFIG.1A, system100A includes user device(s)102(also user device102herein), services/applications host103, compute node(s)104, and page server(s)106. In embodiments, user device102, services/applications host103, compute node(s)104, and page server(s)106communicate with each other over a network114. A storage112is also shown in communication with page server(s)106. It should be noted that in various embodiments, different numbers of user devices, hosts, compute nodes, page servers, and/or storages are present. Additionally, according to embodiments, any combination of the systems and/or components illustrated inFIG.1Ais present in system100A. Network114comprises different numbers and/or types of communication links that connect computing devices and hosts/servers such as, but not limited to, the Internet, wired or wireless networks and portions thereof, point-to-point connections, local area networks, enterprise networks, cloud networks/platforms, and/or the like, in embodiments. In an example, network114may be a cloud-based platform network and/or enterprise network through which a user device or other computing system connects to or accesses a service/application that may in turn cause performance of operations by compute nodes and page servers on data persisted in a data storage. Storage112may be any type and/or number of data storage devices or systems, and may comprise internal and/or external storage in various embodiments. While storage112is shown in communication with page server(s)106, in some embodiments, storage112may be connected to network114, or may comprise a portion of page server(s)106. Storage112may comprise a monolithic storage device/system, a cloud-based storage system, a distributed storage system, and/or the like. User device102in different embodiments is any number, type, or combination of computing devices or computing systems, including a terminal, a personal computer, a laptop computer, a tablet device, a smart phone, a personal digital assistant, a server(s), a gaming console, and/or the like, including internal/external storage devices, that are utilized to execute functions/operations described herein for page split detection and affinity in query processing pushdowns, e.g., providing queries to a database (DB) server of services/applications host103, as well as for performing client-side functions/operations of client-server scenarios. User device102also includes additional components (not shown for brevity and illustrative clarity) including, but not limited to, components and subcomponents of other devices and/or systems herein, in various embodiments.
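As a rough, non-limiting topology sketch of the components just described (the Python class names below are illustrative assumptions and do not correspond to any actual product API):
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Storage:          # storage 112: durable home of the data pages
    files: Dict[str, bytes] = field(default_factory=dict)

@dataclass
class PageServer:       # page server(s) 106: maintains a subset of data pages
    pages: Dict[int, list] = field(default_factory=dict)
    storage: Storage = field(default_factory=Storage)

@dataclass
class ComputeNode:      # compute node(s) 104: handles queries, pulls pages/rows
    page_servers: List[PageServer] = field(default_factory=list)

# One compute node fronting two page servers, reachable over network 114.
node = ComputeNode(page_servers=[PageServer(), PageServer()])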
User device102may be a computing device associated with a domain which, as used herein, generally refers to a physical and/or logical system boundary under the control of an entity within which applications and/or services are hosted, offered, managed, and/or otherwise implemented, and also encompasses subdomains and/or the like in embodiments. Exemplary, non-limiting domains include, without limitation, web domains, tenancies of hosted cloud platforms, cloud service providers, enterprise systems, and/or any other type of network or system. A tenant is a particular type of domain that is a representation of an organization in a cloud platform. The domain of the tenant in the cloud platform is its tenancy in which the tenant registers and manages applications, stores data/files, accesses services, etc. Services/applications host103comprises one or more server computers or computing devices, such as an on-premises server(s) in addition to, or in lieu of, cloud-based servers. Services/applications host103may host one or more services or applications, as would be understood by persons of skill in the relevant art(s) having the benefit of this disclosure, and may act as a portal or interface for users/tenants using user device(s)102by which access to compute node(s)104is obtained. In some embodiments, services/applications host103may host a DB server front end that utilizes compute node(s)104and page server(s)106as a back end. Compute node(s)104comprises one or more server computers or computing devices, such as an on-premises server(s) in addition to, or in lieu of, cloud-based servers. Compute node(s)104, as shown, include a node query processing (QP) pushdown manager108. Node QP pushdown manager108is configured to determine and provide modified operations, operation fragments, modified metadata, page indexes associated with data pages for operations, and/or the like in the context of QP pushdowns to page server(s)106. Node QP pushdown manager108may also be configured to receive data from data pages managed by page server(s)106, and in embodiments, some such data may be processed by page server(s)106based on QP pushdown requests provided to page server(s)106from node QP pushdown manager108. In such embodiments, node QP pushdown manager108provides this processed data to a query processor or operations processor of compute node(s)104(described in further detail below) for performing QP operations at compute node(s)104. Page server(s)106comprises one or more server computers or computing devices, such as an on-premises server(s) in addition to, or in lieu of, cloud-based servers. Page server(s)106, as shown, include a page query processing (QP) pushdown manager110. Page QP pushdown manager110is configured to determine/detect page splits in data pages during performance of operations such as reading data from data pages, and to continue performance of such read operations on new data pages generated by page splits after existing data pages are read, according to embodiments. Page splits may be determined by page QP pushdown manager110based at least on page indexes maintained by compute node(s)104. In some embodiments, page QP pushdown manager110is configured to detect page splits when page indexes of data pages maintained by page server(s)106have not yet been updated to reflect changes caused by a page split.
Page QP pushdown manager110may also be configured to perform QP pushdown operations in accordance with requests therefor from node QP pushdown manager108, in embodiments, and is configured to store new data pages and off-row data generated by operations based on page affinity, as described herein. It should also be noted that embodiments herein contemplate that compute node(s)104, page server(s)106, storage112, and/or services/applications host103may comprise an enterprise network portion of network(s)114with which user device(s)102communicate over the Internet. Turning now toFIG.1B, system100B is a cloud-based embodiment of system100A ofFIG.1A. As shown, system100B includes a cloud platform134. In embodiments, cloud platform134is a cloud-based platform such as Microsoft® Azure® from Microsoft Corporation of Redmond, WA, that is accessible by one or more users of user device(s)132(also user device132herein) over a network (not shown here for illustrative clarity and brevity). User device132may be any type and/or number of user devices, such as devices similar to those described for user device102inFIG.1A, and may correspond to tenants and/or end users, IT personnel, administrators of systems described herein, of different domains, such as different tenancies within cloud platform134. A tenant in the context ofFIG.1Bis a representation of an organization in a cloud platform. The domain of the tenant in the cloud platform is its tenancy in which the tenant registers and manages applications, stores data/files, accesses services, etc., hosted by cloud platform134. Cloud platform134is illustrated as hosting tenancies118, which comprise one or more tenants. Tenants are enabled to provide applications/services, hosted by cloud platform134, to users such as end users of tenancies118. In doing so, a tenant may lease or purchase the use of system resources within cloud platform134for such hosting and may utilize system resources and/or operations for providing their services to end users. For instance, cloud platform134may host a tenant of tenancies118(which may include partners and/or service providers of the owner of cloud platform134) that provides services for a DB server of services/applications120(also "services/apps"120herein) of cloud platform134, in embodiments. Users of user device(s)132having credentials for ones of tenancies118are allowed to authenticate for this tenancy and access data, information, services, applications, etc., e.g., services/apps120of cloud platform134, allowed or instantiated for the tenant. Compute node(s)122and node QP pushdown manager126may be respective embodiments of compute node(s)104and node QP pushdown manager108ofFIG.1A, in the context of cloud platform134. Page server(s)124and page QP pushdown manager128may be respective embodiments of page server(s)106and page QP pushdown manager110ofFIG.1A, in the context of cloud platform134. Storage130may be an embodiment of storage112ofFIG.1A, in the context of cloud platform134. Cloud platform134includes one or more distributed or "cloud-based" servers, in embodiments. That is, cloud platform134is a network, or "cloud," implementation for applications and/or services in a network architecture/cloud platform. A cloud platform includes a networked set of computing resources, including servers, routers, etc., that are configurable, shareable, provide data security, and are accessible over a network such as the Internet, according to embodiments.
Cloud applications/services are configured to run on these computing resources, often atop operating systems that run on the resources, for entities that access the applications/services, locally and/or over the network. A cloud platform such as cloud platform134is configured to support multi-tenancy as noted above, where cloud platform-based software services multiple tenants, with each tenant including one or more users who share common access to certain software services and applications of cloud platform134, as noted herein. Furthermore, a cloud platform is configured to support hypervisors implemented as hardware, software, and/or firmware that run virtual machines (emulated computer systems, including operating systems) for tenants. A hypervisor presents a virtual operating platform for tenants. Portions ofFIGS.1A and1B, and system100A and system100B respectively, such as compute node(s)104and/or122, page server(s)106and/or124, storage112and/or130, and/or cloud platform134also include additional components (not shown for brevity and illustrative clarity) including, but not limited to, components and subcomponents of other devices and/or systems herein, e.g., an operating system, as shown inFIG.11described below, in embodiments. Additionally, as would be understood by persons of skill in the relevant art(s) having the benefit of this disclosure, system100A and system100B illustrate embodiments in which system resources utilized for applications and/or services, such as DB server hosting, may be scaled out on demand or as needed to any size, throughput, capacity, etc., and the embodiments herein provide for the pushdown, to page servers, of operations that were up until now performed exclusively by compute nodes, and also provide for specific handling of different operations and functions by compute nodes and/or page servers to successfully and accurately perform these pushdown operations. Non-limiting examples of such specific handling include, without limitation, the detection of page splits at page servers caused by concurrent operations generating/changing data in a data page after a request to read the page is received by the page server and prior to the data page being read, page affinity for managing off-row data, and/or the like as described herein. Systems, devices, and apparatuses are configured in various ways for page split detection and affinity in query processing pushdowns, in embodiments. For instance,FIGS.2and3will now be described in this context. Referring first toFIG.2, a block diagram of a system200is shown for page split detection and affinity in query processing pushdowns, according to an example embodiment. System200as exemplarily illustrated and described is configured to be an embodiment of system100A ofFIG.1Aand/or system100B ofFIG.1B.FIG.3shows a flowchart300for page split detection and affinity in query processing pushdowns, according to an example embodiment. System200may be configured to operate in accordance with flowchart300. System200is described as follows. System200includes a computing system202which is any type of server or computing system, as mentioned elsewhere herein, or as otherwise known, including without limitation cloud-based systems, on-premises servers, distributed network architectures, and/or the like, and may be configured as a compute node and/or as a page server, in various examples as described herein.
As shown inFIG.2, computing system202includes one or more processors ("processor")204, one or more of a memory and/or other physical storage device ("memory")206, as well as one or more network interfaces ("network interface")228. In embodiments, computing system202also includes a query processing (QP) pushdown manager230that is an embodiment of one or more of node QP pushdown manager108ofFIG.1A, node QP pushdown manager126ofFIG.1B, page QP pushdown manager110ofFIG.1A, and/or page QP pushdown manager128ofFIG.1B. Computing system202may also include an operations processor222, an allocator224, and one or more page indexes226. System200includes a storage236that includes data pages, or portions thereof, in embodiments, and may be configured as, or similarly as, storage112ofFIG.1Aand/or storage130ofFIG.1B. It is contemplated herein that any components of system200may be grouped, combined, separated, etc., from any other components in various embodiments, and that the illustrated example of system200inFIG.2is non-limiting in its configuration and/or numbers of components, as well as the exemplary arrangement thereof. Processor204and memory206may respectively be any type of processor circuit(s)/system(s) and memory that is described herein, and/or as would be understood by a person of skill in the relevant art(s) having the benefit of this disclosure. Processor204and memory206may each respectively comprise one or more processors or memories, different types of processors or memories (e.g., one or more types/numbers of caches for query processing, allocations for data storage, etc.), remote processors or memories, and/or distributed processors or memories. Processor204may comprise one or more multi-core processors configured to execute more than one processing thread concurrently. Processor204may comprise circuitry that is configured to execute and/or process computer program instructions such as, but not limited to, embodiments of QP pushdown manager230, including one or more of the components thereof as described herein, which may be implemented as computer program instructions, as described herein. For example, in performance of/operation for flowchart300ofFIG.3, processor204may execute program instructions as described. Operations processor222may be a query processor or a portion of a DB server, in embodiments, configured to perform DB operations such as performing queries against a DB. Operations processor222may comprise program instructions that are carried out by processor204, in embodiments, or may be a hardware-based processing device as described herein. Memory206includes volatile storage portions such as a random access memory (RAM) and/or persistent storage portions such as hard drives, non-volatile RAM, and/or the like, to store or be configured to store computer program instructions/code for page split detection and affinity in query processing pushdowns, as described herein, as well as to store other information and data described in this disclosure including, without limitation, embodiments of QP pushdown manager230, including one or more of the components thereof as described herein, and/or the like, in different implementations contemplated herein.
Memory206also includes storage of page index(es)226, which includes an index of data pages associated with databases that identifies parent and leaf data page structures as well as page servers that maintain particular data pages, in embodiments, allocation caches as described herein, as well as data utilized and/or generated in performance of operations/functions noted herein, and/or the like, such as metadata, etc. In the context of a compute node, page index226may include information regarding each of the page servers associated with maintaining data pages of the DB, while in the context of a page server, page index226may include information regarding the data pages of the DB maintained by the page server. Allocator224is configured to manage allocation of storage space for new data pages and associated page index modifications, as well as for off-row data, to improve page affinity for related data and performance of QP pushdown operations. As noted above, memory206includes one or more allocation caches in embodiments that are allocated to store persistent version store pages having data/information associated with different versions of a DB, as well as other data such as other off-row data. In embodiments, each instance of a compute node or a page server may include its own allocation cache, and in some embodiments, multiple instances of allocation caches may be implemented as corresponding to different DB files/objects associated with or maintained by a compute node or a page server. Allocator224is configured to manage allocation caches and the storage of data therein, and may include sub-units for management of persistent version store (PVS) data pages, small large object (SLOB) pages (e.g., secondary page overflow), unordered collections of rows such as heap forwarded rows, and new data pages and associated page index modifications. Storage236may comprise a portion of memory206, and may be internal and/or external storage of any type, such as those disclosed herein. In embodiments, storage236stores one or more data pages that comprise a DB object or DB file. When configured to function as a page server, system200stores any number of data pages in storage236. Additionally, more than one page server may be implemented via multiple instances of system200, and data pages of a DB object or DB file may be large enough in number and/or data size such that data pages of a single DB object or DB file span multiple instances of storage236across multiple, respective page servers. In embodiments where system200is configured to function as a compute node, storage236stores data pages and/or portions of data pages provided from one or more page servers responsive to requests from the compute node. In embodiments, storage236may also include allocation caches as described herein. Network interface228may be any type or number of wired and/or wireless network adapter, modem, etc., configured to enable system200, including computing system202, to communicate intra-system with components thereof, as well as with other devices and/or systems over a network, such as communications between computing system202and other devices, systems, and hosts of system100A inFIG.1Aand/or system100B inFIG.1B, over a network/cloud platform such as network114and/or cloud platform134.
System200also includes additional components (not shown for brevity and illustrative clarity) including, but not limited to, components and subcomponents of other devices and/or systems herein, as well as those described below with respect toFIG.11, e.g., an operating system, etc., according to embodiments. In embodiments, computing system202may be configured as a compute node and/or as a page server, and QP pushdown manager230of computing system202may be correspondingly configured in such embodiments. That is, QP pushdown manager230may be configured as a node QP pushdown manager and/or as a page QP pushdown manager. Accordingly, QP pushdown manager230may be implemented in various ways to include a plurality of components for performing the functions and operations described herein for page split detection and affinity in query processing pushdowns, in a compute node context and/or in a page server context. As illustrated, system200ofFIG.2shows two non-exclusive options for configuring QP pushdown manager230: a node QP pushdown manager232and a page QP pushdown manager234. Node QP pushdown manager232includes, without limitation, an index manager210, a metadata generator212, and a pushdown generator214. Page QP pushdown manager234includes, without limitation, a page split engine216, an off-row data manager218, and a pushdown engine220, although additional components, as described herein or otherwise, are also included and some components may be excluded, in various embodiments. Additionally, features described for compute nodes may be included in page server embodiments, and vice versa. Referring to node QP pushdown manager232, index manager210is configured to determine indexes of data pages required for QP and/or QP pushdown operations based on page index226. In embodiments, this may include next data pages associated with data pages to be read by a page server. Metadata generator212is configured to determine metadata needed for, and to generate versions of metadata and/or modify metadata associated with a DB for, performing different operations described herein such as QP pushdown operations to be performed by a page server. In embodiments, metadata generator212is configured to serialize metadata required for operations as provided to a page server. Pushdown generator214is configured to generate pushdown operations at a compute node for provision to a page server. In embodiments, pushdown generator214generates query fragments (e.g., including query operators, expressions, etc.) that, along with appropriate metadata, are assembled to form query plans for QP pushdown operations performed by one or more page servers that would otherwise be incapable of performing the required QP pushdown operations. Referring now to page QP pushdown manager234, page split engine216is configured to determine when a page split has occurred at a page server during a reading of the data page that was split. In embodiments, page split engine216determines page splits based on a comparison between a next data page identified from page index226and an expected next data page provided from a compute node. Off-row data manager218is configured to determine that off-row data is generated in association with an operation on data from a data page, and to determine a storage location for the generated off-row data that provides page affinity with the data and/or other off-row data associated with the data.
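A loose Python sketch of how the three compute-node components just described might cooperate to assemble a pushdown request follows; the field names and the JSON serialization are assumptions made for illustration, not an actual wire format:
from dataclasses import dataclass
import json
from typing import Dict, List

@dataclass
class PushdownRequest:
    query_fragment: str                 # from pushdown generator 214
    serialized_metadata: str            # from metadata generator 212
    page_ids: List[int]                 # from index manager 210
    expected_next_ids: Dict[int, int]   # page ID -> next page ID, for split detection

def build_pushdown_request(page_index: Dict[int, int], table_schema: dict,
                           fragment: str, page_ids: List[int]) -> PushdownRequest:
    # page_index maps each page ID to its next page ID as the compute node
    # sees it at the moment the request is generated (time T1 in the text below).
    return PushdownRequest(
        query_fragment=fragment,
        serialized_metadata=json.dumps(table_schema),
        page_ids=page_ids,
        expected_next_ids={pid: page_index[pid] for pid in page_ids},
    )

request = build_pushdown_request({1: 2, 2: 3}, {"TRADE": ["T_COMM", "T_TRADE_PRICE", "T_QTY"]},
                                 "T_COMM > 95.00", [1, 2])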
Pushdown engine220is configured to generate QP pushdown operations, from information provided by a compute node, such that operations processor222is enabled to process the operations. Referring also now toFIG.3, flowchart300begins with step302. In step302, it is determined by a page server that a page split in a data page has occurred. For example, referring again to system200inFIG.2, as described above, page split engine216is configured to perform step302of flowchart300. That is, page split engine216is configured to determine that a page split has occurred during the reading of a data page by a page server in which page split engine216is included. Page split engine216is configured to determine when a page split has occurred at a page server that is caused by concurrent operations generating/changing data in the data page to split the data page after a request to read the page is received by the page server but prior to the data page being read by the page server. Page split engine216determines a page split has occurred by comparing a next data page identifier that is retrieved from page index226of the page server with an expected next data page identifier provided by the compute node that requests the data page be read. When the expected next data page identifier provided by the compute node does not match the next data page identifier in page index226, page split engine216identifies the occurrence of a page split for the data page. Flowchart300ofFIG.3continues with step304. In step304, a new data page generated by the page split is located at the page server prior to communicating with a compute node regarding the page split. For instance, page split engine216of system200inFIG.2is configured to locate the new data page generated by the split. In this manner, data from the original data page that is now located at the new data page because of the split, in addition to the data that required the split to occur, is able to be read by the page server, i.e., data that would otherwise be missed if only the originally-identified data page from the compute node request were read by the page server. The page server, based on page index226that includes the next data page based on the split, locates the new page and continues the operation to read the requested data that was initially associated only with the original data page, when the new data page is located at the page server. In some cases, a new data page generated in association with a page split may be stored at a different page server, and in such cases, the page server returns the portion of the data that was read from the original data page to the compute node with a data-remaining notification for the requested read, based on the determination that the new data page is located at the different page server. If the next data page identifier in page index226, that is associated with the new data page, matches the expected next data page identifier provided by the compute node, page split engine216determines that all data associated with the data page that was split has been read, and the operation of reading the data page concludes. The read data may then be used by the page server to perform one or more QP pushdown operations, or may be provided back to the compute node.
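The comparison at the heart of step302, and the extension of the read in step304, might be sketched as follows. This is a deliberately simplified, single-split model in which Python dictionaries stand in for data pages and the page server's page index; the iterative, multi-split generalization appears with flow diagram600further below, and all names here are illustrative assumptions:
def read_with_split_detection(pages, page_index, page_id, expected_next_id):
    # pages: page ID -> list of rows held by this page server
    # page_index: page ID -> next page ID (the page server's own index,
    # which already reflects any split)
    rows = list(pages[page_id])
    actual_next_id = page_index[page_id]
    if actual_next_id != expected_next_id:
        # Step 302: a mismatch means the page split after the request was
        # generated. Step 304: locate the new page and keep reading rather
        # than failing back to the compute node.
        if actual_next_id in pages:      # new page is on this page server
            rows += pages[actual_next_id]
        else:                            # new page lives on another page server
            return rows, "data-remaining"
    return rows, "complete"

# Page 1 split: part of its rows moved to new page 42, which links to page 2.
pages = {1: ["row-a"], 42: ["row-b"], 2: ["row-c"]}
page_index = {1: 42, 42: 2, 2: 3}
assert read_with_split_detection(pages, page_index, 1, expected_next_id=2) == (["row-a", "row-b"], "complete")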
Accordingly, the embodiments herein provide for a page server to be enabled to quickly, and accurately, determine that a page split of a data page maintained by the page server has occurred, and to extend fulfillment of a read request for the data of the data page to the new data page generated by the split without having to fail back to the compute node or unknowingly provide incomplete data for fulfillment of a request. As noted above forFIGS.1A,1B,2, and3, embodiments herein provide for page split detection and affinity in query processing pushdowns. System100A ofFIG.1A, system100B ofFIG.1B, and/or system200ofFIG.2may be configured to perform functions and operations for such embodiments. It is further contemplated that the systems and components described above are configurable to be combined in any way.FIG.4,FIG.5A,FIG.5B,FIG.5C, andFIG.6will now be described. FIG.4shows a flowchart400for page split detection and affinity in query processing pushdowns, according to example embodiments. System100A inFIG.1A, system100B inFIG.1B, and/or system200inFIG.2are configured to operate according to flowchart400, which may be an embodiment of flowchart300ofFIG.3. Further structural and operational examples will be apparent to persons skilled in the relevant art(s) based on the following descriptions. Flowchart400is described below in the context of system100B inFIG.1Band system200ofFIG.2, and with respect toFIG.5A,FIG.5B,FIG.5C, andFIG.6. It should be understood, however, that the following description is also applicable to system100A inFIG.1A. FIG.5A,FIG.5B, andFIG.5Ceach show a block diagram representation of data page and page index states comprising a page server system (system500A, system500B, and system500C, respectively), andFIG.6shows a flow diagram600, which may be an embodiment of flowchart400, and which system100A inFIG.1A, system100B inFIG.1B, and/or system200inFIG.2are configured to operate in accordance with, in example embodiments for page split detection and affinity in query processing pushdowns. RegardingFIG.4, flowchart400begins with step402. In step402, a first page of a database is stored, the first page including data. For example, a page, or data page, of a database may be stored and maintained at a page server, such as system200ofFIG.2when so configured, and/or one of page server(s)124of system100B inFIG.1B. The data page may be stored in a storage such as storage130inFIG.1Band/or storage236ofFIG.2, which are a portion of and/or maintained by page server(s)124and computing device202when configured as a page server. The first page in step402may be one, or one of many, pages of a table in a database, and may be stored at the page server in various ways as would be understood by persons of skill in the relevant art(s). Referring also toFIG.5Aand system500A, a first data page502-1and a second data page502-2are shown in a first state for an example database for which a first data page, and inFIG.5Aalso a second data page, have been stored at the page server (although different numbers of data pages may be included as described above for system200ofFIG.2when so configured and page server(s)124of system100B inFIG.1B). First data page502-1is illustrated as including data504-1and a page identifier (ID) of a next adjacent page in the database with respect to first data page502-1, in this example a page2ID506-1that corresponds to second data page502-2.
Second data page502-2is illustrated as including data504-2and a page ID of a next adjacent page in the database with respect to second data page502-2, in this example a page3ID506-2that corresponds to a third data page of the database, of N pages (not shown for brevity and clarity of illustration). FIG.5Aand system500A also include a page index508(e.g., as an embodiment of page index226inFIG.2) that corresponds to the data pages of the database, including first data page502-1and second data page502-2. Page index508is illustrated as indexing N data pages of the database maintained by system500A. Page index508may be a hierarchical index, e.g., a B-tree structure, that includes a root index level510identifying data pages1-N stored and maintained at system500A, M first level indexes512(shown as Level1.1to Level1.M, which may be referred to as sub-indexes), each of which serves to index portions of root index level510, and a plurality of leaf level indexes514for each of data pages1-N. For purposes of description, a first leaf index516and a second leaf index518are identified inFIG.5Aand system500A. As illustrated in system500A, the data pages1-N, including first data page502-1and second data page502-2, are forward-scanned in this configuration, shown as each leaf index514, which corresponds to a data page, having a logically adjacent next leaf and corresponding data page as a sequentially forward link, shown exemplarily as a sequentially forward scan520, which progresses from first leaf index516(and first data page502-1), to second leaf index518(second data page502-2), to the third leaf index ("Leaf (3)" inFIG.5A) that corresponds to the third data page, etc. It should be noted, however, that sequentially backward links are also contemplated herein and are described in further detail inFIG.5Cbelow. It should be noted that any number of levels, and sub-indexes within a given level, of page index508for any number of index pages may be present in embodiments, and that the specific configuration/state of page index508is illustratively exemplary, and non-limiting in nature. In step404of flowchart400inFIG.4, a request that is associated with the data of the first page, and a next page identifier of a page of the database that is logically adjacent with respect to the first page at the time the request is generated, are received from a compute node of the computing system. For instance, a compute node, such as one of compute node(s)122ofFIG.1Band/or system200when acting as a compute node, is configured according to embodiments to provide a request, to the page server (e.g., system200when so configured), that is associated with the data stored by the page server in a data page. The page server, referring to system200in such a configuration, may be enabled to receive the request via network interface228. In embodiments, the request includes a next data page ID that is valid at the time the request is generated by the compute node and that specifies what the next adjacent, sequential data page is as identified in a page index maintained by the compute node. Referring now toFIG.5B, system500B may be an embodiment of system500A ofFIG.5A, and includes first data page502-1, second data page502-2, and page index508, as described above forFIG.5A, with changes noted as follows for a second configuration/state of first data page502-1, second data page502-2, and page index508.
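A toy rendering of the FIG.5Aindex state may help fix the structure; the dictionary layout below is an assumption made only for illustration (a real index would be a B-tree over binary pages), with the forward links of scan520carried on the leaves:
# Hypothetical miniature of page index 508: a root index level over
# first-level sub-indexes over leaves; each leaf records its data page
# and the next-page link used for sequential forward scans.
page_index_508 = {
    "root": {"level_1.1": [1, 2], "level_1.M": [3]},   # root index level 510
    "leaves": {                                        # leaf level indexes 514
        1: {"data_page": "502-1", "next": 2},          # first leaf index 516
        2: {"data_page": "502-2", "next": 3},          # second leaf index 518
        3: {"data_page": "page-3", "next": None},      # last leaf shown
    },
}

def forward_scan(index, start):
    # Follow the sequentially forward links (scan 520) from a starting leaf.
    leaf = start
    while leaf is not None:
        yield index["leaves"][leaf]["data_page"]
        leaf = index["leaves"][leaf]["next"]

assert list(forward_scan(page_index_508, 1)) == ["502-1", "502-2", "page-3"]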
For example, in the context of step404in flowchart400,FIG.5Billustrates a request522provided by a compute node to system500B where request522comprises the operation/request, which may be a request to read data for a specific data page, as well as a next page ID that identifies the adjacent, sequential data page determined by index manager210of system200inFIG.2(at a time T1when the request was generated by the compute node) for the data page in the request, as described above. In an example for which the operation/request is to read data504-1of first data page502-1inFIG.5A, request522specifies this operation/request and specifies that the next data page ID in the compute node page index at time T1is page2ID506-1, as shown inFIG.5A. While request522is described as referencing data of a single data page, it should be noted that embodiments herein also contemplate requests in which multiple data pages are specified having respective next expected data pages. However, in the time that it took request522to propagate from the compute node to system500B inFIG.5Bto begin the request/operation, another different operation at the compute node caused a page split for first data page502-1at a time T2that is later than time T1, in continuance of the example above. This is illustrated inFIG.5Bas first data page502-1including a data portion504-1A, which comprises a part of data504-1shown inFIG.5A. The page split of first page502-1also generates a new data page, which is illustrated as new data page502-3(data page N+1) that includes data504-1B, which comprises another part of data504-1ofFIG.5Aand which may also comprise new data that required or provoked the page split. Additionally, the page split may cause the next page IDs of the data pages of system500B to be updated based on corresponding updates to page index508. For instance, page2ID506-1is updated to a next data page ID of page N+1 ID506-1A, while new data page502-3includes a next data page ID of page2ID506-1B. Likewise, root index level510and first level index1.1of first level indexes512are shown as reflecting the new data page502-3generated from the page split with a data page ID 'N+1', which is also reflected in a new leaf index524that is generated at time T3(also after time T1). Still further, the next adjacent, sequential data page linking for first leaf index516(and first data page502-1) now points to leaf N+1 (and new data page502-3), which in turn points to second leaf index518(and second data page502-2). Accordingly, the forward links, from left and first data page502-1to right and an ultimate data page with page N ID (not shown) corresponding to leaf index N, are maintained, where an ultimate forward link526, when necessary, points to a data page of another page server. Referring now toFIG.5C, as noted above, sequentially backward links, rather than forward links in sequentially forward scan520ofFIG.5B, are also contemplated herein.FIG.5Cmay be an embodiment ofFIG.5Aand page index508, as described above forFIG.5A, with changes noted as follows for sequentially backward links in page index508. Page index508inFIG.5Cis shown as an alternate implementation of page index508inFIG.5Athat includes a sequentially backward scan530. In this implementation, newly-generated data pages, e.g., from page splits, are inserted in page index508to the left, instead of to the right, where an ultimate backward link532, when necessary, points to a data page of another page server.
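The rewiring fromFIG.5AtoFIG.5Bcan be sketched as a split function that moves part of a page's contents onto a new page and splices the new page into the forward links. The identifiers below follow the figures; the function itself is an illustrative assumption rather than the actual storage engine logic:
def split_page(pages, links, old_id, new_id, split_at):
    # Move the tail of page old_id onto new page new_id, then splice
    # new_id between old_id and old_id's former next page.
    data = pages[old_id]
    pages[old_id], pages[new_id] = data[:split_at], data[split_at:]
    links[new_id] = links[old_id]   # new page 502-3 points at page 2 (506-1B)
    links[old_id] = new_id          # first page now points at page N+1 (506-1A)

# State at time T1 (FIG. 5A): first page links to second page.
pages = {"502-1": ["a", "b", "c", "d"], "502-2": ["e"]}
links = {"502-1": "502-2", "502-2": "page-3"}
# A concurrent operation at time T2 splits page 502-1, creating page N+1.
split_page(pages, links, "502-1", "N+1", split_at=2)
assert links == {"502-1": "N+1", "N+1": "502-2", "502-2": "page-3"}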
The embodiments herein are enabled to handle forward and backward sequential scans without deviating from the other operations and functions described herein. Referring back again toFIG.5B, it is described above how the updating of data pages and page indexes is performed for page splits that generate new data pages. However, because request522inFIG.5Bwas generated by the compute node at time T1before the page split and subsequent updating were performed (at times T2and T3), as reflected in the changes in the page index and data page from system500A inFIG.5Ato system500B inFIG.5B, the page server in the second state shown in system500B ofFIG.5Breceives request522, in this example to read data504-1of first data page502-1inFIG.5A, and would, without the embodiments herein being implemented, simply read first data page502-1for data504-1and return the read data to the compute node because request522does not reflect the page split described above. In such scenarios under prior implementations, incomplete and/or incorrect data would be read and/or returned. Referring also toFIG.4, in step406, a portion of the data is read from the first page. For instance, based on request522as received in step404described above, a page server is configured to read data of the requested data page, e.g., via operations processor222of computing device202when the request is for data to be returned to the compute node and/or pushdown engine220of page QP pushdown manager234when the request is associated with a pushdown operation. In view ofFIG.5Band continuing with the example above, request522specifies that data504-1be read from first data page502-1, and that the expected next adjacent data page be second data page502-2. Operations processor222or pushdown engine220of system200inFIG.2, however, instead reads data504-1A (because of the page split) from first data page502-1. Referring also again toFIG.4, in step408of flowchart400, a second page identifier is identified from the first page. For example, after step406and completion of reading data504-1A from first data page502-1, page split engine216of page QP pushdown manager234inFIG.2is configured to identify from first data page502-1a next data page ID that corresponds to the adjacent, sequential data page with respect to first data page502-1. In the described example, page split engine216reads page N+1 ID506-1A as the second page identifier, which corresponds to new data page502-3. In step410of flowchart400, it is determined that a page split in the first page has occurred at the page server subsequent to receiving the request, the page split generating a second page at the page server, or at a different page server, as a new page in the database that includes another portion of the data, based at least on a comparison between the second page identifier and the next page identifier. For instance, page split engine216, having identified the second page identifier of the next adjacent page in step408above, is configured to compare the second page identifier, e.g., page N+1 ID506-1A (also reflected in page index508inFIG.5Bat leaf index524), with the next page identifier (page2ID506-1) specified in request522provided by the compute node.
In other words, a page split has occurred that generated new data page502-3, which includes in data504-1B a portion of original data504-1(fromFIG.5A) that is not in data504-1A of first data page502-1, and page split engine216is still enabled by the embodiments herein to determine during the requested reading of the data that the page split has occurred and that a portion of original data504-1resides in another data page or other data pages and not in first data page502-1. That is, because the actual next data page does not match the expected data page from the perspective of the compute node at time T1when the request was generated, page split engine216determines the page split has occurred and, thus, new data page502-3has been generated. In embodiments, it is determined that the new page in the database that is generated by the page split is located/stored at a different page server. In such scenarios, the page server may perform operations as similarly described below for the steps of flow diagram600, inFIG.6, where only the portion of the data read at the page server on the first data page is returned to the compute node by the page server, and in embodiments the returned portion of the data is provided with a data-remaining notification that indicates other portions of the data are stored at the different page server. Therefore, page split engine216enables a page server to extend fulfillment of the read operation by reading the remainder of the requested data from another data page, according to embodiments. The examples herein also provide for the handling of diverse scenarios with respect to extensions of fulfillment, availability of data and data pages, performing QP pushdown operations, and/or the like, a non-limiting set of which will be described below additionally in view ofFIG.6. InFIG.4, and step412of flowchart400, subsequent to the determining, fulfillment of the request is extended beyond reading the first page by reading, from the second page, the other portion of the data when the second page is at the page server. For example, page split engine216of system200inFIG.2is configured to detect page splits, as described above, and to locate newly-generated data pages that correspond to the page splits. After the page server reads a portion of the requested data from the data page and page split engine216determines a page split of the data page has occurred (as in step410), the split page that is newly generated and that now includes another portion of the requested data and/or new data that should be read in association with the request, is read by the page server, e.g., via operations processor222of computing device202when the request is for data to be returned to the compute node and/or pushdown engine220of page QP pushdown manager234when the request is associated with a pushdown operation. This extends fulfillment of the request (in the example provided: to read data504-1from first data page502-1) beyond reading the first data page to also reading the second (new) data page, capturing all of data504-1. In the example above, this requires reading data504-1B in new data page502-3by the page server. Therefore, page split engine216enables a page server to extend fulfillment of the read operation by preventing the operation from concluding/failing and by causing the remainder of the requested data to be read from another data page, according to embodiments.
The examples herein also provide for the handling of diverse scenarios with respect to extensions of fulfillment, availability of data and data pages, performing QP pushdown operations, and/or the like, a non-limiting set of which will be described below in view of step414and step416of flowchart400, and additionally in view of flow diagram600inFIG.6. Referring now toFIG.6, flow diagram600begins subsequent to step412of flowchart400, in embodiments. Flow diagram600illustrates the handling of diverse scenarios with respect to extensions of fulfillment, availability of data and data pages, performing QP pushdown operations, etc., in view of flowchart400. For example, in step602of flow diagram600, a third page identifier of a logically adjacent page of the database with respect to the second page is identified from the second page. Step602may be performed similarly as described above for step408except that the page read iteration in step602is based on the second data page instead of the first data page. Continuing with the illustrative example from above, new data page502-3inFIG.5B, having a next page ID of page2ID506-1B that corresponds to second data page502-2, may correspond to the second page in step602. In step604, it is determined if the third page identifier from step602matches the next page ID received in the request (e.g., request522ofFIG.5B) that corresponds to the next data page anticipated/expected by the compute node when request522was generated at time T1. In embodiments, this determination may be performed by page split engine216of system200inFIG.2, as similarly described with respect to step410of flowchart400inFIG.4. In the described example, request522calls for data504-1of first data page502-1to be read, and also provides that the next expected data page is second data page502-2by including in request522the next page ID of page2ID506-1. If page split engine216determines a match between the page IDs, step604of flow diagram600may proceed to step414and/or step416of flowchart400as the read operation is complete and all of data504-1is read (i.e., both data504-1A and data504-1B have been read) and the next data page expected by the compute node has been identified. Turning again toFIG.4, flowchart400may proceed from step412to step414, in embodiments. In step414, the portion of the data, and the other portion of the data if read, are returned to the compute node from the page server. For example, a page server as described herein is configured to fulfill requests for data from compute nodes by returning requested data procured via read operations over a network to the compute nodes. Based on the embodiments herein, requests for data from compute nodes may be fulfilled in a more complete and correct manner by page servers based on the operations of page split engine216for detecting page splits that otherwise would not be detected prior to a page server returning only a portion of the requested data, which would cause the compute node to issue another I/O operation to the page server for the rest of the data, delaying completion of the request and expending processing and network resources unnecessarily. Additionally, as noted above, only the portion of the data read from the page server may be returned to the compute node when the new page generated by the page split is stored at a different page server. Flowchart400may additionally or alternatively proceed from step412to step416, in embodiments.
In step416of flowchart400, a query processing operation, indicated by the compute node, is performed at the page server based on the portion of the data and the other portion of the data. For instance, a QP pushdown operation acting on the data that was requested and then read, as described above, may be performed by pushdown engine220in page QP pushdown manager234of system200inFIG.2. In existing solutions, page servers are not configured and enabled to perform QP operations through pushdowns from a compute node; that is, the compute node handles QP operations exclusively. In contrast, the embodiments herein, e.g., via pushdown engine220, are enabled to perform QP pushdown operations received from a compute node. QP pushdown operations performed utilizing pushdown engine220may include, without limitation, eligible data operations such as row mode filters for non-sargable predicates and row mode bitmap filters, SELECT operation list expressions and row mode scalar aggregation pushdowns, and batch mode filters and aggregations, etc. As an illustrative and non-limiting example, a compute node may provide serialized metadata information and query text fragments to a page server, along with data page IDs corresponding to data pages that include the data required for the QP pushdown operations, from metadata generator212and pushdown generator214of node QP pushdown manager232of system200inFIG.2. Pushdown engine220is configured to compile the query text fragments using the metadata at the page server to generate an executable query plan for the pushdown operation. This also enables a compute node to push down QP operations to different page servers that run different code packages, which in turn allows independent upgrades of either compute nodes or page servers without version conflicts. In embodiments, results of QP pushdown operations may be provided from the page server to the compute node. Referring back again toFIG.6, if page split engine216determines at step604that the third page identifier from step602does not match the next page ID received in the request, page split engine216also determines that the page split from step410in flowchart400caused multiple new pages to be generated or that another page split associated with the requested data has occurred, e.g., shown as another new data page502-4inFIG.5Bhaving a corresponding leaf index528in page index508. While other new data page502-4is shown for illustration with respect to leaf index528in page index508, it is contemplated that other new data page502-4may be referenced in a different page server's index because other new data page502-4is stored at the different page server. In such cases, flow diagram600then continues to step606. In step606, it is determined if the third page identifier from step602matches the page identifier of a data page on another page server, e.g., if the third page identifier is not present in page index508of the current page server (as exemplarily illustrated inFIG.5B). If the next data page is located at another page server, flow diagram600proceeds to step608. In step608, the portion of the data and the other portion of the data are returned to the compute node from the page server with a data-remaining notification for the request, which may cause the compute node to issue an I/O operation to the other page server that maintains the data page having the remaining data that was previously requested in request522.
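Steps602through610generalize the single-split sketch given earlier into a loop. The following hypothetical Python version keeps reading successor pages until the next page identifier expected by the compute node is reached, or returns what it has with a data-remaining notification when the chain leaves the page server; the names and structures remain illustrative assumptions:
def read_extended(pages, page_index, page_id, expected_next_id):
    # Iterative form of flow diagram 600: tolerates any number of page
    # splits occurring between request generation (T1) and the read.
    rows, current = [], page_id
    while True:
        rows += pages[current]                 # step 406 / step 610
        actual_next = page_index.get(current)  # step 408 / step 602
        if actual_next == expected_next_id:    # step 604: chain complete
            return rows, "complete"
        if actual_next not in pages:           # step 606: next page is remote
            return rows, "data-remaining"      # step 608
        current = actual_next                  # iterate at the new page

# Two successive splits of page 1 produced local pages 10 and 11.
pages = {1: ["a"], 10: ["b"], 11: ["c"]}
page_index = {1: 10, 10: 11, 11: 2}
assert read_extended(pages, page_index, 1, expected_next_id=2) == (["a", "b", "c"], "complete")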
As described herein, embodiments also contemplate that a first page split may result in a new data page being stored at the different page server. That is, flow diagram600, as exemplarily illustrated, is not so limited, and its steps may be performed when an identifier of the second data page, after the page split, indicates that the next data page resides at the different/other page server. Similarly, any given page resulting from one or more page splits may be handled in such a fashion based on a comparison of a next actual data page indicated in a page index of a page server and a next expected data page provided from the compute node. If it is determined that the next data page is located at the current page server, flow diagram600continues from step606to step610. In step610, a portion of the data is read from the page associated with the third page identifier, as similarly described for reading data in step406of flowchart400, and flow diagram600may return to step602from step610. That is, any number of iterations of flow diagram600may be performed for a corresponding number of page splits that have occurred. As previously noted, embodiments herein also provide for page affinity in storing data generated and/or altered (in value/content, storage, configuration, and/or the like) by operations, including without limitation, new data pages and page index level splits caused by page splits, and off-row data. Off-row data comprises various types of data associated with the data in a data page but maintained outside of rows of the data, e.g., on another data page. Off-row data includes, but is not limited to, data such as persistent version store (PVS) data pages, small large object (SLOB) pages (e.g., secondary page overflow), unordered collections of rows such as heap forwarded rows, and/or the like. In prior solutions, new data pages generated from page splits and off-row data associated with a table or database might be stored at any page server, and are not guaranteed to be collocated at the same page server with their associated data in the table or database. Therefore, any page server read that accesses off-row data may need to contact other page servers to complete the request. However, different implementations do not allow for direct communications/requests between page servers, and thus, page servers must fail back to the compute node, which in turn provides additional I/O requests to other page servers so the required data can be read locally at the compute node. As noted herein, this approach has drawbacks such as network bandwidth impact, delayed time to complete operations, memory/processing usage impacts at the compute node, etc. Additionally, page servers may be precluded from performing QP pushdown operations when the data and off-row data for a particular operation are not collocated at a single page server. The embodiments herein reduce these impacts and issues by providing page allocation for new data pages and for off-row data so that pages belonging to the same data object are collocated at a page server. Allocator224is configured to increase affinity and collocation of data pages/page indexes and off-row data, as described herein. The described embodiments are also applicable to on-premises configurations to collocate data pages with off-row data on the same file.
In embodiments, a page server such as one of page server(s)124ofFIG.1Band/or system200ofFIG.2via allocator224, when so configured, may be enabled to reserve or pre-allocate an amount or a percentage of space in data pages stored at storage130and/or storage236, respectively, or in memory206in various embodiments, for newly-generated data pages and off-row data to achieve affinity and/or collocation for related data, i.e., data that is “valid” for achieving collocation and page affinity as described herein. In other words, embodiments herein may require that new data pages, etc., and generated off-row data be validated as related to existing data already stored at a page server in order to store the new/generated data. Newly-generated data pages and changes to page indexes, such as those generated by page splits as described above, may be allocated to page servers that maintain related data pages and page indexes. That is, rather than allocating data pages/page indexes to different page servers as in prior implementations, e.g., for load balancing, storage considerations, based on scheduling, etc., embodiments herein provide for collocating new data pages and changes to page indexes at the same page server that maintains related data. This allows for QP pushdown operations to be performed by page servers through page affinity. When strict page affinity is not possible due to storage space constraints/availability at a page server, data pages, etc., may still be stored at other page servers using a “soft” affinity such that operations which generate new data are allowed to complete without failing. In prior implementations, PVSs use an allocation cache that is partitioned on a scheduler so that there is an entry point in the PVS from each scheduler irrespective of which page server or file the data page belongs to. In such implementations, a background task pre-allocates PVS pages and adds them to the allocation caches to avoid potential file growth operations on write paths. In contrast, PVS page allocation according to the embodiments herein utilizes allocation caches so that there is one allocation cache for each page server. Additionally, embodiments pre-allocate PVS pages for each cache in a round-robin fashion. When generating a version of a DB, if a DB server requires a new PVS page, it first looks in the allocation cache that matches the page server of the data page. If the file or page server is not full, a new page is allocated to the cache, while if the file or page server is full, a page in a different cache is located rather than failing the operation back to the compute node. In this case, a request would return from the page server to the compute node where it will be processed locally. However, to prevent the scenario in which the new page is not collocated, as noted above, a page server may reserve storage to accommodate the allocation for PVS pages. As version scans are common in many DB servers, the embodiments herein for PVS page allocation significantly improve page server collocation.
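The per-page-server PVS allocation scheme can be sketched as follows. This is a minimal illustration, assuming hypothetical page-server methods (`is_full`, `allocate_pvs_page`); it is not the described system's actual implementation.

```python
from collections import defaultdict, deque
from itertools import cycle

class PVSAllocator:
    """One allocation cache per page server; a background task
    pre-allocates PVS pages into the caches round-robin."""

    def __init__(self, page_servers):
        self.caches = defaultdict(deque)      # page server -> cached free PVS pages
        self._round_robin = cycle(page_servers)

    def preallocate(self, count):
        """Background pre-allocation, one cache at a time in round-robin order."""
        for _ in range(count):
            server = next(self._round_robin)
            if not server.is_full():
                self.caches[server].append(server.allocate_pvs_page())

    def get_pvs_page(self, data_page_server):
        """Prefer the cache matching the page server of the data page."""
        cache = self.caches[data_page_server]
        if cache:
            return cache.popleft()            # strict affinity: collocated PVS page
        if not data_page_server.is_full():
            return data_page_server.allocate_pvs_page()
        for server, cache in self.caches.items():
            if cache:                         # soft affinity: another cache, rather
                return cache.popleft()        # than failing the operation
        raise RuntimeError("no PVS page available")
```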
Regarding heap forwarding, heaps are on-disk data structures that do not guarantee any ordering. Heaps are implemented as a sequence of pages, and in heaps, rows are identified by reference to a row identifier (RID) that includes the file number, the data page number, and the slot on the page (e.g., FileID:PageID:SlotID). Because heap rows are identified by their physical locations, they cannot be moved to a different page or slot. If a heap row is updated, and as a result it no longer fits on a page, a new page must be identified that has sufficient space to move the contents of the row there, while keeping a stub that points to the new RID in the original location. This process is called “forwarding.” Prior heap allocation algorithms are agnostic to page servers, and thus, a new page resulting from forwarding can be allocated on a different page server. To avoid this, embodiments herein utilize a similar scheme as described above for handling PVS page allocation. For example, the heap free space cache is populated with pages from all page servers in a round-robin fashion, and when an update operation needs to forward the row for the heap, a page in the cache on the same page server that hosts the original page is identified. Thus, in most cases, embodiments herein avoid multiple I/O trips between the storage layer of page servers and the compute node when heap rows span multiple pages. As in the case of PVS pages, a page on the same page server may not be available during heap forwarding, and requests may be returned to the compute node for local processing. In prior solutions, SLOB pages are used to store columns of data that do not fit on the main page. These SLOB pages are typically created in a different allocation unit than the one used for data pages. As a result, these allocation units can be created on different page servers, and this scenario limits QP pushdown operations at page servers because a row can span multiple page servers. To address this concern, a SLOB allocation, e.g., by allocator224inFIG.2, is performed in the same page server as the allocation where the main data resides in its data page. As with the other cases, when it is not possible to collocate these allocations on the same page server, a request can fail back to the compute node for other allocation options.
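A sketch of affinity-aware heap forwarding follows. All helpers here (`page_of`, the per-server free-space caches, the page methods) are hypothetical stand-ins for illustration, not the actual described components.

```python
from typing import NamedTuple

class RID(NamedTuple):
    """Heap row identifier: a physical location (FileID:PageID:SlotID),
    so the row itself cannot move; forwarding leaves a stub behind."""
    file_id: int
    page_id: int
    slot_id: int

def forward_heap_row(free_space_caches, row, original_rid, page_of):
    """Forward an updated row that no longer fits to a page on the *same*
    page server that hosts the original page, keeping a stub at the
    original RID that points to the new RID."""
    home_server = page_of(original_rid).server
    candidates = free_space_caches[home_server]       # populated round-robin
    target = next((p for p in candidates if p.fits(row)), None)
    if target is None:
        # as with PVS pages, no collocated page may be available; the
        # request is returned to the compute node for local processing
        raise LookupError("no space on the home page server")
    new_rid = target.insert(row)
    page_of(original_rid).put_stub(original_rid.slot_id, new_rid)
    return new_rid
```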
Referring now toFIG.7, a flowchart700is shown for page split detection and affinity in query processing pushdowns, according to an example embodiment. System100A inFIG.1A, system100B inFIG.1B, and/or system200inFIG.2are configured to operate according to flowchart700. Further structural and operational examples will be apparent to persons skilled in the relevant art(s) based on the following descriptions. Flowchart700is described below in the context of system100B inFIG.1Band system200ofFIG.2. It should be understood, however, that the following description is also applicable to system100A inFIG.1A. Flowchart700begins at step702. In step702, data is received at a compute node of the processing system. For example, a compute node such as compute node(s)122ofFIG.1Band/or system200ofFIG.2, when so configured, receives data stored in data pages from page servers such as page server(s)124ofFIG.1Band/or system200ofFIG.2, as described herein. In step704, an operation is performed on the data by the compute node. For instance, a compute node as defined herein may perform QP operations on data that is returned from page servers (as in step702). In step706, it is determined, based on the operation, that at least one of a split data page associated with the data or off-row data has been generated, the off-row data being associated with the data and maintained outside of rows of the data. For example, an operations processor of a compute node, such as operations processor222in system200, is configured to determine that the operation performed in step704generates a new, split data page or off-row data that is associated with the data received in step702. In step708, a data page at a page server of a plurality of page servers at which to store the generated split data page or at which to store the off-row data is determined, based on locating the data page that is stored by the page server and that includes the data which corresponds to the operation. For instance, allocator224in system200is configured to determine storage space to be allocated for maintaining data pages generated from page splits and/or for maintaining off-row data. Allocator224is configured to locate a page server that maintains the data pages in which the received and operated-on data from step702and step704is stored. In other words, allocator224is configured to determine where the data from the operation is stored so that collocation and affinity of any new data pages from page splits, and any new off-row data, with the operated-on data can be achieved. Such collocation and affinity allow for QP pushdown operations to later be performed by page servers, and also decrease network traffic between compute nodes and page servers, decrease compute node resource usage, and improve times to finish operations, as noted herein. In step710, at least one of the generated split data page or the off-row data is stored at the page server based on the data being stored by the page server. For example, allocator224is configured to cause the compute node to provide the generated split data page or the off-row data to the page server identified via step708for storage thereof, providing collocation and affinity for any new data. In step712, a pushed-down query processing operation associated with the data and with the off-row data is received at the page server and subsequent to the off-row data being stored at the page server. For instance, a page server, as described herein, is configured to receive QP pushdown operations from a compute node, e.g., via pushdown generator214of system200inFIG.2, to be performed by the page server. In step714, the pushed-down query processing operation is performed at the page server based on both the data and the off-row data being stored at the page server. For example, pushdown engine220and/or operations processor222of system200inFIG.2are configured to compile, assemble, and/or execute pushed-down query processing operations, as described herein. Step714may be performed as similarly described for step416of flowchart400, where step714performs the QP pushdown operation based on the collocated data stored in step710and other related data that was stored by the page server prior to step710. It should be noted that the page server is enabled to perform the QP pushdown operation, according to embodiments, in step714because the data and off-row data required for the operation are collocated at the page server based on affinity as described above.
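Steps 702 through 710 can be condensed into a short driver sketch. Everything here is an assumed shape for illustration (the `operation` result object, `hosts`, `store`); it is not the claimed method itself.

```python
def handle_compute_node_operation(page_servers, data, operation):
    """Perform the operation at the compute node, then collocate any
    generated split pages or off-row data with the page server that
    holds the operated-on data."""
    result = operation(data)                                 # steps 702-704
    generated = result.split_pages + result.off_row_pages    # step 706
    if generated:
        home = next(server for server in page_servers        # step 708: locate the
                    if server.hosts(result.source_page_id))  # collocated page server
        for page in generated:
            home.store(page)                                 # step 710: store with affinity
```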
InFIG.8, a flowchart800is shown for page split detection and affinity in query processing pushdowns, according to an example embodiment. System100A inFIG.1A, system100B inFIG.1B, and/or system200inFIG.2are configured to operate according to flowchart800. Further structural and operational examples will be apparent to persons skilled in the relevant art(s) based on the following descriptions. Flowchart800is an embodiment of flowchart700inFIG.7, e.g., performed prior to or as part of step710. In embodiments, the steps of flowchart800are specific to various types of off-row data described herein, and in such embodiments, a single step may be performed for a specific type of off-row data, with other steps being optional. Flowchart800is described below in the context of system100B inFIG.1B, system200ofFIG.2, and flowchart700. It should be understood, however, that the following description is also applicable to system100A inFIG.1A. Flowchart800begins at step802. In step802, a new page space is allocated at the page server, and the generated split data page is stored in the new page space. For example, allocator224of system200inFIG.2is configured to allocate new page space at the page server to store a generated split data page, i.e., a new data page generated because of a page split. Allocator224causes a compute node to provide the generated split data page to a page server for storage thereof at the allocated space. In embodiments, the allocated space is determined/specified by allocator224in a compute node that directs a page server to allocate the new page space. Allocator224determines/specifies the new page space based on entries (e.g., log records of transactions/operations) in a log cache of a log server (not shown for brevity and illustrative clarity) that may be communicatively coupled to compute nodes and/or to page servers, in embodiments, as described herein. In step804, a new page is allocated in an allocation cache of the page server, and the off-row data is stored in the new page, wherein the off-row data comprises persistent version store data. For instance, allocator224of system200is configured to allocate a new page in an allocation cache of a page server, e.g., of memory206and/or storage236of system200inFIG.2, and to store the PVS page data in the new page of the allocation cache. Allocator224causes a compute node to provide the PVS page data to a page server for storage thereof at the allocation cache. In step806, the off-row data is stored in the page or in another page of the page server, wherein the off-row data comprises unsorted data or a large object type of data. For example, allocator224of system200is configured to cause a compute node to provide the off-row data, as unsorted data, e.g., heap forwarded rows, or a large object type of data, e.g., SLOB pages, to the page server that includes the data page determined at step708of flowchart700for storage thereof. As noted above, allocation of space for new pages may be performed by allocator224based on information maintained in a log cache of a log server. In embodiments, a compute node may be configured to perform both read and write operations that alter the state of the database. In order to maintain Atomicity, Consistency, Isolation and Durability (ACID) properties of the transaction, a compute node may be configured to generate a log record for the transaction when the transaction commits and store that record locally in a transaction log of the log cache before any data modifications caused by the transaction are written to disk. A log record for a committed transaction includes all the information necessary to re-do the transaction in the event there is a problem (e.g., power failure) before the data modified by the transaction can be stored (e.g., in data page(s)222of storage236ofFIG.2).
A log record may comprise information that includes, but is not limited to, a transaction identifier, a log sequence number, a time stamp, information indicating what data object or objects was/were modified and how, and the like. Regarding log sequence numbers, the transaction log operates logically as if the transaction log is a sequence of log records with each log record identified by a log sequence number (LSN). Each new log record is written to the logical end of the transaction log with an LSN that is higher than the LSN of the record before it. Log records are stored in a serial sequence as they are created such that if LSN2is greater than LSN1, the change described by the log record referred to by LSN2occurred after the change described by the log record LSN1. Each log record also contains a transaction identifier of the transaction to which it belongs. That is, a transaction identifier is information that uniquely identifies the transaction corresponding to the log record (e.g., a globally unique identifier (GUID)). The log record corresponding to the transaction is thereafter forwarded to the log server which is configured to provide a log service, in an embodiment. The log service on the log server accepts log records from the compute node, persists them in the log cache, and subsequently forwards the log records to any other compute nodes or compute replicas (i.e., secondary compute nodes) so they can update their local log caches. The log server also forwards the log records to the relevant page server(s) so that the data can be updated there. In this way, all data changes from the compute node are propagated through the log service to all the secondary compute nodes and page servers. Finally, the log records are pushed out to long-term storage such as, for example, storage236. In addition to transaction commits, other types of operations may also be recorded at a primary compute node and subsequently forwarded to the log server, including, but not limited to, the start of a transaction, extent and page allocation or deallocation, creating or dropping a table or index, every data or schema modification, and/or the like.
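The log record structure and LSN ordering described above can be sketched minimally as follows; the field names and the `append_log_record` helper are illustrative assumptions, not the described system's actual format.

```python
from dataclasses import dataclass, field
import time

@dataclass(order=True)
class LogRecord:
    """Minimal log record; ordering uses only the LSN, reflecting the
    rule that a higher LSN describes a later change."""
    lsn: int                              # log sequence number
    txn_id: str = field(compare=False)    # globally unique transaction identifier
    timestamp: float = field(compare=False)
    change: dict = field(compare=False)   # which object(s) changed, and how

def append_log_record(log, txn_id, change):
    """Write to the logical end of the log with a strictly higher LSN."""
    record = LogRecord(log[-1].lsn + 1 if log else 1, txn_id, time.time(), change)
    log.append(record)
    return record
```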
InFIG.9, a flowchart900is shown for page split detection and affinity in query processing pushdowns, according to an example embodiment. System100A inFIG.1A, system100B inFIG.1B, and/or system200inFIG.2are configured to operate according to flowchart900. Further structural and operational examples will be apparent to persons skilled in the relevant art(s) based on the following descriptions. Flowchart900is an embodiment of flowchart700inFIG.7, and is described below in the context of system100B inFIG.1B, system200ofFIG.2, and flowchart700. It should be understood, however, that the following description is also applicable to system100A inFIG.1A. Flowchart900begins at step902. In step902, an allocation of storage space is reserved at the page server as off-row data storage. For example, allocator224of system200inFIG.2is configured to reserve an amount of storage space at a page server for the storage of off-row data, as described herein. Allocator224is configured to reserve or pre-allocate an amount or a percentage of space in data pages, allocation caches, etc., stored at storage130and/or storage236, ofFIGS.1and2respectively, or in memory206ofFIG.2, in embodiments. In this way, collocation and affinity for related data is readily achieved, according to embodiments. In step904, the off-row data is determined as being valid for inclusion in the off-row data storage prior to the off-row data being stored at the page server in the off-row storage. For instance, allocator224, off-row data manager218, and/or operations processor222of system200inFIG.2may be configured to determine that data is “valid,” or related to existing data already stored at a page server, via page index226of system200. If the off-row data is “valid,” it may be stored in the reserved off-row storage. InFIG.10, a flowchart1000is shown for page split detection and affinity in query processing pushdowns, according to an example embodiment. System100A inFIG.1A, system100B inFIG.1B, and/or system200inFIG.2are configured to operate according to flowchart1000. Further structural and operational examples will be apparent to persons skilled in the relevant art(s) based on the following descriptions. Flowchart1000is an alternate embodiment of flowchart700inFIG.7, e.g., subsequent to step708, and is described below in the context of system100B inFIG.1B, system200ofFIG.2, and flowchart700. It should be understood, however, that the following description is also applicable to system100A inFIG.1A. Flowchart1000begins at step1002. In step1002, it is determined, subsequent to the page stored by the page server being determined, that the page server lacks space to store the off-row data. For example, in step708of flowchart700, a data page at a page server that stores data related to the off-row data is determined by allocator224of system200inFIG.2in order to identify the page server as the location for storing the off-row data. However, allocator224, index manager210, or another component of system200may determine, e.g., via page index226or another component related to data storage management, that the identified page server is full, or lacks the required free storage capacity to store the off-row data. In such embodiments, the off-row data cannot be collocated with the related data through strict affinity, and therefore storage with soft affinity may be performed. In step1004, another page server that includes space to store the off-row data is identified. For instance, allocator224of system200may identify another page server, e.g., of page server(s)124inFIG.1B, at which the off-row data may be stored. In step1006, the off-row data is stored at the other page server to avoid failing the operation. For example, allocator224of system200is configured to cause a compute node to provide off-row data to a different page server instead of the page server that stores data related to the off-row data.
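The strict-then-soft affinity fallback of steps 1002 through 1006 can be sketched as below; `has_space` and `store` are assumed, hypothetical page-server methods.

```python
def store_off_row_page(preferred_server, page_servers, off_row_page):
    """Try strict affinity at the page server holding the related data;
    if it lacks space (step 1002), fall back to soft affinity at another
    page server (steps 1004-1006) rather than failing the operation."""
    if preferred_server.has_space(off_row_page):
        preferred_server.store(off_row_page)       # strict affinity
        return preferred_server
    other = next((s for s in page_servers          # step 1004: find another
                  if s is not preferred_server     # server with free space
                  and s.has_space(off_row_page)), None)
    if other is None:
        raise RuntimeError("no page server has space for the off-row data")
    other.store(off_row_page)                      # step 1006: soft affinity
    return other
```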
III. Example Computing Device Embodiments Embodiments described herein may be implemented in hardware, or hardware combined with software and/or firmware. For example, embodiments described herein may be implemented as computer program code/instructions configured to be executed in one or more processors and stored in a computer readable storage medium. Alternatively, embodiments described herein may be implemented as hardware logic/electrical circuitry. As noted herein, the embodiments described, including but not limited to, system100A inFIG.1A, system100B inFIG.1B, system200inFIG.2, system500A ofFIG.5A, system500B ofFIG.5B, and/or system500C ofFIG.5C, along with any components and/or subcomponents thereof, as well as any operations and portions of flowcharts/flow diagrams described herein and/or further examples described herein, may be implemented in hardware, or hardware with any combination of software and/or firmware, including being implemented as computer program code configured to be executed in one or more processors and stored in a computer readable storage medium, or being implemented as hardware logic/electrical circuitry, such as being implemented together in a system-on-chip (SoC), a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a trusted platform module (TPM), and/or the like. A SoC may include an integrated circuit chip that includes one or more of a processor (e.g., a microcontroller, microprocessor, digital signal processor (DSP), etc.), memory, one or more communication interfaces, and/or further circuits and/or embedded firmware to perform its functions. Embodiments described herein may be implemented in one or more computing devices similar to a mobile system and/or a computing device in stationary or mobile computer embodiments, including one or more features of mobile systems and/or computing devices described herein, as well as alternative features. The descriptions of computing devices provided herein are provided for purposes of illustration, and are not intended to be limiting. Embodiments may be implemented in further types of computer systems, as would be known to persons skilled in the relevant art(s). FIG.11depicts an exemplary implementation of a computing device1100in which embodiments may be implemented. For example, embodiments described herein may be implemented in one or more computing devices or systems similar to computing device1100, or multiple instances of computing device1100, in stationary or mobile computer embodiments, including one or more features of computing device1100and/or alternative features. The description of computing device1100provided herein is provided for purposes of illustration, and is not intended to be limiting. Embodiments may be implemented in further types of computer systems, servers, and/or clusters, etc., as would be known to persons skilled in the relevant art(s). As shown inFIG.11, computing device1100includes one or more processors, referred to as processor circuit1102, a system memory1104, and a bus1106that couples various system components including system memory1104to processor circuit1102.
Processor circuit1102is an electrical and/or optical circuit implemented in one or more physical hardware electrical circuit device elements and/or integrated circuit devices (semiconductor material chips or dies) as a central processing unit (CPU), a microcontroller, a microprocessor, and/or other physical hardware processor circuit. Processor circuit1102may execute program code stored in a computer readable medium, such as program code of operating system1130, application programs1132, other programs1134, etc. Bus1106represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. System memory1104includes read only memory (ROM)1108and random access memory (RAM)1110. A basic input/output system1112(BIOS) is stored in ROM1108. Computing device1100also has one or more of the following drives: a hard disk drive1114for reading from and writing to a hard disk, a magnetic disk drive1116for reading from or writing to a removable magnetic disk1118, and an optical disk drive1120for reading from or writing to a removable optical disk1122such as a CD ROM, DVD ROM, or other optical media. Hard disk drive1114, magnetic disk drive1116, and optical disk drive1120are connected to bus1106by a hard disk drive interface1124, a magnetic disk drive interface1126, and an optical drive interface1128, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the computer. Although a hard disk, a removable magnetic disk and a removable optical disk are described, other types of hardware-based computer-readable storage media can be used to store data, such as flash memory cards, digital video disks, RAMs, ROMs, and other hardware storage media. A number of program modules may be stored on the hard disk, magnetic disk, optical disk, ROM, or RAM. These programs include operating system1130, one or more application programs1132, other programs1134, and program data1136. Application programs1132or other programs1134may include, for example, computer program logic (e.g., computer program code or instructions) for implementing embodiments described herein, such as but not limited to system100A inFIG.1A, system100B inFIG.1B, system200inFIG.2, system500A ofFIG.5A, system500B ofFIG.5B, and/or system500C ofFIG.5C, along with any components and/or subcomponents thereof, as well as the flowcharts/flow diagrams described herein, including portions thereof, and/or further examples described herein. A user may enter commands and information into the computing device1100through input devices such as keyboard1138and pointing device1140. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, a touch screen and/or touch pad, a voice recognition system to receive voice input, a gesture recognition system to receive gesture input, or the like. These and other input devices are often connected to processor circuit1102through a serial port interface1142that is coupled to bus1106, but may be connected by other interfaces, such as a parallel port, game port, or a universal serial bus (USB). A display screen1144is also connected to bus1106via an interface, such as a video adapter1146. Display screen1144may be external to, or incorporated in computing device1100. 
Display screen1144may display information, as well as being a user interface for receiving user commands and/or other information (e.g., by touch, finger gestures, virtual keyboard, etc.). In addition to display screen1144, computing device1100may include other peripheral output devices (not shown) such as speakers and printers. Computing device1100is connected to a network1148(e.g., the Internet) through an adaptor or network interface1150, a modem1152, or other means for establishing communications over the network. Modem1152, which may be internal or external, may be connected to bus1106via serial port interface1142, as shown inFIG.11, or may be connected to bus1106using another interface type, including a parallel interface. TPM1154may be connected to bus1106, and may be an embodiment of any TPM, as would be understood by one of skill in the relevant art(s) having the benefit of this disclosure. For example, TPM1154may be configured to perform one or more functions or operations of TPMs for various embodiments herein. As used herein, the terms “computer program medium,” “computer-readable medium,” “computer-readable storage medium,” and “computer-readable storage device,” etc., are used to refer to physical hardware media. Examples of such physical hardware media include the hard disk associated with hard disk drive1114, removable magnetic disk1118, removable optical disk1122, other physical hardware media such as RAMs, ROMs, flash memory cards, digital video disks, zip disks, MEMs, nanotechnology-based storage devices, and further types of physical/tangible hardware storage media (including memory1120ofFIG.11). Such computer program media, computer-readable storage devices, computer-readable media, and/or computer-readable storage media are distinguished from and non-overlapping with communication media and propagating signals (do not include communication media and propagating signals). Communication media embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wireless media such as acoustic, RF, infrared and other wireless media, as well as wired media. Embodiments are also directed to such communication media that are separate and non-overlapping with embodiments directed to computer-readable storage media. As noted above, computer programs and modules (including application programs1132and other programs1134) may be stored on the hard disk, magnetic disk, optical disk, ROM, RAM, or other hardware storage medium. Such computer programs may also be received via network interface1150, serial port interface1142, or any other interface type. Such computer programs, when executed or loaded by an application, enable computing device1100to implement features of embodiments discussed herein. Accordingly, such computer programs represent controllers of the computing device1100. Embodiments are also directed to computer program products comprising computer code or instructions stored on any computer-readable medium or computer-readable storage medium. Such computer program products include hard disk drives, optical disk drives, memory device packages, portable memory sticks, memory cards, and other types of physical storage hardware. IV. 
Additional Examples and Advantages As described, systems and devices embodying the techniques herein may be configured and enabled in various ways to perform their respective functions for page split detection and affinity in query processing pushdowns. In embodiments, one or more of the steps or operations of any flowchart and/or flow diagram described herein may not be performed. Moreover, steps or operations in addition to or in lieu of those in any flowchart and/or flow diagram described herein may be performed. Further, in examples, one or more operations of any flowchart and/or flow diagram described herein may be performed out of order, in an alternate sequence, or partially (or completely) concurrently with each other or with other operations. As described herein, systems, devices, components, etc., of the embodiments that are configured to perform functions and/or operations are also contemplated as performing such functions and/or operations. The embodiments herein provide for increased accuracy of data reads from data pages at page servers by configuring a page server to detect that the data page being read has split. Thus, the page server is enabled to identify another data page at the page server in which a portion of the desired data is now located, to read the portion of data in the other data page, and to return complete and accurate data without additional network traffic and actions required by the compute node, while achieving storage affinity to enable page servers to perform pushdown operations from a compute node. The described embodiments are also adaptable to server systems in addition to database systems that may be configured, as described herein, to perform pushdown operations and/or affinity for data storage. According to the described embodiments for page split detection and affinity in query processing pushdowns, solutions are provided with unique components and configurations to improve processing loads and efficiency in systems of compute nodes and page servers, reduce memory pressure at compute nodes, and greatly reduce network bandwidth usage and I/O operations between compute nodes and page servers, while also providing faster times to complete operations, e.g., via pushdown operations, that were previously not available for software-based services, much less for the specific embodiments described herein for compute nodes and associated page servers. Accordingly, improved query performance for analytical queries against large data sets is realized by the described embodiments. The additional examples and embodiments described in this Section may be applicable to examples disclosed in any other Section or subsection of this disclosure. Embodiments in this description provide for systems, devices, and methods for page split detection and affinity in query processing pushdowns. For example, a method performed by a page server in a computing system is described herein for performing such embodiments.
The method includes storing a first page of a database, the first page including data; receiving, from a compute node of the computing system, a request that is associated with the data of the first page, and a next page identifier of a logically adjacent page of the database that is logically adjacent with respect to the first page at the time the request is generated; reading a portion of the data from the first page; identifying a second page identifier from the first page; determining that a page split in the first page has occurred at the page server subsequent to receiving the request, the page split generating a second page at the page server as a new page in the database that includes another portion of the data, based at least on a comparison between the second page identifier and the next page identifier; and subsequent to the determining, extending fulfillment of the request beyond reading the first page by reading, from the second page, the other portion of the data. In an embodiment, the method includes identifying, from the second page, a third page identifier of a logically adjacent page of the database with respect to the second page; and further extending the fulfillment of the request by reading additional data from an additional page that is associated with the third page identifier. In an embodiment, the method includes identifying, from the second page, a third page identifier of a logically adjacent page of the database with respect to the second page; and concluding the fulfillment of the request based at least on a determination that the third page identifier matches the next page identifier. In an embodiment of the method, concluding the fulfillment includes returning the portion of the data and the other portion of the data to the compute node from the page server. In an embodiment, the method includes identifying, from the second page, a third page identifier of a logically adjacent page of the database with respect to the second page; determining that a third page associated with the third page identifier is located at a different page server; and returning the portion of the data and the other portion of the data to the compute node from the page server with a data-remaining notification for the request. In an embodiment of the method, at least one of the reading the portion of the data from the first page or the reading the other portion of the data from the second page includes reading newly-written data that caused the page split. In an embodiment of the method, logically adjacent comprises at least one of sequentially forward or sequentially backward. In an embodiment, the method includes performing a query processing operation, indicated by the compute node, at the page server based on the portion of the data and the other portion of the data. A system is also described herein. The system may be configured and enabled in various ways for page split detection and affinity in query processing pushdowns, as described herein. In an embodiment, the system includes a memory that stores program instructions, and a processing system configured to execute the program instructions. 
The program instructions cause the processing system to store a first page of a database, the first page including data; receive, from a compute node of the computing system, a request that is associated with the data of the first page, and a next page identifier of a logically adjacent page of the database that is logically adjacent with respect to the first page at the time the request is generated; read a portion of the data from the first page; identify a second page identifier from the first page; determine that a page split in the first page has occurred at the page server subsequent to receiving the request, the page split generating a second page at the page server as a new page in the database that includes another portion of the data, based at least on a comparison between the second page identifier and the next page identifier; and subsequent to the determining: extend fulfillment of the request beyond reading the first page by reading, from the second page, the other portion of the data based on the second page being stored at the page server; or return the portion of the data to the compute node from the page server with a data-remaining notification for the request based on a determination that the second page is located at a different page server. In an embodiment of the system, the second page is stored at the page server, and the program instructions cause the processing system to identify, from the second page, a third page identifier of a logically adjacent page of the database with respect to the second page; and further extend the fulfillment of the request by reading additional data from an additional page that is associated with the third page identifier. In an embodiment of the system, the second page is stored at the page server, and the program instructions cause the processing system to identify, from the second page, a third page identifier of a logically adjacent page of the database with respect to the second page; and conclude the fulfillment of the request based at least on a determination that the third page identifier matches the next page identifier. In an embodiment of the system, the program instructions, for concluding the fulfillment, cause the processing system to return the portion of the data and the other portion of the data to the compute node from the page server. In an embodiment of the system, the program instructions cause the processing system to identify, from the second page, a third page identifier of a logically adjacent page of the database with respect to the second page; determine that a third page associated with the third page identifier is located at a different page server; and return the portion of the data and the other portion of the data to the compute node from the page server with a data-remaining notification for the request. In an embodiment of the system, for the program instructions, at least one of the reading the portion of the data from the first page or the reading the other portion of the data from the second page includes reading newly-written data that caused the page split; or logically adjacent comprises at least one of sequentially forward or sequentially backward. In an embodiment of the system, the second page is stored at the page server, and the program instructions cause the processing system to perform a query processing operation, indicated by the compute node, at the page server based on the portion of the data and the other portion of the data.
A computer-readable storage medium having program instructions recorded thereon that are configured to cause a processing system that executes the program instructions to perform operations and functions is also described. The program instructions are for page split detection and affinity in query processing pushdowns. The program instructions cause the processing system that executes the program instructions to receive data at a compute node of the processing system; perform an operation on the data by the compute node; determine, based on the operation, that at least one of a split data page associated with the data or off-row data has been generated, the off-row data being associated with the data and maintained outside of rows of the data; determine a data page at a page server of a plurality of page servers at which to store the generated split data page or at which to store the off-row data, based on locating the data page that is stored by the page server and that includes the data which corresponds to the operation; and store at least one of the generated split data page or the off-row data at the page server based on the data being stored by the page server. In an embodiment of the computer-readable storage medium, the program instructions cause the processing system that executes the program instructions to receive, at the page server and subsequent to the off-row data being stored at the page server, a pushed-down query processing operation associated with the data and with the off-row data; and perform, at the page server, the pushed-down query processing operation based on both the data and the off-row data being stored at the page server. In an embodiment of the computer-readable storage medium, the program instructions cause the processing system that executes the program instructions, in order to store the off-row data, to perform at least one of the following: allocate a new page space at the page server, and store the generated split data page in the new page space; allocate a new page in an allocation cache of the page server, and store the off-row data in the new page, wherein the off-row data comprises persistent version store data; or store the off-row data in the page or in another page of the page server, wherein the off-row data comprises unsorted data or a large object type of data. In an embodiment of the computer-readable storage medium, the program instructions cause the processing system that executes the program instructions to reserve an allocation of storage space at the page server as off-row data storage. In an embodiment, the program instructions are further configured to cause the processing system that executes the program instructions to determine the off-row data as being valid for inclusion in the off-row data storage prior to the off-row data being stored at the page server in the off-row storage. In an embodiment of the computer-readable storage medium, the program instructions cause the processing system that executes the program instructions to determine, subsequent to the page stored by the page server being determined, that the page server lacks space to store the off-row data; identify another page server that includes space to store the off-row data; and store the off-row data at the other page server to avoid failing the operation. V. Conclusion While various embodiments of the disclosed subject matter have been described above, it should be understood that they have been presented by way of example only, and not limitation.
It will be understood by those skilled in the relevant art(s) that various changes in form and details may be made therein without departing from the spirit and scope of the embodiments as defined in the appended claims. Accordingly, the breadth and scope of the disclosed subject matter should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
112,478
11860830
DETAILED DESCRIPTION In the following description, for the purpose of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention. General Overview Described herein are approaches for storing columns of a table in either row-major format or column-major format in an in-memory DBMS. For a given table, one set of columns is stored in column-major format; another set of columns is stored in row-major format. This way of storing columns of a table is referred to herein as dual-major format. Managing, processing, and storing an in-memory dual-major database by an in-memory DBMS is referred to herein as Dual In-Memory Storage. In addition, a row in a dual-major table is updated “in-place”, that is, updates are made directly to column-major columns without creating an interim row-major form of the column-major columns of the row. The changes are immediately visible to database transactions of a DBMS after a database transaction commits the update, in accordance with standard transaction processing protocols. Users may submit data definition language (“DDL”) commands that declare the row-major columns and column-major columns of a table. Thus, a database may be designed to exploit the advantages of both row-major and column-major formats for the same table. A storage format of a column may be selected and optimized in light of expected access patterns for the column. In-Memory Dual-Major Database According to an embodiment of the present invention, an in-memory DBMS manages, processes, and stores data in an in-memory dual-major database. In an in-memory DBMS, the DBMS manages a database that is primarily stored in random access memory (RAM). RAM in which the database is stored may comprise only volatile RAM, non-volatile RAM (e.g. PRAM), or a combination of both. Examples of an in-memory DBMS are described in (1) U.S. patent application Ser. No. 12/719,264, entitled Automated Integrated High Availability of the In-Memory Database Cache and the Backend Enterprise Database, filed by Sourav Ghosh, et al. on Mar. 8, 2010, the content of which is incorporated herein by reference, and in (2) U.S. patent application Ser. No. 12/030,094, Database System with Active Standby and Nodes, filed by Rohan Aranha on Feb. 12, 2008, the content of which is incorporated herein by reference. Specifically, the columns of a table are stored in memory pages in the RAM of an in-memory DBMS. A memory page is an addressable unit of memory that is contiguous within an address space of the memory. The row-major columns of a table are stored in memory pages referred to herein as row pages. According to an embodiment of the present invention, row-major columns of a table are stored in a set of row pages referred to as a row page partition. The column-major columns of a table are stored in memory pages referred to herein as column pages. Each column page stores one or more columns of a column group in column-major format. For a column group of a table, the column group is stored in a set of column pages referred to as a column partition. A table may have multiple column groups, each column in the column group being stored in a set of column pages in a column partition.
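The dual-major layout can be modeled compactly as below. This is a minimal sketch of the described page structures under assumed class names; it is not the patented implementation, and the concrete columns (A, B, C row-major; column groups (D, E) and (F,)) mirror the figures discussed next.

```python
class RowPage:
    """Holds N row-part tuples (the row-major columns of N rows) in N row slots."""
    def __init__(self, n_slots):
        self.slots = [None] * n_slots               # row slots 1..N

class ColumnPage:
    """Holds one column run per column of a column group, in column-major
    format; every run in the page has the same number of column slots."""
    def __init__(self, columns, n_slots):
        self.runs = {column: [None] * n_slots for column in columns}

class DualMajorTable:
    """One row page partition for the row-major columns, plus one column
    partition per declared column group."""
    def __init__(self, row_columns, column_groups, n):
        self.n = n                                  # N row slots per row page
        self.row_partition = [RowPage(n)]           # e.g., columns A, B, C
        self.column_partitions = {tuple(group): [ColumnPage(group, n * n)]
                                  for group in column_groups}  # e.g., (D, E), (F,)
```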
FIG.1depicts row page partitions and column page partitions according to an embodiment of the present invention. Referring toFIG.1, it depicts row page partition RP, which includes row page RP1, row page RP2, and row page RPN, column partition DE, which includes column page DE1, and column partition F, which includes column page F1. In a row page partition, each row page stores row-major columns of a row as a row-part tuple in row-major format. The row's row-part tuple is stored in a row slot of the row page. A row and its respective row-part tuple and row slot are functionally mapped to a row id, as shall be described in further detail. According to an embodiment of the present invention, a row page holds N row-part tuples in N row slots. Each row slot is associated with a row slot number, which corresponds to the row slot's ordinal position within a row page, and which is used to identify the row slot within a row page. Row page RP1is divided into slots 1, 2 through N, each slot holding a row-part tuple for row-major columns A, B, and C in row-major format. The slot number of a slot is denoted inFIG.1in column A. Likewise, row pages RP2and RPNare each divided into row slots 1, 2 through N, each row slot holding a row-part tuple for row-major columns A, B, C in row-major format. The row pages of a row partition are sequentially ordered. The row page order of row pages in row partition RP is row page RP1, row page RP2, followed by row page RPN, as specified by the subscript. According to an embodiment of the present invention, the ordinal number corresponding to the order of the row page within a row partition serves as a row page id that identifies the row page. The row page id of a row page RP1is 1, of row page RP2is 2, of row page RPNis N, and so forth. A row id of a row is functionally mapped to the row's location in a row partition. According to an embodiment of the present invention, the row id is derived using a mathematical formula based on the row page id of the row page and row slot number of the row slot that holds the row-part tuple of a row, as follows. (Row Page Id−1)*N+row slot number Accordingly, the row ids of the rows contained in row page RP1for slots 1, 2, and N are 1, 2, and N, respectively. The row ids of the rows contained in row page RP2for row slots 1, 2, and N are N+1, N+2, and 2N, respectively.
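A minimal sketch of the row id derivation, directly transcribing the formula above (1-based page ids and slot numbers):

```python
def row_id(row_page_id, row_slot_number, n):
    """Row id derived from the row's location: (Row Page Id - 1) * N + row slot number."""
    return (row_page_id - 1) * n + row_slot_number

# Examples from the text, taking N = 8 for concreteness: slots 1, 2, and N
# of row page RP1 yield ids 1, 2, and N; the same slots of RP2 yield
# N+1, N+2, and 2N.
assert row_id(1, 2, n=8) == 2
assert row_id(2, 2, n=8) == 8 + 2
assert row_id(2, 8, n=8) == 2 * 8
```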
Column Pages and Partitions Column partition DE stores columns of a column group comprising column D and column E. Column partition DE comprises column page DE1, and other column pages not depicted inFIG.1. Column pages in a column partition are sequentially ordered similar to row pages, the ordering being denoted in the subscript of the label. Columns stored in a column page are in column-major format in a column run comprising column slots. For each column in a column group of a column partition, each column page of the column partition holds a column run. Column page DE1includes a column run D and a column run E, which store column values for columns D and E, respectively. Each column run has the same number of column slots. Each column slot is associated with a column slot number, which corresponds to the column slot's ordinal position within a column run, and which is used to identify the column slot within a column run. The column run D includes column slots 1, 2 through N²; column run E in column page DE1also includes slots 1, 2 through N². Column partition F includes column page F1and other column pages not depicted inFIG.1. Column run F in column page F1includes column slots 1, 2 through N². For a given row of a table, column values stored in column partitions for the row are stored in column pages that have the same column page id and in a column slot having the same column slot number. Thus, in column run D and column run E in column page DE1, and column run F in column page F1, the column slot with column slot number 1 holds a column value for the same row. According to an embodiment of the present invention, column pages have larger memory sizes, and contain a greater number of slots than the respective row pages. For a given column group, a column page stores the column values for the rows of corresponding multiple row pages. The number of slots in a column page is a multiple of the N slots in the row page. According to an embodiment of the present invention, that multiple is N, although the present invention is not so limited. Thus, each column page of column partition DE and column partition F contains N²column slots. The above arrangement between column pages, the column slots therein, the row pages and the row slots therein, facilitates the functional mapping of a row's row slot to the column slots holding column values for the row, through use of simple mathematical formulas that are based in effect on the location of the row in the row pages. For a given row stored in a row slot of a row page, within each column partition of a table, the column page id and column slot number of a column page holding a column value for the row is determined by the following formulas CLR. CLR Column Page Id=(Row Page Id−1)/N+1 Column Slot Number=((Row Page Id−1)*N+Row Slot Number−1)%N²+1 For example, for row N+2, the row page id is 2 and the row slot number therein is 2. According to the above formulas, the column page id in column partition DE that holds a column value for the row is (2−1)/N+1, which is 1. The column slot number is ((2−1)*N+2−1)%N²+1, which equals N+2. Unless specified otherwise, the division denoted in the formulas presented herein is integer division, and % refers to integer modulus. In the above arrangement and according to formulas CLR, a row page is associated with only one column page; a row slot is associated with only one column slot. However, a column page is associated with N row pages, but a column slot is associated with only one row slot. The association is dictated by formulas CLR and other formulas described herein. The above one-to-one arrangement facilitates functional mapping of row slots and column slots through the use of simple mathematical formulas. In effect, the column slots and column pages that hold a row's column values are dictated by the row's storage location within a row page partition of the table of the row. Location and Row ID Resolution Location resolution refers to the operation of determining a location of a row in a row partition, row page, column partition, and/or column page. Columnar location resolution refers to determining the location of a row in a column partition and/or column page given input about a row's location in a row page and/or row partition, such as the row's row page id and row slot number, or row id. Columnar location resolution is performed based on formulas CLR. As illustrated above, the outcome of columnar location resolution is a column page id and column slot number.
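Formulas CLR transcribe directly to code; the worked example above is reproduced with a concrete N.

```python
def clr(row_page_id, row_slot_number, n):
    """Formulas CLR, using integer division (//) and integer modulus (%)."""
    column_page_id = (row_page_id - 1) // n + 1
    column_slot_number = ((row_page_id - 1) * n + row_slot_number - 1) % (n * n) + 1
    return column_page_id, column_slot_number

# Worked example from the text, with N = 8: row N+2 sits in row page 2,
# row slot 2, and maps to column page 1, column slot N+2.
assert clr(2, 2, n=8) == (1, 8 + 2)
```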
Row location resolution refers to determining the location of a row in a row partition or row page given input from which the row's location is derived, such as the column page id and column slot number of the row, or row id. The outcome of row location resolution is a row page id and row slot number. The functional one-to-one mapping between the row slots and column slots may be exploited by using the formulas to quickly perform row location resolution. The following formulas RLRC yield a row location based on column page id and column slot number of a row. RLRC Row Page Id=((Column Page Id−1)*N²+(Column Slot Number−1))/N+1 Row Slot Number=(Column Slot Number−1)%N+1 For example, to determine the corresponding row page for column slot number N+2 for which the column page id is 1, the row page id is calculated as ((1−1)*N²+(N+2−1))/N+1, which is 2. To determine the corresponding row slot number, the row slot number is calculated as (N+2−1)%N+1, which is 2. To determine the row location from a row id, the following formulas RLRR may be used. RLRR Row Page Id=(Row Id−1)/N+1 Row Slot Number=(Row Id−1)%N+1 Row location resolution and columnar location resolution using formulas CLR, RLRC, and RLRR as described thus far yield locations that are logical; a row page id and row slot number comprise a logical location of a row in a row page, and a column page id and column slot number comprise a logical location of a row in a column page. To access the row in memory, a memory address is needed. Thus, location resolution also includes generating a memory address for a row in a row page or column page. To enable access to memory addresses of logical locations, a page directory is maintained within an in-memory DBMS. The page directory maps the row page ids of tables to the memory addresses of the row pages, and the column page ids of column groups to the memory addresses of the column pages. To retrieve a row-part tuple for a row using a row page id and row slot number, the page directory is accessed and examined to determine the memory address mapped to the row page id of the row page, and the row slot number is used as an offset to access the row tuple within the row page. To retrieve a column value for a row using a column page id and column slot number, the page directory is accessed and examined to determine the memory address mapped to the column page id of the column page, and the column slot number is used as an offset to access the column value within the column page.
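Formulas RLRC and RLRR, and the page-directory lookup, can be sketched as below; the formulas are transcribed from the text, while the dict-based page directory is an illustrative assumption (builds on the RowPage sketch given earlier).

```python
def rlrc(column_page_id, column_slot_number, n):
    """Formulas RLRC: row location from a column page id and column slot number."""
    row_page_id = ((column_page_id - 1) * n * n + (column_slot_number - 1)) // n + 1
    row_slot_number = (column_slot_number - 1) % n + 1
    return row_page_id, row_slot_number

def rlrr(row_id, n):
    """Formulas RLRR: row location directly from a row id."""
    return (row_id - 1) // n + 1, (row_id - 1) % n + 1

def read_row_part_tuple(page_directory, row_id, n):
    """Physical access: resolve the logical location via RLRR, map the row
    page id to a memory location via the page directory (modeled here as
    a dict of in-memory pages), and use the row slot number as an offset."""
    row_page_id, row_slot_number = rlrr(row_id, n)
    row_page = page_directory[row_page_id]
    return row_page.slots[row_slot_number - 1]

# The text's example, with N = 8: column page 1, column slot N+2
# resolves to row page 2, row slot 2.
assert rlrc(1, 8 + 2, n=8) == (2, 2)
assert rlrr(8 + 2, n=8) == (2, 2)
```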
According to an embodiment of the present invention, each column page of a column partition includes value statistics on the column values stored in the column page. Column page CP1includes Min-Max statistics MM and Bloom filter BM. Min-Max statistics specify the minimum and maximum column values stored in a column page. A Bloom filter is a probabilistic data structure that is used to identify values that are not a member of a set. A Bloom filter compactly indicates membership in a large set of values by allowing for a certain degree of error. The nature of the Bloom filter is such that application of a Bloom filter against a value reveals one of two possibilities: (1) the value is not in the set for which the Bloom filter was derived; or (2) the value could be, but is not necessarily, in the set for which the Bloom filter was derived. That is to say, application of the Bloom filter may result in false positives, but never in false negatives. A Bloom filter in the value statistics of a column page is used to identify whether or not a column value exists in the column page. It is derived from the column values stored in the column page. An in-memory DBMS may store, maintain, and use other structures for optimizing data access. For example, a row-major column and a column-major column may be indexed by an index, such as a binary tree index or bitmap index. Such indexes may be created and maintained by the in-memory DBMS for a row-major or column-major column in response to receiving a DDL statement that declares the index for the column.

Columnar Scanning

The power of Dual In-Memory DBMS Storage is demonstrated by the performance of the fundamental database operation of columnar scanning. In columnar scanning, a column is scanned to determine the rows that have column values that satisfy one or more filtering criteria. For purposes of illustration, the one or more filtering criteria comprise a filter criterion that a column value equal a predicate value specified in the predicate of a query being computed by the in-memory DBMS. The query is computed by the in-memory DBMS in response to receiving a database statement. An output of columnar scanning of a column page includes information that identifies the rows having a column value that satisfies the one or more filtering criteria and/or that specifies the location of each of the rows. According to an embodiment of the present invention, the output includes a set of row pointers. A row pointer includes location information for locating the row-part tuple of a row. A row pointer may include the row id of the row, or the row page id and row slot number of the row. Using columnar location resolution and a row pointer for a row, or a row page id and row slot number, the column values of the row can be located in the respective row page partitions and column partitions. In effect, a row pointer identifies the location of all the row's column values. FIG.3is a flow chart depicting a columnar scanning procedure for Dual In-Memory DBMS Storage. The columnar scanning procedure processes each column page of a column page partition. Referring toFIG.3, for each column page, at305, it is determined from the value statistics in the column page whether any column values stored in the column page could satisfy the filtering criteria. If not, further columnar scanning processing of the column page is avoided, and no row pointers are returned for the column page. Another column page is then processed by the procedure, if any. At310, the column values in the column page are scanned.
The column slot numbers of the column slots that have a column value satisfying the filtering criteria are tracked. At315, row pointers are generated from the column slot numbers tracked at310. The row pointers are generated using row location resolution based on the column page id of the column page and the column slot numbers tracked. At320, the row pointers generated are returned. Then another column page of the column partition is processed by the procedure.

In-Place Updating

As mentioned previously, under Dual In-Memory Columnar Storage, a row may be updated in place in the memory that stores column-major data. To disambiguate in-place updating under Dual In-Memory Columnar Storage from various other approaches for updating column-major data, the various other approaches are described herein in further detail. As mentioned earlier, in the row-copy-first approach, a DBMS maintains two versions of its database, a column-major version of the database and a row-major version of the database. Updates are first made to the row-major version, are committed or otherwise completed, and are then applied to the column-major version, often in batches. Specifically, database transactions that are executed in the DBMS update the row-major version. The database transactions eventually are committed, thereby committing the updates made by the database transactions in the row-major version. After these transactions are committed, the updates are applied and/or otherwise propagated to the column-major version. In the change-inline approach, when a transaction updates a row stored in a data block in column-major format, the columns for the row are stitched together to store a row-major version of the row in the data block, which is updated accordingly. When the transaction is committed, the data block is committed with the row stored in row-major form. Subsequently, the data block may be reformatted to remove the row-major form of the row and to restore the row to column-major form in the data block. In in-place updating under Dual In-Memory Columnar Storage, when a database transaction updates a row of a table stored in a row page and the column partitions of the table, the changes are made in the row partition in row-major form and in the column partitions in column-major form. When the database transaction is committed, the changes in the row partition and column partitions are committed. No row-major form of the column-major columns of the row is generated and committed for the database transaction by a database server of the in-memory DBMS. FIG.4is a flow chart depicting a procedure for in-place updating performed by a database server of a DBMS to execute a database transaction, referred to herein as the “current database transaction”. The database transaction is executed by the in-memory DBMS in response to the database server receiving a database statement requesting the update. For purposes of illustration, a column D of a “current row” is being updated with a new value by the database transaction. Referring toFIG.4, at405, the row-part tuple of the row is marked as deleted. Specifically, the row control column of the row-part tuple is marked to indicate that the row is deleted by the current database transaction. The column values of the row are left unchanged by the database transaction in the row-part tuple and in the corresponding column slots of the row.
Marking the row-part tuple in this way enables other database transactions that scan the row before the current database transaction commits to see the row in the state as it existed before it was updated, in accordance with protocols for transaction processing. At410, an open row slot not being used to store a row is found in a row page of the row partition for the table. Because the row slot is open, the corresponding column slots for the column groups are also open. At415, the row being updated is stored in the open row slot and corresponding column slots. The column slots are identified and located using columnar location resolution based on the row page id and row slot number of the open row slot. In addition, the new value is stored in column D. The row control column in the open row slot is updated to reflect that the row was stored in the row slot to update the current row for the current database transaction. At420, the current database transaction is committed, thereby committing the row in the open row slot and corresponding column slots. Committing the database transaction may entail updating the value statistics of the respective column pages to reflect the column values now committed in the corresponding slots. The Min-Max statistics and Bloom filters in the column pages are updated to reflect the existence of the column values added to the column pages.

Run-Length Encoding of Columns

A column in a column partition may be compressed using run-length encoding. Run-length encoding is a form of data compression in which sequences of the same data value are stored as a combination of a data value and a count instead of the sequences. The count represents the number of occurrences of the data value represented by the combination, and is referred to herein as a run count; the data value is referred to herein as the run value; the combination is referred to herein as an encoded run. The occurrences of a data value that are represented by an encoded run are referred to herein as a run. Run-length encoding is particularly valuable for bodies of data with longer sequences of occurrences of the same data value. According to an embodiment of the present invention, a single column stored in a column partition may be compressed using run-length encoding. Each run is represented by a column run tuple, which in effect is an encoded run. Column run tuples are stored in column slots in the same relative order in which the runs occur in the column. In run-length encoding of a column in a table with L rows stored in L row slots, the column may be represented by K column run tuples stored in K column slots, where K is typically much smaller than L. This arrangement alters the one-to-one mapping relationship between row slots and column slots upon which the previously described formulas used for location resolution are based. To facilitate performance of location resolution, a positional index is maintained for a run-length encoded column partition. A positional index may also be used to index a column partition that is not run-length encoded. In addition to run-length encoding, a row partition or column partition may be compressed using other techniques, such as dictionary compression or delta encoding. In an embodiment of the present invention, a column partition may be compressed using dictionary compression and run-length encoding, thereby encoding runs of dictionary tokens. Dictionary compression of columns is described in U.S. patent application Ser. No.
13/224,327, entitled Column Domain Dictionary Compression, filed by Tirthankar Lahiri, et al. on Sep. 9, 2011, the contents of which are incorporated herein by reference.

Positional Index

According to an embodiment of the present invention, a positional index is used to identify column values based on row location information, such as a row id, or other location information, such as a row page id and row slot number. The positional index is bi-directional: it can be used to determine the range of row ids, or the row page ids and row slot numbers, of rows that correspond to a column value, and to determine the column page id and column slot number of the column slots of rows that correspond to the column value. FIG.5depicts a positional index I according to an embodiment of the present invention, which indexes column partition P. Column partition P is compressed using run-length encoding. For purposes of exposition, each column page of column partition P contains two “column run tuples”, each encoding and representing a run of column values. In order, column partition P includes column page P1, column page P2, column page P3, and column page P4. Column page P1contains column run tuple (V1, C1), where V1is the run value and C1is the run count. The other column run tuples in column partition P are identically structured. The next tuple in column page P1is (V2, C2). Following column page P1is column page P2, with column run tuple (V3, C3) followed by (V4, C4), and so forth. Positional index I is a hierarchical index comprising nodes I1, I2, and I3. Nodes in positional index I are organized in a parent-child hierarchy with other nodes in positional index I and with column pages in column partition P. For purposes of exposition, each node contains two “index tuples”, each of which is a parent of another index node or, in the case of index tuples in leaf index nodes, a parent of a column page in the indexed column partition P. Each index tuple contains (a) a link to a child, which is either an index node or a column page of the indexed column partition, and (b) an aggregate run count, which is the sum of the run counts of the runs that are, in effect, represented by the child of the index tuple. A sequence of runs is referred to as an aggregate run; the sum of the run counts of the aggregate run is referred to as the aggregate run count. Thus, each index tuple represents an aggregate run and contains an aggregate run count. The aggregate run represented by an index node comprises a first aggregate run, a subsequent second aggregate run, and so forth. The first tuple represents the first aggregate run and holds the aggregate run count thereof. The next tuple represents the second aggregate run and holds the aggregate run count thereof, and so forth. Note that the runs and their respective locations in the column partition are represented by virtue of the order of index tuples in the index nodes and the parent-child hierarchical relationship between the index nodes, which reflects the order of the runs. An implication of the arrangement of a positional index described above is that the root index node in effect represents all the runs in the indexed column partition. Referring toFIG.5, index node I3is the root node and includes two tuples. The first index tuple is linked to index node I1and contains an aggregate run count that is the sum of C1, C2, C3, and C4. In effect, the first index tuple represents the first runs in column partition P (i.e.
in column pages P1and P2) by virtue of its order in index node I3and its aggregate run count covering the run counts of the first runs. The second index tuple is linked to index node I2and contains an aggregate run count that is the sum of C5, C6, C7, and C8. In effect, the second index tuple represents the second and remaining runs in column partition P (i.e. in column pages P3and P4). Index node I1is a child of the first tuple in root index node I3and represents that first tuple's aggregate run. Index node I1itself contains index tuples. The first index tuple is linked to column page P1and contains an aggregate run count that is the sum of C1and C2. This first index tuple represents the first runs of column partition P (those of column page P1) by virtue of its order in index node I1and index node I1's position in the parent-child hierarchy of positional index I. The second index tuple represents the next runs of column partition P (those of column page P2) by virtue of its order in index node I1and index node I1's position in the parent-child hierarchy of positional index I. Index node I2is a child of the second index tuple in root index node I3and represents that second tuple's aggregate run (those of column pages P3and P4) in the same way as index node I1represents the aggregate run of the first tuple in root index node I3. Given a row id of a row, index nodes in positional index I are traversed to identify the column run tuple and column page id that hold the column value for the row. Importantly, within column partition P, for a row having a row id R, the total run count of the encoded runs preceding the encoded run of the row plus the run count of the encoded run representing R must be at least as great as the row id. During the traversal, index nodes and index node tuples are visited. As index node tuples are visited, an accumulated run count is tracked. If the accumulated run count plus the aggregate run count of the index tuple being visited covers the row id, then the child of the index tuple is visited. Otherwise, the aggregate run count of the index tuple is added to the accumulated run count and the next index tuple is visited. Eventually the column page containing the run for the row id is visited and evaluated to retrieve the column value. For example, assume a row id is C1+C2+C3+C4+2, and that C5is greater than 2. In traversing positional index I, the first node visited is root index node I3. In examining the first tuple therein, it is determined that the aggregate run count C1+C2+C3+C4is less than the row id. Therefore the aggregate run of the first tuple cannot include the row. Next, the second tuple is examined. Since the accumulated run count C1+C2+C3+C4and the aggregate run count C5+C6+C7+C8together are greater than the row id, index node I2must represent an aggregate run that covers the row id. The child index node I2to which the second index tuple is linked is traversed to and visited. Next, the first tuple in index node I2is examined. Since the accumulated run count C1+C2+C3+C4plus the aggregate run count C5+C6of the first tuple is at least as great as the row id, the first tuple in index node I2must represent an aggregate run that covers the row id. The column page P3, to which the first index tuple is linked, is visited. Based on the difference between the accumulated run count and the row id, which is less than C5, it is determined that the first column run tuple contains the value for the row id, which is value V5.
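The traversal just described can be made concrete with a small Python sketch (an illustrative simplification, not the structure of any particular embodiment; the run counts are arbitrary values chosen so that the example row id C1+C2+C3+C4+2 falls in the run of V5):

# A child is ("page", runs) for a column page holding (value, count)
# column run tuples, or ("node", tuples) for an index node holding
# (aggregate_run_count, child) index tuples.
def lookup(child, row_id):
    kind, contents = child
    remaining = row_id
    if kind == "page":
        for value, count in contents:            # scan column run tuples
            if count >= remaining:
                return value
            remaining -= count
    else:
        for aggregate_count, sub in contents:    # scan index tuples
            if aggregate_count >= remaining:     # this subtree covers the row
                return lookup(sub, remaining)
            remaining -= aggregate_count         # skip subtree, accumulate
    return None                                  # row id beyond the column

c = {k: v for k, v in enumerate([3, 2, 4, 1, 5, 2, 3, 4], start=1)}  # C1..C8
p1 = ("page", [("V1", c[1]), ("V2", c[2])])
p2 = ("page", [("V3", c[3]), ("V4", c[4])])
p3 = ("page", [("V5", c[5]), ("V6", c[6])])
p4 = ("page", [("V7", c[7]), ("V8", c[8])])
i1 = ("node", [(c[1] + c[2], p1), (c[3] + c[4], p2)])
i2 = ("node", [(c[5] + c[6], p3), (c[7] + c[8], p4)])
i3 = ("node", [(c[1] + c[2] + c[3] + c[4], i1), (c[5] + c[6] + c[7] + c[8], i2)])

# Row id C1+C2+C3+C4+2 resolves to the first run of column page P3.
assert lookup(i3, c[1] + c[2] + c[3] + c[4] + 2) == "V5"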
Reverse Traversal to Find Row ID

In an embodiment of the present invention, the child of an index node has a parent link to the index node. For a column page, a parent link to the parent index node may be stored in the column page or in the page directory. Positional index I can be traversed bottom-up to find the row ids and ranges of row ids of rows that contain a column value. Given a column value, a column run tuple with a matching value is found, and positional index I is traversed to find the corresponding row id or range of row ids. As each node is visited, the aggregate run counts of the index tuples preceding the parent index tuple are accumulated to determine the row id or range of row ids for the column value. For example, for the value V7in column page P4, the parent link of column page P4is followed to visit node I2. It is determined that there is an index tuple preceding the parent index tuple of column page P4; its aggregate run count, C5+C6, is accumulated. Next, the parent link for index node I2is followed to visit node I3. It is determined that there is an index tuple preceding the parent index tuple; C1+C2+C3+C4is added to the accumulated run count of C5+C6. The traversal of positional index I being complete, the range of row ids is found to be (accumulated run count+1) through (accumulated run count+C7). Once the corresponding row id or row ids are determined, row location resolution may be performed to find the rows in the row partition and column partitions.

DBMS Overview

Embodiments of the present invention are used in the context of a DBMS. Therefore, a description of a DBMS is useful. A DBMS manages a database. A DBMS may comprise one or more database servers. A database comprises database data and a database dictionary that are stored on a memory mechanism, such as a set of hard disks, or, in the case of an in-memory DBMS, RAM. Users interact with a database server of a DBMS by submitting to the database server commands that cause the database server to perform operations on data stored in a database. A user may be one or more applications running on a client computer that interact with a database server. Multiple users may also be referred to herein collectively as a user. A database command may be in the form of a database statement that conforms to a database language. A database language for expressing database commands is SQL. There are many different versions of SQL, some standard and some proprietary, and there are a variety of extensions. DDL commands are issued to a database server to create or configure database objects, such as tables, views, or complex data types. A disk-based DBMS uses disk storage to store databases. A disk-based DBMS is designed, programmed, configured, and optimized under the assumption that the data items and related data structures primarily reside on disk. Optimization algorithms, buffer pool management, and indexed retrieval techniques are designed based on this fundamental assumption. One problem with disk storage is that access to the data items and to the data structures is relatively slow. Even when a disk-based DBMS has been configured to cache many data items and data structures in main memory, its performance is hobbled by assumptions of disk-based data residency. These assumptions cannot be easily reversed because they are hard-coded in processing logic, indexing schemes, and data access mechanisms. An in-memory DBMS stores a database primarily in RAM.
By managing a database in RAM and optimizing the data structures and data access algorithms for RAM, an in-memory DBMS is able to provide improved responsiveness and throughput compared even to a fully cached, disk-based DBMS. For example, an in-memory DBMS is designed with the knowledge that the data items reside in RAM in memory pages, and is thus able to take more direct routes to the data items, reducing lengths of code paths and simplifying algorithms and data structures. When the assumption of disk residency is removed, complexity can be reduced. The number of machine instructions drops, buffer pool management may disappear, extra copies of the data items and/or data structures are not needed, and indexes shrink. Database statements may be computed faster. An in-memory DBMS may provide database persistency by, for example, archiving data from main memory to disk, or by maintaining a disk-based or flash-based transaction log. A dual-memory DBMS may store a portion of a database as an in-memory database and another portion of the database as a disk-based database. Such a DBMS is configured to handle the complexities of both types of database storage. The in-memory database may be a copy or mirrored version of a portion of the disk-based database. Alternatively, an in-memory portion of the database may comprise database objects that are different than database objects in the disk-based database. Examples of a dual-memory DBMS are described in U.S. Provisional Patent Application No. 61/880,852 by Vineet Marwah, Jesse Kamp, Amit Ganesh, et al. (Oracle International Corporation as Applicant), entitled Mirroring, in Memory, Data From Disk To Improve Query Performance, filed on Sep. 21, 2013, the content of which is incorporated herein by reference. A multi-node DBMS is made up of interconnected nodes that share access to the same database. Typically, the nodes are interconnected via a network and share access, in varying degrees, to shared storage, e.g. shared access to a set of disk drives and the data blocks stored thereon. The nodes in a multi-node database system may be in the form of a group of computers (e.g. workstations, personal computers) that are interconnected via a network. Alternatively, the nodes may be the nodes of a grid, which is composed of nodes in the form of server blades interconnected with other server blades on a rack. Each node in a multi-node database system hosts a database server. A server, such as a database server, is a combination of integrated software components and an allocation of computational resources, such as memory, a node, and processes on the node for executing the integrated software components on a processor, the combination of the software and computational resources being dedicated to performing a particular function on behalf of one or more clients. A database is defined by a database dictionary. The database dictionary contains metadata that defines database objects physically or logically contained in the database. In effect, a database dictionary defines the totality of a database. Database objects include tables, columns, data types, users, user privileges, and storage structures used for storing database object data. The database dictionary is modified according to DDL commands issued to add, modify, or delete database objects. For example, an in-memory DBMS receives a DDL statement that declares a table and certain columns of the table.
The DDL statement may declare column groups, the columns that belong to each of the column groups, and that the column group is column-major. A column may be declared row-major either explicitly in the DDL statement or by default, by not explicitly specifying otherwise in the DDL statement. Alternatively, if the DDL statement does not specify whether a column is row-major or column-major, the columns may by default be column-major. In response to receiving the DDL statement, an in-memory DBMS modifies its database dictionary to add metadata defining the table, a column group, the column group as column-major, the columns that belong to the column group, and one or more row-major columns. Further in response to receiving the DDL statement, the in-memory DBMS creates column partitions for the column group, and one or more row partitions for the row-major columns. A database dictionary is referred to by a DBMS to determine how to execute database commands submitted to the DBMS. Changes to a database in a DBMS are made using transaction processing. A database transaction is a set of operations that change database data. In a DBMS, a database transaction is initiated in response to a database statement requesting a change, such as a data manipulation language (DML) statement requesting an update, an insert of a row, or a delete of a row. Committing a transaction refers to making the changes for a transaction permanent. Under transaction processing, all the changes for a transaction are made atomically: either all the changes are committed, or the transaction is rolled back.

Hardware Overview

According to one embodiment of the present invention, the techniques described herein are implemented by one or more special-purpose computing devices. The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques. The special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques. For example,FIG.6is a block diagram that illustrates a computer system600upon which an embodiment of the invention may be implemented. Computer system600includes a bus602or other communication mechanism for communicating information, and a hardware processor604coupled with bus602for processing information. Hardware processor604may be, for example, a general purpose microprocessor. Computer system600also includes a main memory606, such as a random access memory (RAM) or other dynamic storage device, coupled to bus602for storing information and instructions to be executed by processor604. Main memory606also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor604.
Such instructions, when stored in non-transitory storage media accessible to processor604, render computer system600into a special-purpose machine that is customized to perform the operations specified in the instructions. Computer system600further includes a read only memory (ROM)608or other static storage device coupled to bus602for storing static information and instructions for processor604. A storage device610, such as a magnetic disk or optical disk, is provided and coupled to bus602for storing information and instructions. Computer system600may be coupled via bus602to a display612, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device614, including alphanumeric and other keys, is coupled to bus602for communicating information and command selections to processor604. Another type of user input device is cursor control616, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor604and for controlling cursor movement on display612. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. Computer system600may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system600to be a special-purpose machine. According to one embodiment of the present invention, the techniques herein are performed by computer system600in response to processor604executing one or more sequences of one or more instructions contained in main memory606. Such instructions may be read into main memory606from another storage medium, such as storage device610. Execution of the sequences of instructions contained in main memory606causes processor604to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device610. Volatile media includes dynamic memory, such as main memory606. Common forms of storage media include, for example, a floppy disk, a flexible disk, a hard disk, a solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, an NVRAM, or any other memory chip or cartridge. Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus602. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor604for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer.
The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system600can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus602. Bus602carries the data to main memory606, from which processor604retrieves and executes the instructions. The instructions received by main memory606may optionally be stored on storage device610either before or after execution by processor604. Computer system600also includes a communication interface618coupled to bus602. Communication interface618provides a two-way data communication coupling to a network link620that is connected to a local network622. For example, communication interface618may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface618may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface618sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. Network link620typically provides data communication through one or more networks to other data devices. For example, network link620may provide a connection through local network622to a host computer624or to data equipment operated by an Internet Service Provider (ISP)626. ISP626in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet”628. Local network622and Internet628both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link620and through communication interface618, which carry the digital data to and from computer system600, are example forms of transmission media. Computer system600can send messages and receive data, including program code, through the network(s), network link620and communication interface618. In the Internet example, a server630might transmit a requested code for an application program through Internet628, ISP626, local network622and communication interface618. The received code may be executed by processor604as it is received, and/or stored in storage device610, or other non-volatile storage for later execution. In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The sole and exclusive indicator of the scope of the invention, and what is intended by the applicants to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction.
48,697
11860831
DETAILED DESCRIPTION

Under conventional approaches, a platform for entering data is provided. Through platforms provided under conventional approaches, a user has to manually define each data collection construct for use in collecting data, regardless of whether the data collection constructs share commonalities or are otherwise related. Specifically, under conventional approaches a user has to manually define an entire data collection construct without sharing code commonalities with other data collection constructs associated with the data collection construct. Additionally, through platforms provided under conventional approaches, a user has to manually define each data storage construct for storing data collected through a data collection construct, regardless of whether the data storage constructs share commonalities or are otherwise related. Specifically, under conventional approaches a user has to manually define an entire data storage construct without sharing code commonalities with other data storage constructs associated with the data storage construct. A claimed solution rooted in computer technology overcomes problems specifically arising in the realm of computer technology. In various embodiments, a user can provide user input for defining a data set. User input provided by a user defining a data set can be used in collecting data for the data set. In certain embodiments, a data collection construct can be defined for collecting data in the data set based on the user input. The user input can be used to generate and/or update a data entry user interface for purposes of collecting data in the data set. For example, the data entry user interface can be automatically generated and/or updated when the user defines the data set. The data entry user interface can be compatible with different types of front end systems. In various embodiments, a data storage construct for storing, in a datastore, data of the data set input through the data entry user interface of the data collection construct is automatically defined based on the user input. A data storage construct can be defined by automatically defining table schema (and/or other schema) and index mappings for retrieving data of the data set input through the data entry user interface according to the data collection construct and stored in the datastore. In some embodiments, the data storage construct does not define and/or otherwise enforce any schema (e.g., if the datastore lacks a schema or is a blob store). For example, the data entry user interface can validate the user input and provide feedback directly to the user. Further, the data storage construct can be defined based on the user input while refraining from querying the user for additional input for defining the data storage construct. In various embodiments, queries for use in retrieving the data in the data set input through the data entry user interface and stored in the datastore using the data storage construct are automatically defined. Queries for use in retrieving the data in the data set input through the data entry user interface and stored in the datastore can be automatically defined as part of defining the data storage construct. In certain embodiments, additional user input indicating modifications to the data set can be received. Additional user input indicating modifications to the data set can be received through the data entry user interface included as part of the defined data collection construct.
In various embodiments, the data collection construct, the data storage construct, and the queries are updated based on the additional user input indicating modifications to the data set. FIG.1illustrates an example environment100for entering data. The example environment100includes a repository datastore102. In some embodiments, the repository datastore102comprises a central repository datastore configured to store data in a centralized location. In some embodiments, the repository datastore102may comprise one or more non-central repositories (e.g., distributed repositories) instead of, or in addition to, a central repository. The one or more non-central repositories may be configured to store data in one or more non-centralized locations. This may allow, for example, different entities having different requirements for permissions to share the system for data entry/collection without sharing a data storage construct. Additionally, this may allow users to independently scale out the data storage construct for high data availability without affecting other aspects of the system. It will be appreciated that reference to a central repository herein may thus, in some embodiments, encompass one or more non-central repositories. For example, the repository datastore102can store data for an enterprise in a centralized location. Further, the repository datastore102can store data in a location remote from a source of the data. For example, the repository datastore102can be implemented as a cloud-based datastore configured to store data remote from an enterprise system that generates the data. As shown inFIG.1, the example environment100also includes a customizable data entry system104. The example environment100can include one or more processors and memory. The one or more processors and memory of the example environment100can be included as part of the customizable data entry system104. The processors can be configured to perform various operations of the customizable data entry system104by interpreting machine-readable instructions. The customizable data entry system104can be implemented, at least in part, through, or otherwise accessed through, a graphical user interface presented to a user. In various embodiments, the customizable data entry system104can be implemented, at least in part, through a graphical user interface presented to a user as part of a data entry user interface. In various embodiments, the customizable data entry system104is configured to store and retrieve data stored in the repository datastore102. The customizable data entry system104can store and retrieve data stored in the repository datastore102through one or an applicable combination of a local area network, a wide area network, an enterprise network, and a local device. In various embodiments, the customizable data entry system104can store and retrieve data stored in the repository102that is entered through the customizable data entry system104by a user. More specifically, the customizable data entry system104can store and retrieve all or portions of data in a data set that is input through the customizable data entry system104and stored in the repository102by the customizable data entry system104. In various embodiments, the customizable data entry system104can send all or portions of a data set stored locally on a system, or as part of a network on which the customizable data entry system104is implemented, to the repository102, for use in storing all or portions of the data set at the repository102.
In various embodiments, the customizable data entry system104can present, or cause presentation of, a data entry graphical user interface, hereinafter referred to as a data entry user interface, to a user for use in inputting data by the user. The customizable data entry system104can generate a data entry user interface based on input received from a user as part of a data collection construct. Additionally, the customizable data entry system104can modify an already created data entry user interface based on input received from a user as part of modifying the data collection construct according to the user input. In some embodiments, the customizable data entry system104can automatically generate and/or update the data entry user interface (e.g., when the user defines a data set). The data entry user interface can be compatible with different types of front end systems. As shown inFIG.1, in some embodiments, the customizable data entry system104can include a data collection construct management engine106, a data storage construct management engine108, a repository storage engine112, and a datastore110. The data collection construct management engine106, the data storage construct management engine108, and the repository storage engine112can be executed by the processor(s) of the customizable data entry system104to perform various operations including those described in reference to the data collection construct management engine106, the data storage construct management engine108, and the repository storage engine112. In various embodiments, the data collection construct management engine106is configured to define a data collection construct for use in collecting data in a data set from a user. In defining a data collection construct for use in collecting data in a data set, the data collection construct management engine106can generate and update a data collection construct. For example, the data collection construct management engine106can modify an already created data collection construct. A data collection construct can include a type of data in a data set to collect, a format in which to collect data in a data set, needed fields for collecting data, rules associated with collecting the data through a data collection construct, and a data entry user interface for use in collecting data in a data set. For example, a data collection construct created by the data collection construct management engine106can include a data entry user interface with fields a user can populate based on a data type defined for the fields. In another example, the data collection construct management engine106can modify a data collection construct by adding a field to a form as part of a data entry user interface. In yet another example, a data collection construct created by the data collection construct management engine106can include validation constraints for validating data entered through the data collection construct. In various embodiments, the data collection construct management engine106is configured to define a data collection construct based on user input. In defining a data collection construct based on user input, the data collection construct management engine106can generate a new data collection construct for use in collecting data based on the user input.
For example, the data collection construct management engine106can generate a data entry user interface of a specific format for collecting data of a specific type, as indicated by user input indicating the format in which to collect the specific type of data. Further, in defining a data collection construct based on user input, the data collection construct management engine106can modify an already created data collection construct for use in collecting data based on the user input. For example, the data collection construct management engine106can change a format of a data entry user interface of an already created data collection construct based on user input specifying a new format. In various embodiments, the data collection construct management engine106is configured to use a previously defined data collection construct to define a data collection construct. For example, the data collection construct management engine106can use a form of a previously defined data collection construct to define a new data collection construct. In using a previously defined data collection construct to define a data collection construct, the data collection construct management engine106can group, chain, or nest the defined data collection construct with the previously defined data collection construct. For example, the data collection construct management engine106can associate a data collection construct used in collecting data for a specific organization with a previously defined data collection construct for the organization. In another example, the data collection construct management engine106can nest a data collection construct with a previously defined data collection construct to cause data entered into the data collection construct to also be entered into the previously defined data collection construct after being entered through the data collection construct. In various embodiments, the data collection construct management engine106is configured to analyze data at the data collection construct as it is entered through the data collection construct. In analyzing data at the data collection construct as it is entered through the data collection construct, the data collection construct management engine106can validate the data at the data collection construct. For example, if validation constraints specify that data entered into a field can only include numbers, then the data collection construct management engine106can validate data entered into the field to ensure the entered data does not include letters. In various embodiments, the data collection construct management engine106is configured to present, or cause presentation of, a data entry user interface to a user for purposes of providing functionalities to the user for entering data through the interface. The data collection construct management engine106can present a data entry user interface to a user as defined by a data collection construct. For example, if a defined data collection construct specifies presenting a data entry user interface including a graph node, then the data collection construct management engine106can present a data entry user interface including the graph node to the user. In another example, if a data collection construct is modified to change a user interface from including a graph node to a table row, then the data collection construct management engine106can modify the data entry user interface from presenting the graph node to presenting the table row.
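By way of a non-limiting Python sketch (all names are hypothetical and introduced purely for illustration), a data collection construct of the kind described above, with typed fields, a layout, validation constraints, and entry-time validation such as the numbers-only rule in the example, might look like this:

from dataclasses import dataclass, field

@dataclass
class FieldSpec:
    name: str
    data_type: type                                   # type of data to collect
    validators: list = field(default_factory=list)    # validation constraints

@dataclass
class DataCollectionConstruct:
    name: str
    layout: str            # e.g. "form", "table row", or "graph node"
    fields: list           # needed fields for collecting data

def validate_entry(construct, entry):
    # Validate data at the data collection construct as it is entered;
    # an empty list of violations means the entry is valid.
    errors = []
    for spec in construct.fields:
        value = entry.get(spec.name)
        for check in spec.validators:
            if not check(value):
                errors.append(f"field '{spec.name}' rejected value {value!r}")
    return errors

# A form whose "age" field accepts only numbers, per the example above.
incident_form = DataCollectionConstruct(
    name="incident_report",
    layout="form",
    fields=[
        FieldSpec("description", str),
        FieldSpec("age", int, validators=[lambda v: str(v).isdigit()]),
    ],
)
assert validate_entry(incident_form, {"description": "x", "age": "abc"})       # letters rejected
assert not validate_entry(incident_form, {"description": "x", "age": "42"})    # numbers accepted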
In various embodiments, the data storage construct management engine108is configured to define a data storage construct. A data storage construct created by the data storage construct management engine108can include either or both table schema and index mappings for storing data through the data storage construct. Additionally, a data storage construct created by the data storage construct management engine108can include queries for use in retrieving data stored according to the data storage construct. For example, a data storage construct created by the data storage construct management engine108can include queries for use in presenting to a user data entered into fields as the user enters the data into the fields. The data storage construct management engine108can define a data storage construct for use in storing data entered through a data collection construct defined by the data collection construct management engine106. Additionally, the data storage construct management engine108can define a data storage construct for use in storing data at the datastore110. In various embodiments, the datastore110can be implemented locally with respect to the customizable data entry system104. For example, the datastore110can be implemented on a device used to present a data entry user interface to a user in the operation of the customizable data entry system104. In another example, the datastore110can be implemented within a local area network or an enterprise network of a user entering data through the customizable data entry system104. In various embodiments, the data storage construct management engine108is configured to define a data storage construct specifically associated with a data collection construct. In being associated with a data collection construct, a data storage construct can be used to store data entered through the data collection construct. For example, the data storage construct management engine108can define a new data storage construct associated with a data collection construct. In another example, the data storage construct management engine108can associate an already created data storage construct with a data collection construct when the data collection construct is created. A data storage construct defined by the data storage construct management engine108can be associated with a plurality of data collection constructs. For example, a data storage construct defined by the data storage construct management engine108can be associated with data collection constructs with data entry user interfaces configured to collect data of a specific type. In some embodiments, a data collection construct can be associated with a plurality of data storage constructs. This may, for example, be helpful where different datastores are optimized for different performance goals (e.g., search, availability, batch entry, and/or the like). In various embodiments, the data storage construct management engine108can automatically define a data storage construct for a data collection construct. In automatically defining a data storage construct for a data collection construct, the data storage construct management engine108can automatically define the data storage construct for the data collection construct absent user input specifying how to define the data storage construct. Additionally, in automatically defining a data storage construct for a data collection construct, the data storage construct management engine108can define a data storage construct absent input from a developer.
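One way to picture automatic definition of a data storage construct, absent any user or developer input, is the following continuation of the hypothetical sketch above; the mapping of field types to SQL column types is an assumption made purely for illustration:

SQL_TYPES = {int: "INTEGER", float: "REAL", str: "TEXT"}   # assumed mapping

def define_storage_construct(construct):
    # Derive a table schema and index mappings solely from the data
    # collection construct the user defined.
    columns = ", ".join(
        f"{spec.name} {SQL_TYPES.get(spec.data_type, 'TEXT')}"
        for spec in construct.fields
    )
    table_schema = f"CREATE TABLE {construct.name} ({columns});"
    index_mappings = [                        # index each field for retrieval
        f"CREATE INDEX idx_{construct.name}_{spec.name} "
        f"ON {construct.name} ({spec.name});"
        for spec in construct.fields
    ]
    return table_schema, index_mappings

schema, indexes = define_storage_construct(incident_form)
# schema == "CREATE TABLE incident_report (description TEXT, age INTEGER);"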
In defining a data storage construct for a data collection construct, the data storage construct management engine108can automatically define a data storage construct based on received user input defining the data collection construct. For example, if a data collection construct is defined to collect a specific type of data based on user input, then the data storage construct management engine108can define a data storage construct for the data collection construct based on the specific type of data. In another example, if a data collection construct is defined to collect data using a specific format of a data entry user interface, then the data storage construct management engine108can define a data storage construct for storing data input through the specific format of data entry. In various embodiments, the data storage construct management engine108is configured to automatically define a data storage construct based on data storage construct definition rules. Data storage construct definition rules include applicable rules for defining a data storage construct without specific instructions from a user regarding defining the data storage construct. For example, data storage construct definition rules can indicate one or a combination of table schema, index mappings, and queries to use in defining a data storage construct. Data storage construct definition rules can be specific to aspects of collecting data. For example, data storage construct definition rules can be specific to a data type of data collected using a data collection construct. In another example, data storage construct definition rules can be specific to a form used in collecting data using a data collection construct. In various embodiments, the data storage construct management engine108is configured to analyze data entered through a defined data collection construct and stored according to a defined data storage construct. In analyzing data entered through a defined data collection construct and stored according to a defined data storage construct, the data storage construct management engine108can collect granular metrics of the data, gather analytics of incremental data of the data, or both. For example, the data storage construct management engine108can analyze data input through a data collection construct and stored according to a defined data storage construct to determine average values of the data input into certain fields. In another example, the data storage construct management engine108can analyze data input through a data collection construct and stored according to a defined data storage construct to determine changes to the data over time. In various embodiments, the data storage construct management engine108is configured to validate data stored according to a data storage construct. More specifically, the data storage construct management engine108can validate data stored in the datastore110according to a data storage construct. For example, the data storage construct management engine108can determine whether data stored according to a data storage construct improperly includes null values. In validating data stored according to a data storage construct, the data storage construct management engine108can validate the data according to data validation constraints.
For example, if data validation constraints specify that data cannot include more than three symbols, then the data storage construct management engine108can validate data to check whether data stored according to a data storage construct at the datastore110includes more than three symbols. In some embodiments, the data storage construct management engine108is configured to migrate (e.g., transform) stored data based on changes to the data collection construct. Manual user intervention in migration may be permitted (e.g., for safety and correctness). In various embodiments, the repository storage engine112is configured to control transfer of data to the repository datastore102. The repository storage engine112can control transfer of data collected according to a data collection construct to the repository datastore102. Additionally, the repository storage engine112can control transfer of data stored in a datastore according to a data storage construct to the repository datastore102. The repository storage engine112can control transfer of data collected according to a data collection construct defined by the data collection construct management engine106. Further, the repository storage engine112can control transfer of data stored in the datastore110according to a data storage construct defined by the data storage construct management engine108. In controlling transfer of data, the repository storage engine112can select specific data to transfer and subsequently transfer that data to the repository102. In various embodiments, the repository storage engine112is configured to transfer data to the repository102according to either or both a data collection construct and a data storage construct. More specifically, the repository storage engine112can transfer data to the repository102based on one or a combination of a data type of the data entered, a time at which the data was entered, and a user who entered the data. For example, the repository storage engine112can extract only data that has been updated in the last day, and subsequently transfer that data from the datastore110to the repository102. In another example, the repository storage engine112can extract data of a specific type that has been entered, and subsequently transfer that data from the datastore110to the repository102.
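A rough sketch of such selective transfer, continuing in the same hypothetical style (send_to_repository stands in for whatever call actually moves a row to the repository102):

import time

def transfer_recent(local_rows, send_to_repository, max_age_seconds=86400):
    # Extract only rows updated within the last day, per the example above,
    # and transfer them from the local datastore to the repository.
    cutoff = time.time() - max_age_seconds
    recent = [row for row in local_rows if row["updated_at"] >= cutoff]
    for row in recent:
        send_to_repository(row)
    return len(recent)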
The data collection construct management engine 202 can also receive user input to modify an already existing data collection construct. For example, the data collection construct management engine 202 can receive user input indicating a field to add to a data entry user interface used in collecting data through a data collection construct. As shown in FIG. 2, in some embodiments, the data collection construct management engine 202 can include a user input communication engine 204, a user input datastore 206, a data collection construct definition engine 208, a data collection construct datastore 210, a data entry user interface presentation engine 212, and a data collection construct data analytics engine 214. The user input communication engine 204, the data collection construct definition engine 208, the data entry user interface presentation engine 212, and the data collection construct data analytics engine 214 can be executed by the processor(s) of the data collection construct management engine 202 to perform various operations, including those described in reference to those engines. In various embodiments, the user input communication engine 204 is configured to receive user input regarding data entry. The user input communication engine 204 can receive user input regarding a data collection construct. For example, the user input communication engine 204 can receive user input indicating a form in which to collect data through a data entry user interface. The user input communication engine 204 can receive user input indicating changes to make to an already defined data collection construct. For example, the user input communication engine 204 can receive user input indicating to remove a field from a data entry user interface as part of a data collection construct. The user input communication engine 204 can store received user input related to a data collection construct in the user input datastore 206. In various embodiments, the data collection construct definition engine 208 is configured to define a data collection construct for use in collecting data. The data collection construct definition engine 208 can define a data collection construct based on user input stored in the user input datastore 206 and received by the user input communication engine 204. For example, if user input indicates a data entry user interface should be in a table format, then the data collection construct definition engine 208 can define a data collection construct to include a data entry user interface including a table. The data collection construct definition engine 208 can define a data collection construct to indicate a type of data in a data set to collect, a format in which to collect data in a data set, needed fields for collecting data, rules associated with collecting the data through a data collection construct, and a data entry user interface for use in collecting data in a data set. In various embodiments, the data collection construct definition engine 208 is configured to define a data collection construct using an already existing data collection construct. More specifically, the data collection construct definition engine 208 can use computer executable instructions defining an already existing data collection construct to define a new data collection construct.
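One way such reuse could look is sketched below: copying an existing construct's definition and overriding only what differs. All names here are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions only): define a new data
# collection construct by reusing a previously defined construct,
# overriding only the parts that differ (e.g., the data set name).
import copy

def derive_construct(existing, **overrides):
    """Clone an existing data collection construct and apply overrides."""
    new_construct = copy.deepcopy(existing)
    new_construct.update(overrides)
    return new_construct

enterprise_template = {"name": "template", "format": "table_row",
                       "fields": [{"name": "asset_id", "type": "text"}]}
per_user = derive_construct(enterprise_template, name="alice_assets")
print(per_user["name"], per_user["format"])  # reuses the template's form
```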
As a further example, the data collection construct definition engine 208 can create a data collection construct using computer executable instructions for a data entry user interface in a previously defined data collection construct. The data collection construct definition engine 208 can use a previously created data collection construct to define a new data collection construct based on one or a combination of a data type, an enterprise associated with a user, and a specific user. For example, the data collection construct definition engine 208 can create a data collection construct for a user associated with an enterprise using a previously defined data collection construct for the enterprise. In various embodiments, the data collection construct definition engine 208 is configured to modify an already existing data collection construct. The data collection construct definition engine 208 can modify an already existing data collection construct based on user input stored in the user input datastore 206 and received by the user input communication engine 204. For example, if user input indicates changing the form of a data collection construct from a graph node format to a table row format, then the data collection construct definition engine 208 can modify the data collection construct to include a data entry user interface in the table row format. The graph node format may indicate relationships between entries in a data collection construct. Accordingly, entries of a data storage construct do not necessarily have to stem from the same data set. In another example, if user input indicates adding a field to a data collection construct, then the data collection construct definition engine 208 can change the data collection construct to include the field. In various embodiments, the data collection construct definition engine 208 is configured to generate and update data collection construct data stored in the data collection construct datastore 210 to indicate a defined data collection construct. For example, the data collection construct definition engine 208 can generate data collection construct data stored in the data collection construct datastore 210 to indicate a newly defined data collection construct. In another example, the data collection construct definition engine 208 can modify data collection construct data stored in the data collection construct datastore 210 to indicate changes made to a data collection construct. In various embodiments, the data collection construct definition engine 208 can group, chain, or nest the defined data collection construct with a previously defined data collection construct. For example, the data collection construct definition engine 208 can associate a data collection construct used in collecting data for a specific company with a previously defined data collection construct for the company. In another example, the data collection construct definition engine 208 can nest a data collection construct defined for a user into a previously defined data collection construct defined for the user. In various embodiments, the data entry user interface presentation engine 212 is configured to present a data entry user interface of a defined data collection construct to a user for purposes of receiving data entered through the user interface by the user. The data entry user interface presentation engine 212 can use data collection construct data of a defined data collection construct stored in the data collection construct datastore 210 to present a data entry user interface to a user.
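As a rough illustration of that presentation step, the sketch below renders a plain-text stand-in for a data entry user interface directly from stored construct data; a real engine would render GUI widgets, and the rendering format here is an illustrative assumption.

```python
# Minimal sketch (illustrative assumptions only): derive a data entry user
# interface from stored data collection construct data. A labeled text
# prompt stands in for the GUI widgets a real engine would present.
def render_entry_form(construct_data):
    """Yield one prompt line per field of the data collection construct."""
    lines = [f"Data entry: {construct_data['name']} ({construct_data['format']})"]
    for field in construct_data["fields"]:
        marker = "*" if field.get("required") else " "
        lines.append(f"{marker} {field['name']} [{field['type']}]: ____")
    return "\n".join(lines)

construct_data = {"name": "site_inspections", "format": "table_row",
                  "fields": [{"name": "site", "type": "text", "required": True},
                             {"name": "score", "type": "number"}]}
print(render_entry_form(construct_data))
```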
For example, if data collection construct data of a defined data collection construct indicates presenting a user interface in a graph node form, then the data entry user interface presentation engine 212 can present an interface in graph node form to a user. The data entry user interface presentation engine 212 can modify a data entry user interface presented to a user based on modifications made to a data collection construct. For example, if a data collection construct is modified to remove a field, then the data entry user interface presentation engine 212 can modify a data entry user interface presented to a user by removing the field from the interface. In various embodiments, the data collection construct data analytics engine 214 is configured to perform analytics on data entered through a data collection construct. The data collection construct data analytics engine 214 can perform analytics on data as it is entered through a data collection construct. More specifically, the data collection construct data analytics engine 214 can validate data as it is entered through a data collection construct according to validation constraints. For example, if validation constraints indicate data must be at least three characters long and a user enters data that is only two characters long, then the data collection construct data analytics engine 214 can provide a notification to the user, through a data entry user interface, indicating that the user has entered invalid data. In some embodiments, the data collection construct data analytics engine 214 is configured to cooperate with one or more services (e.g., an event service). This cooperation may allow, for example, services to listen for changes in the datastore and react accordingly. For example, if a new entry is added to a user-defined data set, another service can listen for new entries to the data set and perform searches and/or aggregations on behalf of the user. FIG. 3 illustrates an example environment 300 for defining a data storage construct. As shown in FIG. 3, the example environment 300 includes a data storage construct management engine 302. The example environment 300 can include one or more processors and memory. The one or more processors and memory of the example environment 300 can be included as part of the data storage construct management engine 302. The processors can be configured to perform various operations of the data storage construct management engine 302 by interpreting machine-readable instructions. The data storage construct management engine 302 can be implemented, at least in part, through a graphical user interface presented to a user, or otherwise accessed through such an interface. In various embodiments, the data storage construct management engine 302 is configured to define a data storage construct for use in storing data entered through a data collection construct. The data storage construct management engine 302 can automatically define a data storage construct based on a data collection construct defined according to user input. For example, in defining a data storage construct, the data storage construct management engine 302 can define table schema and index mappings based on a defined data collection construct. In another example, in defining a data storage construct, the data storage construct management engine 302 can define queries for use in retrieving data stored according to the data storage construct, based on a defined data collection construct.
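To ground this, here is a minimal sketch of automatically deriving table schema, index mappings, and retrieval queries from a defined data collection construct; the type mapping and SQL shapes are illustrative assumptions, not the patented implementation.

```python
# Minimal sketch (illustrative assumptions only): automatically define a
# data storage construct (table schema, index mappings, queries) from a
# data collection construct, with no storage-specific user input.
COLUMN_TYPES = {"text": "VARCHAR(255)", "number": "NUMERIC", "date": "DATE"}

def define_storage_construct(collection):
    table = collection["name"]
    columns = {f["name"]: COLUMN_TYPES.get(f["type"], "VARCHAR(255)")
               for f in collection["fields"]}
    indexes = [f["name"] for f in collection["fields"] if f.get("required")]
    col_list = ", ".join(columns)
    queries = {"select_all": f"SELECT {col_list} FROM {table}"}
    for col in indexes:  # one lookup query per indexed column
        queries[f"by_{col}"] = f"SELECT {col_list} FROM {table} WHERE {col} = ?"
    return {"table": table, "columns": columns, "indexes": indexes,
            "queries": queries}

collection = {"name": "site_inspections",
              "fields": [{"name": "site", "type": "text", "required": True},
                         {"name": "score", "type": "number"}]}
print(define_storage_construct(collection)["queries"]["by_site"])
```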
As shown in FIG. 3, in some embodiments, the data storage construct management engine 302 can include a data collection construct datastore 304, a data storage construct definition engine 306, a data storage construct datastore 308, a data storage engine 310, a datastore 312, and a data storage construct data analytics engine 314. The data storage construct definition engine 306, the data storage engine 310, and the data storage construct data analytics engine 314 can be executed by the processor(s) of the data storage construct management engine 302 to perform various operations, including those described in reference to those engines. In various embodiments, the data collection construct datastore 304 is configured to store data collection construct data indicating a defined data collection construct. A data collection construct indicated by data collection construct data stored in the data collection construct datastore 304 can be maintained by an applicable engine for defining a data collection construct, such as the data collection construct management engines described in this paper. Data collection construct data stored in the data collection construct datastore 304 can include a type of data in a data set to collect, a format in which to collect data in a data set, needed fields for collecting data, rules associated with collecting the data through a data collection construct, and a data entry user interface for use in collecting data in a data set. Data collection construct data stored in the data collection construct datastore 304 can be modified based on user input. For example, data collection construct data stored in the data collection construct datastore 304 can be updated to indicate changes made to a defined data collection construct based on user input. In various embodiments, the data storage construct definition engine 306 is configured to define a data storage construct for use in storing data entered through a data collection construct. The data storage construct definition engine 306 can define a data storage construct automatically for a data collection construct based on the data collection construct. More specifically, the data storage construct definition engine 306 can automatically define a data storage construct for a data collection construct absent user input indicating how to define the data storage construct. For example, the data storage construct definition engine 306 can define a data storage construct based on a data type a data collection construct is defined to collect. The data storage construct definition engine 306 can generate and update data storage construct data stored in the data storage construct datastore 308 to indicate a defined data storage construct. In various embodiments, the data storage construct definition engine 306 can define either or both table schema and index mappings in defining a data storage construct. Additionally, the data storage construct definition engine 306 can define queries, included as part of a data storage construct, for use in retrieving data stored according to the data storage construct. For example, the data storage construct definition engine 306 can define queries for use in retrieving specific portions of data stored according to a data storage construct. In various embodiments, the data storage construct definition engine 306 is configured to define a data storage construct according to data storage construct definition rules.
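One plausible shape for such rules is a small registry keyed by attributes of the data collection context, as sketched below; the rule keys and contents are illustrative assumptions, not the claimed rule format.

```python
# Minimal sketch (illustrative assumptions only): data storage construct
# definition rules kept in a registry keyed by data type and enterprise,
# with a generic fallback when no more specific rule matches.
DEFINITION_RULES = {
    ("financial", "acme_corp"): {"column_type": "DECIMAL(18,2)", "index": True},
    ("financial", None):        {"column_type": "NUMERIC", "index": True},
    (None, None):               {"column_type": "VARCHAR(255)", "index": False},
}

def rule_for(data_type=None, enterprise=None):
    """Pick the most specific applicable definition rule."""
    for key in ((data_type, enterprise), (data_type, None), (None, None)):
        if key in DEFINITION_RULES:
            return DEFINITION_RULES[key]
    raise LookupError("no applicable rule")

print(rule_for("financial", "acme_corp"))  # enterprise-specific rule
print(rule_for("notes"))                   # generic fallback
```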
Such data storage construct definition rules can be specific to one or a combination of a data type collected according to a data collection construct, a user who created a data collection construct, a user who is utilizing a data collection construct to enter data, and an entity or enterprise associated with a data collection construct. For example, data storage construct definition rules can be unique to a company with employees using a data collection construct to enter data. In various embodiments, the data storage construct definition engine 306 is configured to define a data storage construct based on already created data storage constructs. For example, the data storage construct definition engine 306 can use an already created data storage construct for storing a specific type of data to create a new data storage construct for storing that type of data. In another example, the data storage construct definition engine 306 can use an already created data storage construct created for a user to define a new data storage construct for the user. In some embodiments, other users may also leverage existing data storage constructs regardless of who the original owner/creator was (e.g., depending on permissions). In various embodiments, the data storage engine 310 is configured to store data in the datastore 312 according to a data storage construct indicated by data storage construct data stored in the data storage construct datastore 308. For example, the data storage engine 310 can store data in the datastore 312 according to index mappings defined as part of a data storage construct. The data storage engine 310 can store data entered through a data collection construct associated with a data storage construct. For example, if a data storage construct is defined for a data collection construct, then the data storage engine 310 can store data entered through the data collection construct using the data storage construct. In various embodiments, the data storage engine 310 is configured to retrieve data from the datastore 312. The data storage engine 310 can retrieve data from the datastore 312 using the data storage construct used to store the data in the datastore 312. For example, the data storage engine 310 can use queries included as part of a data storage construct to retrieve and/or otherwise obtain data stored in the datastore 312 using the data storage construct. For example, the data storage engine 310 can fetch specific records, find records that a user did not know existed, and/or find records related to keywords (e.g., keywords manually entered by a user). As used herein, a record is equivalent to a data entity that is entered in a data collection construct. In various embodiments, the data storage engine 310 is configured to modify a data storage construct based on modifications made to a data collection construct associated with the data storage construct. In modifying a data storage construct based on modifications made to a data collection construct, the data storage engine 310 can modify one or a combination of a table schema, index mappings, and queries of the data storage construct. For example, if a user modifies a data collection construct to include an additional data entry field, then the data storage engine 310 can modify an index mapping of the data storage construct to allow for storage of data entered through the additional data entry field.
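A minimal sketch of that propagation follows: when the collection construct gains a field, the storage construct's columns, index mappings, and canned queries are updated in step. Structures and names are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions only): propagate a newly added
# data entry field into an existing data storage construct by extending
# its columns and index mappings and regenerating its queries.
def propagate_field(storage, field_name, column_type="VARCHAR(255)", indexed=False):
    storage["columns"][field_name] = column_type
    if indexed:
        storage["indexes"].append(field_name)
    col_list = ", ".join(storage["columns"])
    storage["queries"] = {"select_all": f"SELECT {col_list} FROM {storage['table']}"}
    for col in storage["indexes"]:
        storage["queries"][f"by_{col}"] = (
            f"SELECT {col_list} FROM {storage['table']} WHERE {col} = ?")
    return storage

storage = {"table": "site_inspections",
           "columns": {"site": "VARCHAR(255)"}, "indexes": ["site"], "queries": {}}
propagate_field(storage, "inspector", indexed=True)
print(storage["queries"]["by_inspector"])
```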
In various embodiments, the data storage construct data analytics engine 314 is configured to perform analytics on data stored in the datastore 312 according to a data storage construct. The data storage construct data analytics engine 314 can collect granular metrics of the data, gather analytics of incremental changes to the data, or both. For example, the data storage construct data analytics engine 314 can determine a number of null values in a data set. Further, the data storage construct data analytics engine 314 can analyze the data at a storage construct level by analyzing the data as it is stored in the local datastore 312 according to the data storage construct. More specifically, the data storage construct data analytics engine 314 can analyze data stored in the datastore 312 before it is transferred to a repository, e.g., a remote system. In various embodiments, the data storage construct data analytics engine 314 is configured to validate data stored in the datastore 312. The data storage construct data analytics engine 314 can validate data at a data storage construct level, e.g., as it is stored in the datastore 312 and before it is transferred to a repository. In some embodiments, validation may also occur before the data enters the datastore, using the user interface through which the client enters data. Custom validation schemes (or user-defined validation schemes) may be implemented. Custom validation schemes may define, for example, when and/or how validation occurs. The data storage construct data analytics engine 314 can validate data stored in the datastore 312 according to validation constraints. For example, if validation constraints specify a field cannot have a null value, then the data storage construct data analytics engine 314 can validate data to ensure the field does not have a null value. The data storage construct data analytics engine 314 can provide a notification to a user, through a data entry user interface, indicating entered data is invalid if the data storage construct data analytics engine 314 determines the entered data is invalid. FIG. 4 illustrates an example environment 400 for selectively transferring data to a repository. As shown in FIG. 4, the example environment 400 includes a repository 402. The repository 402 can be implemented at a remote location. For example, the repository 402 can be implemented in the cloud. Additionally, the repository 402 can be specific to one or a plurality of entities or enterprises. For example, the repository 402 can be a remote data storage system of a company. As shown in FIG. 4, the example environment 400 also includes a repository storage engine 404. The example environment 400 can include one or more processors and memory. The one or more processors and memory of the example environment 400 can be included as part of the repository storage engine 404. The processors can be configured to perform various operations of the repository storage engine 404 by interpreting machine-readable instructions. In various embodiments, the repository storage engine 404 is configured to selectively transfer stored data to the repository 402. The repository storage engine 404 can selectively transfer data entered through a data collection construct defined based on user input. Additionally, the repository storage engine 404 can selectively transfer data stored according to a data storage construct automatically created based on a data collection construct.
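The sketch below illustrates such selective transfer: filtering locally stored records by recency and type before handing them to a transfer step, subject to a size budget of the kind discussed in the following paragraphs. The record layout and thresholds are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions only): select locally stored
# records for transfer to a remote repository by recency and data type,
# subject to an overall size budget.
from datetime import datetime, timedelta, timezone

def select_for_transfer(records, data_type=None,
                        updated_within=timedelta(days=1),
                        max_bytes=100 * 10**9):
    cutoff = datetime.now(timezone.utc) - updated_within
    selected, budget = [], max_bytes
    for record in records:
        if record["updated_at"] < cutoff:
            continue
        if data_type is not None and record["type"] != data_type:
            continue
        if record["size_bytes"] > budget:
            break  # stop selecting once the next record would exceed the budget
        selected.append(record)
        budget -= record["size_bytes"]
    return selected

now = datetime.now(timezone.utc)
records = [
    {"id": 1, "type": "inspection", "updated_at": now, "size_bytes": 2048},
    {"id": 2, "type": "inspection", "updated_at": now - timedelta(days=3),
     "size_bytes": 4096},
]
print([r["id"] for r in select_for_transfer(records)])  # only record 1
```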
As shown in FIG. 4, in some embodiments, the repository storage engine 404 can include a datastore 406, a data selection engine 408, and a repository data transfer engine 410. The data selection engine 408 and the repository data transfer engine 410 can be executed by the processor(s) of the repository storage engine 404 to perform various operations, including those described in reference to those engines. In various embodiments, the datastore 406 is configured to store data of a data set capable of being transferred to the repository 402. Data stored in the datastore 406 can be entered through a data collection construct defined based on user input. Additionally, data stored in the datastore 406 can be stored according to a data storage construct automatically defined based on a data collection construct. In various embodiments, the data selection engine 408 is configured to select data stored in the datastore 406 to transfer to the repository 402. The data selection engine 408 can select data to transfer to the repository 402 based on one or a combination of a data type of the data, a user who entered the data, and a time when the data was entered or modified. For example, the data selection engine 408 can select data to transfer to the repository 402 once the data is created or updated. The data selection engine 408 can select data to transfer to the repository based on data size. More specifically, the data selection engine 408 can select up to a specific amount of data to transfer to the repository 402. For example, the data selection engine 408 can select 100 GB of data stored in the datastore 406 to transfer to the repository 402. The rate of transfer can be a function of resource constraints and/or client requirements. For example, if the environment 400 is under stress, the amount of data transferred back to the repository 402 at any given time can be kept low (e.g., in order to not add stress to the other system(s) in the environment 400). If the client requires strong consistency guarantees between the data in the repository storage engine 404 and the repository 402, then the transfer of data may be continuous and the resource demands of the engine 404 may be greater. In various embodiments, the data selection engine 408 is configured to perform auditing functionality. The auditing functionality may be toggled on or off. When enabled, the auditing functionality can chronicle, in a fine-grained list, every instance in which data has been inserted, updated, and/or deleted. The resulting audit log can also be sent to the repository 402 for storage. In various embodiments, the repository data transfer engine 410 is configured to transfer data stored in the datastore 406 to the repository 402. The repository data transfer engine 410 can transfer data selected by the data selection engine 408. For example, the repository data transfer engine 410 can transfer a subset of data selected by the data selection engine 408 based on the subset of the data having been updated by a user. The repository data transfer engine 410 can transfer data to the repository 402 at scheduled times. For example, the repository data transfer engine 410 can transfer data to the repository 402 every day at the same time. FIG. 5 illustrates a flowchart of an example method 500, according to various embodiments of the present disclosure. The method 500 may be implemented in various environments including, for example, the environment 100 of FIG. 1. The operations of method 500 presented below are intended to be illustrative.
Depending on the implementation, the example method 500 may include additional, fewer, or alternative steps performed in various orders or in parallel. The example method 500 may be implemented in various computing systems or devices including one or more processors. At block 502, user input defining a data set is received. An applicable engine for receiving user input, such as the user input communication engines described in this paper, can receive user input defining a data set. User input received at block 502 can include applicable information describing a desired way in which to collect data. For example, user input received at block 502 can indicate fields a data entry user interface should have for gathering data in the data set. At block 504, a data collection construct for entering data is defined based on the user input. An applicable engine for defining a data collection construct, such as the data collection construct definition engines described in this paper, can define a data collection construct for entering data based on the user input. A defined data collection construct can include a defined data entry user interface for use by the user in inputting data in the data set. At block 506, a data storage construct for the data collection construct is defined based on the user input used to define the data collection construct. An applicable engine for defining a data storage construct, such as the data storage construct definition engines described in this paper, can automatically define a data storage construct for the data collection construct based on the user input. A data storage construct can be automatically defined based on the data collection construct without receiving explicit input defining the data storage construct from the user. Additionally, a data storage construct can be automatically defined using previously defined data storage constructs. At block 508, queries for use in retrieving the data in the data set entered through the data collection construct and stored using the data storage construct are automatically defined. An applicable engine for defining a data storage construct, such as the data storage construct definition engines described in this paper, can define queries for use in retrieving the data in the data set entered through the data collection construct and stored using the data storage construct. Queries for retrieving the data in the data set can be included as part of the defined data storage construct. At block 510, the data collection construct, the data storage construct, and the queries are automatically updated based on received additional user input indicating modifications to the data set. An applicable engine for defining a data collection construct, such as the data collection construct definition engines described in this paper, can automatically update the data collection construct based on received additional user input indicating modifications to the data set. An applicable engine for defining a data storage construct, such as the data storage construct definition engines described in this paper, can automatically update the data storage construct and the queries based on received additional user input indicating modifications to the data set.

Hardware Implementation

The techniques described herein are implemented by one or more special-purpose computing devices.
The special-purpose computing devices may be hard-wired to perform the techniques, or may include circuitry or digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques. The special-purpose computing devices may be desktop computer systems, server computer systems, portable computer systems, handheld devices, networking devices or any other device or combination of devices that incorporate hard-wired and/or program logic to implement the techniques. Computing device(s) are generally controlled and coordinated by operating system software, such as iOS, Android, Chrome OS, Windows XP, Windows Vista, Windows 7, Windows 8, Windows Server, Windows CE, Unix, Linux, SunOS, Solaris, Blackberry OS, VxWorks, or other compatible operating systems. In other embodiments, the computing device may be controlled by a proprietary operating system. Conventional operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, and I/O services, and provide user interface functionality, such as a graphical user interface (“GUI”), among other things. FIG. 6 is a block diagram that illustrates a computer system 600 upon which any of the embodiments described herein may be implemented. The computer system 600 includes a bus 602 or other communication mechanism for communicating information, and one or more hardware processors 604 coupled with bus 602 for processing information. Hardware processor(s) 604 may be, for example, one or more general purpose microprocessors. The computer system 600 also includes a main memory 606, such as a random access memory (RAM), cache and/or other dynamic storage devices, coupled to bus 602 for storing information and instructions to be executed by processor 604. Main memory 606 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 604. Such instructions, when stored in storage media accessible to processor 604, render computer system 600 into a special-purpose machine that is customized to perform the operations specified in the instructions. The computer system 600 further includes a read only memory (ROM) 608 or other static storage device coupled to bus 602 for storing static information and instructions for processor 604. A storage device 610, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to bus 602 for storing information and instructions. The computer system 600 may be coupled via bus 602 to a display 612, such as a cathode ray tube (CRT) or LCD display (or touch screen), for displaying information to a computer user. An input device 614, including alphanumeric and other keys, is coupled to bus 602 for communicating information and command selections to processor 604. Another type of user input device is cursor control 616, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 604 and for controlling cursor movement on display 612.
This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. In some embodiments, the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor. The computing system 600 may include a user interface module to implement a GUI that may be stored in a mass storage device as executable software codes that are executed by the computing device(s). This and other modules may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. In general, the word “module,” as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, C or C++. A software module may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software modules may be callable from other modules or from themselves, and/or may be invoked in response to detected events or interrupts. Software modules configured for execution on computing devices may be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution). Such software code may be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware modules may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors. The modules or computing device functionality described herein are preferably implemented as software modules, but may be represented in hardware or firmware. Generally, the modules described herein refer to logical modules that may be combined with other modules or divided into sub-modules despite their physical organization or storage. The computer system 600 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 600 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 600 in response to processor(s) 604 executing one or more sequences of one or more instructions contained in main memory 606. Such instructions may be read into main memory 606 from another storage medium, such as storage device 610. Execution of the sequences of instructions contained in main memory 606 causes processor(s) 604 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
The term “non-transitory media,” and similar terms, as used herein refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 610. Volatile media includes dynamic memory, such as main memory 606. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same. Non-transitory media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between non-transitory media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 602. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 604 for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 600 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 602. Bus 602 carries the data to main memory 606, from which processor 604 retrieves and executes the instructions. The instructions received by main memory 606 may optionally be stored on storage device 610 either before or after execution by processor 604. The computer system 600 also includes a communication interface 618 coupled to bus 602. Communication interface 618 provides a two-way data communication coupling to one or more network links that are connected to one or more local networks. For example, communication interface 618 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 618 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN). Wireless links may also be implemented. In any such implementation, communication interface 618 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. A network link typically provides data communication through one or more networks to other data devices. For example, a network link may provide a connection through a local network to a host computer or to data equipment operated by an Internet Service Provider (ISP).
The ISP in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet”. Local network and Internet both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link and through communication interface 618, which carry the digital data to and from computer system 600, are example forms of transmission media. The computer system 600 can send messages and receive data, including program code, through the network(s), network link and communication interface 618. In the Internet example, a server might transmit a requested code for an application program through the Internet, the ISP, the local network and the communication interface 618. The received code may be executed by processor 604 as it is received, and/or stored in storage device 610, or other non-volatile storage for later execution. Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code modules executed by one or more computer systems or computer processors comprising computer hardware. The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The various features and processes described above may be used independently of one another, or may be combined in various ways. All possible combinations and sub-combinations are intended to fall within the scope of this disclosure. In addition, certain method or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate. For example, described blocks or states may be performed in an order other than that specifically disclosed, or multiple blocks or states may be combined in a single block or state. The example blocks or states may be performed in serial, in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The example systems and components described herein may be configured differently than described. For example, elements may be added to, removed from, or rearranged compared to the disclosed example embodiments. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. Any process descriptions, elements, or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process.
Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted, executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art. It should be emphasized that many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure. The foregoing description details certain embodiments of the invention. It will be appreciated, however, that no matter how detailed the foregoing appears in text, the invention can be practiced in many ways. As is also stated above, it should be noted that the use of particular terminology when describing certain features or aspects of the invention should not be taken to imply that the terminology is being re-defined herein to be restricted to including any specific characteristics of the features or aspects of the invention with which that terminology is associated. The scope of the invention should therefore be construed in accordance with the appended claims and any equivalents thereof.

Engines, Components, and Logic

Certain embodiments are described herein as including logic or a number of components, engines, or mechanisms. Engines may constitute either software engines (e.g., code embodied on a machine-readable medium) or hardware engines. A “hardware engine” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various example embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware engines of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware engine that operates to perform certain operations as described herein. In some embodiments, a hardware engine may be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware engine may include dedicated circuitry or logic that is permanently configured to perform certain operations. For example, a hardware engine may be a special-purpose processor, such as a Field-Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC). A hardware engine may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware engine may include software executed by a general-purpose processor or other programmable processor. Once configured by such software, hardware engines become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware engine mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
Accordingly, the phrase “hardware engine” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. As used herein, “hardware-implemented engine” refers to a hardware engine. Considering embodiments in which hardware engines are temporarily configured (e.g., programmed), each of the hardware engines need not be configured or instantiated at any one instance in time. For example, where a hardware engine comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware engines) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware engine at one instance of time and to constitute a different hardware engine at a different instance of time. Hardware engines can provide information to, and receive information from, other hardware engines. Accordingly, the described hardware engines may be regarded as being communicatively coupled. Where multiple hardware engines exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware engines. In embodiments in which multiple hardware engines are configured or instantiated at different times, communications between such hardware engines may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware engines have access. For example, one hardware engine may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware engine may then, at a later time, access the memory device to retrieve and process the stored output. Hardware engines may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented engines that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented engine” refers to a hardware engine implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented engines. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). 
For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an Application Program Interface (API)). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processors or processor-implemented engines may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the processors or processor-implemented engines may be distributed across a number of geographic locations.

Language

Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Although an overview of the subject matter has been described with reference to specific example embodiments, various modifications and changes may be made to these embodiments without departing from the broader scope of embodiments of the present disclosure. Such embodiments of the subject matter may be referred to herein, individually or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single disclosure or concept if more than one is, in fact, disclosed. The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. It will be appreciated that an “engine,” “system,” “data store,” and/or “database” may comprise software, hardware, firmware, and/or circuitry. In one example, one or more software programs comprising instructions capable of being executable by a processor may perform one or more of the functions of the engines, data stores, databases, or systems described herein. In another example, circuitry may perform the same or similar functions. Alternative embodiments may comprise more, less, or functionally equivalent engines, systems, data stores, or databases, and still be within the scope of present embodiments. For example, the functionality of the various systems, engines, data stores, and/or databases may be combined or divided differently.
“Open source” software is defined herein to be source code that allows distribution as source code as well as compiled form, with a well-publicized and indexed means of obtaining the source, optionally with a license that allows modifications and derived works. The data stores described herein may be any suitable structure (e.g., an active database, a relational database, a self-referential database, a table, a matrix, an array, a flat file, a document-oriented storage system, a non-relational No-SQL system, and the like), and may be cloud-based or otherwise. As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, plural instances may be provided for resources, operations, or structures described herein as a single instance. Additionally, boundaries between various resources, operations, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present disclosure as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. Although the invention has been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred implementations, it is to be understood that such detail is solely for that purpose and that the invention is not limited to the disclosed implementations, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present invention contemplates that, to the extent possible, one or more features of any embodiment can be combined with one or more features of any other embodiment.
DETAILED DESCRIPTION

Systems and methods are described herein that provide dynamic inclusion of custom columns into a logical model. In one embodiment, cloud applications have a data model that may contain an arbitrary number of custom attributes (defined by the customer) or fields stored in a separate child table as name/type and value pairs. As described herein, a business intelligence (BI) platform such as Oracle® Business Intelligence Enterprise Edition (OBIEE), Oracle® Analytics Cloud (OAC) or Oracle® Application Server (OAS) may be extended to integrate the custom attributes and map them dynamically into one or more BI repository files (RPD files), which represent the logical model that is presented to the customer. The custom columns appear dynamically in the dimensions of the logical model, which may be presented in OBIEE, OAC and OAS Data Visualization (DV) graphical user interfaces (GUIs). In one embodiment, custom columns may be represented as name-value pairs in child tables associated with main dimension tables of the logical model. This is a very flexible representation for custom columns. In one embodiment, systems and methods described herein provide a mechanism to define a flexible, dynamic mapping of custom fields to existing BI repository structures. In one embodiment, the number of available custom fields for a dimension table has an upper limit set by system administrators. In practice, a limit of 100 custom fields per dimension table is generally more than sufficient to accommodate all the custom fields that may be desired by the customer, but the limit can be raised or lowered as needed or desired. For example, a DV tool such as Oracle® Utility Global Business Unit's (UGBU's) Analytic Visualization or Oracle® Data Visualization may be configured to integrate up to 100 custom columns per dimension by default, but this could be readily increased to 200 or more custom columns as needed. Or, the default number of columns can be readily decreased. Advantageously, the entire infrastructure that maps custom column metadata into a logical BI repository model is generated. This simplifies the delivery and extension of the mapping capabilities. For example, using the systems and methods described herein, the UGBU's Analytic Visualization tool maps about 20,000 logical custom columns to the dynamic infrastructure (and many more custom columns in a presentation layer). In an implemented example system (such as business intelligence system 105, shown and described below with reference to FIG. 1), this mapping generation runs in only about 20 seconds, although the time taken depends on the size of the model and increases linearly with model size. In one embodiment, this mapping generation is a step in a larger overall generation process that takes less than 2 minutes in the implemented example system. In one embodiment, the systems and methods for dynamic inclusion of custom columns into a logical model map custom columns defined in a table with name-value pairs to the logical model. This design supports a high number of attributes per dimension table, such as 100 attributes. As discussed above, the upper limit can be defined when the BI repository file is created and can be easily extended by configuration changes. In one embodiment, the mapping of custom columns can be done dynamically by customers without modifying the BI repository file.
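To illustrate the idea (and not the patented RPD implementation), the sketch below stores custom attributes as name/type/value rows in a child table, maps them onto a fixed bank of placeholder slots, assigns each mapped slot its customer-defined name, and hides every unmapped slot. The table layout, slot naming, and metadata shape are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions only): custom attributes kept as
# name/type/value rows in a child table of a dimension, dynamically mapped
# onto a fixed bank of placeholder slots with customer-defined names;
# unmapped slots are hidden from the logical model.
MAX_CUSTOM_FIELDS = 100
PLACEHOLDER_SLOTS = [f"UDA{i:03d}" for i in range(1, MAX_CUSTOM_FIELDS + 1)]

# Child table rows: (dimension_row_id, attribute_name, attribute_type, value)
child_table = [
    (17, "REGION_CODE", "string", "EMEA"),
    (17, "RISK_SCORE", "number", "0.82"),
]

# Customer-editable mapping metadata: slot -> source attribute + display name.
mapping_metadata = {
    "UDA001": {"source": "REGION_CODE", "display_name": "Region"},
}

def logical_columns(row_id, rows, metadata):
    """Resolve the visible logical columns for one dimension row."""
    values = {name: value for rid, name, _type, value in rows if rid == row_id}
    visible = {}
    for slot in PLACEHOLDER_SLOTS:
        meta = metadata.get(slot)
        if meta is None:
            continue  # unmapped slot stays hidden
        visible[meta["display_name"]] = values.get(meta["source"])
    return visible

print(logical_columns(17, child_table, mapping_metadata))  # {'Region': 'EMEA'}
```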
After dynamic mapping of the custom columns, the mapped columns will be visible to the customer after DV canvases are refreshed. In one embodiment, the sequence in which custom columns appear can be defined by the customer. In one embodiment, the customer may assign a customer-defined name to each custom column. In one embodiment, only actively mapped custom columns are shown in the DV model. Unmapped columns do not appear in the DV model. Once the infrastructure of custom columns represented as name-value pairs in child tables is established, no schema changes in the physical data model are required to map, unmap, or remap custom fields. Clutter in the DV model is eliminated by not showing unmapped BI repository columns, and usability is maximized by allowing the customer to use meaningful, customer-defined names for custom columns rather than generic names such as UDA001 through UDA100. The systems and methods described herein provide automatic, dynamic inclusion of custom columns into a logical model in minutes or seconds, where it was not previously possible for computing systems to automatically or dynamically include custom columns into a logical model at all, and where adding custom columns to a logical model otherwise requires several person-months of potentially error-prone manual extension of the BI repository file. In one embodiment, the automatic, dynamic inclusion of custom columns into a logical model: 1) defines a large number of placeholders for custom fields on all eligible dimension tables (and/or, in one embodiment, all eligible fact tables) as required by RPD; 2) defines a SQL-based mapping layer between logical model columns and physical table with the custom columns; 3) uses metadata describing the custom data to dynamically assign each custom column in the logical model a user-defined name; and 4) uses metadata describing the custom data to hide unmapped custom column slots to avoid unused custom column fields (potentially thousands of fields) appearing in the logical model. While example systems and methods are described herein with reference to improving Oracle® business intelligence tools such as those based on OBIEE or OAC and that utilize BI repository files, the solution approach may be generalized to other BI solutions and Data Warehouse solutions. In particular, the approach described herein may be used for dimensional reporting even if OBIEE and OAC are not used. For example, other BI tools such as MicroStrategy's business intelligence applications or IBM's Cognos analytics applications may be improved in a similar manner by the systems and methods for dynamic inclusion of custom columns into a logical model described herein. Further, while example systems and methods are described herein with reference to visualizing data using Oracle® data visualization tools, the systems and methods described herein may also use other data visualization tools such as Tableau and other reporting tools. No action or function described or claimed herein is performed by the human mind. An interpretation that any action or function can be performed in the human mind is inconsistent with and contrary to this disclosure. Further, the techniques described herein were not previously performed manually. Example Business Intelligence Environment FIG.1illustrates one embodiment of a computing system100associated with dynamic inclusion of custom columns into a logical model.
In one embodiment, system100includes a business intelligence system105connected by the Internet110(or another suitable communications network or combination of networks) to an enterprise network115. In one embodiment, business intelligence system105may be an OBIEE or OAC service implementation. In one embodiment, business intelligence system105includes various systems and components such as dynamic custom columns inclusion system120, other business intelligence system components125, data store(s)130, and web interface server135. In one embodiment, dynamic custom columns inclusion system120includes one or more components configured for implementing methods (such as method200), functions, and features described herein associated with dynamic inclusion of custom columns into a logical model. In one embodiment, other business intelligence system components125may include business intelligence applications and functions for retrieving, analyzing, mining, visualizing, transforming, reporting, and otherwise making use of data associated with operation of a business. In one embodiment, other business intelligence system components125may include data gathering components that capture and record the data associated with operation of the business in a data repository such as data store(s)130. In one embodiment, other business intelligence system components125may further include user administration modules for governing the access of users to business intelligence system105. Each of the components of business intelligence system105is configured by logic to execute the functions that the component is described as performing. In one embodiment, the components of business intelligence system105may each be implemented as sets of one or more software modules executed by one or more computing devices specially configured for such execution. In one embodiment, the components of business intelligence system105are implemented on one or more hardware computing devices. In one embodiment, the components of business intelligence system105are each implemented by dedicated computing devices. In one embodiment, the components of business intelligence system105are implemented by a common (or shared) computing device, even though represented as discrete units inFIG.1. In one embodiment, business intelligence system105may be hosted by a dedicated third party, for example in an infrastructure-as-a-service (IAAS), platform-as-a-service (PAAS), or software-as-a-service (SAAS) architecture. In one embodiment, the components of business intelligence system105intercommunicate by electronic messages or signals. These electronic messages or signals may be configured as calls to functions or procedures that access the features or data of the component, such as for example application programming interface (API) calls. Each component of business intelligence system105may parse the content of an electronic message or signal received to identify commands or requests that the component can perform, and in response to identifying the command, the component will automatically perform the command or request. Enterprise network115may be associated with a business. For simplicity and clarity of explanation, enterprise network115is represented by an on-site local area network140to which one or more personal computers145or servers150are operably connected, along with one or more remote user computers155or mobile devices160that are connected to the enterprise network115through the Internet110.
Each personal computer145, remote user computer155, or mobile device160is generally dedicated to a particular end user, such as an employee or contractor associated with the business, although such dedication is not required. The personal computers145and remote user computers155can be, for example, a desktop computer, laptop computer, tablet computer, or other device having the ability to connect to local area network140or Internet110. Mobile device160can be, for example, a smartphone, tablet computer, mobile phone, or other device having the ability to connect to local area network140or Internet110through wireless networks, such as cellular telephone networks or Wi-Fi. Users of the enterprise network115interface with business intelligence system105across the Internet110(or another suitable communications network or combination of networks). In one embodiment, remote computing systems (such as those of enterprise network115) may access information or applications provided by business intelligence system105through web interface server135. In one embodiment, the remote computing system may send requests to and receive responses from web interface server135. In one example, access to the information or applications may be effected through use of a web browser on a personal computer145, remote user computer155or mobile device160. For example, these computing devices145,155,160of the enterprise network115may request and receive a web-page-based graphical user interface (GUI) for dynamically configuring (for example, setting up or removing) custom columns for use in business intelligence system105. In one example, web interface server135may present HTML code to personal computer145, server150, remote user computer155or mobile device160for these computing devices to render into the GUI for business intelligence system105. In another example, communications may be exchanged between web interface server135and personal computer145, server150, remote user computer155or mobile device160, and may take the form of remote representational state transfer (REST) requests using JavaScript object notation (JSON) as the data interchange format for example, or simple object access protocol (SOAP) requests to and from XML servers. For example, computers145,150,155of the enterprise network115may request information included in the custom columns, or may request information derived at least in part from information included in the custom columns (such as analytics results based at least in part on the custom columns). In one embodiment, data store(s)130includes one or more operational databases configured to store and serve a broad range of information relevant to the operation of a business, such as data about enterprise resource planning, customer relationship management, finance and accounting, order processing, time and billing, inventory management and distribution, employee management and payroll, calendaring and collaboration, product information management, demand & material requirements planning, purchasing, sales, sales force automation, marketing, ecommerce, vendor management, supply chain management, product lifecycle management, descriptions of hardware assets and their statuses, production output, shipping and tracking information, and any other information collected by the business. Such operational information may be stored in the operational database in real-time at the time the information is collected.
In one embodiment, the data store(s)130includes a mirror or copy database for each operational database which may be used for disaster recovery, or secondarily to support read-only operations. In one embodiment, the operational database is an Oracle® database. In some example configurations, data store(s)130may be implemented using one or more Oracle® Exadata compute shapes, network-attached storage (NAS) devices and/or other dedicated server device. Example Method for Dynamic Custom Column Inclusion In one embodiment, each step of computer-implemented methods described herein may be performed by a processor (such as processor810as shown and described with reference toFIG.8) of one or more computing devices (i) accessing memory (such as memory815and/or other computing device components shown and described with reference toFIG.8) and (ii) configured with logic to cause the system to execute the step of the method (such as dynamic custom columns inclusion logic830shown and described with reference toFIG.8). For example, the processor accesses and reads from or writes to the memory to perform the steps of the computer-implemented methods described herein. These steps may include (i) retrieving any necessary information, (ii) calculating, determining, generating, classifying, or otherwise creating any data, and (iii) storing any data calculated, determined, generated, classified, or otherwise created. References to storage or storing indicate storage as a data structure in memory or storage/disks of a computing device (such as memory815, or storage/disks835of computing device805or remote computers865shown and described with reference toFIG.8). In one embodiment, each subsequent step of a method commences automatically in response to parsing a signal received or stored data retrieved indicating that the previous step has been performed at least to the extent necessary for the subsequent step to commence. Generally, the signal received or the stored data retrieved indicates completion of the previous step. FIG.2illustrates one embodiment of a method200associated with dynamic inclusion of custom columns into a logical model. In one embodiment, the steps of method200are performed by dynamic custom columns inclusion system120(as shown and described with reference toFIGS.1and7, and elsewhere herein). In one embodiment, dynamic custom columns inclusion system120is a special purpose computing device (such as computing device805) configured with dynamic custom columns inclusion logic830. In one embodiment, dynamic custom columns inclusion system120is a module of a special purpose computing device configured with logic830. The method200may be initiated automatically based on various triggers, such as in response to receiving a signal over a network or parsing stored data indicating that (i) a user (or administrator) of dynamic custom columns inclusion system120has initiated method200, (ii) method200is scheduled to be initiated at defined times or time intervals, or (iii) a user (or administrator) of dynamic custom columns inclusion system120has provided an input indicating selection of an available custom logical column for mapping to a custom physical column, for example through a GUI. The method200initiates at START block205in response to parsing a signal received or stored data retrieved and determining that the signal or stored data indicates that the method200should begin. Processing continues to process block210.
At process block210, the processor maps a selected custom logical column in the logical model to a custom physical column represented as a row in a physical table in real time by assigning a column sequence identifier uniquely associated with the selected custom logical column to the custom physical column. In one embodiment, a logical model is one expression of a dimensional model used for business intelligence. In one embodiment, the custom physical column is represented as a row of a table in the physical layer. In one embodiment, the row is a row in a custom table (also referred to as a characteristics table). In one embodiment, the row is a potential value for a custom column mapped to an available logical column. In one embodiment, the processor receives a selection of a custom physical column and a selection of a custom logical column to which the custom physical column is to be mapped. In one embodiment, these selections may be received from a user through an interface such as a graphical user interface. In one embodiment, the selections may be made automatically by a computing device configured to assign custom columns according to an algorithm such as “most used custom column” or “alphabetical order.” In one embodiment, the processor parses the selections to determine which custom physical column has been chosen, and to determine which column sequence identifier has been selected. The processor identifies the custom logical column associated with the column sequence identifier. The processor automatically generates a mapping that describes the row that represents the custom physical column and that indicates the column sequence identifier. In one embodiment, the mapping definition includes two parts: first, a mapping of the column sequence identifier and a value related to a custom column, and then second, a mapping of the column sequence identifier and value to a physical column. In one embodiment, the processor may store the mapping. For example, the processor may further store the assignment of the column sequence identifier to the custom physical column that describes the mapping in a configuration table of mappings for the logical model. The configuration table may be part of the logical model, and may be maintained as a data structure in data store130. In one embodiment, the processor writes the values of the mapping—the column sequence identifier, a value indicating the custom column (such as a user-defined attribute number), and a physical column identifier—as a data structure that indicates that they are related as a mapping. In one embodiment, the steps of process block210are performed by mapping, definition, and pivot query module715of dynamic column inclusion module710as shown and described with reference toFIG.7. These actions are automatically performed in real-time—in immediate or prompt response to input information—by the processor. Once the processor has thus completed mapping a selected custom logical column in the logical model to a custom physical column represented as a row in a physical table in real time, processing at process block210completes, and processing continues to process block215. At process block215, the processor retrieves a custom column definition for the custom physical column in real time to form an enriched dataset of custom column records from the custom column definition and the assigned column sequence identifier. In one embodiment, the processor retrieves metadata describing the custom physical column from the custom metadata table, for example as sketched below.
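For illustration, the stored mapping assignment of process block210and the metadata retrieval of process block215might resemble the following sketch. The mapping-control table F1_ETL_MP_CTRL and its columns are taken from the example query of TABLE 1 below; the characteristic type code 'EXT-ACCT-ID' and the metadata table CI_CHAR_TYPE_L are assumptions used only for this sketch.

-- Sketch: record that characteristic type 'EXT-ACCT-ID' of child table
-- W1_ASSET_CHAR is mapped to column sequence 1 of dimension W1_ASSET.
INSERT INTO F1_ETL_MP_CTRL
  (BUS_OBJ_CD, TARGET_TBL, SRC_CHAR_TBL, SRC_CHAR_TBL_COL, CHAR_TYPE_CD, CHAR_SEQ_NUM)
VALUES
  ('F1-CharMapping', 'W1_ASSET', 'W1_ASSET_CHAR', 'ADHOC_CHAR_VAL', 'EXT-ACCT-ID', '1');

-- Sketch: retrieve the custom column definition for the mapped column.
SELECT CHAR_TYPE_CD, DESCR
FROM CI_CHAR_TYPE_L  -- assumed metadata table of custom column definitions
WHERE CHAR_TYPE_CD = 'EXT-ACCT-ID';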
The processor joins—for example with a left outer join—the retrieved metadata (on the left of the join) and the values of the custom physical column (on the right of the join). The join operation produces a dataset of column values from the physical table that is enriched by the retrieved metadata describing the custom physical column. In one embodiment, the processor stores the enriched dataset in memory or storage pending dynamic inclusion of the enriched dataset in a placeholder column of the logical model. In one embodiment, the enriched dataset is represented as a row in storage. In one embodiment, the steps of process block215are performed by mapping, definition, and pivot query module715of dynamic column inclusion module710as shown and described with reference toFIG.7. These actions are automatically performed in real-time by the processor in response to the completion of process block210. Once the processor has thus completed retrieving a custom column definition for the custom physical column in real time to form an enriched dataset of custom column records from the custom column definition and the assigned column sequence identifier, processing at process block215completes, and processing continues to process block220. At process block220, the processor pivots the enriched dataset into the selected custom logical column in real time to integrate the custom logical column into the logical model. In one embodiment, in response to the completion of the enriched dataset (or the last of multiple enriched datasets for multiple custom columns) the processor automatically evaluates each possible custom logical column to determine if that column is intended to receive the pivoted enriched dataset. In one embodiment, the processor compares a sequence identifier associated with the custom logical column against a sequence identifier assigned to the custom physical column underlying the enriched dataset during the mapping at process block210. When the processor finds a match between these values, the processor has identified the selected custom logical column. In response to the identification of the selected custom logical column, the processor automatically pivots the enriched dataset from a row to a column form and inserts the output column form into the selected custom logical column. Thus, the processor automatically turns the unique values in the row-shaped enriched dataset into multiple logical columns in the selected custom logical table. The processor saves the logic that generates the custom logical column in the BI repository file to maintain the custom logical column. The custom logical column can now be interacted with as if it is an ordinary (that is, not dynamically generated) column in the logical model. The custom logical column is therefore successfully integrated into the logical model. Note that in one embodiment, multiple discretely mapped and enriched data sets (generated by repeated operation of process blocks210and215for unique custom physical columns and unique column sequence identifiers) may be integrated into the logical model in a single execution of process block220. So, for example, a user may provide an input that maps several custom physical columns respectively to several available custom logical columns. In response to that input with multiple mappings, the processor will automatically re-execute process blocks210and215for each of the multiple mappings.
In response to the completion of process block215for the last of the multiple mappings, process block220will automatically execute once to pivot the enriched dataset for each of the multiple mappings into the custom logical columns to integrate the custom columns into the logical model. In this way, an entire data set of multiple custom physical columns is pivoted into associated logical columns. In one embodiment, the steps of process block220are performed by mapping, definition, and pivot query module715of dynamic column inclusion module710as shown and described with reference toFIG.7. These actions are automatically performed in real-time by the processor in response to the completion of process block215. Once the processor has thus completed pivoting the enriched dataset into the selected custom logical column in real time to integrate the custom logical column into the logical model, processing at process block220completes, and processing continues to process block225. At process block225, the processor presents the logical model including the mapped custom logical columns for access in a business intelligence environment. The presented custom logical columns are therefore made available for use in the business intelligence environment, for example for use in reporting and data analysis applications or functions of business intelligence system105. In one embodiment, in response to the integration of the custom columns into the logical model (the completion of process block220), the processor generates and transmits a message to the business intelligence system105instructing system105to refresh its data sources. In one embodiment, this message is generated and transmitted automatically. In response to the message, system105refreshes its data sources, making the logical model including the mapped custom logical column available for access. In one embodiment, the system presents the custom logical column accessible by text command. In one embodiment, the system presents the custom logical column in a GUI on a display screen as a data source for access through the graphical user interface. For example, the access includes access by a data visualization tool configured to generate a graphical visualization based at least in part on the custom logical column. In one embodiment, the logical model is presented for access by other systems practically immediately, in real-time following integration of the custom logical column. In one embodiment, the graphical visualization is referred to as data visualization (DV). A web-based data visualization graphical user interface is generated by the processor, for example by web interface server135, and transmitted through an HTTP connection to a computer of enterprise network115for viewing and manipulation on a browser of that computer. In one embodiment, the user can drag and drop columns related to the dimensional model, including the custom logical column, and the GUI will alter a visualization presented on the data visualization graphical user interface. In one embodiment, the steps of process block225are performed by mapping, definition, and pivot query module715of dynamic column inclusion module710as shown and described with reference toFIG.7. These actions are automatically performed in real-time by the processor in response to the completion of process block220.
Once the processor has thus completed presenting the logical model including the mapped custom logical columns for access in a business intelligence environment, processing at process block225completes, and processing continues to END block230, where process200ends. On completion of process200, the selected custom logical column is visible to and available for use by business intelligence system105. Note that process200can be performed for multiple custom columns to make multiple custom columns available in business intelligence system105. Example Pre-Configuration of Placeholders in Logical Model FIG.3illustrates one embodiment of a method300for pre-configuring a logical model to dynamically accept custom columns associated with dynamic inclusion of custom columns into a logical model. In one embodiment, the steps of method300are performed by dynamic custom columns inclusion system120, for example by mapping, definition, and pivot query module715of dynamic column inclusion module710(as shown and described with reference toFIGS.1and7, and elsewhere herein). The method300may be initiated automatically based on various triggers, such as in response to receiving a signal over a network or parsing stored data indicating that (i) a user (or administrator) of dynamic custom columns inclusion system120has initiated method300, (ii) method300is scheduled to be initiated at defined times or time intervals, or (iii) a new logical model is being initiated. The method300initiates at START block305in response to parsing a signal received or stored data retrieved and determining that the signal or stored data indicates that the method300should begin. Processing continues to process block310. At process block310, the processor pre-configures the logical model with placeholder logical columns for up to a fixed number of custom logical columns. The selected custom logical column (described at method200process block210above) is selected from among the available placeholder logical columns not already mapped. In one embodiment, the logical model is configured with placeholder logical columns to receive the values of custom columns in the physical layer. Because BI repository files are static, the number of placeholder logical columns is fixed. The number of placeholders is generally set at a sufficiently high value that users of the system will be unlikely to ever include more than that number of custom columns in the logical model. In one embodiment, the number of placeholder logical columns is set to 100. In one embodiment, in response to a user input indicating that custom columns should be enabled in the logical model, the processor automatically adds a fixed number C (for example, C=100) of placeholder logical columns to each dimension of the dimensional model. For example, the processor may receive and execute a ‘create logical dimension’ method to add each of the placeholder logical columns to the logical model. This may be performed immediately in real-time, in response to user requests to enable logical columns, and without waiting through the next ETL cycle before the C logical columns become available to represent custom logical columns. In one embodiment, to create the placeholder logical columns, the system defines additional metadata in (i) a table describing dimensions (or facts) and (ii) a table describing the columns of the dimension table in the BI repository (RPD) file for the logical model.
Metadata indicating that a table is a custom table (or table with custom data) may also be included in the BI repository file. This metadata may be retrieved by executing a SQL SELECT statement for this information against the database catalog (which describes tables, columns, and other database objects). In one embodiment, the create logical dimension method is performed as a loop repeated C times to create a series of uniquely labeled placeholder columns for user-defined attributes (UDAs), also known as custom columns. These may be labeled, for example, UDA001-UDA00C. In one embodiment, the create logical dimension method includes the following steps: (1) identify a logical table associated with a dimension (a dimension table) to which the custom columns are to be added; (2) create a pivot transformer table of the placeholder columns; (3) join the pivot transformer table to the dimension table with a join, such as, in one embodiment, an outer join; (4) map all the created placeholder columns in the join; and (5) map the physical representation of the joined tables to the logical one. Once the processor has thus completed pre-configuring the logical model with placeholder logical columns for up to a fixed number of custom logical columns, processing at process block310completes, and processing continues to process block315. At process block315, the processor associates a unique column sequence identifier with each of the placeholder logical columns in the logical model. In one embodiment, the processor names each column of the tables created by the create table statement in process block310. The processor may assign names for the columns of the table that indicate the dimension that the custom column is associated with, such as “Asset Attribute”, as well as provide a unique user-defined attribute number, UDA001-UDA00C. This unique user-defined attribute number (with or without the “UDA” prefix) may be used as a column sequence identifier for the placeholder columns. Once the processor has thus completed associating a unique column sequence identifier with each of the placeholder logical columns in the logical model, processing at process block315completes, and processing continues to END block320, where process300ends. The logical model is now configured to accept dynamically included custom columns without extract, transform, and load (ETL) operations. Note that pre-configuration method300need occur only once in order to support addition of multiple custom columns (up to a maximum of the fixed number of placeholder logical columns) without ETL operations on the logical model. GUI Input to Initiate Dynamic Custom Column Inclusion In one embodiment, a GUI may include user-selectable or user-manipulable elements for inputting information or commands. Examples of GUI elements include user-selectable graphical buttons, radio buttons, menus, checkboxes, drop-down lists, scrollbars, sliders, spinners, text boxes, icons, labels, progress bars, status bars, toolbars, windows, links or hyperlinks, and dialog boxes. In one example, GUI elements may be selected or manipulated by a user to enter information to be parsed and used by systems such as dynamic custom columns inclusion system120. In one example, GUI elements may be manipulated or interacted with by mouse clicks, mouse drag-and-drops, cursor hovers over the element, or other operations of a cursor controller or text input device.
In one embodiment, prior to beginning method200, the processor accepts an input through a graphical user interface (GUI) indicating that the selected custom logical column is to be mapped to the custom physical column, wherein the mapping, retrieving, pivoting, and presenting steps are performed automatically in response to accepting the input. Thus, the input accepted through the GUI may serve as the trigger to start method200at start block205. In one embodiment, the inputs may be accepted through user-manipulable or user-selectable elements of the GUI. In one embodiment, the input accepted through the GUI may include a submission of a column sequence identifier and a custom physical column made by a GUI in response to selection of a button (for example, “update” button660as shown and described with reference toFIG.6) in the GUI. The submitted column sequence identifier and information identifying the custom physical column are input by the user, and then transmitted to system120in response to the user's selection of a button. The processor in system120receives and accepts the submitted column sequence identifier and information identifying the custom physical column for mapping. In response to accepting these items for mapping, the processor in system120automatically performs the mapping, retrieving, pivoting, and presenting steps in order for the column sequence identifier and information identifying the custom physical column to cause the custom physical column to be available as a custom logical column in the logical model. In one embodiment, a manual refresh of the data sources in the business intelligence environment may be needed prior to the presentation of the logical model including the mapped custom columns for access in the business intelligence environment. In this configuration, only the mapping, retrieving, and pivoting steps are performed automatically in response to accepting the input, and the presenting step is performed automatically in response to a manual refresh of the data sources following the completion of the mapping, retrieving, and pivoting steps. Dynamic Custom Column Removal FIG.4illustrates one embodiment of a method400for removing a dynamically included custom column associated with dynamic inclusion of custom columns into a logical model. In one embodiment, the steps of method400are performed by dynamic custom columns inclusion system120(as shown and described with reference toFIGS.1and7, and elsewhere herein). The method400initiates at START block405in response to parsing a signal received or stored data retrieved and determining that the signal or stored data indicates that the method400should begin. Processing continues to process block410. At process block410, the processor accepts an input through a graphical user interface indicating that the selected custom logical column is to be un-mapped from the custom physical column. In one embodiment, the processor receives and accepts an input from a user through a graphical user interface. The processor parses the input to determine that the input indicates that the mapping between the selected custom logical column and the custom physical column should be deleted, canceled, or otherwise removed. If so, then the processor has completed accepting an input through a graphical user interface indicating that the selected custom logical column is to be un-mapped from the custom physical column, processing at process block410completes, and processing continues to process block415.
If the input does not indicate that the mapping should be removed, then the process will not continue to process block415. At process block415, in response to accepting the input the processor automatically deletes the mapping between the selected custom logical column and the custom physical column in real time. In one embodiment, in response to the determination that the mapping should be removed, the processor automatically deletes the mapping by deleting the column sequence number from its association with the physical column. If the assignment of the column sequence identifier was stored in the configuration table, the column sequence identifier will be deleted from its location in the configuration table. Once the processor has thus completed automatically deleting the mapping between the selected custom logical column and the custom physical column in real time, processing at process block415completes, and processing continues to process block420. At process block420, the processor presents the logical model without the mapped custom logical columns for access in the business intelligence environment in a manner similar to that described at process block225ofFIG.2for presenting the logical model with the mapped custom logical columns. Processing at process block420then completes, and processing continues to END block425, where process400ends. Dimensional Model Representation In a business intelligence system such as system105, a dimensional model describes data for the business intelligence system by relating facts to dimensions. The dimensional model is stored as a business intelligence platform repository file (also referred to as an RPD file or BI repository file). The BI repository file defines logical schemas, physical schemas, and physical-to-logical mappings. The BI repository file represents the dimensional model in three layers: physical layer, business model and mapping layer, and presentation layer. The physical layer defines the objects and relationships of the original data source. The business model and mapping layer defines the business or logical model of the data and specifies the mapping between the logical model and the physical schemas. The presentation layer controls the view of the logical model given to the users. FIG.5illustrates an example of a business intelligence administration tool500(a graphical user interface) showing the layers of an example dimensional model associated with dynamic inclusion of custom columns into a logical model. Tool500shows objects in the BI repository file that defines the dimensional model. Objects of the physical layer of the dimensional model are shown in physical layer viewer505. Objects of the business model and mapping layer of the dimensional model are shown in business model and mapping layer viewer510. Objects of the presentation layer of the dimensional model are shown in presentation layer viewer515. A physical table is an object in a physical layer of a BI repository. The physical table represents or corresponds to a table in a data source such as a live operational database. For example, W1_ASSET_CHAR is a physical table with custom data, or a characteristic table. Table W1_ASSET_CHAR is represented in the physical layer viewer505by virtual table W1_ASSET_CHAR_A_VT520. Virtual table W1_ASSET_CHAR_A_VT520is also a characteristic table, or table of custom values, but is created automatically by a BI repository (RPD) generator tool used to create a BI repository (RPD) file describing the logical model.
The table definition of virtual table W1_ASSET_CHAR_A_VT520is an inline view definition, and virtual table W1_ASSET_CHAR_A_VT520is an inline view of a data set composed of multiple physical tables. In particular, virtual table W1_ASSET_CHAR_A_VT520is defined by a query (such as a SQL query) that when executed causes the system to perform the steps of at least method200, as shown and described herein. The query forms virtual table W1_ASSET_CHAR_A_VT520from physical table W1_ASSET_CHAR, custom metadata, and mapping metadata, implementing the mapping operation. Virtual table W1_ASSET_CHAR_A_VT520includes a key object ASSET_ID525and numerous custom physical columns UDA001-UDA00N530expressed as rows of virtual table520. A logical entity, such as a logical column, is an object in the business model and mapping layer that is generated from objects in the physical layer. In one embodiment, the BI repository file is used by a business intelligence system such as system105to map physical tables to logical entities that represent a star schema. The star schema logically relates dimension tables that describe items (people, places, and things) and a fact table that stores metrics (measurements, observations, or events) about the items. Custom logical columns535are one example of logical entities that are generated logically from objects in the physical layer. Assigning Custom Business Names in the Presentation Layer In one embodiment, descriptive names (if available) may be assigned to each of the custom logical columns, for example as part of or preceding the presentation step at process block225of method200. The descriptive names indicate the type of information represented by the custom logical column. This descriptive name may have a human-readable business meaning to the user, or at least be understood by the user to specify a particular type of information. In one embodiment, the descriptive name may be assigned by a user using a GUI to initially configure a mapping of a custom physical column to a custom logical column. For example, when a user creates an input to map a custom physical column to a custom logical column in the logical model, the user enters a descriptive name of what the physical column represents. In response to receiving the descriptive name, the processor stores the descriptive name in a name field for the logical column being mapped, for example in the configuration table for the logical model. Alternatively, the descriptive name is assigned to the custom physical column, and is accessed through the mapping, if the mapping has been performed. In one embodiment, metadata to define custom data such as metadata to define the descriptive (business) name is separate from the metadata for mapping a custom physical column to a custom logical column. The descriptive name is defined as part of a custom physical column definition process when the custom physical columns are created for use in an underlying business application that the business intelligence system105draws information from. This custom column definition process is independent of the custom column mapping process to create the custom logical columns. The metadata to define custom data is used by the underlying business application and therefore the naming of custom physical columns is separate from the business-intelligence-related functionality. In one embodiment, the descriptive name may be stored in a separate metadata table to support internationalization, as sketched below.
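The exact layout of that table is not prescribed here, but based on the description of codes and per-language names that follows, it might look like the following sketch (table and column names are assumptions, following the style of the label table CI_CHAR_VAL_L in TABLE 1):

-- Sketch (assumed names): per-language descriptive names for custom columns;
-- the column's code plus a language code form the key.
CREATE TABLE CI_CHAR_TYPE_L (
  CHAR_TYPE_CD VARCHAR2(30) NOT NULL,  -- code identifying the custom column
  LANGUAGE_CD  VARCHAR2(3)  NOT NULL,  -- e.g., 'ENG', 'FRA', 'GER'
  DESCR        VARCHAR2(60),           -- descriptive (business) name in that language
  PRIMARY KEY (CHAR_TYPE_CD, LANGUAGE_CD)
);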
The definition of labels (including the descriptive name) in multiple languages is allowed and the user selects a language during login to the business intelligence service. The descriptive name (and other labels) in the selected language are integrated into the presented logical model as one of the last steps before presentation. The descriptive names for the custom logical columns in the selected language are retrieved when the user logs into the system and, in response to the user's login, the descriptive names in the selected language are integrated for the specific user. Thus, when the logical model is shown, the business intelligence system merges the logical model information with the dynamically retrieved labels. In one embodiment, the separate metadata table includes codes associated with each custom column, and then a descriptive name in each available language (such as English, French, Spanish, German, Chinese, Japanese, etc.) for that custom column. The code associated with the column may serve as a key to the metadata table. In one embodiment, the processor assigns the mapped custom logical column a business name that represents the definition of the custom physical column. In order to do so, the processor automatically inserts the statement “VALUEOF(name field for the column)” into a custom display name function of the presentation layer. The processor automatically evaluates this statement and retrieves the value of the stored name field for the column: the user-provided descriptive name of what the physical column represents. The processor automatically assigns the retrieved descriptive name to be the custom display name in the presentation layer. In one embodiment, the functions for automatically assigning a business name are performed by dynamic custom columns inclusion system120, for example by business name assignment module725of dynamic column inclusion module710(as shown and described with reference toFIGS.1and7, and elsewhere herein). Hiding Unmapped Custom Columns in the Presentation Layer As described herein, there are potentially hundreds or thousands of placeholder logical columns, many, or even the majority, of which will not be mapped. For example, in one implemented example system, there are more than 21,000 of these custom column placeholders. If a custom logical column is unmapped—that is, not populated by the values of a custom physical column—then the column is of no use to the user of business intelligence system105. Also, the unmapped columns present visual clutter that reduces usability of the mapped columns among them. The business intelligence system105enables conditional hiding of information. Accordingly, these unmapped placeholder logical columns can be automatically made invisible in the presentation layer. While mapping a particular logical column to a custom physical column does not necessarily mean that there is a value in the logical column, the mapping makes it possible that there could be a value there, and therefore the user should have access to the logical column. A column should be hidden if it is unmapped. In one embodiment, this is enabled by leveraging the existing mapping information, and in particular, the value of the stored name field for the column. As discussed above, mapped logical columns are assigned a descriptive name of what the underlying physical column represents. (Or, alternatively, the descriptive name is assigned to the custom physical column, and is accessed through the mapping, if the mapping has been performed.)
A logical column that is unmapped will not be assigned a descriptive name. Therefore, an expression determining that a column has no mapping can be based on the absence of a descriptive name. The processor evaluates the expression to control whether or not the column should be hidden. In one embodiment, the processor automatically hides all unmapped custom logical columns when presenting the logical model for access. In order to do so, the processor executes a function to conditionally hide an object based on an expression that is true when there is no value for the name field of a custom logical column object, and that is false when there is a value for the name field of the custom logical column object. For example, the processor automatically inserts the statement VALUEOF(name field for the column)=“-” into a conditional “hide object if” function of the presentation layer. This statement will evaluate as true if the null value “-” is found in the name field for the column. As discussed above, the name field will be null if there is no mapping. If the processor determines that the statement evaluates as true, the processor will automatically execute the hide function for the associated column object. The unnamed, unmapped logical column will not be displayed in the presentation layer. In one embodiment, the functions for automatically hiding unmapped custom logical columns are performed by dynamic custom columns inclusion system120, for example by hide unmapped columns module720of dynamic column inclusion module710(as shown and described with reference toFIGS.1and7, and elsewhere herein). Example Query for Dynamic Inclusion of Custom Columns In one embodiment, the mapping, retrieving, and pivoting steps of method200are performed as part of the execution of a single query. One example query for dynamic inclusion of custom columns is shown below in TABLE 1. (In the example query, the maximum number of available custom logical columns is set to 4 for convenience in showing the query. In one embodiment, the maximum number of available custom logical columns would be set to 100, or even higher.)

TABLE 1
01 SELECT ASSET_ID,
02   min(decode(TARGET_COL, '1', CASE trim(SRC_CHAR_TBL_COL)
03     WHEN 'ADHOC_CHAR_VAL' THEN ADHOC_CHAR_VAL
04     ELSE DEFINED_CHAR_VAL END)) UDA001,
05   min(decode(TARGET_COL, '2', CASE trim(SRC_CHAR_TBL_COL)
06     WHEN 'ADHOC_CHAR_VAL' THEN ADHOC_CHAR_VAL
07     ELSE DEFINED_CHAR_VAL END)) UDA002,
08   min(decode(TARGET_COL, '3', CASE trim(SRC_CHAR_TBL_COL)
09     WHEN 'ADHOC_CHAR_VAL' THEN ADHOC_CHAR_VAL
10     ELSE DEFINED_CHAR_VAL END)) UDA003,
11   min(decode(TARGET_COL, '4', CASE trim(SRC_CHAR_TBL_COL)
12     WHEN 'ADHOC_CHAR_VAL' THEN ADHOC_CHAR_VAL
13     ELSE DEFINED_CHAR_VAL END)) UDA004
14 FROM (
15   SELECT TBL.ASSET_ID,
16     trim(E.CHAR_SEQ_NUM) TARGET_COL,
17     TBL.CHAR_TYPE_CD,
18     E.SRC_CHAR_TBL_COL,
19     L.DESCR DEFINED_CHAR_VAL,
20     TBL.ADHOC_CHAR_VAL,
21     ROW_NUMBER() OVER(PARTITION BY TBL.ASSET_ID,
22       TBL.CHAR_TYPE_CD, E.CHAR_SEQ_NUM
23       ORDER BY TBL.EFFDT DESC)
24   FROM W1_ASSET_CHAR TBL
25   LEFT OUTER JOIN CI_CHAR_VAL_L L ON (TBL.CHAR_TYPE_CD =
26     L.CHAR_TYPE_CD AND TBL.CHAR_VAL = L.CHAR_VAL)
27   LEFT OUTER JOIN F1_ETL_MP_CTRL E ON (TBL.CHAR_TYPE_CD =
28     E.CHAR_TYPE_CD
29     AND trim(E.TARGET_TBL) = 'W1_ASSET'
30     AND trim(E.SRC_CHAR_TBL) = 'W1_ASSET_CHAR')
31 ) WHERE TARGET_COL is not null
32   AND trim(BUS_OBJ_CD) = 'F1-CharMapping'
33   AND L.LANGUAGE_CD =
34     'VALUEOF(NQ_SESSION.ASSET_LANGUAGE)'
35   AND TBL.EFFDT <= sysdate

The example query is written in structured query language (SQL).
In one embodiment, the example query is contained in the virtual table W1_ASSET_CHAR_A_VT520discussed above. In the example query, there are two nested select statements: an inner select statement at lines 15-30 and an outer select statement that wraps it. The inner select statement retrieves the metadata defining the custom columns (e.g., the column type and the associated label) and combines it with the custom column values in W1_ASSET_CHAR (aliased TBL in the query) at lines 24-26. The inner select statement also joins—in this case a Left Outer Join—the retrieved metadata with the mapping information at lines 27-30. The join of the retrieved metadata and the mapping information results in an enriched dataset of records. After creation of the enriched dataset of records, the outer select statement pivots the enriched dataset of records generated by the inner select statement into the appropriate target column, as shown at lines 2-13. In each decode statement, the target column is searched for each column sequence identifier in turn: “TARGET_COL, ‘N’” where N=1, 2, 3, 4, and so on to the maximum number of logical columns. The numbered UDA00N are each placeholder logical columns included in the logical model during the initial configuration (pre-configuration) of the logical model. So, the value of the target column is evaluated to determine whether it has been assigned each possible column sequence identifier in turn. If the value of the target column matches a sequence identifier, the enriched dataset of records is pivoted into the placeholder logical column associated with that column sequence identifier. Placeholder logical columns that are not indicated by the target column do not receive the dataset of records. In one embodiment, BI system105may include an integrated pivot function (such as the Oracle DB SQL Pivot extension) that generates a pivot result in a more efficient manner. Where an integrated pivot function is available, lines 2-13 may be replaced by a call to the pivot function producing the target columns UDA001-UDA004 (see the sketch following the Live Data Sources discussion below). In one embodiment, the single query for mapping, retrieving, and pivoting forms an SQL-based mapping layer between logical model columns and a physical table that represents the custom columns as rows. Live Data Sources In one embodiment, the physical data sources for the logical model are operational databases that are “live,” or currently receiving and recording streams of operational data. Reporting on a live operational database provides access to real-time operational data, in contrast to reporting on a data warehouse, where the reporting provides access to data that is periodically extracted from a live operational database, but static in the time between extractions. The term reporting, as used here, refers to data retrieval from a data store. In one embodiment, the logical model describes star schemas built on top of tables in the operational database. The star schemas join logically related tables together to represent business processes to provide business intelligence. In one embodiment, the logical model reports on a live operational database. In one embodiment, the logical model reports on a live recovery mirror of a live operational database. Reporting on the recovery mirror—a parallel copy of the operational database that is more or less idle except for receiving duplicate operational information—reduces load on the live operational database while providing similar access to real time data.
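Returning to the integrated pivot function mentioned above, the following sketch shows how the hand-written min(decode(...)) expressions at lines 2-13 of TABLE 1 might collapse into a single PIVOT clause. It assumes the Oracle SQL PIVOT extension is available; ENRICHED_DATASET and CHAR_VALUE are placeholders standing in for the inner select statement of TABLE 1 and its CASE expression, respectively.

-- Sketch: pivot the enriched dataset into target columns UDA001-UDA004.
SELECT *
FROM (
  SELECT ASSET_ID, TARGET_COL, CHAR_VALUE  -- stands for the inner select of TABLE 1
  FROM ENRICHED_DATASET                    -- placeholder inline view or table
)
PIVOT (
  min(CHAR_VALUE)
  FOR TARGET_COL IN ('1' AS UDA001, '2' AS UDA002, '3' AS UDA003, '4' AS UDA004)
);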
Example GUI—Enabling Custom Attributes Note that custom data may also be referred to herein as “characteristics.”FIG.6illustrates one embodiment of a GUI600for enabling and mapping characteristics to custom columns associated with dynamic inclusion of custom columns into a logical model. In one embodiment, business applications such as Oracle® Utilities business applications manage custom data in characteristics tables that are associated with a solution's maintenance objects. Characteristics of the type Predefined Value and Ad Hoc Value may also be exposed in subject areas of DV tools such as Oracle® Utilities Analytics Visualization so they can be used for analysis and data visualization. In one embodiment, custom values defined by characteristics types (custom column definitions) appear as columns with data type VARCHAR in dimension tables after the characteristics types have been mapped to target dimensions in the DV tool. For example, inFIG.6, the Account dimension includes the characteristic type External Account ID605. Characteristics are displayed at the end of the dimension folder610below the separator==Custom Columns==615. Example GUI—Map Characteristics to Dimensions As discussed above, each dimension whose underlying object supports characteristic tables has been created with a set of empty or free columns, which can be used by implementation teams to map characteristics. Each of these free columns has a unique identifier known as its Column Sequence. To map a characteristic to a target dimension in the DV tool means to select a characteristic type of a table in the transactional application, and to map it to a free dimension column identified by its Column Sequence number. In one embodiment, this may be performed in the characteristic mapping zone620of GUI600. The Column Sequence also determines the order in which characteristics are displayed in the dimension folder. During implementation, the customer may choose to change the Column Sequence number of characteristics or even remove mappings by changing or deleting the sequence number. Note that changing the Column Sequence of a characteristic may break or modify analytic canvases that use the column's previous position. In one embodiment, the mapping of characteristics in the transactional application to dimensions in the analytics application should be performed by a customer user who is an authorized application administrator, and this restriction may be enforced by an applicable user authorization policy. In one embodiment, to map a characteristic to a dimension as a customer user of the GUI600: 1. In the Search Menu of the enterprise or business application (such as Customer Cloud Service, Meter Solution Cloud Service, or Work and Asset Cloud Service for example), enter the Analytics Table. In response, an Analytics Table Search page is displayed. 2. Search for the dimension. In response, a set of results is displayed. 3. Select the description link of the table where characteristics are to be mapped. In response to selecting the description link, the Analytic Dimension page630is shown. 4. Expand the Characteristic Mapping zone620. Characteristic Mapping zone620manages characteristic mapping extensions for the dimensions. By default, only mapped characteristics are listed. 5. In the Characteristic Mapping zone, click the filter icon635to search for a characteristic.
To facilitate the search, the user can use the mapping options640available in search zone645to display all mapped and unmapped characteristics (as shown), or all mapped characteristics, or all unmapped characteristics. The characteristics can further be narrowed by entering a characteristic type at650. 6. Select one or more characteristic types that are to be mapped (for example External Account ID at655) by checking the check box and click Update660. This allows mappings to be adjusted for a selected list of characteristics. 7. Specify a value for the Column Sequence to create a mapping. In the example shown, Column Sequence value 1 is specified, as shown at reference665. Since Column Sequences are unique identifiers of dimension columns, the values that are specified must also be unique. For example, it is good practice to number the column sequence starting at 1 and increasing by 1. If the characteristic mapping was performed while a user was editing a project, then the user can view the new characteristics by clicking Menu670on the project toolbar and clicking Refresh Data Sets. In response to clicking Refresh Data Sets, the newly mapped characteristics will become visible in the project, for example displayed at the end of the dimension folder610below the separator==Custom Columns==615. To remove a characteristic mapping to a dimension: 1. In the Characteristic Mapping zone620, press the filter icon635to search for the characteristic. 2. Select the characteristic types that are to be un-mapped from among the displayed results675and click Update660. 3. Delete the Column Sequence for the mapping and click Save. Note that removing the Column Sequence of a characteristic will break the analytics canvases that used the mapped column sequence. Software Module Embodiments In general, software instructions are designed to be executed by a suitably programmed processor. These software instructions may include, for example, computer-executable code and source code that may be compiled into computer-executable code. These software instructions may also include instructions written in an interpreted programming language, such as a scripting language. In a complex system, such instructions are typically arranged into program modules with each such module performing a specific task, process, function, or operation. The entire set of modules may be controlled or coordinated in their operation by an operating system (OS) or other form of organizational platform. In one embodiment, one or more of the components described herein are configured as modules stored in a non-transitory computer readable medium. The modules are configured with stored software instructions that when executed by at least a processor accessing memory or storage cause the computing device to perform the corresponding function(s) as described herein. FIG.7illustrates a more detailed view700of one embodiment of dynamic custom columns inclusion system120. Dynamic custom columns inclusion system120may include a logical model pre-configuration module705. In one embodiment, logical model pre-configuration module705includes stored software instructions that when executed by a processor cause dynamic custom columns inclusion system120to perform the functions described with reference to process blocks310-315ofFIG.3. Dynamic custom columns inclusion system120may include a dynamic column inclusion module710.
Software Module Embodiments

In general, software instructions are designed to be executed by a suitably programmed processor. These software instructions may include, for example, computer-executable code and source code that may be compiled into computer-executable code. These software instructions may also include instructions written in an interpreted programming language, such as a scripting language. In a complex system, such instructions are typically arranged into program modules with each such module performing a specific task, process, function, or operation. The entire set of modules may be controlled or coordinated in their operation by an operating system (OS) or other form of organizational platform. In one embodiment, one or more of the components described herein are configured as modules stored in a non-transitory computer readable medium. The modules are configured with stored software instructions that when executed by at least a processor accessing memory or storage cause the computing device to perform the corresponding function(s) as described herein.

FIG.7illustrates a more detailed view700of one embodiment of dynamic custom columns inclusion system120. Dynamic custom columns inclusion system120may include a logical model pre-configuration module705. In one embodiment, logical model pre-configuration module705includes stored software instructions that when executed by a processor cause dynamic custom columns inclusion system120to perform the functions described with reference to process blocks310-315ofFIG.3. Dynamic custom columns inclusion system120may include a dynamic column inclusion module710. In one embodiment, dynamic column inclusion module710includes mapping, definition and pivot query module715, hide unmapped columns module720, and business name assignment module725. In one embodiment, mapping, definition and pivot query module715includes stored software instructions that when executed by a processor cause dynamic custom columns inclusion system120to perform the functions described with reference to process blocks210-220ofFIG.2and/or the functions described with reference to the example query for dynamic inclusion of custom columns. In one embodiment, hide unmapped columns module720includes stored software instructions that when executed by a processor cause dynamic custom columns inclusion system120to perform the functions described with reference to hiding unmapped custom columns in the presentation layer. In one embodiment, business name assignment module725includes stored software instructions that when executed by a processor cause dynamic custom columns inclusion system120to perform the functions described with reference to assigning custom business names in the presentation layer. Dynamic custom columns inclusion system120may include a presentation module730. In one embodiment, presentation module730includes stored software instructions that when executed by a processor cause dynamic custom columns inclusion system120to perform the functions described with reference to process block225ofFIG.2and other user interactions with business intelligence system105, or through the presentation layer. Dynamic custom columns inclusion system120may include a GUI module735. In one embodiment, GUI module735includes stored software instructions that when executed by a processor cause dynamic custom columns inclusion system120to perform the functions described with reference toFIG.6, interactions with users through web GUIs or other interfaces presented on computers of enterprise network115, and other user interactions described herein. Dynamic custom columns inclusion system120may include one or more additional modules740,745,750. In one embodiment, one or more of the additional modules740,745,750include stored software instructions that when executed by a processor cause dynamic custom columns inclusion system120to perform other functions described herein.

Cloud or Enterprise Embodiments

In one embodiment, the present system (such as business intelligence system105) is a computing/data processing system including an application or collection of distributed applications for enterprise organizations. The applications and computing system may be configured to operate with or be implemented as a cloud-based networking system, a software as a service (SaaS) architecture, or other type of networked computing solution. In one embodiment the present system is a centralized server-side application that provides at least a graphical user interface including one or more of the functions disclosed herein and that is accessed by many users via computing devices/terminals communicating with the present computing system (functioning as the server) over a computer network.

Computing Device Embodiments

FIG.8illustrates an example computing system800that is configured and/or programmed as a special purpose computing device with one or more of the example systems and methods described herein, and/or equivalents. The example computing device may be a computer805that includes a processor810, a memory815, and input/output ports820operably connected by a bus825.
In one example, the computer805may include dynamic custom columns inclusion logic830configured to facilitate dynamic inclusion of custom columns into a logical model similar to the logic, systems, and methods shown and described with reference toFIGS.1-8. In different examples, the dynamic custom columns inclusion logic830may be implemented in hardware, a non-transitory computer-readable medium with stored instructions, firmware, and/or combinations thereof. While the dynamic custom columns inclusion logic830is illustrated as a hardware component attached to the bus825, it is to be appreciated that in other embodiments, the dynamic custom columns inclusion logic830could be implemented in the processor810, stored in memory815, or stored in disk835on computer-readable media837.

In one embodiment, dynamic custom columns inclusion logic830or the computing system800is a means (such as, structure: hardware, non-transitory computer-readable medium, firmware) for performing the actions described. In some embodiments, the computing device may be a server operating in a cloud computing system, a server configured in a Software as a Service (SaaS) architecture, a smart phone, laptop, tablet computing device, and so on. The means may be implemented, for example, as an ASIC programmed to perform dynamic inclusion of custom columns into a logical model. The means may also be implemented as stored computer executable instructions that are presented to computer805as data840that are temporarily stored in memory815and then executed by processor810. Dynamic custom columns inclusion logic830may also provide means (e.g., hardware, non-transitory computer-readable medium that stores executable instructions, firmware) for performing dynamic inclusion of custom columns into a logical model.

Generally describing an example configuration of the computer805, the processor810may be any of a variety of processors including dual microprocessor and other multi-processor architectures. A memory815may include volatile memory and/or non-volatile memory. Non-volatile memory may include, for example, ROM, PROM, EPROM, EEPROM, and so on. Volatile memory may include, for example, RAM, SRAM, DRAM, and so on.

A storage disk835may be operably connected to the computer805by way of, for example, an input/output (I/O) interface (for example, a card or device)845and an input/output port820that are controlled by at least an input/output (I/O) controller847. The disk835may be, for example, a magnetic disk drive, a solid-state disk drive, a floppy disk drive, a tape drive, a Zip drive, a flash memory card, a memory stick, and so on. Furthermore, the disk835may be a CD-ROM drive, a CD-R drive, a CD-RW drive, a DVD ROM, and so on. The memory815can store a process850and/or data840formatted as one or more data structures, for example. The disk835and/or the memory815can store an operating system that controls and allocates resources of the computer805.

The computer805may interact with, control, and/or be controlled by input/output (I/O) devices via the input/output (I/O) controller847, the I/O interfaces845and the input/output ports820.
The input/output devices include one or more displays870, printers872(such as inkjet, laser, or 3D printers), audio output devices874(such as speakers or headphones), text input devices880(such as keyboards), pointing and selection devices882(such as mice, trackballs, touchpads, touch screens, joysticks, pointing sticks, stylus mice), audio input devices884(such as microphones), video input devices886(such as video and still cameras), video cards (not shown), disk835, network devices855, and so on. The input/output ports820may include, for example, serial ports, parallel ports, and USB ports.

The computer805can operate in a network environment and thus may be connected to the network devices855via the I/O interfaces845, and/or the I/O ports820. Through the network devices855, the computer805may interact with a network860. Through the network860, the computer805may be logically connected to remote computers865. Networks with which the computer805may interact include, but are not limited to, a LAN, a WAN, and other networks.

Selected Advantages

Systems and methods described herein enable immediate, real-time availability of user-configured custom columns added dynamically to a static logical model. Such immediate real-time availability of user-configured custom columns in a logical model was not previously possible on computing devices because the logical model (and the underlying data structure representing the logical model) is static, and cannot be re-configured on the fly. The systems and methods described herein overcome that difficulty by pre-configuring the logical model to accept a pre-set number of placeholder logical columns that can be dynamically mapped, re-mapped, or un-mapped on the fly to user-configured custom columns. The systems and methods described herein were not previously practiced, but are a technique necessarily rooted in computer technology to overcome a problem specific to logical models. Without the systems and methods described herein, any change to the logical model must wait until the next ETL cycle is complete before the change takes effect. With the systems and methods described herein, the changes take effect immediately, in real-time. This is possible even though the logical model remains static due to the specific features of the systems and methods described herein. Thus, the BI system is able to create reports and other business intelligence using custom information while operating as a real-time system.

Also, the systems and methods described herein enable automatic application of a descriptive business name to the logical model through the mapping of the custom physical column to the custom logical column in the logical model. The descriptive name for the custom column is automatically populated in the presentation layer, increasing users' understanding of the content of the custom column. Also, the systems and methods described herein keep the presentation layer visually tidy because they enable automatic hiding of unmapped custom columns.
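For illustration, a minimal sketch of the placeholder-column technique summarized above follows, assuming a static model with a fixed number of placeholder columns. The class and method names are hypothetical and do not reflect the actual implementation of the BI system.

# Minimal sketch of the placeholder-column technique summarized above.
# The class and method names are illustrative assumptions, not the
# actual implementation of the BI system.

class LogicalModel:
    """Static logical model pre-configured with placeholder custom columns."""

    def __init__(self, placeholder_count):
        # The set of logical columns is fixed when the model is built;
        # only the mapping metadata changes at run time.
        self.columns = [
            {"logical_name": f"UDF_CHAR_{i}", "business_name": None,
             "mapped_to": None, "hidden": True}
            for i in range(1, placeholder_count + 1)
        ]

    def map_custom_column(self, index, physical_column, business_name):
        # Mapping takes effect immediately: no ETL cycle is required
        # because the logical column already exists in the static model.
        col = self.columns[index - 1]
        col.update(mapped_to=physical_column,
                   business_name=business_name, hidden=False)

    def presentation_layer(self):
        # Unmapped placeholders stay hidden, keeping the model visually tidy.
        return [c["business_name"] for c in self.columns if not c["hidden"]]

model = LogicalModel(placeholder_count=5)
model.map_custom_column(1, "CHAR_VAL_1", "External Account ID")
print(model.presentation_layer())  # ['External Account ID']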
Definitions and Other Embodiments

In another embodiment, the described methods and/or their equivalents may be implemented with computer executable instructions. Thus, in one embodiment, a non-transitory computer readable/storage medium is configured with stored computer executable instructions of an algorithm/executable application that when executed by a machine(s) cause the machine(s) (and/or associated components) to perform the method. Example machines include but are not limited to a processor, a computer, a server operating in a cloud computing system, a server configured in a Software as a Service (SaaS) architecture, a smart phone, and so on. In one embodiment, a computing device is implemented with one or more executable algorithms that are configured to perform any of the disclosed methods.

In one or more embodiments, the disclosed methods or their equivalents are performed by either: computer hardware configured to perform the method; or computer instructions embodied in a module stored in a non-transitory computer-readable medium where the instructions are configured as an executable algorithm configured to perform the method when executed by at least a processor of a computing device.

While for purposes of simplicity of explanation, the illustrated methodologies in the figures are shown and described as a series of blocks of an algorithm, it is to be appreciated that the methodologies are not limited by the order of the blocks. Some blocks can occur in different orders and/or concurrently with other blocks from that shown and described. Moreover, less than all the illustrated blocks may be used to implement an example methodology. Blocks may be combined or separated into multiple actions/components. Furthermore, additional and/or alternative methodologies can employ additional actions that are not illustrated in blocks. The methods described herein are limited to statutory subject matter under 35 U.S.C. § 101.

The following includes definitions of selected terms employed herein. The definitions include various examples and/or forms of components that fall within the scope of a term and that may be used for implementation. The examples are not intended to be limiting. Both singular and plural forms of terms may be within the definitions.

References to “one embodiment”, “an embodiment”, “one example”, “an example”, and so on, indicate that the embodiment(s) or example(s) so described may include a particular feature, structure, characteristic, property, element, or limitation, but that not every embodiment or example necessarily includes that particular feature, structure, characteristic, property, element or limitation. Furthermore, repeated use of the phrase “in one embodiment” does not necessarily refer to the same embodiment, though it may.

API: application programming interface.
ASIC: application-specific integrated circuit.
BI: business intelligence.
CD: compact disk.
CD-R: CD recordable.
CD-RW: CD rewriteable.
DRAM: dynamic RAM.
DV: data visualization.
DVD: digital versatile disk and/or digital video disk.
EEPROM: electrically erasable PROM.
EPROM: erasable PROM.
ETL: extract, transform, and load.
GUI: graphical user interface.
HTTP: hypertext transfer protocol.
IAAS: infrastructure-as-a-service.
LAN: local area network.
OAC: Oracle® Analytics Cloud.
OAS: Oracle® Application Server.
OBIEE: Oracle® Business Intelligence Enterprise Edition.
PAAS: platform-as-a-service.
PCI: peripheral component interconnect.
PCIE: PCI express.
PROM: programmable ROM.
RAM: random access memory.
ROM: read only memory.
SAAS: software-as-a-service.
SQL: structured query language.
SRAM: static RAM.
UGBU: Oracle® Utility Global Business Unit.
USB: universal serial bus.
XML: extensible markup language.
WAN: wide area network.

A “data structure”, as used herein, is an organization of data in a computing system that is stored in a memory, a storage device, or other computerized system.
A data structure may be any one of, for example, a data field, a data file, a data array, a data record, a database, a data table, a graph, a tree, a linked list, and so on. A data structure may be formed from and contain many other data structures (e.g., a database includes many data records). Other examples of data structures are possible as well, in accordance with other embodiments.

“Computer-readable medium” or “computer storage medium”, as used herein, refers to a non-transitory medium that stores instructions and/or data configured to perform one or more of the disclosed functions when executed. Data may function as instructions in some embodiments. A computer-readable medium may take forms, including, but not limited to, non-volatile media, and volatile media. Non-volatile media may include, for example, optical disks, magnetic disks, and so on. Volatile media may include, for example, semiconductor memories, dynamic memory, and so on. Common forms of a computer-readable medium may include, but are not limited to, a floppy disk, a flexible disk, a hard disk, a magnetic tape, other magnetic medium, an application specific integrated circuit (ASIC), a programmable logic device, a compact disk (CD), other optical medium, a random access memory (RAM), a read only memory (ROM), a memory chip or card, a memory stick, a solid-state storage device (SSD), a flash drive, and other media from which a computer, a processor, or other electronic device can read. Each type of media, if selected for implementation in one embodiment, may include stored instructions of an algorithm configured to perform one or more of the disclosed and/or claimed functions. Computer-readable media described herein are limited to statutory subject matter under 35 U.S.C. § 101.

“Logic”, as used herein, represents a component that is implemented with computer or electrical hardware, a non-transitory medium with stored instructions of an executable application or program module, and/or combinations of these to perform any of the functions or actions as disclosed herein, and/or to cause a function or action from another logic, method, and/or system to be performed as disclosed herein. Equivalent logic may include firmware, a microprocessor programmed with an algorithm, a discrete logic (e.g., ASIC), at least one circuit, an analog circuit, a digital circuit, a programmed logic device, a memory device containing instructions of an algorithm, and so on, any of which may be configured to perform one or more of the disclosed functions. In one embodiment, logic may include one or more gates, combinations of gates, or other circuit components configured to perform one or more of the disclosed functions. Where multiple logics are described, it may be possible to incorporate the multiple logics into one logic. Similarly, where a single logic is described, it may be possible to distribute that single logic between multiple logics. In one embodiment, one or more of these logics are corresponding structure associated with performing the disclosed and/or claimed functions. Choice of which type of logic to implement may be based on desired system conditions or specifications. For example, if greater speed is a consideration, then hardware would be selected to implement functions. If a lower cost is a consideration, then stored instructions/executable application would be selected to implement the functions. Logic is limited to statutory subject matter under 35 U.S.C. § 101.
An “operable connection”, or a connection by which entities are “operably connected”, is one in which signals, physical communications, and/or logical communications may be sent and/or received. An operable connection may include a physical interface, an electrical interface, and/or a data interface. An operable connection may include differing combinations of interfaces and/or connections sufficient to allow operable control. For example, two entities can be operably connected to communicate signals to each other directly or through one or more intermediate entities (e.g., processor, operating system, logic, non-transitory computer-readable medium). Logical and/or physical communication channels can be used to create an operable connection.

“User”, as used herein, includes but is not limited to one or more persons, computers or other devices, or combinations of these.

While the disclosed embodiments have been illustrated and described in considerable detail, it is not the intention to restrict or in any way limit the scope of the appended claims to such detail. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the various aspects of the subject matter. Therefore, the disclosure is not limited to the specific details or the illustrative examples shown and described. Thus, this disclosure is intended to embrace alterations, modifications, and variations that fall within the scope of the appended claims, which satisfy the statutory subject matter requirements of 35 U.S.C. § 101.

To the extent that the term “includes” or “including” is employed in the detailed description or the claims, it is intended to be inclusive in a manner similar to the term “comprising” as that term is interpreted when employed as a transitional word in a claim. To the extent that the term “or” is used in the detailed description or claims (e.g., A or B) it is intended to mean “A or B or both”. When the applicants intend to indicate “only A or B but not both” then the phrase “only A or B but not both” will be used. Thus, use of the term “or” herein is the inclusive use, and not the exclusive use.
11860833
DESCRIPTION OF EMBODIMENTS To make the objectives, technical solutions, and advantages of this application clearer, the following further describes the implementations of this application in detail with reference to the accompanying drawings. Many distributed databases (DDB) support a data redistribution technology. For example, the data redistribution technology may be applied in a scenario such as system capacity expansion, capacity reduction, or data migration. Online data redistribution refers to data redistribution without interrupting a user service. The distributed database may include a relational database. The relational database is a database that uses a relational model to organize data, and stores the data in a form of a row and a column. Generally, a row of data is a minimum unit for data reading and writing, and is also referred to as a record. In the relational database, a series of rows and columns are referred to as a data table. A data table may be regarded as a two-dimensional table. The relational model may be simply understood as a two-dimensional table model. The relational database includes one or more data tables and relationship description information between the data tables. Each data table includes table data and table information. The table data is data, in the data table, that is deployed on a data node, namely, the data stored in the form of a row and a column. The table information is information describing the data table, for example, information describing a definition and an architecture of the data table. The table information of the data table may be stored on each data node on which the table data is deployed, or may be stored on an independent node. In the relational database, the data is stored in a structured manner. Each field in each data table is defined according to a preset rule (in other words, a structure of the table is predefined), and then the data is stored based on the structure of the data table. In this way, the form and content of the data are already defined before the data is stored into the data table, so that reliability and stability of the entire data table are relatively high. In the relational database, data in the one or more data tables is deployed on a plurality of data nodes of the database. Generally, a temporary table is created to implement online data redistribution. For example, refer toFIG.1. For a first data table T1(which may be referred to as a source table) in which data needs to be redistributed, a temporary table T2is first created for the table. Then, all data (data1to data9), in the first data table, that is deployed on data nodes (inFIG.1, three data nodes, nodes1to3, are used as an example) corresponding to the first data table T1is replicated to data nodes corresponding to the temporary table (inFIG.1, four data nodes, nodes1to4, are used as an example). Full data migration refers to one-off replication of all data in a data table. After a data replication process is completed, data in the temporary table T2and the data in the first data table are exchanged. After an exchange is completed, the data in the temporary table and the data in the first data table are deleted, to complete a full data redistribution process. For example, the relational database may be a Greenplum database (gpdb) or a GaussDB. 
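For illustration, the following is a minimal sketch of the temporary-table approach described above, using simple in-memory stand-ins for tables and data nodes. It ignores locking and concurrent updates (discussed next) and is not the actual gpdb or GaussDB implementation; all names are illustrative.

# Minimal sketch of temporary-table-based full redistribution as described
# above. Node sets and table contents are simplified in-memory stand-ins;
# this is not the actual gpdb/GaussDB implementation.

def redistribute_with_temp_table(source_nodes, target_nodes, distribute):
    """Replicate all rows of the source table onto the target node layout,
    then exchange the tables and drop the old data."""
    temp_nodes = {n: [] for n in target_nodes}
    # Full data migration: one-off replication of every row.
    for rows in source_nodes.values():
        for row in rows:
            temp_nodes[distribute(row, target_nodes)].append(row)
    # Exchange data between the temporary table and the source table,
    # then delete the leftover data to finish the redistribution.
    source_nodes.clear()
    return temp_nodes

source = {"node1": [1, 4, 7], "node2": [2, 5, 8], "node3": [3, 6, 9]}
targets = ["node1", "node2", "node3", "node4"]
new_layout = redistribute_with_temp_table(
    source, targets, lambda row, nodes: nodes[row % len(nodes)])
print(new_layout)  # rows rehashed across four nodes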
In a process of replicating data from a node on which the source table is located to a node on which the temporary table is located (also referred to as data redistribution), if data update operations such as data addition, deletion, and/or modification are performed, data in the temporary table may be inconsistent with the data in the source table. In this case, the source table is locked by an exclusive lock to temporarily disable data update, and is unlocked after a data switching process.

For example, in the gpdb, to avoid data update operations such as data addition, deletion, and/or modification in the data redistribution process, a table in which data replication is being performed is locked, and the data addition operation (also referred to as a data insertion operation), the data deletion operation, and the data modification operation are not allowed on data in the table. Only a data query operation on the data in the table is allowed.

In the GaussDB, it is assumed that data redistribution needs to be performed on the first data table. After the temporary table is established, to allow data update in the data redistribution process, for example, the data addition, deletion, and/or modification, after receiving a data update request (for example, a data addition request or a data deletion request), the GaussDB uses a specified file to record updated data, so that after a full data migration is completed, updated data in the full data migration process can be found, and incremental data migration is performed based on the updated data. The incremental data migration process refers to checking whether there are updated records (including a deleted record, a modified record, and an inserted record in the full data migration process) in the specified file. If there are updated records, the updated data is replicated again based on the updated records. Update operations may occur at any time. Therefore, if there is still an updated record in the specified file after several incremental data migration processes are performed, in the last incremental data migration process, the first data table should be locked (for example, by the exclusive lock) and data replication should be performed. After the data replication, an exchange process is performed between the first data table and the temporary table. Finally, the lock is released.

In the foregoing data redistribution process, data consistency between the source table and the temporary table needs to be ensured, and the data switching process also needs to be performed. Therefore, complexity of the online data redistribution is relatively high. In addition, full migration of a data table takes a long time and consumes a large quantity of resources (central processing unit (CPU) resources, memory resources, and input/output (I/O) resources are all heavily consumed). Another user job executed at the same time may also be affected due to insufficient resources.

Refer toFIG.2.FIG.2is a schematic diagram of an application environment of a distributed database system (DDBS) to which a data redistribution method is applied according to an embodiment of this application. The DDBS may be deployed on one server or a server cluster including a plurality of servers. The DDBS includes a distributed database management system (DDBMS) and a DDB. In the distributed database system, an application can transparently operate the DDB through the DDBS.
Data in the DDB is stored in different local databases, managed by one or more DDBMSs, run on different machines, supported by different operating systems, and connected by different communication networks. A DDBS10includes: a management node (also referred to as a database engine, a coordinator data node, or a coordinator)101and a data node102. The DDBMS may be deployed on the management node101, and the DDB may be deployed on a plurality of data nodes102. The distributed database may be established based on a share-nothing architecture. In other words, all data in the database is distributed on data nodes, and the data is not shared between the data nodes.

The management node101is configured to manage a corresponding data node102, and implement an operation performed by an application20on the data node102, for example, perform a data addition operation, a data deletion operation, a data modification operation, or a data query operation. In this embodiment of this application, the management node101may be an independent node, or a specified data node or an elected data node in the plurality of data nodes102. The management node101may be a server or a server cluster including a plurality of servers. Each data node represents a specified minimum processing unit of the DDBS. For example, each data node may be an application instance or a database execution process that manages and/or stores data. The DDBS may be deployed on the server or the server cluster including a plurality of servers.

The distributed database may have a plurality of data tables, and data records of each data table are distributed to each data node according to a distribution rule defined by a user. The data distribution rule is usually hash distribution, namely, key-value distribution. For ease of understanding, a hash distribution principle is briefly described in this embodiment of this application.

The hash distribution is a data distribution method based on a hash function. The hash function is a function that obtains a value (also referred to as a hash value) based on a data key (also referred to as a key value, or as a distribution key value in a distributed system). To be specific, value=f(key), and the function f is the hash function. Table 1 is used as an example. It is assumed that the hash function is f(key)=key mod 5, and “mod” indicates a modulo operation. In other words, the hash function is a modulo operation function. If keys are 1, 2, 3, 4, 5, 6, 7, 8, and 9, corresponding values are 1, 2, 3, 4, 0, 1, 2, 3, and 4, respectively.

TABLE 1

key     1  2  3  4  5  6  7  8  9
value   1  2  3  4  0  1  2  3  4

According to the preceding information, when a key is 1 or 6, a value is 1. Therefore, when the hash function is used to determine the value, different keys may correspond to a same value. This case is referred to as a hash conflict. A hash bucket algorithm is a special hash algorithm, and can resolve the hash conflict. A hash bucket is a container for placing different key linked lists (also referred to as hash tables). The hash bucket is also referred to as an f(key) set or a value set. A same hash bucket corresponds to a same value. With reference to the foregoing example, the quantity of hash buckets may be set to the value of the modulus, that is, 5. A plurality of values are in a one-to-one correspondence to a plurality of hash buckets.
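For illustration, the following is a minimal sketch of this hash bucket distribution, using the Table 1 function f(key)=key mod 5 and a bucket count equal to the modulus.

# Minimal sketch of hash-bucket distribution as described above, using the
# Table 1 modulo function f(key) = key mod 5. The bucket count equals the
# modulus, so every key lands in exactly one of five buckets, and keys that
# collide on the same hash value share a bucket instead of conflicting.

BUCKET_COUNT = 5

def hash_bucket(key):
    # value = f(key); the value doubles as the bucket index.
    return key % BUCKET_COUNT

buckets = {b: [] for b in range(BUCKET_COUNT)}
for key in range(1, 10):
    buckets[hash_bucket(key)].append(key)

print(buckets)
# {0: [5], 1: [1, 6], 2: [2, 7], 3: [3, 8], 4: [4, 9]}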
For example, a value may be used as an index or a sequence number of a hash bucket. Each hash bucket stores keys having a same value, and conflicting keys in a same hash bucket are stored in a one-way linked list. In this way, the hash conflict is resolved. When data corresponding to a key is searched for, a hash bucket of a corresponding value is indexed through the key. Then, a search is started from a node corresponding to a first address of the hash bucket. In other words, the search is based on a linked list sequence. Key values are compared until the corresponding key is found, and the corresponding data is indexed based on the found key. As shown in Table 1, when a key is 1 or 6, corresponding data is stored in a hash bucket 1; when a key is 2 or 7, corresponding data is stored in a hash bucket 2; when a key is 3 or 8, corresponding data is stored in a hash bucket 3; when a key is 4 or 9, corresponding data is stored in a hash bucket 4; and when a key is 5, corresponding data is stored in a hash bucket 0.

It should be noted that the foregoing embodiment is described merely with an example in which the hash function is the modulo function. Actually, the hash function may alternatively be a function for obtaining a remainder (in this case, the hash function is a remainder function, and the quantity of hash buckets is the value of the modulus), or another function. This is not limited in this embodiment of this application.

An embodiment of this application provides a data redistribution method. The method may be applied to the distributed database in the application environment shown inFIG.2, and can reduce complexity of online data redistribution. All or part of the method may be performed by the foregoing management node. As shown inFIG.3, in this embodiment of this application, it is assumed that a first data table is a to-be-migrated data table, namely, a data table in which data is to be redistributed. The method includes the following steps.

Step301: The management node determines a first node set and a second node set that are in the distributed database and that are separately associated with the first data table.

Operation and maintenance personnel of the distributed database adjust a data node based on information such as a database load. When a new data node is added to the distributed database (in a capacity expansion scenario), or some data nodes need to be deleted (in a capacity reduction scenario), or storage data on some data nodes needs to be adjusted (in a data migration scenario), or an inter-group data table of a data node needs to be adjusted (in an inter-group data table adjustment scenario), the operation and maintenance personnel may input a data redistribution instruction to the management node. The management node receives the data redistribution instruction, and controls, based on the data redistribution instruction, the data node to perform data redistribution. The data redistribution instruction is expressed in structured query language (SQL), is used to indicate the data redistribution, and includes one or more SQL statements.

In the inter-group data table adjustment scenario, data nodes in the distributed database are grouped into different data node groups. Each data node group includes a same quantity or different quantities of data nodes. When a user wants to migrate a table created on a data node group to another data node group, table data needs to be redistributed on the new data node group, which creates this scenario.
In different data redistribution scenarios, data redistribution content is different. For example, in the capacity expansion scenario, data nodes after redistribution include all data nodes before redistribution, and the data redistribution instruction is a capacity expansion instruction. The capacity expansion instruction is used to indicate a data table (which is the first data table in this embodiment) related to a capacity expansion operation, and is further used to indicate a data node added in the capacity expansion operation. In the capacity reduction scenario, data nodes before redistribution include all data nodes after redistribution, and the data redistribution instruction is a capacity reduction instruction. The capacity reduction instruction is used to indicate a data table (which is the first data table in this embodiment) related to a capacity reduction operation, and is further used to indicate a data node reduced in the capacity reduction operation. In the data migration scenario, data nodes before and after redistribution may or may not overlap, and the data redistribution instruction is a data migration instruction. The data migration instruction is used to indicate a data table (which is the first data table in this embodiment) related to a data migration operation, and is further used to indicate a target data node migrated in the data migration operation. In the inter-group data table adjustment scenario, generally, no data node overlaps between data nodes after redistribution and data nodes before redistribution. The data redistribution instruction is a data migration instruction. The data migration instruction is used to indicate a data table (which is the first data table in this embodiment) related to a data migration operation, and is further used to indicate a target data node group migrated in the data migration operation. It should be noted that there may be another data redistribution scenario. These are merely examples for description in this embodiment of this application, and are not limited thereto.

After the data redistribution instruction triggers a data redistribution process, to effectively identify whether the first data table is currently in the data redistribution process, the management node may add a redistribution flag to the first data table. The redistribution flag is used to indicate that the first data table is in the data redistribution process. Subsequently, after receiving a service request of the user, the management node may execute a corresponding action by querying whether a redistribution flag is added to a data table related to the service request.

The management node may obtain the first node set and the second node set based on the data redistribution instruction (to be specific, by parsing the SQL statements in the data redistribution instruction). The first node set includes a data node configured to store data in the first data table before the data in the first data table is redistributed. In other words, the first node set is a set of data nodes on which the data in the first data table is currently (in other words, when the step301is performed, and before a step302) deployed. The second node set includes a data node configured to store the data in the first data table after the data in the first data table is redistributed. In other words, the second node set is a set of data nodes on which the data in the first data table is deployed after subsequent data migration (in other words, after the step302).
In this embodiment of this application, both the first node set and the second node set include one or more data nodes. The first node set may be obtained in a plurality of manners. In an optional manner, the data nodes on which the data in the first data table is currently deployed may be directly queried, to obtain the first node set. In another optional manner, a current mapping relationship between data in each data table and a data node, in a node set, on which the data in each data table is deployed may be maintained in the distributed database. Each mapping relationship may be updated in real time based on a deployment location of data in a corresponding data table. Therefore, the first node set corresponding to the first data table may be obtained by querying the mapping relationship. For example, a mapping relationship between the data in the first data table and the data node in the first node set is referred to as a first mapping relationship, and the first node set may be determined by querying the first mapping relationship. In still another optional manner, the data redistribution instruction may carry an identifier of the first node set, and the first node set is obtained based on the identifier. The second node set may also be obtained in a plurality of manners. The second node set may be directly obtained based on the data redistribution instruction. For example, in the capacity expansion scenario, the first node set and the data node added in the capacity expansion operation are determined as data nodes included in the second node set. As shown inFIG.2, the capacity expansion scenario is used as an example. In this case, the first node set includes four data nodes in total, and the second node set includes six data nodes in total. In the capacity reduction scenario, data nodes in the first node set other than the data node reduced by the capacity reduction operation are determined as the second node set. In the data migration scenario, the target data node migrated in the data migration operation is determined as the second node set. In the inter-group data table adjustment scenario, the target data node group migrated in the data migration operation is determined as the second node set. It should be noted that the first node set and the second node set may alternatively be determined in another manner. These are merely examples for description in this embodiment of this application, and are not limited thereto. As shown inFIG.4, it is assumed that the first node set includes data nodes N1to N6, and the second node set includes data nodes N2to N5and N7to N9. In this case, data nodes related to this data redistribution include data nodes N1to N9. Before a data migration process of the first data table in the step302, the data nodes related to the data redistribution may be uniformly numbered and sorted in advance, and the first mapping relationship between the data in the first data table and the data node in the first node set, and a second mapping relationship between the data in the first data table and the data node in the second node set are determined according to a hash distribution rule. The first mapping relationship and the second mapping relationship may be determined according to a principle of a minimum movement quantity (also referred to as a minimum data movement principle). 
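For illustration, the following is a minimal sketch of deriving a second mapping from a first mapping under the principle of the minimum movement quantity, assuming that table data is organized in hash buckets. The heuristic shown (keep a bucket in place whenever its node survives and has spare capacity) is one possible realization of the principle, not necessarily the one used by the described embodiment.

# Minimal sketch of deriving a second mapping from a first mapping under the
# minimum data movement principle, assuming table data is organized in hash
# buckets. Only enough buckets to rebalance the load are reassigned; buckets
# that can stay on a surviving node are left in place. Illustrative only.

def remap_min_movement(first_mapping, new_nodes):
    """first_mapping: {bucket: node}; new_nodes: nodes after redistribution."""
    bucket_count = len(first_mapping)
    quota = -(-bucket_count // len(new_nodes))  # ceil: max buckets per node
    second_mapping, load = {}, {n: 0 for n in new_nodes}

    # Keep a bucket where it is if its node survives and has spare quota.
    for bucket, node in sorted(first_mapping.items()):
        if node in load and load[node] < quota:
            second_mapping[bucket] = node
            load[node] += 1

    # Reassign the remaining buckets to the least-loaded nodes.
    for bucket in sorted(first_mapping):
        if bucket not in second_mapping:
            target = min(load, key=load.get)
            second_mapping[bucket] = target
            load[target] += 1
    return second_mapping

first = {b: f"N{(b - 1) % 6 + 1}" for b in range(1, 18)}  # FIG. 6 first mapping
second = remap_min_movement(first, ["N2", "N3", "N4", "N5", "N7", "N8", "N9"])
moved = {b for b in first if first[b] != second[b]}
print(len(moved), "buckets move")  # only a minority of the 17 buckets move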
If a distributed system pre-stores the first mapping relationship that is between the data in the first data table and the data node in the first node set, the mapping relationship may be directly obtained, and hash calculation is not performed again. A mapping relationship of table data distribution may be organized by obtaining the first mapping relationship and the second mapping relationship. In this way, a moving direction of the data in a subsequent data migration process can be easily found. In addition, preparation for generating a distributed plan (also referred to as a distributed execution plan) can be conveniently made in a process of migrating the data in the first data table.

In short, the foregoing process of determining the first node set and the second node set is a process of determining related data nodes before and after the data redistribution, and the process of determining the mapping relationship is a process of determining data nodes on which data is specifically distributed before and after the data redistribution.

Step302: The management node migrates the data in the first data table from the first node set to the second node set.

In this embodiment of this application, the principle of a migration action is similar to cutting and pasting data: a piece of data is moved from one node to another node. A process of migrating the data in the first data table from the first node set to the second node set is a process of moving the data in the first data table from the first node set to the second node set. Optionally, data moved from the first node set is no longer stored in the first node set.

The data migration process of the first data table, namely, the data redistribution process, may have a plurality of implementations. In this embodiment of this application, the following several optional implementations are used as examples for description, but these are not limited.

In a first optional implementation, all data in the first data table is directly migrated from the first node set to the second node set. In other words, all the data in the first data table is used as the to-be-migrated data. In this way, a migration process is a full migration process.

In a second optional implementation, the to-be-migrated data is filtered from the data, in the first data table, that is stored in the first node set, where the to-be-migrated data is the data, in the first data table, that is not stored in the second node set before migration; and the to-be-migrated data is migrated from the first node set to the second node set. In some scenarios, for example, in the capacity expansion scenario, some data may not need to be migrated; such data may be referred to as invalid migrated data. For example, the invalid migrated data may be data that is deployed at a same location on a data node before and after the migration and/or data that has been deleted before a migration action, and migration of the data occupies data resources, and affects migration efficiency. Therefore, the invalid migrated data may be filtered out, and data that actually needs to be migrated is used as the to-be-migrated data. In other words, the to-be-migrated data includes data other than the invalid migrated data in the first data table. In this way, partial migration of the table data may be implemented, to reduce an amount of migrated data, reduce data resource occupation, and improve migration efficiency.
It should be noted that, only when a same data node exists in the first node set and the second node set (in other words, data nodes in the first node set and the second node set intersect), a case in which a location of data deployed on a data node does not change before and after the migration may occur. If the data nodes in the first node set and the second node set are totally different (this case may occur in the data migration scenario), generally, the case in which a location of data deployed on a data node does not change before and after the migration does not occur. In this case, all data, in the first data table, that is deployed on the data node in the first node set needs to be migrated to the data node in the second node set. In other words, the to-be-migrated data is all the data, in the first data table, that is deployed on the data node in the first node set.

Therefore, in this embodiment of this application, before the to-be-migrated data is filtered from the data, in the first data table, that is stored in the first node set, whether a same data node exists in the first node set and the second node set may further be detected. When the same data node exists in the first node set and the second node set, the to-be-migrated data is filtered from the data, in the first data table, that is stored in the first node set. When the first node set and the second node set do not have the same data node, a filtering action is not executed. Because the calculation amount of the filtering process is larger than that of the foregoing detection process, performing the detection first avoids unnecessary filtering of the to-be-migrated data, which reduces calculation complexity and improves data migration efficiency.

For example, as shown inFIG.5, a process of filtering the to-be-migrated data from the data, in the first data table, that is stored in the first node set may include the following steps.

Step3021: The management node obtains the first mapping relationship between the data in the first data table and the data node in the first node set.

In the distributed database, data is distributed according to a load balancing principle. With reference to the foregoing description, to ensure even data distribution and implement load balancing, the hash distribution rule is usually used to distribute the data on each data node. Further, to avoid a hash conflict, a hash bucket algorithm may further be introduced to distribute the data. In a distributed database to which the hash bucket algorithm is introduced, data distributed on each data node is usually measured in hash buckets, to achieve load balancing. Generally, data corresponding to one or more hash buckets may be deployed on one data node.

When the data is distributed according to the hash distribution rule, the first mapping relationship may be represented by a mapping relationship between the hash value and an identifier of the data node in the first node set. Further, in the distributed database to which the hash bucket algorithm is applied, because in the hash bucket algorithm, hash values are in a one-to-one correspondence with hash bucket identifiers, the first mapping relationship may alternatively be represented by a mapping relationship between a hash bucket identifier and the identifier of the data node in the first node set. The identifier of the data node may include one or more characters (for example, numbers), and is used to identify the data node.
The identifier of the data node may be a data node name (for example, N1or N2) or a data node number. The hash bucket identifier may include one or more characters (for example, numbers), and is used to identify the hash bucket. The hash bucket identifier may be a value of a calculated hash value, or may be a hash bucket number, for example, 1 or 2. The first mapping relationship may be calculated in real time. If the distributed database pre-records the first mapping relationship, the pre-recorded first mapping relationship may alternatively be directly obtained. The first mapping relationship may be represented in a manner of a relationship diagram, a relationship table, or a relationship index. For example, the first mapping relationship may be a relationship diagram shown inFIG.6. In the relationship diagram, it is assumed that the first mapping relationship may be represented by a mapping relationship between the hash bucket number and a name of the data node in the first node set. Therefore, as shown inFIG.6, based on the first mapping relationship, it can be learned that data whose hash bucket numbers are 1 to 6 respectively corresponds to data nodes whose data node names are N1to N6, data whose hash bucket numbers are 7 to 12 respectively corresponds to the data nodes whose data node names are N1to N6, and data whose hash bucket numbers are 13 to 17 respectively corresponds to data nodes whose names are N1to N5. It can be learned that, in the first mapping relationship, a data node N1corresponds to hash buckets whose hash bucket numbers are 1, 7, and 13; a data node N2corresponds to hash buckets whose hash bucket numbers are 2, 8, and 14; a data node N3corresponds to hash buckets whose hash bucket numbers are 3, 9, and 15; a data node N4corresponds to hash buckets whose hash bucket numbers are 4, 10, and 16; a data node N5corresponds to hash buckets whose hash bucket numbers are 5, 11, and 17; and a data node N6corresponds to hash buckets whose hash bucket numbers are 6 and 12. In the first mapping relationship shown inFIG.6, the data node name has a one-to-many relationship with the hash bucket number. Step3022: The management node obtains the second mapping relationship between the data in the first data table and the data node in the second node set. Similar to the first mapping relationship, the second mapping relationship may be represented in a plurality of manners and in a plurality of forms. When the data is distributed according to the hash distribution rule, the second mapping relationship may be represented by a mapping relationship between the hash value and an identifier of the data node in the second node set. Further, in the distributed database to which the hash bucket algorithm is applied, the second mapping relationship may alternatively be represented by a mapping relationship between the hash bucket identifier and the identifier of the data node in the second node set. The identifier of the data node may include one or more characters (for example, numbers), and is used to identify the data node. The identifier of the data node may be a data node name (for example, N1or N2) or a data node number. The hash bucket identifier may include one or more characters (for example, numbers), and is used to identify the hash bucket. The hash bucket identifier may be the value of the calculated hash value, or may be the hash bucket number, for example, 1 or 2. 
The second mapping relationship may be calculated in real time, for example, determined based on the first mapping relationship and the principle of the minimum movement quantity. If the distributed database pre-records the second mapping relationship, the pre-recorded second mapping relationship may alternatively be directly obtained. The second mapping relationship may be represented in a manner of a relationship diagram, a relationship table, or a relationship index. For example, the second mapping relationship may be the relationship diagram shown inFIG.6. In the relationship diagram, it is assumed that the second mapping relationship may be represented by a mapping relationship between the hash bucket number and a name of the data node in the second node set. Therefore, as shown inFIG.6, based on the second mapping relationship, it can be learned that the data whose hash bucket numbers are 1 to 6 respectively corresponds to data nodes whose data node names are N7, N2, N3, N4, N5, and N8; the data whose hash bucket numbers are 7 to 12 respectively corresponds to data nodes whose data node names are N9, N2, N3, N4, N7, and N8; and the data whose hash bucket numbers are 13 to 17 respectively corresponds to data nodes whose names are N9, N2, N3, N7, and N5. It can be learned that, in the second mapping relationship, the data node N2corresponds to the hash buckets whose hash bucket numbers are 2, 8, and 14; the data node N3corresponds to the hash buckets whose hash bucket numbers are 3, 9, and 15; the data node N4corresponds to hash buckets whose hash bucket numbers are 4 and 10; the data node N5corresponds to hash buckets whose hash bucket numbers are 5 and 17; a data node N7corresponds to hash buckets whose hash bucket numbers are 1, 11, and 16; a data node N8corresponds to hash buckets whose hash bucket numbers are 6 and 12; and a data node N9corresponds to hash buckets whose hash bucket numbers are 7 and 13. In the second mapping relationship shown inFIG.6, the data node name has a one-to-many relationship with the hash bucket number. It should be noted that the first mapping relationship and the second mapping relationship may be represented by a same relationship diagram, relationship table, or relationship index, or may be separately represented by respective relationship diagrams, relationship tables, or relationship indexes. InFIG.6, an example in which the first mapping relationship and the second mapping relationship may be represented by the same relationship diagram is used for description, but this is not limited. Step3023: The management node filters, based on the first mapping relationship and the second mapping relationship, the to-be-migrated data from the data, in the first data table, that is stored in the first node set. With reference to the foregoing content, it can be learned that the to-be-migrated data is data whose location deployed on the data node changes before and after the migration (namely, the data redistribution), namely, valid migrated data. The to-be-migrated data is the data, in the first data table, that is not stored in the second node set before the migration. In an optional example, each piece of data in the first data table may be traversed, and the to-be-migrated data is filtered, by comparing the first mapping relationship with the second mapping relationship, from the data, in the first data table, that is stored in the first node set. 
Specifically, for target data in the first data table, when a data node that is determined based on the first mapping relationship and that corresponds to the target data is different from a data node that is determined based on the second mapping relationship and that corresponds to the target data, the target data is determined as the to-be-migrated data on the data node that is determined based on the first mapping relationship and that corresponds to the target data.

FIG.6is used as an example. It is assumed that the hash value is the same as the hash bucket number. For target data X in the first data table, a hash value of the target data X is calculated. It is assumed that the hash value obtained through calculation is 1. In this case, the target data X is stored in hash bucket 1, and the target data X is the data whose hash bucket number is 1. It can be learned from the first mapping relationship that a data node corresponding to the target data X is N1. It can be learned from the second mapping relationship that a data node corresponding to the target data X is N7. Therefore, data nodes of the target data X before and after data migration are different, so that the target data X on the data node N1is determined as the to-be-migrated data.

In another optional example, the first mapping relationship is compared with the second mapping relationship, and data that is in the two mapping relationships and that is stored on different data nodes is used as the to-be-migrated data. Specifically, the comparison process includes: for each data node in the first node set, querying the first mapping relationship to obtain a first data set corresponding to the data node; querying the second mapping relationship to obtain a second data set corresponding to the data node; and using, as to-be-migrated data corresponding to the data node, data that is in the first data set and that is different from data in the second data set. The obtained to-be-migrated data corresponding to each data node in the first node set forms the final to-be-migrated data. It should be noted that, for a data node in the first node set, the data node may not exist in the second node set. If the data node does not exist in the second node set, a second data set corresponding to the data node is empty.

FIG.6is used as an example. For the data node N1in the first node set, the first mapping relationship is queried to obtain that a first data set corresponding to the data node includes data whose hash bucket numbers are 1, 7, and 13; and the second mapping relationship is queried to obtain that a second data set corresponding to the data node N1is empty. In this case, to-be-migrated data corresponding to the data node N1is the data whose hash bucket numbers are 1, 7, and 13. For the data node N2in the first node set, the first mapping relationship is queried to obtain that a first data set corresponding to the data node is data whose hash bucket numbers are 2, 8, and 14; and the second mapping relationship is queried to obtain that a second data set corresponding to the data node includes the data whose hash bucket numbers are 2, 8, and 14. In this case, to-be-migrated data corresponding to the data node N2is empty. A method for obtaining to-be-migrated data on another data node is similar, and details are not described again in this embodiment of this application.
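For illustration, the comparison described in the steps3021to3023can be sketched as follows, using the FIG.6mappings; the diff reproduces the result summarized next.

# Minimal sketch of the comparison in steps 3021 to 3023, using the FIG. 6
# mappings. For each hash bucket, if its node in the first data set differs
# from its node in the second data set, the bucket's data is to-be-migrated.

first_mapping = {b: f"N{(b - 1) % 6 + 1}" for b in range(1, 18)}
second_mapping = {1: "N7", 2: "N2", 3: "N3", 4: "N4", 5: "N5", 6: "N8",
                  7: "N9", 8: "N2", 9: "N3", 10: "N4", 11: "N7", 12: "N8",
                  13: "N9", 14: "N2", 15: "N3", 16: "N7", 17: "N5"}

to_migrate = {}  # bucket -> (source node, target node)
for bucket, source in first_mapping.items():
    target = second_mapping[bucket]
    if source != target:  # deployment location changes, so data must move
        to_migrate[bucket] = (source, target)

for bucket in sorted(to_migrate):
    print(bucket, "->", to_migrate[bucket])
# 1 -> ('N1', 'N7'), 6 -> ('N6', 'N8'), 7 -> ('N1', 'N9'),
# 11 -> ('N5', 'N7'), 12 -> ('N6', 'N8'), 13 -> ('N1', 'N9'),
# 16 -> ('N4', 'N7')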
Finally, the to-be-migrated data corresponding to the first node set includes the data whose hash bucket numbers are 1, 11, and 16 (subsequently migrated from the data nodes N1, N5, and N4, respectively, to the data node N7), the data whose hash bucket numbers are 6 and 12 (subsequently migrated from the data node N6 to the data node N8), and the data whose hash bucket numbers are 7 and 13 (subsequently migrated from the data node N1 to the data node N9). In other data redistribution processes, when data is migrated from a source table to a temporary table, an exclusive lock is added to the source table to temporarily disable data updates. In a gpdb, full data migration is used, so the source table must stay locked for the entire migration; if a relatively large amount of data is migrated, for example dozens of gigabytes (GB) or dozens of terabytes (TB), a user service may be blocked for dozens of minutes or even several hours. In a GaussDB, the migration is divided into one full migration and a plurality of incremental migrations; for a comparably large amount of data, the user service may still be blocked for dozens of minutes. In this embodiment of this application, although full data migration is still used, in a scenario such as capacity expansion or capacity reduction the migration of a large amount of invalid migrated data can be avoided through the filtering of the to-be-migrated data in step 3023, reducing service blocking duration and improving migration efficiency. In an optional embodiment, the process of migrating the data in the first data table from the first node set to the second node set may be executed through one or more distributed transactions. Any transaction in the distributed database may be referred to as a distributed transaction; the distributed transactions in this embodiment of this application involve the management node and a plurality of data nodes. A distributed transaction usually includes three phases: a transaction start phase, a transaction execution phase, and a transaction commit phase. In the transaction start phase, the management node prepares specific statements for the subsequent transaction execution phase. In the transaction execution phase, the management node executes one or more actions related to the distributed transaction, and the actions may be executed concurrently. In this embodiment of this application, an action included in the distributed transaction may be a scanning action or a migration action (the migration action may involve one or more SQL statements), or it may be generating a distributed plan and sending the distributed plan. In the transaction commit phase, a two-phase commit (2PC) protocol or a three-phase commit (3PC) protocol is followed, to keep the transaction consistent across the management node and the plurality of data nodes.
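The three transaction phases and the commit protocol just named can be sketched as a coordinator skeleton. This is a bare illustration under assumed interfaces (participants exposing begin/prepare/commit/abort); it shows only the generic 2PC shape the text refers to, not a definitive implementation of the described system:

```python
class DistributedTransaction:
    """Skeleton of the three phases: start, execution, and a 2PC commit."""

    def __init__(self, participants):
        self.participants = participants  # nodes touched by the transaction

    def run(self, actions):
        for p in self.participants:       # transaction start phase:
            p.begin()                     # prepare statements/resources
        for action in actions:            # transaction execution phase:
            action()                      # scan / migrate / send plans
        # Transaction commit phase, two-phase commit: commit everywhere
        # only if every participant votes yes, otherwise abort everywhere.
        if all(p.prepare() for p in self.participants):
            for p in self.participants:
                p.commit()
            return True
        for p in self.participants:
            p.abort()
        return False


class StubNode:  # trivial participant so the skeleton runs as-is
    def begin(self): pass
    def prepare(self): return True
    def commit(self): pass
    def abort(self): pass


assert DistributedTransaction([StubNode(), StubNode()]).run([lambda: None])
```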
In another optional embodiment, the process of migrating the data in the first data table from the first node set to the second node set may be implemented through a plurality of serially executed distributed transactions. In this embodiment of this application, the management node may serially execute the plurality of distributed transactions to control the data nodes in the first node set and the second node set to implement the data migration. Specifically, when serially executing the plurality of distributed transactions, the management node selects, through the currently executed distributed transaction, to-be-migrated data that meets a migration condition from the unmigrated data, in the first data table, that is in the first node set (for the manner of determining the to-be-migrated data, refer to steps 3021 to 3023), and migrates the selected data from the first node set to the second node set. The selected to-be-migrated data is locked during the migration and is generally unlocked when the distributed transaction used to migrate it is successfully committed. The migration condition includes: the amount of to-be-migrated data migrated through the currently executed distributed transaction is less than or equal to a specified data-amount threshold, and/or the migration duration of the currently executed distributed transaction is less than or equal to a specified duration threshold. The amount of to-be-migrated data may be expressed as a quantity of records; the data of one record is one row of the data table and is the minimum unit of data migration. Correspondingly, the specified data-amount threshold may be expressed as a specified quantity threshold. The data-amount threshold and the specified duration threshold may each be a fixed value or a dynamically changing value. For example, before step 302, the data-amount threshold may be determined based on the amount of data in the first data table and/or current load information of the distributed database; and/or the specified duration threshold may be determined based on the amount of data in the first data table and/or load information of a current resource (for example, one or more of a CPU resource, a memory resource, or an I/O resource) used by the distributed database. The amount of data in the first data table is positively correlated with both thresholds, while the current load of the distributed database is negatively correlated with both. To be specific, a larger amount of data in the first data table indicates a larger data-amount threshold and a longer duration threshold, and a larger load of the distributed database indicates smaller values of both.
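One way to honour these correlations is sketched below. The formula, base values, and load measure are all assumptions made for illustration; the description fixes only the direction of the correlations, not a concrete function:

```python
def batch_thresholds(table_rows, load_factor, base_rows=100_000,
                     base_seconds=10.0):
    """Derive per-transaction thresholds: they grow with the amount of
    data in the table and shrink as the current database load rises.
    load_factor is a non-negative load measure (0 = idle)."""
    size_scale = (max(table_rows, 1) ** 0.5) / 1_000  # bigger table -> bigger
    load_damping = 1.0 / (1.0 + load_factor)          # busier system -> smaller
    row_threshold = max(1, int(base_rows * size_scale * load_damping))
    duration_threshold = base_seconds * (1.0 + size_scale) * load_damping
    return row_threshold, duration_threshold

# Example: a 100-million-row table on an idle system allows larger batches
# than the same table under heavy load.
print(batch_thresholds(100_000_000, load_factor=0.0))
print(batch_thresholds(100_000_000, load_factor=4.0))
```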
After migrating, through each executed distributed transaction, the to-be-migrated data corresponding to that transaction, the management node may delete the migrated data, in the first data table, that is stored on the data nodes in the first node set, so that it can subsequently be distinguished, during data scanning, which data has been migrated and which has not. It should be noted that the blocking duration of the user service is actually the duration for which the data is locked; because the data migrated through each distributed transaction is different, the duration for which each piece of migrated data is locked is the duration of the migration process of the corresponding distributed transaction. In this embodiment of this application, the table data is migrated in batches through a plurality of serially executed transactions. The amount of migrated data and/or the migration duration of each distributed transaction are limited, to avoid excessive resource consumption during the execution of each distributed transaction and to reduce the lock duration corresponding to each distributed transaction. In other data redistribution processes in the gpdb, full data migration is used, so the duration for which each piece of migrated data is locked equals the migration duration of the entire migration process. In the GaussDB, the migration is divided into one full migration and a plurality of incremental migrations; the duration for which each piece of migrated data is locked is relatively short, but the overall service blocking duration is still long. In this embodiment of this application, by limiting the amount of migrated data and/or the migration duration of each distributed transaction, the duration for which each piece of migrated data is locked is far shorter than the lock duration in the other data redistribution processes. The overall service blocking duration may be reduced to about one minute, usually without user awareness. Compared with other data redistribution methods, this method can therefore effectively reduce the service blocking duration, ensure service smoothness, and enhance user experience. In addition, the lock added to the data being migrated is a write lock: modification and deletion of the data are prevented during the migration, but queries on the data can still be performed. In this embodiment of this application, the management node may sequentially initiate the plurality of serial distributed transactions based on the determined first node set and second node set, generate one or more distributed plans when each distributed transaction is executed, and instruct the data nodes in the first node set and/or the second node set to execute the generated distributed plans, to implement the data migration of the first data table. Each distributed plan corresponds to one or more data nodes. A distributed plan includes one or more SQL statements and indicates the actions to be executed by the corresponding data node, the execution sequence of the actions, and the like; for example, an executed action may be a scanning action or a migration action. The distributed plan may carry the foregoing migration condition or a migration subcondition determined based on the migration condition. Optionally, each time a distributed transaction is initiated, the management node may further adjust the content of the distributed plan, for example, adjust the migration condition or the migration subcondition based on current system resources. A distributed plan may be implemented by executing a transaction or a task on the corresponding data node: when receiving the distributed plan, a data node may initiate a transaction (also referred to as a local transaction) or a task to execute, in the sequence indicated in the plan, the actions indicated in the plan. In a first optional manner, the management node generates a plurality of distributed plans based on the currently executed distributed transaction, to instruct a plurality of data nodes to migrate the data in the first data table.
It is assumed that the first node set includes n data nodes and the second node set includes m data nodes, where n and m are positive integers. As shown in FIG. 7, the migration process includes the following steps. Step 3024: The management node separately generates n distributed plans for the n data nodes based on the currently executed distributed transaction, where the n data nodes are in a one-to-one correspondence with the n distributed plans; and the management node instructs the n data nodes to separately execute the n distributed plans, to concurrently select the to-be-migrated data that meets a migration subcondition from the unmigrated data, in the first data table, that is on the n data nodes, and to send the selected data from the n data nodes to the second node set. Specifically, for the currently executed distributed transaction, the management node sends each of the n distributed plans generated based on that transaction to the corresponding data node, and the corresponding data node executes the plan. After each data node executes its corresponding distributed plan, the management node executes the next distributed transaction, generates n new distributed plans, and separately sends them to the corresponding data nodes, and so on. When all the data in the first data table has been migrated, the management node cancels the table redistribution flag and prepares to migrate the data in the next data table. The migration subcondition is determined based on the migration condition, and the distributed plan may optionally carry the migration subcondition. For example, when the migration condition is that the amount of to-be-migrated data migrated through the currently executed distributed transaction is less than or equal to the specified data-amount threshold, the corresponding migration subcondition is that the amount of to-be-migrated data migrated by executing the corresponding distributed plan is less than or equal to a data-amount subthreshold. The quantity subthreshold is less than the specified quantity threshold. The quantity subthresholds corresponding to the n distributed plans may be equal or unequal; for example, each may equal one-nth of the specified quantity threshold. When the migration condition is that the migration duration of the currently executed distributed transaction is less than or equal to the specified duration threshold, the corresponding migration subcondition is that the migration duration of the corresponding distributed plan is less than or equal to a duration subthreshold. Each duration subthreshold is less than or equal to the specified duration threshold, and the maximum of the duration subthresholds corresponding to the n distributed plans is the specified duration threshold. The duration subthresholds corresponding to the n distributed plans may be equal or unequal; generally, all of them equal the specified duration threshold.
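The split of the migration condition into per-plan subconditions can be made concrete as follows. A minimal sketch under the stated examples: each of the n plans receives one-nth of the quantity threshold and, as is said to be usual, the full duration threshold; the function name is illustrative:

```python
def plan_subconditions(n, row_threshold, duration_threshold):
    """Derive the migration subcondition carried by each of the n
    distributed plans from the per-transaction migration condition."""
    per_plan_rows = max(1, row_threshold // n)     # one-nth of the quantity
    return [(per_plan_rows, duration_threshold)    # full duration threshold,
            for _ in range(n)]                     # equal for all n plans

# Example: a 90-record / 60-second condition split over 3 data nodes.
print(plan_subconditions(3, 90, 60.0))  # [(30, 60.0), (30, 60.0), (30, 60.0)]
```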
For each of the n data nodes, the distributed plan obtained by the data node may be implemented by executing a transaction or a task on that node. It is assumed that a first data node is any one of the n data nodes, and an example in which the first data node executes a local transaction to implement a distributed plan is used. For example, a distributed plan generated for the first data node may include one or more SQL statements instructing the first data node to execute a scanning action and a migration action, where the two actions are executed concurrently, the target data node of the migration is a second data node (namely, a data node in the second node set), and the plan carries the migration subcondition. Based on the distributed plan, the first data node may scan, through the local transaction (also called table scanning), the unmigrated data, in the first data table, that is stored on the first data node, select the to-be-migrated data that meets the migration subcondition, and send the selected data from the first data node to the second data node in the second node set. For example, when the first optional implementation is used and all the unmigrated data in the first data table is used as the to-be-migrated data, the first data node may traverse, through the local transaction, the unmigrated data, in the first data table, that is stored on the first data node; the data obtained through the traversal is the to-be-migrated data. When the second optional implementation is used and the to-be-migrated data is obtained by filtering the data in the first data table, the first data node traverses, through the local transaction, the unmigrated data on the first data node, to obtain through filtering the to-be-migrated data that meets the migration subcondition; for the filtering process, refer to step 3023. When a distributed transaction is the first one initiated in the data redistribution process, the unmigrated data obtained by scanning the n data nodes is all the data in the first data table. When a distributed transaction is not the first one, the unmigrated data obtained by scanning the n data nodes is the data, in the first data table, that was not migrated through the previous distributed transactions. In the first optional implementation, the first data node may scan, through the local transaction, all records of the first data table that are stored on the first data node to obtain the unmigrated data; to be specific, scanning proceeds from top to bottom, starting from the beginning of the data, in the first data table, that is stored on the first data node. In this scanning manner, each time the management node executes a distributed transaction, the first data node is instructed to scan all the records of the first data table stored on it, to avoid missing any to-be-migrated data. Optionally, if the second optional implementation is used to scan the unmigrated data, the first data node may record, through the local transaction, the location at which the current scan ends. When the management node executes the next distributed transaction, the first data node is instructed to scan, based on the corresponding distributed plan, onward from the most recent end location of the records of the first data table stored on the first data node, to obtain the unmigrated data. In this way, records that have already been scanned on the first data node are not scanned again.
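The resumable scan of the second optional implementation might look as follows. A sketch under simplifying assumptions: the node's slice of the first data table is a list of row dicts, a 'deleted' field marks rows already migrated, and moved_test decides whether a row's bucket maps to a different node under the second mapping relationship:

```python
import time

def scan_and_select(rows, start_offset, max_rows, max_seconds, moved_test):
    """Scan onward from the recorded end location until a migration
    subcondition trips; return the selected rows and the new location."""
    selected = []
    deadline = time.monotonic() + max_seconds
    offset = start_offset
    while (offset < len(rows) and len(selected) < max_rows
           and time.monotonic() < deadline):
        row = rows[offset]
        offset += 1
        if row.get("deleted"):      # historical version: already migrated
            continue
        if moved_test(row):         # node differs under the new mapping
            selected.append(row)
    return selected, offset         # offset is recorded for the next pass
```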
Optionally, if the second optional implementation is used to scan the unmigrated data, then to avoid the case in which updated data lands in a data record already scanned by a data node through an earlier distributed transaction, the management node may generate n distributed plans through the last-executed distributed transaction, each instructing the corresponding data node to scan, in one pass, all the data, in the first data table, that is stored on that node, to avoid missing data. Alternatively, the n data nodes may be controlled through a plurality of distributed transactions to scan different data in the first data table at the same time. In this embodiment of this application, when the management node executes the current distributed transaction, step 3023 and step 3024 may be performed in a nested manner; in other words, the specific actions of step 3023 are executed by the data nodes as instructed by the management node through the distributed plans. Step 3025: The management node separately generates m distributed plans for the m data nodes based on the currently executed distributed transaction, where the m data nodes are in a one-to-one correspondence with the m distributed plans; and the management node instructs the m data nodes to separately execute the m distributed plans to concurrently receive and store the data, in the first data table, that is sent from the first node set. For each of the m data nodes, the distributed plan obtained by the data node may be implemented by executing a transaction or a task on that node. It is assumed that the second data node is any one of the m data nodes, and an example in which the second data node executes a local transaction to implement a distributed plan is used. For example, a distributed plan generated for the second data node may include one or more SQL statements instructing the second data node to execute a receiving action and a storage action, where the two actions are executed concurrently and the source data node of the data is the first data node. Based on the distributed plan, the second data node may receive and store, through the local transaction, the data, in the first data table, that is sent from the first node set. Optionally, each data node in the first node set is configured to execute a local transaction of a distributed plan delivered by the management node. Specifically, the local transaction executed by the data node may include two threads configured to separately perform the scanning action and the migration action; for example, each local transaction includes a scanning thread and a sending thread. The scanning thread scans the unmigrated data, in the first data table, on the corresponding data node in the first node set (that is, deleted data is skipped when the data in the first data table is scanned) to obtain the to-be-migrated data; for the process of determining the to-be-migrated data, refer to step 3023. The sending thread sends the to-be-migrated data to the target data node in the second node set. The two threads may be executed concurrently to improve data redistribution efficiency, as sketched below.
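The two concurrent threads of a source-side local transaction can be pictured with a queue between them. The shapes here (row dicts, a callable send) are assumptions; the point is only the scan/send overlap just described:

```python
import queue
import threading

def run_source_transaction(rows, moved_test, send):
    """One scanning thread feeds to-be-migrated rows to one sending
    thread; the two run concurrently, as described in the text."""
    pending = queue.Queue()
    DONE = object()                        # sentinel ending the stream

    def scan():
        for row in rows:
            if not row.get("deleted") and moved_test(row):
                pending.put(row)           # hand over for sending
        pending.put(DONE)

    def dispatch():
        while (item := pending.get()) is not DONE:
            send(item)                     # ship the row to its target node

    scanner = threading.Thread(target=scan)
    sender = threading.Thread(target=dispatch)
    scanner.start(); sender.start()
    scanner.join(); sender.join()

# Example: migrate every odd-bucket row to a collecting list.
out = []
run_source_transaction(
    rows=[{"bucket": b} for b in range(1, 6)],
    moved_test=lambda r: r["bucket"] % 2 == 1,
    send=out.append,
)
print(out)  # [{'bucket': 1}, {'bucket': 3}, {'bucket': 5}]
```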
Each data node in the second node set is likewise configured to execute a local transaction of a distributed plan delivered by the management node. Specifically, the local transaction executed by such a data node may include a receiving thread configured to receive data sent by other data nodes and write the received data to the local data node. Because a data node in the first node set may also receive data from other nodes, the local transaction executed by each data node in the first node set may further include a receiving thread; similarly, because a data node in the second node set may also send data to other nodes, the local transaction executed by each data node in the second node set may also include a sending thread. Optionally, when a data node needs to initiate a sending thread and a receiving thread at the same time, it may, to reduce thread occupation, initiate a combined sending/receiving thread by executing a local transaction (in other words, the local transaction includes a sending/receiving thread) that performs the functions of both, for example, both receiving and sending the data. It should be noted that, after migrating the to-be-migrated data, in the first data table, that is stored on the local data node, a data node in the first node set may send a migration completion notification (also referred to as an end flag) to the target data node in the second node set to which the data was migrated. For any data node in the second node set, after receiving a migration completion notification from each corresponding source data node (the source data nodes corresponding to the data node may be recorded in the distributed plan), the data node determines that execution of the corresponding distributed plan is completed and stops executing it. Because a plurality of distributed plans is generated from one distributed transaction, a plurality of data nodes can be instructed to execute the plans concurrently and thus migrate data concurrently. This effectively reduces the execution duration of each distributed transaction and improves the efficiency of executing it. As shown in FIG. 8, it is assumed that the first node set includes the data nodes N1 to N3, the second node set includes the data node N4, and the management node migrates the to-be-migrated data through two serially executed distributed transactions: a first distributed transaction and a second distributed transaction. The three distributed plans generated based on the first distributed transaction are implemented by transactions 1a to 1c on the three data nodes in the first node set, and the three distributed plans generated based on the second distributed transaction are implemented by transactions 2a to 2c on the same nodes. It is assumed that a quantity of records is used to represent the amount of migrated data and that the specified data-amount threshold corresponding to each distributed plan is 1. In this case, each of the transactions 1a to 1c is executed to migrate the data of one record after scanning the data of a plurality of unmigrated records on the corresponding data node. An example in which the management node executes the first distributed transaction is used.
Each data node executes its corresponding distributed plan, so that each data node scans the data on the local data node through its own transaction, finds the to-be-migrated data, sends it to the target data node (the data node N4 in FIG. 8), and at the same time deletes, from the local data node, the migrated data that has been migrated through the transaction. The transactions 1a to 1c execute the scanning action and the migration action concurrently until the migration condition is met or each data node meets its corresponding subcondition; the management node then commits the first distributed transaction, completing the migration of this batch of data. For the process of finding the to-be-migrated data, refer to the corresponding process in step 302. The execution of the transactions 2a to 2c is similar to that of the transactions 1a to 1c and is not described in this embodiment of this application. Further, a distributed plan corresponding to the data node N4 may be generated based on the first distributed transaction; the data node N4 implements that plan by executing a transaction (not shown in FIG. 8) to receive the data sent by the data nodes N1 to N3 and store it on the data node N4. In a second optional manner, the management node generates one distributed plan based on the currently executed distributed transaction and instructs the data nodes in the first node set and the second node set to execute it, to select the to-be-migrated data that meets the migration condition from the unmigrated data, of the first data table, in the first node set, and migrate the selected data from the first node set to the second node set. This distributed plan corresponds to a plurality of data nodes in the first node set and the second node set, and may be considered an integrated plan combining the n distributed plans and the m distributed plans of the first optional manner. The distributed plan includes one or more SQL statements and indicates the action to be executed by each data node in the first node set and the second node set, the execution sequence of the actions, and the like; for example, an executed action may include the scanning action, the migration action, the receiving action, and/or the storage action. Optionally, the distributed plan may further carry the migration condition. After receiving the distributed plan, each data node may determine the actions it needs to execute, and may further determine, based on the migration condition, the migration subcondition corresponding to the data node; for the process of determining the migration condition, refer to the first optional manner. The distributed plan may be implemented by executing a transaction or a task on the data node. For the process in which each data node in the first node set and the second node set executes the actions, in the distributed plan, that it needs to execute, refer to the process in which a data node executes its corresponding distributed plan in the first optional manner; the details are not described again in this embodiment of this application. In this embodiment of this application, the distributed database stores data through a multiversion concurrency control mechanism. In the multiversion concurrency mechanism, data deleted from a data node is not physically removed from the data node but is retained on the data node as a historical version.
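A minimal sketch of this multiversion bookkeeping, assuming each row is a dict carrying its own visibility fields (the field names and the integer transaction identifiers are illustrative); the deletion flag it models, and the periodic cleaning that finally removes expired versions, are described in the paragraphs that follow:

```python
def flag_migrated(rows, migrated_keys, txn_id):
    """Instead of physically removing migrated rows, mark them as
    historical versions so concurrent readers can still see them."""
    for row in rows:
        if row["key"] in migrated_keys:
            row["deleted"] = True        # the deletion flag
            row["deleted_by"] = txn_id   # which transaction retired it

def clean_expired(rows, oldest_active_txn):
    """Periodic cleaning: physically drop historical versions that no
    still-running transaction can possibly read any more."""
    rows[:] = [r for r in rows
               if not (r.get("deleted")
                       and r["deleted_by"] < oldest_active_txn)]
```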
For example, after performing step 3025, the management node sets a deletion flag (or controls a data node through the distributed plan to set the deletion flag) for the migrated data, in the first data table, that is in the first node set. The deletion flag indicates that the migrated data has been converted into data of a historical version. In this case, the deletion of the migrated data in step 3025 actually means that the data is recorded on the corresponding data node as a historical version. Subsequently, when data scanning is performed by executing a distributed transaction, the data of the historical version is skipped (in other words, data carrying the deletion flag is skipped). In this way, it can be ensured that a data query operation performed by the user on the data of the historical version is effectively executed during the data migration. It should be noted that, in the data migration process, because the data being migrated is locked, only a data query operation can be performed on it; data modification and data deletion operations cannot. Once migration of the data is completed, the deletion flag is set for the data in the first node set and the data becomes data of a historical version (it is not actually deleted from the first node set); the data of the latest version has been migrated to a new node in the second node set. The data of the historical version can only be queried, and after the distributed transaction used to migrate the data is committed, new user transactions no longer query it. After all concurrent transactions on the data of the historical version in the first node set end (for example, data query operations that were reading it), the historical version is no longer accessed and may be physically deleted. Based on the periodic data cleaning mechanism run by the distributed database, the data of the historical version is cleaned from the data in the first data table; in other words, the data is physically removed from the distributed database (this is the expired-data cleaning process). Step 303: In the process of migrating the data in the first data table, when receiving a target service request for the first data table, the management node determines, in the first node set and the second node set, a third node set configured to respond to the target service request. In the data migration process, a plurality of types of user services may be generated based on different requirements of the user. In different scenarios there is a plurality of user services, for example, a data query service, a data addition service (also referred to as a data insertion service), a data deletion service, and a data modification service; the corresponding service requests are, respectively, a data query request, a data addition request (also referred to as a data insertion request), a data deletion request, and a data modification request. The data query request requests a data query operation on the data; the data addition request requests a data addition operation; the data deletion request requests a data deletion operation; and the data modification request requests a data modification operation.
The data query service is further classified, based on its association with data tables, into a data query service associated with one data table and a data query service associated with a plurality of data tables. The data query operation indicated by a request of the former needs to query data in only one data table; the data query operation indicated by a request of the latter needs to query data in a plurality of data tables. For example, suppose the data query request is "query information about the female employees of a company X" and the information about the female employees of the company X is recorded in the first data table. The query operation involves only one data table, so the request corresponds to a data query service associated with one data table. For another example, suppose the data query request is "query information about the female employees of the customer companies of the company X", the customer companies of the company X are recorded in a second data table, and the information about the female employees of different customer companies is recorded in different data tables. The query operation first queries the second data table to obtain the identifiers of the customer companies of the company X, and then queries, based on the obtained identifiers, the data tables corresponding to those companies to obtain the information about their female employees. The request relates to a plurality of data tables, so it corresponds to a data query service associated with a plurality of data tables. In this embodiment of this application, the data redistribution method may be applied to a plurality of scenarios, and the target service request may accordingly be the data query request, the data addition request (also referred to as an insertion request), the data deletion request, or the data modification request, for the data of one or more records. As shown in FIG. 9, in the data migration process, the service data for a single target service may involve data nodes from before the data redistribution and/or data nodes from after it. For example, when the data is distributed in a hash bucket manner and the data in one hash bucket is moved through a plurality of serially executed distributed transactions, the data in that hash bucket is, during the migration, distributed on two data nodes at the same time: the migrated data on a data node in the second node set, and the unmigrated data on a data node in the first node set. In addition, all newly-added data corresponding to the hash bucket is written directly to the data node in the second node set. Therefore, the finally determined third node set differs for different target services. The third node set includes one or more data nodes. In this embodiment of this application, the following implementation scenarios are used as examples to describe the process of determining the third node set. In a first implementation scenario, when the target service request is the data addition request, the third node set configured to respond to the data addition request is determined in the second node set.
For example, a hash value is calculated based on a key value of the newly-added data carried in the data addition request, and the third node set configured to respond to the data addition request is determined in the second node set: the data node, in the second node set, that corresponds to the hash value is determined as a data node in the third node set. For example, the hash bucket corresponding to the hash value may be determined, and the data node, in the second node set, that corresponds to that hash bucket is determined as the data node in the third node set; for example, the second mapping relationship table may be queried, and the queried data node is determined as the data node in the third node set. As shown in FIG. 9, it is assumed that a data addition request is received, indicating a data addition operation for newly-added data D. According to the hash distribution rule, it is determined that the third node set in which the newly-added data D is to be stored is the data node N4, and the newly-added data D is stored on the data node N4. In other data redistribution methods, because consistency between the source table and the temporary table must be kept, data migration cannot complete if the data addition rate of the source table exceeds its data migration rate; and if the table is forcibly locked for migration, the table blocking time may be long, thereby affecting the user service. In this embodiment of this application, however, no temporary table needs to be established, and the newly-added data is directly added to the data node (namely, the third node set) in the second node set. In the data migration process, the newly-added data therefore does not need to be migrated or recorded, so it can be stored quickly. This effectively reduces the amount of migrated data, simplifies the data migration process, improves data migration efficiency, and reduces the impact on the user service.
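Routing a data addition request thus reduces to one hash lookup in the second mapping relationship. A sketch under assumed details (string keys, CRC32 standing in for the unspecified hash function, and the dictionary form of the mapping used earlier):

```python
import zlib

BUCKET_COUNT = 17  # matches the FIG. 6 example; normally a system constant

def route_insert(key, second_mapping):
    """Newly-added data goes straight to its post-redistribution node:
    hash the key value, map the hash to a bucket, look the bucket up."""
    bucket = zlib.crc32(key.encode()) % BUCKET_COUNT + 1
    return second_mapping[bucket]  # e.g. the node that stores data D
```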
In a second implementation scenario, when the target service request is the data deletion request, the data modification request, or a data query request associated with the first data table, a data node configured to respond to the target service request is determined in the first node set, and a data node configured to respond to the target service request is determined in the second node set; the data nodes determined from the two node sets form the third node set. In an optional manner, when the target service request includes the data deletion request, the data node configured to respond to the data deletion request (namely, the data node on which the data that the request asks to delete is located) is queried in the first node set, and likewise in the second node set, and the queried data nodes are combined to form the third node set. As shown in FIG. 9, the data node on which data B, which the data deletion request asks to delete, is located is queried in the first node set (including the data nodes N1 to N3), yielding the data node N2; the data node on which the data B is located is queried in the second node set (including the data node N4), yielding the data node N4. The third node set formed by the queried data nodes therefore includes the data nodes N2 and N4. For example, for the data deletion request, if the deletion can be performed based on a key value, then after a hash value is calculated from the key value, a fourth node set is determined in the first node set based on the first mapping relationship table, and a fifth node set is determined in the second node set based on the second mapping relationship table. Because the deleted data may exist in both node sets, the union of the fourth node set and the fifth node set is determined as the third node set; in other words, the third node set includes the fourth node set and the fifth node set, each of which includes one or more data nodes. In another optional manner, when the target service request includes the data modification request, the data node configured to respond to the data modification request (namely, the data node on which the data that the request asks to modify is located) is queried in the first node set, and likewise in the second node set, and the queried data nodes are combined to form the third node set. As shown in FIG. 9, the data node on which data C, which the data modification request asks to modify, is located is queried in the first node set (including the data nodes N1 to N3), yielding the data node N3; querying the second node set (including the data node N4) yields the data node N4. The third node set therefore includes the data nodes N3 and N4. For example, for the data modification request, if the modification can be performed based on the key value, then after the hash value is calculated from the key value, a sixth node set is determined in the first node set based on the first mapping relationship table, and a seventh node set is determined in the second node set based on the second mapping relationship table. Because the modified data may exist in both node sets, the union of the sixth node set and the seventh node set is determined as the third node set; each of the two sets includes one or more data nodes. In still another optional manner, when the data query request includes the data query request associated with the first data table, the data node configured to respond to the data query request (namely, the data node on which the data that the request asks to query is located) is queried in the first node set, and likewise in the second node set, and the queried data nodes are combined to form the third node set. As shown in FIG. 9, the data node on which data A, which the data query request asks to query, is located is queried in the first node set (including the data nodes N1 to N3), yielding the data node N1, and querying the second node set (including the data node N4) yields the data node N4. The third node set therefore includes the data nodes N1 and N4. For example, for the data query request, if the query can be performed based on the key value, then after the hash value is calculated from the key value, an eighth node set is determined in the first node set based on the first mapping relationship table, and a ninth node set is determined in the second node set based on the second mapping relationship table. The queried data may exist in both node sets.
Therefore, the union of the eighth node set and the ninth node set is determined as the third node set; each of the two sets includes one or more data nodes. It should be noted that the data query request associated with the first data table may be associated only with the first data table, or with a plurality of data tables including the first data table. In the latter case, for each data table associated with the query request, the third node set that corresponds to that data table and that is configured to respond to the query request is obtained in the same manner as the third node set corresponding to the first data table is obtained when the request is associated only with the first data table; the details are not described in this embodiment of this application. The data query request then needs to be sent to the third node sets corresponding to the plurality of data tables; for the sending process, refer to the subsequent step 304. In the second implementation scenario, the operation of querying for the data node is performed to reduce the quantity of data nodes in the third node set, the amount of information subsequently exchanged with the third node set, and the communication overheads. As described above, the data corresponding to the target service request may be the data of one or more records. When the target service request corresponds to the data of one record, because the data of one record cannot exist on two data nodes at the same time, the record can be processed successfully on only one of the data nodes. If the third node set is not determined based on the key value, the target service request needs to be sent to all related data nodes from before and after the data redistribution, because during the data migration any of those data nodes may hold a record that meets the condition requested by the target service request. It follows that, in the second implementation scenario, the operation of querying for the data node may be skipped, and the union of the first node set and the second node set may be directly determined as the third node set. For example, suppose the target service request is a data query request that requests to query data in a specified data range or a specified time range in the first data table, where the specified data range may be a range of data meeting a specified condition and the specified time range may be a range earlier or later than a specified time point. During the migration of the first data table, part of the data corresponding to the query may be located on data nodes from before the data redistribution and another part on data nodes from after it, so both usually need to be traversed to avoid missing queried data. In this case, the union of the first node set and the second node set may be directly determined as the third node set. Directly determining that union as the third node set also reduces the delay of querying for the data node and improves service execution efficiency.
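The routing rules of the second implementation scenario can be summarized in two helpers. A sketch, again assuming CRC32 as the hash and the dictionary mappings from earlier: for key-based requests the third node set is the union of the two mapped nodes, and for range requests it falls back to the union of both node sets:

```python
import zlib

BUCKET_COUNT = 17

def route_by_key(key, first_mapping, second_mapping):
    """Delete/modify/key-based query during migration: the record may sit
    on either side, so address the union of both mapped data nodes."""
    bucket = zlib.crc32(key.encode()) % BUCKET_COUNT + 1
    return {first_mapping[bucket], second_mapping[bucket]}

def route_by_range(first_nodes, second_nodes):
    """No key value to hash: the third node set is simply the union of
    the first node set and the second node set."""
    return set(first_nodes) | set(second_nodes)
```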
Step 304: The management node sends the target service request to the data nodes in the third node set, and each data node in the third node set processes the service based on the target service request. After receiving the target service request, each data node in the third node set processes the corresponding service. For example, assuming that the first data node is any data node in the third node set, the first data node performs the following process. When receiving the data query request, the first data node detects whether it stores the data that the request asks to query. If it does, the information about the data is obtained and a data query response including the found data is sent to the management node. If it does not, the query action is stopped, or a data query response indicating that the requested data was not found is sent to the management node. When receiving the data addition request, the first data node directly adds the newly-added data to the first data node, and may optionally send an addition success response to the management node. When receiving the data modification request, the first data node detects whether it stores the data that the request asks to modify. If it does, the data is modified based on the request, and optionally a data modification response including the modified data or indicating successful modification is sent to the management node. If it does not, the modification action is stopped, or a data modification response indicating that the requested data does not exist is sent to the management node. When receiving the data deletion request, the first data node detects whether it stores the data that the request asks to delete. If it does, the data is deleted based on the request, and optionally a data deletion response indicating successful deletion is sent to the management node. If it does not, the deletion action is stopped, or a data deletion response indicating that the requested data does not exist is sent to the management node. As described above, in the data redistribution process in this embodiment of this application, migrated data is no longer stored on the data node on which it was stored before the migration. It is therefore ensured that the data of one record is stored on only one data node in the distributed database, not on two, and hence that there are no conflicting responses to the target service request.
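How a data node in the third node set answers the four request types can be sketched as a small dispatcher. The request and response shapes are assumptions made for illustration (a dict with 'op', 'key' and an optional 'value'; a local key-value store standing in for the node's slice of the table):

```python
def handle_request(local_store, request):
    """Process one target service request on a data node of the third
    node set, mirroring the four cases described above."""
    op, key = request["op"], request.get("key")
    if op == "query":
        if key in local_store:
            return "found", local_store[key]
        return "not_found", None            # or simply stop the query
    if op == "add":
        local_store[key] = request["value"] # written directly, no migration
        return "added", None
    if op == "modify":
        if key in local_store:
            local_store[key] = request["value"]
            return "modified", local_store[key]
        return "no_such_data", None
    if op == "delete":
        if key in local_store:
            return "deleted", local_store.pop(key)
        return "no_such_data", None
    raise ValueError(f"unknown operation: {op!r}")
```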
Step 305: In the process of migrating the data in the first data table, if a rollback trigger event is detected, the management node rolls back the data that has been migrated through the plurality of distributed transactions. The rollback trigger event may be that a data node that is associated with the first data table and that is in the second node set is faulty (for example, breaks down); that a data transmission error or a network error occurs on such a data node; that such a data node receives a rollback instruction; that a distributed transaction associated with the first data table fails to be committed; or the like. In a possible implementation, after the rollback trigger event is detected in the distributed database, the data that has been migrated through the plurality of distributed transactions is rolled back, so that the distributed database is restored to a previous state in which it ran normally. In this way, after the end condition of the rollback trigger event is met, the distributed database can still normally perform the online service and other services such as data redistribution. In a possible implementation, step 305 may be replaced with: in the process of migrating the data in the first data table, if the rollback trigger event is detected, rolling back only the data that has been migrated through the currently executed distributed transaction. In other distributed databases, the data in a data table is migrated through a single distributed transaction; if a rollback trigger event is detected, all currently migrated data is rolled back, in other words, all executed actions of that transaction are canceled. The amount of rolled-back data is large, all the migrated data becomes invalid, and once the migration condition is met again the data must be migrated anew. Data is thus migrated repeatedly, resources are wasted, and the fault tolerance of the database is poor. In this embodiment of this application, the distributed transactions ensure data consistency and persistence during the migration. When a plurality of distributed transactions is used, the overall data migration is split into migration processes executed serially by those transactions. If the rollback trigger event is detected, only the operations of the currently executed distributed transaction need to be rolled back, and after the migration condition is met again, a new distributed transaction may be initiated to continue the data migration. This reduces the data granularity and the amount of data rolled back, the amount of repeatedly migrated data, and the impact of a rollback on the overall migration, avoids resource waste, and improves the fault tolerance of the database.
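The difference between a whole-migration rollback and the per-transaction rollback described here is easy to see in a driver loop. A sketch, assuming run_transaction raises on any rollback trigger event and rollback undoes exactly one batch; the callables are illustrative placeholders:

```python
def migrate_in_batches(batches, run_transaction, rollback):
    """Serially execute one distributed transaction per batch; on a
    rollback trigger event, undo only the current batch and stop, so
    earlier committed batches stay migrated."""
    committed = 0
    for batch in batches:
        try:
            run_transaction(batch)   # select, send, delete, 2PC commit
            committed += 1
        except Exception:            # rollback trigger event detected
            rollback(batch)          # cancel this transaction's work only
            break                    # a new transaction may resume later
    return committed
```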
It should be noted that, in the process of migrating the data in the first data table, in addition to data manipulation language (DML) services such as the data query service and the data addition service, other types of user services may be generated, for example, a data definition language (DDL) service. The DDL service includes services such as creating, modifying, and deleting table information; the operation object requested by a DDL service is the table information, namely, the definition and architecture of a table. In other data redistribution methods, data consistency between the source table and the temporary table must be ensured, so the DDL service is not allowed during the data migration. In this embodiment of this application, however, no temporary table needs to be established, and the data migration takes place within the data table instead of between a source table and a temporary table, so the DDL service is supported during the migration. For example, modification of the table meta-information is supported, and modification of the table name and addition or deletion of a field in the data table are allowed. It should also be noted that the foregoing embodiment is described using an example in which the data in one data table is redistributed; in an actual implementation of this embodiment of this application, the data redistribution process may be performed on a plurality of data tables at the same time, to improve data redistribution efficiency and increase concurrency. In conclusion, according to the data redistribution method provided in this embodiment of this application, the target task may be executed without establishing a temporary table, implementing online data redistribution: inter-table data migration is unnecessary and only intra-table data migration needs to be performed, which reduces the complexity of the online data redistribution. In addition, because the data is migrated through a plurality of serially executed distributed transactions, a single migration takes less time, consumes fewer resources, and has less impact on other user jobs executed at the same time. Further, because newly-added data is written directly to the data nodes of the post-redistribution layout, the amount of migrated data is effectively reduced, which reduces resource consumption and the impact on other user jobs. FIG. 6 provides an example: with another data redistribution method, all the data whose hash bucket numbers are 1 to 17 would need to be migrated from the first node set to the second node set. In this embodiment of this application, the data whose hash bucket number is 1 needs to be moved from the data node N1 to the data node N7, the data whose hash bucket number is 2 does not need to be moved, the data whose hash bucket number is 7 needs to be moved from the data node N1 to the data node N9, and so on. In total, only the data whose hash bucket numbers are 1, 6, 7, 11, 12, 13, and 16 needs to be migrated (in FIG. 6, the data nodes N7, N8, and N9 in the second node set, which need to receive the migrated data, are shown shaded). This effectively reduces the amount of migrated data. In this embodiment of this application, in a scenario with concurrent user jobs, the data migration is implemented through intra-table data migration and a distributed multiversion concurrency control technology, so data increments caused by the insertion and deletion operations of the concurrent jobs do not need to be considered. Data can be migrated in batches based on the amount of data and the execution time, ensuring that the system resource consumption caused by the data redistribution is controllable. This effectively controls the resource consumption and lock-conflict impact of the migration and greatly reduces the impact on user jobs. Online capacity expansion of the distributed database is implemented through the embodiments of this application.
This avoids the long service blocking caused by shut-down capacity expansion, so that online jobs are only slightly affected. Even when a data node or the network is faulty, the redistribution operation can easily be restored, so that the data migration is only slightly affected. An embodiment of this application provides a data redistribution apparatus 40, which may be deployed on a management node. As shown in FIG. 10, the data redistribution apparatus 40 includes: a first determining module 401 configured to perform step 301; a migration module 402 configured to perform step 302; a second determining module 403 configured to perform step 303; and a sending module 404 configured to perform step 304. In conclusion, according to the data redistribution apparatus provided in this embodiment of this application, the target task may be executed without establishing a temporary table, implementing online data redistribution: inter-table data migration is unnecessary and only intra-table data migration needs to be performed, which reduces the complexity of the online data redistribution. Optionally, as shown in FIG. 11, the second determining module 403 includes a determining submodule 4031 configured to: when the target service request is a data addition request, determine, in the second node set, the third node set configured to respond to the data addition request. Optionally, the determining submodule 4031 is configured to: calculate a hash value based on a key value of the newly-added data carried in the data addition request; and determine, in the second node set, a data node corresponding to the hash value, where the determined data node belongs to the third node set. Optionally, the second determining module 403 is configured to: when the target service request is a data deletion request, a data modification request, or a data query request associated with the first data table, determine, in the first node set, a data node configured to respond to the target service request, and determine, in the second node set, a data node configured to respond to the target service request, where the data nodes determined from the two node sets form the third node set. Optionally, as shown in FIG. 12, the migration module 402 includes: a filtering submodule 4021 configured to filter the to-be-migrated data from the data, in the first data table, that is stored in the first node set, where the to-be-migrated data is the data, in the first data table, that is not stored in the second node set before the migration; and a migration submodule 4022 configured to migrate the to-be-migrated data from the first node set to the second node set. Optionally, the filtering submodule 4021 is configured to: obtain a first mapping relationship between the data in the first data table and the data nodes in the first node set; obtain a second mapping relationship between the data in the first data table and the data nodes in the second node set; and, for target data in the first data table, when the data node determined for the target data based on the first mapping relationship differs from the data node determined for it based on the second mapping relationship, determine the target data, on the data node given by the first mapping relationship, as the to-be-migrated data.
Optionally, the migration submodule4022is configured to: separately migrate different data in the first data table from the first node set to the second node set through a plurality of distributed transactions that are serially executed. Optionally, the migration submodule4022is configured to: when the plurality of distributed transactions are serially executed, select, through a currently executed distributed transaction, to-be-migrated data that meets a migration condition from the unmigrated data, in the first data table, that is in the first node set, and migrate the selected to-be-migrated data from the first node set to the second node set, where the selected to-be-migrated data is locked in the migration process. The migration condition includes: an amount of to-be-migrated data that is migrated through the currently executed distributed transaction is less than or equal to a specified threshold of the amount of data, and/or migration duration of migration through the currently executed distributed transaction is less than or equal to a specified duration threshold. Optionally, the migration submodule4022is configured to: separately generate n distributed plans for n data nodes based on the currently executed distributed transaction, where the first node set includes the n data nodes, the n data nodes are in a one-to-one correspondence with the n distributed plans, and n is a positive integer; and instruct the n data nodes to separately execute the n distributed plans to concurrently select to-be-migrated data that meets a migration subcondition from unmigrated data, in the first data table, that is on the n data nodes, and send, from the n data nodes, the selected to-be-migrated data that meets the migration subcondition to the second node set, where the migration subcondition is determined based on the migration condition. Optionally, as shown inFIG.13, the apparatus40further includes: a rollback module405, configured to: in a process of migrating the data in the first data table, if it is detected that the distributed database reaches a rollback trigger event, roll back data migrated through a currently working distributed transaction; or a rollback module405, configured to: in a process of migrating the data in the first data table, if a rollback trigger event is detected, roll back data that has been migrated through the currently executed distributed transaction. Optionally, as shown inFIG.14, the apparatus40further includes: a setting module406, configured to set a deletion flag for migrated data, in the first data table, that is in the first node set. Optionally,FIG.15is a schematic diagram of a possible basic hardware architecture of the computing device in this application. Refer toFIG.15. A computing device500includes a processor501, a memory502, a communications interface503, and a bus504. In the computing device500, there may be one or more processors501.FIG.15shows only one of the processors501. Optionally, the processor501may be a central processing unit (CPU). If the computing device500includes a plurality of processors501, the plurality of processors501may be of a same type or different types. Optionally, the plurality of processors501of the computing device500may be integrated into a multi-core processor. The memory502stores a computer instruction and data. The memory502may store a computer instruction and data that are required for implementing the data redistribution method provided in this application.
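As a purely illustrative sketch of such instructions (under stated assumptions, not the actual implementation), the migration condition described above, namely an amount-of-data threshold and/or a duration threshold per serially executed distributed transaction, might be realized as a batching loop. The row source and the migrate_row callback below are hypothetical placeholders:

```python
import time

MAX_ROWS_PER_TXN = 10_000      # specified threshold of the amount of data
MAX_SECONDS_PER_TXN = 5.0      # specified duration threshold

def migrate_in_batches(unmigrated_rows, migrate_row):
    """Serially execute distributed transactions until no rows remain.
    unmigrated_rows yields to-be-migrated rows; migrate_row is a
    hypothetical callback that moves one row to the second node set."""
    batch = []
    started = time.monotonic()
    for row in unmigrated_rows:
        batch.append(row)
        migrate_row(row)  # row stays locked for the life of this transaction
        over_rows = len(batch) >= MAX_ROWS_PER_TXN
        over_time = time.monotonic() - started >= MAX_SECONDS_PER_TXN
        if over_rows or over_time:
            yield batch           # commit this distributed transaction
            batch, started = [], time.monotonic()
    if batch:
        yield batch               # commit the final, partial batch

# Usage with dummy rows and a no-op migration callback:
rows = iter(range(25_000))
for txn in migrate_in_batches(rows, migrate_row=lambda r: None):
    print("committed transaction with", len(txn), "rows")
```

Each yielded batch corresponds to one committed distributed transaction, so a fault or rollback affects at most one bounded batch.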
For example, the memory502stores an instruction used to implement the steps of the data redistribution method. The memory502may be any one or any combination of the following storage media: a nonvolatile memory (for example, a read-only memory (ROM), a solid-state drive (SSD), a hard disk drive (HDD), or an optical disc) and a volatile memory. The communications interface503may be any one or any combination of the following components with a network access function, such as a network interface (for example, an Ethernet interface) and a wireless network interface card. The communications interface503is configured to perform data communication between the computing device500and another computing device or terminal. The processor501, the memory502, and the communications interface503may be connected through the bus504. In this way, through the bus504, the processor501may access the memory502, and may further exchange data with another computing device or terminal through the communications interface503. In this application, the computing device500executes the computer instruction in the memory502, so that the computing device500is enabled to implement the data redistribution method provided in this application, or the computing device500is enabled to deploy a data redistribution apparatus. In an example embodiment, a non-transitory computer-readable storage medium including an instruction is further provided, for example, a memory including an instruction. The instruction may be executed by a processor of a server to complete the data redistribution method shown in the embodiments of the present application. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like. An embodiment of this application provides a distributed database system, including a management node and a data node. The management node includes the data redistribution apparatus40or the computing device500. All or some of the foregoing embodiments may be implemented through software, hardware, firmware, or any combination thereof. When the software is used to implement the embodiments, all or some of the embodiments may be implemented in a form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are all or partially generated. The computer may be a general-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or may be transmitted from a computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by the computer, or a data storage device, such as a server or a data center, integrating one or more usable media.
The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium, a semiconductor medium (for example, a solid-state drive), or the like. In this application, the terms “first” and “second” are merely intended for description, and shall not be understood as an indication or implication of relative importance. The term “a plurality of” means two or more, unless otherwise expressly limited. A refers to B, which means that A is the same as B or A is a simple variant of B. A person of ordinary skill in the art may understand that all or some of the steps of the embodiments may be implemented by hardware or a program instructing related hardware. The program may be stored in a computer-readable storage medium. The storage medium may include: a read-only memory, a magnetic disk, or an optical disc. The foregoing descriptions are merely example embodiments of this application, but are not intended to limit this application. Any modification, equivalent replacement, or improvement made without departing from the spirit and principle of this application should fall within the protection scope of this application.
DETAILED DESCRIPTION Illustrative embodiments of the present disclosure will be described herein with reference to exemplary communication, storage and processing devices. It is to be appreciated, however, that the disclosure is not restricted to use with the particular illustrative configurations shown. Aspects of the disclosure provide methods and apparatus for reporting space savings due to pattern matching in storage systems. Pattern matching detection is an efficiency feature that allows users to store information using less storage capacity than would be used without pattern matching. To this end, it should be understood that pattern matching detection may consist of recognizing patterns as they are written in the data storage system. For example, write IO data may be compared with a set of buffers in memory, whereby these buffers are either statically pre-defined (e.g. all zeroes) or dynamic buffers reflecting the most frequently used patterns of actual data in a file system (FS). In one or more embodiments, the pattern detection may be done at a defined granularity (e.g., 8 KB) and on aligned FS blocks, and if a predefined pattern is detected as part of a write operation, a flag may be set in the associated metadata to indicate the detected pattern. The data itself does not need to be stored in such an event. In addition, in some embodiments, at least one pattern counter may be incremented each time a predefined pattern is detected to produce a pattern matching count. The count may in turn be used to determine the data reduction attributed to pattern matching. Details regarding space accounting and space savings reporting due to pattern matching can be found in U.S. patent application Ser. No. 15/664,255, filed Jul. 31, 2017, “REPORTING OF SPACE SAVINGS DUE TO PATTERN MATCHING IN STORAGE SYSTEMS”, incorporated by reference herein in its entirety. As will be understood by those skilled in the art, the said Patent Application discusses distinguishing pattern matching savings from snap savings resulting from block sharing between snaps and the primary, in order to enable reporting of a more accurate picture of the savings from pattern matching. However, the said Patent Application only reports a best known lower bound of space savings attributed to pattern matching, as the exact space savings value may not be known and/or may be difficult to determine due to the difficulty in separating the pattern savings from the snap savings. As discussed in the said Patent Application, the ‘S’ (‘Shared’) bit in the mapping pointer is utilized such that the ‘S’ bit is set in the mapping pointer (MP) of a snap when the snap is taken. Furthermore, the pattern matched counter is decremented only when a pattern MP without the ‘S’ bit set is overwritten on the primary. As a result of this approach, the counter may be decremented prematurely, resulting in the counter value being lower than the actual number of patterns matched (i.e., the lower bound of space savings). It should be understood that by “prematurely” it is meant that the counter may be decremented unnecessarily while there are still patterns. The techniques as discussed herein enhance the disclosure in the said Patent Application by tracking upper and lower bounds that define a range of data reduction attributed to pattern matching, such that the space savings may be represented by the range and/or a single value chosen within the range (e.g. the midpoint of the range).
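As a rough sketch of the detection and accounting mechanism described above (granularity-aligned comparison against predefined buffers, a metadata flag, a counter incremented on matches and decremented on unflagged overwrites), consider the following Python fragment. The buffer contents, names, and in-memory metadata dictionary are illustrative assumptions, not the storage system's actual code:

```python
BLOCK_SIZE = 8 * 1024  # detection granularity, e.g. an aligned 8 KB FS block

# Statically pre-defined patterns; a real system could add dynamic buffers
# reflecting the most frequently written data.
PATTERNS = {
    0: bytes(BLOCK_SIZE),            # all zeroes
    1: b"\xff" * BLOCK_SIZE,         # all ones
}

pattern_matched_count = 0            # incremented per detected pattern

def write_block(block: bytes, metadata: dict) -> bool:
    """Return True when the block matched a pattern and need not be stored;
    the match is recorded in the block's metadata instead."""
    global pattern_matched_count
    assert len(block) == BLOCK_SIZE, "detection runs on aligned FS blocks"
    for pattern_id, pattern in PATTERNS.items():
        if block == pattern:
            metadata["pattern_flag"] = pattern_id
            pattern_matched_count += 1
            return True              # data itself is not stored
    return False                     # fall through to the normal write path

def overwrite_block(metadata: dict) -> None:
    """On overwrite or deallocation, undo the accounting unless the MP
    carries a snapshot flag (mirroring the 'S'-bit rule described above)."""
    global pattern_matched_count
    if "pattern_flag" in metadata and not metadata.get("snap_flag"):
        pattern_matched_count -= 1
        del metadata["pattern_flag"]

md = {}
print(write_block(bytes(BLOCK_SIZE), md), md, pattern_matched_count)
```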
In at least one embodiment, two counters are disclosed to track writes substituted by patterns per file system. For example, a patternZeroMatched counter may count data blocks matching the zero pattern and a patternNonZeroMatched counter may count data blocks matching non-zero patterns. As a result of said counters, the upper bound of the range of data reduction attributed to pattern saving may be determined by summing the counts in the respective patternZeroMatched and patternNonZeroMatched counters. Additionally, the lower bound of the range of data reduction attributed to pattern saving may be determined by dividing the upper bound by the number of total user files (nFiles), wherein nFiles corresponds to a total number of primary and replica files, or the sum of primary and replica files. It should be noted that there may be one primary file and the rest are replica files, so nFiles may in such cases be summarized as nFiles=nReplicas+1. Furthermore, the average of the upper and lower bounds of pattern saving may be determined such that the said average can be used for reporting of space savings. So, as should be appreciated from the foregoing, it is possible to either report to the user both lower and upper bounds (recognizing that the exact value cannot be known) or report some value in between the two bounds (e.g. the midpoint). Thus, in light of the above, and following on from the description, the determination of the upper and lower bounds and the average may be summarized as follows:

Upper Bound Pattern Saving=patternZeroMatched+patternNonZeroMatched

Lower Bound Pattern Saving=(patternZeroMatched+patternNonZeroMatched)/nFiles

Average Pattern Saving=(Upper Bound Pattern Saving+Lower Bound Pattern Saving)/2

It should be understood that the sum of the pattern counters is considered to be the upper bound of pattern saving because the snap saving contributes some pattern counts due to duplicated pattern MPs during write splits. For example, a leaf indirect block may have multiple pattern mapping pointers. If the block is split (i.e., un-sharing), the pattern matched counter is increased by the number of pattern MPs found in that block. As a result, the said sum of the pattern counters is deemed to be the upper bound of the range of data reduction attributed to pattern saving. Additionally, it should be understood that the lower bound is considered to equate to the upper bound divided by the number of total user files (nFiles), which equals the number of primary and replica files. If there is only one primary file (no snaps), then the pattern matched counter is exact (upper bound=lower bound). In a generic case, the upper bound is divided by nFiles and the resulting number is deemed the lower bound, because the actual number of patterns matched cannot be lower than the resulting number. Only when there is no sharing whatsoever (all blocks are split) can the actual number be equal to the lower bound. For example, this may occur in the self-evident case when there is only 1 file (no snaps). Also, it may occur if pattern mapping pointers are distributed evenly among files and all those mapping pointers are not shared (they don't reside on shared leaf indirect blocks). Suppose a file has 10 pattern mapping pointers, all residing in the same leaf indirect block. Then a snap is taken. Then the indirect block is split by overwriting some real data (not the pattern). This will cause the PatternsMatch counter to become 20.
Dividing 20 by the number of files results in 10, which is also the actual number of patterns (excluding snap savings). So, this is an example of an extreme case where the lower bound equals the actual number. In most other cases, the lower bound will be lower than the actual number. In general, only user I/O can change the pattern matched counters, but asynchronous snap delete may also be treated in at least one embodiment as another type of user I/O. Snap delete is a long operation that can take hours to complete, so it is done asynchronously in the background. That is, snap delete has two stages: synchronous and asynchronous. During the synchronous stage, the client is notified about the snap deletion, and during the asynchronous part blocks are de-allocated. It is similar to user I/Os because during some user I/Os blocks can be de-allocated as well (e.g. punch holes). Furthermore, FSR and FS snap creation will never change the pattern matched counter. FSR is the FS Reorganizer; it is an underlying technology used by defragmentation and volume optimizers. Snap creation is just taking a snap of the storage object. A snap preserves the state of the object at the time it was taken. These operations don't change space-savings counters due to design decisions. It should also be understood that pattern matching detection may also co-exist with other data reduction techniques (e.g., inline compression, deduplication, etc.) and can be considered to be part of these data reduction techniques from the user's perspective or a stand-alone method. In at least one embodiment, the pattern counters and the compression counter will be handled separately such that pattern MPs will not be counted in the nAUsNeededIfNoCompression counter. However, in some embodiments, pattern saving from ILPD will be reported as part of compression savings, as follows: Compression Saving=Compression Saving+Pattern Saving. But, in general, as mentioned above, pattern matching savings can be included in compression or deduplication savings or reported on their own. Furthermore, and similar to normal MPs, Pattern MPs will be counted in di_blocks in the dinode64 structure. As di_uniqueDataBlocks is counted in thick vVols, Pattern MPs will never impact the di_uniqueDataBlocks counter. Additionally, and alternatively, the lower bound can be calculated as described in the said Patent Application, and the maximum of the two can be taken. Each lower bound is guaranteed to be no greater than the actual value. Therefore, the bigger lower bound may be considered the better one, as it will be closer to the actual value. Thus, if there are two lower bounds (calculated differently) and the maximum is taken, the result is the better lower bound. FIG.1depicts an example embodiment of a system that may be used in connection with performing the techniques described herein. Here, multiple host computing devices (“hosts”)110, shown as devices110(1) through110(N), access a data storage system116over a network114. The data storage system116includes a storage processor, or “SP,”120and storage180. In one example, the storage180includes multiple disk drives, such as magnetic disk drives, electronic flash drives, optical drives, and/or other types of drives. Such disk drives may be arranged in RAID (Redundant Array of Independent/Inexpensive Disks) groups, for example, or in any other suitable way. In an example, the data storage system116includes multiple SPs, like the SP120(e.g., a second SP,120a).
The SPs may be provided as circuit board assemblies, or “blades,” that plug into a chassis that encloses and cools the SPs. The chassis may have a backplane for interconnecting the SPs, and additional connections may be made among SPs using cables. No particular hardware configuration is required, however, as any number of SPs, including a single SP, may be provided and the SP120can be any type of computing device capable of processing host IOs. The network114may be any type of network or combination of networks, such as a storage area network (SAN), a local area network (LAN), a wide area network (WAN), the Internet, and/or some other type of network or combination of networks, for example. The hosts110(1-N) may connect to the SP120using various technologies, such as Fibre Channel, iSCSI (Internet Small Computer Systems Interface), NFS (Network File System), SMB (Server Message Block) 3.0, and CIFS (Common Internet File System), for example. Any number of hosts110(1-N) may be provided, using any of the above protocols, some subset thereof, or other protocols besides those shown. As is known, Fibre Channel and iSCSI are block-based protocols, whereas NFS, SMB 3.0, and CIFS are file-based protocols. The SP120is configured to receive IO requests112(1-N) according to block-based and/or file-based protocols and to respond to such IO requests112(1-N) by reading and/or writing the storage180.
Although certain software constructs are specifically shown and described, it is understood that the memory130typically includes many other software constructs, which are not shown, such as an operating system, various applications, processes, and daemons. As further shown inFIG.1, the memory130“includes,” i.e., realizes by execution of software instructions, a cache132, an inline compression (ILC) engine140, an inline decompression (ILDC) engine150, and a data object170. A compression policy142provides control input to the ILC engine140. A decompression policy (not shown) provides control input to the ILDC engine150. Both the compression policy142and the decompression policy receive performance data160, which describes a set of operating conditions in the data storage system116. In an example, the data object170is a host-accessible data object, such as a LUN, a file system, or a virtual machine disk (e.g., a VVol (Virtual Volume), available from VMWare, Inc. of Palo Alto, CA). The SP120exposes the data object170to hosts110for reading, writing, and/or other data operations. In one particular, non-limiting example, the SP120runs an internal file system and implements the data object170within a single file of that file system. In such an example, the SP120includes mapping (not shown) to convert read and write requests from hosts110(e.g., IO requests112(1-N)) to corresponding reads and writes to the file in the internal file system. As further shown inFIG.1, ILC engine140includes a software component (SW)140aand a hardware component (HW)140b. The software component140aincludes a compression method, such as an algorithm, which may be implemented using software instructions. Such instructions may be loaded in memory and executed by processing units124, or some subset thereof, for compressing data directly, i.e., without involvement of the compression hardware126. In comparison, the hardware component140bincludes software constructs, such as a driver and API (application programmer interface) for communicating with compression hardware126, e.g., for directing data to be compressed by the compression hardware126. In some examples, either or both components140aand140bsupport multiple compression algorithms. The compression policy142and/or a user may select a compression algorithm best suited for current operating conditions, e.g., by selecting an algorithm that produces a high compression ratio for some data, by selecting an algorithm that executes at high speed for other data, and so forth. For decompressing data, the ILDC engine150includes a software component (SW)150aand a hardware component (HW)150b. The software component150aincludes a decompression algorithm implemented using software instructions, which may be loaded in memory and executed by any of processing units124for decompressing data in software, without involvement of the compression hardware126. The hardware component150bincludes software constructs, such as a driver and API for communicating with compression hardware126, e.g., for directing data to be decompressed by the compression hardware126. Either or both components150aand150bmay support multiple decompression algorithms. In some examples, the ILC engine140and the ILDC engine150are provided together in a single set of software objects, rather than as separate objects, as shown. In one example operation, hosts110(1-N) issue IO requests112(1-N) to the data storage system116to perform reads and writes of data object170.
SP120receives the IO requests112(1-N) at communications interface(s)122and passes them to memory130for further processing. Some IO requests112(1-N) specify data writes112W, and others specify data reads112R, for example. Cache132receives write requests112W and stores data specified thereby in cache elements134. In a non-limiting example, the cache132is arranged as a circular data log, with data elements134that are specified in newly-arriving write requests112W added to a head and with further processing steps pulling data elements134from a tail. In an example, the cache132is implemented in DRAM (Dynamic Random Access Memory), the contents of which are mirrored between SPs120and120aand persisted using batteries. In an example, SP120may acknowledge writes112W back to originating hosts110once the data specified in those writes112W are stored in the cache132and mirrored to a similar cache on SP120a. It should be appreciated that the data storage system116may host multiple data objects, i.e., not only the data object170, and that the cache132may be shared across those data objects. When the SP120is performing writes, the ILC engine140selects between the software component140aand the hardware component140bbased on input from the compression policy142. For example, the ILC engine140is configured to steer incoming write requests112W either to the software component140afor performing software compression or to the hardware component140bfor performing hardware compression. In an example, cache132flushes to the respective data objects, e.g., on a periodic basis. For example, cache132may flush a given uncompressed element134U1to data object170via ILC engine140. In accordance with compression policy142, ILC engine140selectively directs data in element134U1to software component140aor to hardware component140b. In this example, compression policy142selects software component140a. As a result, software component140areceives the data of element134U1and applies a software compression algorithm to compress the data. The software compression algorithm resides in the memory130and is executed on the data of element134U1by one or more of the processing units124. Software component140athen directs the SP120to store the resulting compressed data134C1(the compressed version of the data in element134U1) in the data object170. Storing the compressed data134C1in data object170may involve both storing the data itself and storing any metadata structures required to support the data134C1, such as block pointers, a compression header, and other metadata. It should be appreciated that this act of storing data134C1in data object170provides the first storage of such data in the data object170. For example, there was no previous storage of the data of element134U1in the data object170. Rather, the compression of data in element134U1proceeds “inline,” in one or more embodiments, because it is conducted in the course of processing the first write of the data to the data object170. Continuing to another write operation, cache132may proceed to flush a given element134U2to data object170via ILC engine140, which, in this case, directs data compression to hardware component140b, again in accordance with policy142. As a result, hardware component140bdirects the data in element134U2to compression hardware126, which obtains the data and performs a high-speed hardware compression on the data.
Hardware component140bthen directs the SP120to store the resulting compressed data134C2(the compressed version of the data in element134U2) in the data object170. Compression of data in element134U2also takes place inline, rather than in the background, as there is no previous storage of data of element134U2in the data object170. In an example, directing the ILC engine140to perform hardware or software compression further entails specifying a particular compression algorithm. The algorithm to be used in each case is based on compression policy142and/or specified by a user of the data storage system116. Further, it should be appreciated that compression policy142may operate ILC engine140in a pass-through mode, i.e., one in which no compression is performed. Thus, in some examples, compression may be avoided altogether if the SP120is too busy to use either hardware or software compression. In some examples, storage180is provided in the form of multiple extents, with two extents E1 and E2 particularly shown. In an example, the data storage system116monitors a “data temperature” of each extent, i.e., a frequency of read and/or write operations performed on each extent, and selects compression algorithms based on the data temperature of extents to which writes are directed. For example, if extent E1 is “hot,” meaning that it has a high data temperature, and the data storage system116receives a write directed to E1, then compression policy142may select a compression algorithm that executes at a high speed for compressing the data directed to E1. However, if extent E2 is “cold,” meaning that it has a low data temperature, and the data storage system116receives a write directed to E2, then compression policy142may select a compression algorithm that achieves a high compression ratio for compressing data directed to E2. When SP120performs reads, the ILDC engine150selects between the software component150aand the hardware component150bbased on input from the decompression policy and also based on compatible algorithms. For example, if data was compressed using a particular software algorithm for which no corresponding decompression algorithm is available in hardware, the ILDC engine150may steer the compressed data to the software component150a, as that is the only component equipped with the algorithm needed for decompressing the data. However, if both components150aand150bprovide the necessary algorithm, then selection among components150aand150bmay be based on decompression policy. To process a read request112R directed to compressed data136C, the ILDC engine150accesses metadata of the data object170to obtain a header for the compressed data136C. The compression header specifies the particular algorithm that was used to compress the data136C. The ILDC engine150may then check whether the algorithm is available to software component150a, to hardware component150b, or to both. If the algorithm is available only to one or the other of components150aand150b, the ILDC engine150directs the compressed data136C to the component that has the necessary algorithm. However, if the algorithm is available to both components150aand150b, the ILDC engine150may select between components150aand150bbased on input from the decompression policy. If the software component150ais selected, the software component150aperforms the decompression, i.e., by executing software instructions on one or more of the set of processors124.
If the hardware component150bis selected, the hardware component150bdirects the compression hardware126to decompress the data136C. The SP120then returns the resulting uncompressed data136U to the requesting host110. It should be appreciated that the ILDC engine150is not required to use software component150ato decompress data that was compressed by the software component140aof the ILC engine140. Nor is it required that the ILDC engine150use hardware component150bto decompress data that was compressed by the hardware component140b. Rather, the component150aor150bmay be selected flexibly as long as algorithms are compatible. Such flexibility may be especially useful in cases of data migration. For example, consider a case where data object170is migrated to a second data storage system (not shown). If the second data storage system does not include compression hardware126, then any data compressed using hardware on data storage system116may be decompressed on the second data storage system using software. With the arrangement ofFIG.1, the SP120intelligently directs compression and other data reduction tasks to software or to hardware based on operating conditions in the data storage system116. For example, if the set of processing units124are already busy but the compression hardware126is not, the compression policy142can direct more compression tasks to hardware component140b. Conversely, if compression hardware126is busy but the set of processing units124are not, the compression policy142can direct more compression tasks to software component140a. Decompression policy may likewise direct decompression tasks based on operating conditions, at least to the extent that direction to hardware or software is not already dictated by the algorithm used for compression. In this manner, the data storage system116is able to perform inline compression using both hardware and software techniques, leveraging the capabilities of both while applying them in proportions that result in the best overall performance. In such an embodiment in which element120ofFIG.1is implemented using one or more data storage systems, each of the data storage systems may include code thereon for performing the techniques as described herein. Servers or host systems, such as110(1)-110(N), provide data and access control information through channels to the storage systems, and the storage systems may also provide data to the host systems through the channels. The host systems may not address the disk drives of the storage systems directly, but rather access to data may be provided to one or more host systems from what the host systems view as a plurality of logical devices or logical volumes (LVs). The LVs may or may not correspond to the actual disk drives. For example, one or more LVs may reside on a single physical disk drive. Data in a single storage system may be accessed by multiple hosts allowing the hosts to share the data residing therein. An LV or LUN may be used to refer to the foregoing logically defined devices or volumes. The data storage system may be a single unitary data storage system, such as a single data storage array, including two storage processors or compute processing units. Techniques herein may be more generally used in connection with any one or more data storage systems each including a different number of storage processors than as illustrated herein.
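The hardware/software steering and the temperature-driven algorithm choice described above lend themselves to a compact sketch. The following fragment is illustrative only; the policy inputs, thresholds, and algorithm names are hypothetical stand-ins for the compression policy142, the performance data160, and components140a/140b:

```python
from dataclasses import dataclass

@dataclass
class PerformanceData:          # stand-in for performance data 160
    cpu_busy: float             # utilization of the processing units, 0..1
    hw_busy: float              # utilization of the compression hardware, 0..1

def select_compression(perf: PerformanceData, extent_is_hot: bool):
    """Return (component, algorithm) the way a compression policy might:
    steer to the less busy path, and trade speed against ratio by data
    temperature. Purely illustrative, not the product's actual policy."""
    if perf.cpu_busy > 0.9 and perf.hw_busy > 0.9:
        return ("pass-through", None)      # too busy: skip compression
    component = "hardware" if perf.hw_busy < perf.cpu_busy else "software"
    # Hot extents favor a fast algorithm; cold extents favor a high ratio.
    algorithm = "lz4-fast" if extent_is_hot else "deflate-high"
    return (component, algorithm)

print(select_compression(PerformanceData(cpu_busy=0.8, hw_busy=0.3),
                         extent_is_hot=True))    # ('hardware', 'lz4-fast')
```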
The data storage system116may be a data storage array, such as a Unity™, a VNX™ or VNXe™ data storage array by EMC Corporation of Hopkinton, Massachusetts, including a plurality of data storage devices116and at least two storage processors120a. Additionally, the two storage processors120amay be used in connection with failover processing when communicating with a management system for the storage system. Client software on the management system may be used in connection with performing data storage system management by issuing commands to the data storage system116and/or receiving responses from the data storage system116over a connection. In one embodiment, the management system may be a laptop or desktop computer system. The particular data storage system as described in this embodiment, or a particular device thereof, such as a disk, should not be construed as a limitation. Other types of commercially available data storage systems, as well as processors and hardware controlling access to these particular devices, may also be included in an embodiment. In some arrangements, the data storage system116provides block-based storage by storing the data in blocks of logical storage units (LUNs) or volumes and addressing the blocks using logical block addresses (LBAs). In other arrangements, the data storage system116provides file-based storage by storing data as files of a file system and locating file data using inode structures. In yet other arrangements, the data storage system116stores LUNs and file systems, stores file systems within LUNs, and so on. As further shown inFIG.1, the memory130includes a file system and a file system manager162. A file system is implemented as an arrangement of blocks, which are organized in an address space. Each of the blocks has a location in the address space, identified by FSBN (file system block number). Further, such address space in which blocks of a file system are organized may be organized in a logical address space where the file system manager162further maps respective logical offsets for respective blocks to physical addresses of respective blocks at specified FSBNs. In some cases, data to be written to a file system are directed to blocks that have already been allocated and mapped by the file system manager162, such that the data writes prescribe overwrites of existing blocks. In other cases, data to be written to a file system do not yet have any associated physical storage, such that the file system must allocate new blocks to the file system to store the data. Further, for example, FSBN may range from zero to some large number, with each value of FSBN identifying a respective block location. The file system manager162performs various processing on a file system, such as allocating blocks, freeing blocks, maintaining counters, and scavenging for free space. In at least one embodiment of the current technique, an address space of a file system may be provided in multiple ranges, where each range is a contiguous range of FSBNs (File System Block Number) and is configured to store blocks containing file data. In addition, a range includes file system metadata, such as inodes, indirect blocks (IBs), and virtual block maps (VBMs), for example, as discussed further below in conjunction withFIG.2. As is known, inodes are metadata structures that store information about files and may include pointers to IBs. IBs include pointers that point either to other IBs or to data blocks. 
IBs may be arranged in multiple layers, forming IB trees, with leaves of the IB trees including block pointers that point to data blocks. Together, the leaf IBs of a file define the file's logical address space, with each block pointer in each leaf IB specifying a logical address into the file. Virtual block maps (VBMs) are structures placed between block pointers of leaf IBs and respective data blocks to provide data block virtualization. The term “VBM” as used herein describes a metadata structure that has a location in a file system that can be pointed to by other metadata structures in the file system and that includes a block pointer to another location in a file system, where a data block or another VBM is stored. However, it should be appreciated that data and metadata may be organized in other ways, or even randomly, within a file system. The particular arrangement described above herein is intended merely to be illustrative. Further, in at least one embodiment of the current technique, ranges associated with an address space of a file system may be of any size and of any number. In some examples, the file system manager162organizes ranges in a hierarchy. For instance, each range may include a relatively small number of contiguous blocks, such as 16 or 32 blocks, for example, with such ranges provided as leaves of a tree. Looking up the tree, ranges may be further organized in CG (cylinder groups), slices (units of file system provisioning, which may be 256 MB or 1 GB in size, for example), groups of slices, and the entire file system, for example. Although ranges as described above herein apply to the lowest level of the tree, the term “ranges” as used herein may refer to groupings of contiguous blocks at any level. In at least one embodiment of the technique, hosts110(1-N) issue IO requests112(1-N) to the data storage system116. The SP120receives the IO requests112(1-N) at the communication interfaces122and initiates further processing. Such processing may include, for example, performing read and write operations on a file system, creating new files in the file system, deleting files, and the like. Over time, a file system changes, with new data blocks being allocated and allocated data blocks being freed. In addition, the file system manager162also tracks freed storage extents. In an example, storage extents are versions of block-denominated data, which are compressed down to sub-block sizes and packed together in multi-block segments. Further, a file system operation may cause a storage extent in a range to be freed, e.g., in response to a punch-hole or write-split operation. Further, a range may have a relatively large number of freed fragments but may still be a poor candidate for free-space scavenging if it has a relatively small number of allocated blocks. With one or more candidate ranges identified, the file system manager162may proceed to perform free-space scavenging on such range or ranges. Such scavenging may include, for example, liberating unused blocks from segments (e.g., after compacting out any unused portions), moving segments from one range to another to create free space, and coalescing free space to support contiguous writes and/or to recycle storage resources by returning such resources to a storage pool. Thus, file system manager162may scavenge free space, such as by performing garbage collection, space reclamation, and/or free-space coalescing.
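A minimal data-structure sketch may help picture the mapping chain described above (a leaf IB block pointer referencing a VBM, which in turn locates an extent), and previews the segment VBM arrangement detailed next in connection withFIG.2. All names and values here are illustrative assumptions; real leaf IBs hold on the order of 1024 block pointers:

```python
from dataclasses import dataclass, field

@dataclass
class SegmentVBM:
    segment_fsbn: int                  # FSBN of the multi-block segment
    extent_list: dict = field(default_factory=dict)  # addr -> (offset, weight)

@dataclass
class BlockPointer:
    vbm: SegmentVBM                    # indirection providing block virtualization
    extent: str                        # logical address of the extent, e.g. "A"
    weight: int                        # reference weight used for block sharing

@dataclass
class LeafIB:
    pointers: list = field(default_factory=list)  # one slot per logical offset

# Resolve one logical offset: leaf IB -> block pointer -> VBM -> segment offset.
vbm = SegmentVBM(segment_fsbn=241, extent_list={"A": (0, 10)})
leaf = LeafIB(pointers=[BlockPointer(vbm=vbm, extent="A", weight=10)])

bp = leaf.pointers[0]
offset, weight = bp.vbm.extent_list[bp.extent]
print(bp.vbm.segment_fsbn, offset, weight)    # 241 0 10
```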
As shown inFIG.1, the data storage system116further comprises a pattern matching module152that implements the pattern matching techniques described herein. As discussed further below in conjunction withFIGS.3,5and6, in one or more embodiments, the exemplary pattern matching module152compares a given allocation unit to a pattern matching list300identifying one or more predefined patterns, such as an all-zero pattern. In addition, as discussed further below in conjunction withFIGS.4to6, when a given allocation unit matches one or more predefined patterns, at least one pattern flag is set in the mapping pointer of the allocation unit, and one or more pattern counters400are incremented in a super block (SB) or other file system metadata. Referring now toFIG.2, shown is a more detailed representation of components that may be included in an embodiment using the techniques herein. As shown inFIG.2, a segment250that stores data of a file system is composed from multiple data blocks260. Here, exemplary segment250is made up of at least ten data blocks260(1) through260(10); however, the number of data blocks per segment may vary. In an example, the data blocks260are contiguous, meaning that they have consecutive FSBNs in a file system address space for the file system. Although segment250is composed from individual data blocks260, the file system treats the segment250as one continuous space. Compressed storage extents252, i.e., Data-A through Data-D, etc., are packed inside the segment250. In an example, each of storage extents252is initially a block-sized set of data, which has been compressed down to a smaller size. An 8-block segment may store the compressed equivalent of 12 or 16 blocks or more of uncompressed data, for example. The amount of compression depends on the compressibility of the data and the particular compression algorithm used. Different compressed storage extents252typically have different sizes. Further, for each storage extent252in the segment250, a corresponding weight is maintained, the weight arranged to indicate whether the respective storage extent252is currently part of any file in a file system by indicating whether other block pointers in the file system point to that block pointer. The segment250has an address (e.g., FSBN241) in the file system, and a segment VBM (Virtual Block Map)240points to that address. For example, segment VBM240stores a segment pointer241, which stores the FSBN of the segment250. By convention, the FSBN of segment250may be the FSBN of its first data block, i.e., block260(1). Although not shown, each block260(1)-260(10) may have its respective per-block metadata (BMD), which acts as representative metadata for the respective block260(1)-260(10), and which includes a backward pointer to the segment VBM240. As further shown inFIG.2, the segment VBM240stores information regarding the number of extents243in the segment250and an extent list244. The extent list244acts as an index into the segment250, by associating each compressed storage extent252, identified by logical address (e.g., LA values A through D, etc.), with a corresponding location within the segment250(e.g., Location values Loc-A through Loc-D, etc., which indicate physical offsets) and a corresponding weight (e.g., Weight values WA through WD, etc.). The weights provide indications of whether the associated storage extents are currently in use by any files in the file system.
For example, a positive number for a weight may indicate that at least one file in the file system references the associated storage extent252. Conversely, a weight of zero may mean that no file in the file system currently references that storage extent252. It should be appreciated, however, that various numbering schemes for reference weights may be used, such that positive numbers could easily be replaced with negative numbers and zero could easily be replaced with some different baseline value. The particular numbering scheme described herein is therefore intended to be illustrative rather than limiting. In an example, the weight (e.g., Weight values WA through WD, etc.) for a storage extent252reflects a sum, or “total distributed weight,” of the weights of all block pointers in the file system that point to the associated storage extent. In addition, the segment VBM240may include an overall weight242, which reflects a sum of all weights of all block pointers in the file system that point to extents tracked by the segment VBM240. Thus, in general, the value of overall weight242should be equal to the sum of all weights in the extent list244. Various block pointers212,222, and232are shown to the left inFIG.2. In an example, each block pointer is disposed within a leaf IB (Indirect Block), also referred to herein as a mapping pointer, which performs mapping of logical addresses for a respective file to corresponding physical addresses in the file system. Here, leaf IB210is provided for mapping data of a first file (F1) and contains block pointers212(1) through212(3). Also, leaf IB220is provided for mapping data of a second file (F2) and contains block pointers222(1) through222(3). Further, leaf IB230is provided for mapping data of a third file (F3) and contains block pointers232(1) and232(2). Each of leaf IBs210,220, and230may include any number of block pointers, such as 1024 block pointers each; however, only a small number are shown for ease of illustration. Although a single leaf IB210is shown for file-1, the file-1 may have many leaf IBs, which may be arranged in an IB tree for mapping a large logical address range of the file to corresponding physical addresses in a file system to which the file belongs. A “physical address” is a unique address within a physical address space of the file system. Each of block pointers212,222, and232has an associated pointer value and an associated weight. For example, block pointers212(1) through212(3) have pointer values PA1through PC1 and weights WA1 through WC1, respectively, block pointers222(1) through222(3) have pointer values PA2 through PC2 and weights WA2 through WC2, respectively, and block pointers232(1) through232(2) have pointer values PD through PE and weights WD through WE, respectively. Regarding files F1 and F2, pointer values PA1 and PA2 point to segment VBM240and specify the logical extent for Data-A, e.g., by specifying the FSBN of segment VBM240and an offset that indicates an extent position. In a like manner, pointer values PB1 and PB2 point to segment VBM240and specify the logical extent for Data-B, and pointer values PC1 and PC2 point to segment VBM240and specify the logical extent for Data-C. It can thus be seen that block pointers212and222share compressed storage extents Data-A, Data-B, and Data-C. For example, files F1 and F2 may be snapshots in the same version set. Regarding file F3, pointer value PD points to Data-D stored in segment250and pointer value PE points to Data-E stored outside the segment250.
File F3 does not appear to have a snapshot relationship with either of files F1 or F2. If one assumes that data block sharing for the storage extents252is limited to that shown, then, in an example, the following relationships may hold: WA=WA1+WA2; WB=WB1+WB2; WC=WC1+WC2; WD=WD; and Weight242=ΣWi (for i=a through d, plus any additional extents252tracked by extent list244). The detail shown in segment450indicates an example layout252of data items. In at least one embodiment of the current technique, each compression header is a fixed-size data structure that includes fields for specifying compression parameters, such as compression algorithm, length, CRC (cyclic redundancy check), and flags. In some examples, the header specifies whether the compression was performed in hardware or in software. Further, for instance, Header-A can be found at Loc-A and is immediately followed by compressed Data-A. Likewise, Header-B can be found at Loc-B and is immediately followed by compressed Data-B. Similarly, Header-C can be found at Loc-C and is immediately followed by compressed Data-C. For performing writes, the ILC engine140generates each compression header (Header-A, Header-B, Header-C, etc.) when performing compression on data blocks260, and directs a file system to store the compression header together with the compressed data. The ILC engine140generates different headers for different data, with each header specifying a respective compression algorithm. For performing data reads, a file system looks up the compressed data, e.g., by following a pointer212,222,232in the leaf IB210,220,230to the segment VBM240, which specifies a location within the segment250. A file system reads a header at the specified location, identifies the compression algorithm that was used to compress the data, and then directs the ILDC150to decompress the compressed data using the specified algorithm. In at least one embodiment of the current technique, for example, upon receiving a request to overwrite and/or update data of data block (Data-D) pointed to by block pointer232(a), a determination is made as to whether the data block (Data-D) has been shared among any other file. Further, a determination is made as to whether the size of the compressed extent (also referred to herein as “allocation unit”) storing contents of Data-D in segment250can accommodate the updated data. Based on the determination, the updated data is written in a compressed format to the compressed extent for Data-D in the segment250instead of allocating another allocation unit in a new segment. For additional details regarding the data storage system ofFIGS.1and2, see, for example, U.S. patent application Ser. No. 15/393,331, filed Dec. 29, 2016, “Managing Inline Data Compression in Storage Systems,” incorporated by reference herein in its entirety. FIG.3is a sample table illustrating an exemplary implementation of the predefined pattern matching list300ofFIG.1, in further detail, according to one embodiment of the disclosure. As shown inFIG.3, the exemplary predefined pattern matching list300comprises two representative patterns, all-zeroes and all-ones. In one or more embodiments, the representative patterns have an assigned pattern number identifier. Additional patterns can be added to the exemplary predefined pattern matching list300, as would be apparent to a person of ordinary skill in the art.
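Returning briefly to the weight bookkeeping above, the relationships WA=WA1+WA2 (and the overall weight242as a sum over the extent list) can be checked with a few lines of illustrative code; the weight values below are hypothetical:

```python
# Per-file block-pointer weights for the shared extents of FIG. 2.
pointer_weights = {
    "Data-A": [3, 7],    # WA1 (file F1) and WA2 (file F2)
    "Data-B": [5, 5],
    "Data-C": [2, 8],
    "Data-D": [6],       # referenced only by file F3
}

# Each extent's total distributed weight is the sum of the block-pointer
# weights that reference it; the VBM's overall weight sums the extent list.
extent_weights = {name: sum(ws) for name, ws in pointer_weights.items()}
overall_weight = sum(extent_weights.values())

print(extent_weights)   # {'Data-A': 10, 'Data-B': 10, 'Data-C': 10, 'Data-D': 6}
print(overall_weight)   # 36
```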
As noted above, the exemplary pattern matching module152implements the pattern matching techniques described herein, and compares a given allocation unit to the predefined patterns specified in the pattern matching list300, such as the all-zero pattern. FIG.4illustrates an exemplary implementation of the space savings counters400ofFIG.1in further detail, according to one embodiment of the disclosure. As shown inFIG.4, the exemplary space savings counters400comprise a first counter410that tracks the number of allocation units having an all-zero pattern, and a second counter420that tracks the number of allocation units having all other predefined patterns from the exemplary predefined pattern matching list300ofFIG.3. In one or more embodiments, discussed further below in conjunction withFIGS.5and6, one or more of the exemplary space savings counters400are incremented when a given allocation unit matches one or more predefined patterns. In some embodiments, exemplary space savings counters400are maintained on-disk by the space saving module152, for example, in the super block (SB) or other file system metadata. FIG.5comprises exemplary pseudo code500illustrating one or more processes that may be used in connection with the techniques described herein, according to an embodiment of the disclosure. As shown inFIG.5, the exemplary pseudo code500is initiated upon an IO update operation, such as write, punch-hole and/or deallocate operations. Initially, the PFDC aggregation logic of the file system manager162aggregates a set of allocation units (e.g., “data fragments,” “storage extents,” or “blocks”) during step510for pattern matching detection and optionally other data reduction techniques. A test is performed during step520to determine if the current IO update operation being processed comprises a snapshot creation operation. If the current IO update operation comprises a snapshot creation operation, then a snapshot flag is set in the mapping pointer of the new allocation unit corresponding to the snapshot. If it is determined during step530that the current IO update operation being processed comprises an overwrite or deallocation operation, then the appropriate pattern counter is decremented in the Super Block (or other file system metadata) unless the mapping pointer (e.g., leaf IB210) of the allocation unit has a snapshot flag set. If it is determined during step540that the current IO update operation being processed comprises a write operation, then the pattern matching module152determines if each allocation unit matches a predefined pattern from the predefined pattern matching list300(FIG.3). If a pattern match is found during step550, then the appropriate pattern flag is set in the mapping pointer (e.g., leaf IB210) of the allocation unit and the appropriate pattern counter410,420is incremented in the Super Block. FIG.6shows an example method600that may be carried out in connection with the system100. The method600is typically performed, for example, by the software constructs described in connection withFIG.1, which reside in the memory130of the storage processor120and are run by the processing unit(s)124. The various acts of method600may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in orders different from that illustrated, which may include performing some acts simultaneously.
At610, when a given allocation unit in a storage system matches one or more predefined patterns, then (i) setting a corresponding pattern flag for said given allocation unit, and (ii) incrementing at least one pattern counter. At step620, generating at least one snapshot of at least a portion of a file comprising said given allocation unit. At step630, determining, using at least one processing device (e.g., processing unit(s)124), a range of data reduction attributed to pattern matching based on said at least one pattern counter, wherein one extreme of said range of data reduction attributed to pattern matching excludes said one or more predefined patterns in said at least one snapshot. As discussed above, and in at least one embodiment, the one extreme of the range represents a lower bound of the range and determining the range of data reduction attributed to pattern matching comprises summing the respective counts in the at least one pattern counter to produce a summed total of counts. The said determination also comprises determining a total number of primary and replica files and dividing the summed total of counts by the total number of primary and replica files to produce the lower bound of the range. Additionally, the range comprises another extreme representing an upper bound of the range and determining the range of data reduction attributed to pattern matching comprises summing the respective counts in the at least one pattern counter to produce a summed total of counts that represents the upper bound of the range. Furthermore, the method600further comprises determining a pattern saving value representative of data reduction attributed to pattern matching such that the value is located within the range. For example, the pattern saving value is a midpoint between respective extremes that define the range or the average of the upper and lower bounds. The said at least one pattern counter discussed above with respect to the method600may comprise an all zeroes pattern counter and a second counter for one or more additional predefined patterns. In various embodiments, the space savings attributed to pattern matching detection can be reported as part of the compression space savings, as part of the deduplication space savings, or separately, by reporting only the pattern matching detection space savings. For a more detailed discussion of suitable techniques for reporting space savings due to compression and/or deduplication, see, for example, U.S. patent application Ser. No. 15/664,253, filed Jul. 31, 2017, entitled “Data Reduction Reporting in Storage Systems,” incorporated by reference herein in its entirety. One or more embodiments of the disclosure provide methods and apparatus for reporting space savings due to pattern matching detection. In one or more embodiments, space savings reporting techniques are provided that improve the accuracy of the space savings reporting attributable to pattern matching detection. The foregoing applications and associated embodiments should be considered as illustrative only, and numerous other embodiments can be configured using the techniques disclosed herein, in a wide variety of different applications. It should also be understood that the disclosed techniques for reporting space savings due to pattern matching detection, as described herein, can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device such as a computer.
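As one concrete, purely illustrative example of such a software program, the bounds determination recited in the method600might be coded as follows. The inputs correspond to the patternZeroMatched and patternNonZeroMatched counters and nFiles discussed earlier, and the worked example reproduces the 20-count, two-file case above:

```python
def pattern_saving_bounds(pattern_zero_matched: int,
                          pattern_non_zero_matched: int,
                          n_files: int):
    """Return (lower, upper, midpoint) of the range of data reduction
    attributed to pattern matching; n_files = nReplicas + 1."""
    upper = pattern_zero_matched + pattern_non_zero_matched
    lower = upper / n_files
    midpoint = (upper + lower) / 2
    return lower, upper, midpoint

# Worked example from the text: a snap plus a write split doubles ten
# pattern MPs to a count of 20; with nFiles = 2 the lower bound is 10.
print(pattern_saving_bounds(20, 0, n_files=2))   # (10.0, 20, 15.0)
```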
In various embodiments, the space savings attributed to pattern matching detection can be reported as part of the compression space savings, as part of the deduplication space savings, or separately, by reporting only the pattern matching detection space savings. For a more detailed discussion of suitable techniques for reporting space savings due to compression and/or deduplication, see, for example, U.S. patent application Ser. No. 15/664,253, filed Jul. 31, 2017, entitled “Data Reduction Reporting in Storage Systems,” incorporated by reference herein in its entirety. One or more embodiments of the disclosure provide methods and apparatus for reporting space savings due to pattern matching detection. In one or more embodiments, space savings reporting techniques are provided that improve the accuracy of the space savings reporting attributable to pattern matching detection. The foregoing applications and associated embodiments should be considered as illustrative only, and numerous other embodiments can be configured using the techniques disclosed herein, in a wide variety of different applications. It should also be understood that the disclosed techniques for reporting space savings due to pattern matching detection, as described herein, can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device such as a computer. As mentioned previously, a memory or other storage device having such program code embodied therein is an example of what is more generally referred to herein as a “computer program product.” The disclosed techniques for pattern matching space savings reporting may be implemented using one or more processing platforms. One or more of the processing modules or other components may therefore each run on a computer, storage device or other processing platform element. A given such element may be viewed as an example of what is more generally referred to herein as a “processing device.” As noted above, illustrative embodiments disclosed herein can provide a number of significant advantages relative to conventional arrangements. It is to be appreciated that the particular advantages described above and elsewhere herein are associated with particular illustrative embodiments and need not be present in other embodiments. Also, the particular types of information processing system features and functionality as illustrated and described herein are exemplary only, and numerous other arrangements may be used in other embodiments. In these and other embodiments, compute services can be offered to cloud infrastructure tenants or other system users as a PaaS offering, although numerous alternative arrangements are possible. Some illustrative embodiments of a processing platform that may be used to implement at least a portion of an information processing system comprise cloud infrastructure including virtual machines implemented using a hypervisor that runs on physical infrastructure. The cloud infrastructure further comprises sets of applications running on respective ones of the virtual machines under the control of the hypervisor. It is also possible to use multiple hypervisors each providing a set of virtual machines using at least one underlying physical machine. Different sets of virtual machines provided by one or more hypervisors may be utilized in configuring multiple instances of various components of the system. These and other types of cloud infrastructure can be used to provide what is also referred to herein as a multi-tenant environment. One or more system components such as data storage system116, or portions thereof, are illustratively implemented for use by tenants of such a multi-tenant environment. Cloud infrastructure as disclosed herein can include cloud-based systems such as AWS, GCP and Microsoft Azure™. Virtual machines provided in such systems can be used to implement at least portions of data storage system116in illustrative embodiments. The cloud-based systems can include object stores such as Amazon™ S3, GCP Cloud Storage, and Microsoft Azure™ Blob Storage. In some embodiments, the cloud infrastructure additionally or alternatively comprises a plurality of containers implemented using container host devices. For example, a given container of cloud infrastructure illustratively comprises a Docker container or other type of LXC. The containers may run on virtual machines in a multi-tenant environment, although other arrangements are possible. The containers may be utilized to implement a variety of different types of functionality within the pattern matching space saving reporting devices. For example, containers can be used to implement respective processing devices providing compute services of a cloud-based system. Again, containers may be used in combination with other virtualization infrastructure such as virtual machines implemented using a hypervisor.
Illustrative embodiments of processing platforms will now be described in greater detail with reference toFIGS.7and8. These platforms may also be used to implement at least portions of other information processing systems in other embodiments. Referring now toFIG.7, one possible processing platform that may be used to implement at least a portion of one or more embodiments of the disclosure comprises cloud infrastructure700. The cloud infrastructure700in this exemplary processing platform comprises virtual machines (VMs)702-1,702-2, . . .702-L implemented using a hypervisor704. The hypervisor704runs on physical infrastructure705. The cloud infrastructure700further comprises sets of applications710-1,710-2, . . .710-L running on respective ones of the virtual machines702-1,702-2, . . .702-L under the control of the hypervisor704. The cloud infrastructure700may encompass the entire given system or only portions of that given system, such as one or more of clients, servers, controllers, or computing devices in the system. Although only a single hypervisor704is shown in the embodiment ofFIG.7, the system may of course include multiple hypervisors each providing a set of virtual machines using at least one underlying physical machine. Different sets of virtual machines provided by one or more hypervisors may be utilized in configuring multiple instances of various components of the system. An example of a commercially available hypervisor platform that may be used to implement hypervisor704and possibly other portions of the system in one or more embodiments of the disclosure is VMware® vSphere™, which may have an associated virtual infrastructure management system, such as VMware® vCenter™. As another example, portions of a given processing platform in some embodiments can comprise converged infrastructure such as VxRail™, VxRack™, VxBlock™, or Vblock® converged infrastructure commercially available from VCE, the Virtual Computing Environment Company, now the Converged Platform and Solutions Division of Dell EMC of Hopkinton, Massachusetts. The underlying physical machines may comprise one or more distributed processing platforms that include storage products, such as VNX™ and Symmetrix VMAX™, both commercially available from Dell EMC. A variety of other storage products may be utilized to implement at least a portion of the system. In some embodiments, the cloud infrastructure additionally or alternatively comprises a plurality of containers implemented using container host devices. For example, a given container of cloud infrastructure illustratively comprises a Docker container or other type of LXC. The containers may be associated with respective tenants of a multi-tenant environment of the system, although in other embodiments a given tenant can have multiple containers. The containers may be utilized to implement a variety of different types of functionality within the system. For example, containers can be used to implement respective compute nodes or cloud storage nodes of a cloud computing and storage system. The compute nodes or storage nodes may be associated with respective cloud tenants of a multi-tenant environment of system. Containers may be used in combination with other virtualization infrastructure such as virtual machines implemented using a hypervisor.
As is apparent from the above, one or more of the processing modules or other components of the disclosed pattern matching space saving reporting systems may each run on a computer, server, storage device or other processing platform element. A given such element may be viewed as an example of what is more generally referred to herein as a “processing device.” The cloud infrastructure700shown inFIG.7may represent at least a portion of one processing platform. Another example of a processing platform is processing platform800shown inFIG.8. The processing platform800in this embodiment comprises at least a portion of the given system and includes a plurality of processing devices, denoted802-1,802-2,802-3, . . .802-K, which communicate with one another over a network804. The network804may comprise any type of network, such as a wide area network (WAN), a local area network (LAN), a satellite network, a telephone or cable network, a cellular network, a wireless network such as WiFi or WiMAX, or various portions or combinations of these and other types of networks. The processing device802-1in the processing platform800comprises a processor810coupled to a memory812. The processor810may comprise a microprocessor, a microcontroller, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other type of processing circuitry, as well as portions or combinations of such circuitry elements. The memory812may be viewed as an example of “processor-readable storage media” storing executable program code of one or more software programs. Articles of manufacture comprising such processor-readable storage media are considered illustrative embodiments. A given such article of manufacture may comprise, for example, a storage array, a storage disk or an integrated circuit containing RAM, ROM or other electronic memory, or any of a wide variety of other types of computer program products. The term “article of manufacture” as used herein should be understood to exclude transitory, propagating signals. Numerous other types of computer program products comprising processor-readable storage media can be used. Also included in the processing device802-1is network interface circuitry814, which is used to interface the processing device with the network804and other system components, and may comprise conventional transceivers. The other processing devices802of the processing platform800are assumed to be configured in a manner similar to that shown for processing device802-1in the figure. Again, the particular processing platform800shown in the figure is presented by way of example only, and the given system may include additional or alternative processing platforms, as well as numerous distinct processing platforms in any combination, with each such platform comprising one or more computers, storage devices or other processing devices. Multiple elements of the system may be collectively implemented on a common processing platform of the type shown inFIG.7or8, or each such element may be implemented on a separate processing platform. For example, other processing platforms used to implement illustrative embodiments can comprise different types of virtualization infrastructure, in place of or in addition to virtualization infrastructure comprising virtual machines. Such virtualization infrastructure illustratively includes container-based virtualization infrastructure configured to provide Docker containers or other types of LXCs.
As another example, portions of a given processing platform in some embodiments can comprise converged infrastructure such as VxRail™, VxRack™, VxBlock™, or Vblock® converged infrastructure commercially available from VCE, the Virtual Computing Environment Company, now the Converged Platform and Solutions Division of Dell EMC. It should therefore be understood that in other embodiments different arrangements of additional or alternative elements may be used. At least a subset of these elements may be collectively implemented on a common processing platform, or each such element may be implemented on a separate processing platform. Also, numerous other arrangements of computers, servers, storage devices or other components are possible in the information processing system. Such components can communicate with other elements of the information processing system over any type of network or other communication media. As indicated previously, components of an information processing system as disclosed herein can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device. For example, at least portions of the functionality of pseudo code shown inFIG.5or the method ofFIG.6are illustratively implemented in the form of software running on one or more processing devices. It should again be emphasized that the above-described embodiments are presented for purposes of illustration only. Many variations and other alternative embodiments may be used. For example, the disclosed techniques are applicable to a wide variety of other types of information processing systems, compute services platforms, and pattern matching space savings reporting platforms. Also, the particular configurations of system and device elements and associated processing operations illustratively shown in the drawings can be varied in other embodiments. Moreover, the various assumptions made above in the course of describing the illustrative embodiments should also be viewed as exemplary rather than as requirements or limitations of the disclosure. Numerous other alternative embodiments within the scope of the appended claims will be readily apparent to those skilled in the art.
While embodiments are described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the embodiments are not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit embodiments to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including, but not limited to. DETAILED DESCRIPTION The techniques described herein may implement efficient drop column requests in a non-relational database, according to some embodiments. Because the size of data stored in a non-relational database table can grow large, operations that can quickly reduce the amount of data stored can decrease storage utilization and improve non-relational database performance for client applications. In some embodiments, a “drop column” request may be implemented, which can remove the same attributes from items in a table, as if those attributes were stored in a same column in a relational database table even though the attributes may not be stored in such a fashion (or may not be present at all for some items) in a non-relational database table. Instead of scanning a table to identify the items from which to remove the column when performing a drop column request in a non-relational database, efficient techniques may be implemented, in various embodiments as discussed below, saving computational resources and storage resources for the client or system using the non-relational database, which in turn improves the performance of the non-relational database system overall. FIG.1is a logical block diagram illustrating efficient drop column requests in a non-relational database, according to some embodiments. A non-relational database system110may store a table, such as table120on behalf of one or more clients. Table120may be a collection of items, such as items122a,122b,122c,122d,122e,122f,122g, and so on. Items in a table in a non-relational database100may not be required to adhere to a strict schema that requires, for instance, values for every column in a table (even if that value is a NULL value), in some embodiments. Nor, in some embodiments, may a non-relational database system110require a particular ordering for storing items (e.g., on disk), such that each item with a same attribute may have the attribute stored in the same order for that item. Instead, an item122and a table120may be different groupings of attributes (e.g., a table is a group of items where each item may be a group of one or more attributes). In some embodiments, a column may correspond to items that have a same named attribute (e.g., a “postal code” attribute). For example, as illustrated in scene102, table120has column130which is not present in every item (e.g., not in122b,122eand122f). As the non-relational database system110may not have the column present in every item, a column's values can be costly to locate as a scan of each item may have to be performed, in some embodiments.
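A minimal sketch may make the sparse-attribute point concrete. In the following Python fragment (the table contents and the items_with_column helper are invented for illustration), a “column” is simply a shared attribute name, and locating its values requires inspecting every item:

# Illustrative only: items as sparse attribute dictionaries.
table = [
    {"pk": "a", "postal_code": "11973", "name": "item-a"},  # has the column
    {"pk": "b", "name": "item-b"},                          # column absent
    {"pk": "c", "postal_code": "11720"},                    # has the column
]

def items_with_column(items, column_name):
    # Without a strict schema, finding a column's values requires inspecting
    # every item, which is why a naive drop-column scan is costly.
    return [item for item in items if column_name in item]

print([item["pk"] for item in items_with_column(table, "postal_code")])  # ['a', 'c']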
For operations like drop column request140, which would need to access each item with the column, efficient drop column techniques may be implemented to improve performance of the drop column without sacrificing performance of other requests. For example, drop column request140may target column130. As illustrated in scene104, a backup150of table120may be created. Instead of evaluating the table directly, which may interfere with other read/write requests160, an evaluation of backup150can look for those items with the column130to be dropped (e.g.,122a,122c,122d,122g). For handling read/write requests160, a schema indicating the dropped column162may be enforced, as discussed in detail below with regard toFIGS.5,7, and8. As indicated at scene106, operations170to delete the column from identified items may be performed while continuing to handle read/write requests, as outlined in the sketch below. As discussed below with regard toFIGS.4A and4B, different techniques for allocating or ordering delete operations to limit or remove interference with read/write requests160can be implemented. Moreover, as the operations to delete the column from items are specific to those items in which the column attribute is present, unnecessary operations to access items without the column attribute are not performed, in some embodiments.
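The scene-by-scene flow ofFIG.1may be outlined in a brief sketch, under stated assumptions: the callables create_backup, scan_backup and delete_attribute, and the schema attribute, are hypothetical placeholders rather than the service's actual interfaces.

def drop_column(table, column_name, create_backup, scan_backup, delete_attribute):
    # Scene 104: create a backup and mark the column dropped in the schema,
    # so reads/writes are handled correctly while deletion proceeds.
    backup = create_backup(table)
    table.schema.dropped_columns.add(column_name)
    # Evaluate the backup, not the live table, so client traffic is undisturbed.
    affected_keys = scan_backup(backup, column_name)
    # Scene 106: delete only from the identified items, interleaved with
    # ongoing client read/write requests.
    for key in affected_keys:
        delete_attribute(table, key, column_name)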
Please note that previous descriptions of a non-relational database, table, and backup are not intended to be limiting, but are merely provided as logical examples. Various other types of non-relational database systems, collections of items, or other types of database systems that do not implement strict column requirements, for example, may implement efficient drop column techniques. This specification begins with a general description of a provider network that may implement a non-relational database service that may implement efficient drop column requests in a non-relational database. Then various examples of a non-relational database service are discussed, including different components/modules, or arrangements of components/modules, that may be employed as part of implementing the non-relational database service, in some embodiments. A number of different methods and techniques to implement efficient drop column requests in a non-relational database are then discussed, some of which are illustrated in accompanying flowcharts. Finally, a description of an example computing system upon which the various components, modules, systems, devices, and/or nodes may be implemented is provided. Various examples are provided throughout the specification. FIG.2is a logical block diagram illustrating a provider network offering a database service that may implement different types of index structures for storing database data in a replica group, according to some embodiments. Provider network200may be a private or closed system, in some embodiments, or may be set up by an entity such as a company or a public sector organization to provide one or more services (such as various types of cloud-based storage) accessible via the Internet and/or other networks to clients270, in another embodiment. In some embodiments, provider network200may be implemented in a single location or may include numerous data centers hosting various resource pools, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment and the like (e.g., computing system1000described below with regard toFIG.9), needed to implement and distribute the infrastructure and storage services offered by the provider network200. In some embodiments, provider network200may implement various computing resources or services, such as non-relational database service210(e.g., a NoSQL database, key-value or other non-relational database service that may utilize collections of items (e.g., tables that include items)), storage services240(e.g., an object storage service, block-based storage service, or data storage service that may store different types of data for centralized access), and other services (not illustrated), such as data flow processing services and/or other large scale data processing techniques, virtual compute services, and/or any other type of network-based services (which may include various other types of storage, processing, analysis, communication, event handling, visualization, and security services). In various embodiments, the components illustrated inFIG.2may be implemented directly within computer hardware, as instructions directly or indirectly executable by computer hardware (e.g., a microprocessor or computer system), or using a combination of these techniques. For example, the components ofFIG.2may be implemented by a system that includes a number of computing nodes (or simply, nodes), in some embodiments, each of which may be similar to the computer system embodiment illustrated inFIG.9and described below. In some embodiments, the functionality of a given system or service component (e.g., a component of key value non-relational database service210) may be implemented by a particular node or may be distributed across several nodes. In some embodiments, a given node may implement the functionality of more than one service system component (e.g., more than one data store component). Non-relational database service210may be implemented as various types of distributed database services, in some embodiments, for storing, accessing, and updating data in tables hosted in a key-value database. Such services may be enterprise-class database systems that are highly scalable and extensible. In some embodiments, access requests (e.g., requests to get/obtain items, put/insert items, delete items, update or modify items, scan multiple items) may be directed to a table in non-relational database service210that is distributed across multiple physical resources, and the database system may be scaled up or down on an as needed basis. In some embodiments, clients/subscribers may submit requests in a number of ways, e.g., interactively via graphical user interface (e.g., a console) or a programmatic interface to the database system. In some embodiments, non-relational database service210may provide a RESTful programmatic interface in order to submit access requests (e.g., to get, insert, delete, or scan data). In some embodiments, clients270may encompass any type of client configurable to submit network-based requests to provider network200via network260, including requests for non-relational database service210(e.g., to access item(s) in a table in non-relational database service210).
For example, in some embodiments a given client270may include a suitable version of a web browser, or may include a plug-in module or other type of code module that executes as an extension to or within an execution environment provided by a web browser. Alternatively, in a different embodiment, a client270may encompass an application such as a database client/application (or user interface thereof), a media application, an office application or any other application that may make use of a database in non-relational database service210to store and/or access the data to implement various applications. In some embodiments, such an application may include sufficient protocol support (e.g., for a suitable version of Hypertext Transfer Protocol (HTTP)) for generating and processing network-based services requests without necessarily implementing full browser support for all types of network-based data. That is, client270may be an application that interacts directly with provider network200, in some embodiments. In some embodiments, client270may generate network-based services requests according to a Representational State Transfer (REST)-style network-based services architecture, a document- or message-based network-based services architecture, or another suitable network-based services architecture. Note that in some embodiments, clients of non-relational database service210may be implemented within provider network200(e.g., applications hosted on a virtual compute service). In some embodiments, clients of non-relational database service210may be implemented on resources within provider network200(not illustrated). For example, a client application may be hosted on a virtual machine or other computing resources implemented as part of another provider network service that may send access requests to non-relational database service210via an internal network (not illustrated). In some embodiments, a client270may provide access to provider network200to other applications in a manner that is transparent to those applications. For example, client270may integrate with a database on non-relational database service210. In such an embodiment, applications may not need to be modified to make use of a service model that utilizes non-relational database service210. Instead, the details of interfacing to the non-relational database service210may be coordinated by client270. Client(s)270may convey network-based services requests to and receive responses from provider network200via network260, in some embodiments. In some embodiments, network260may encompass any suitable combination of networking hardware and protocols necessary to establish network-based communications between clients270and provider network200. For example, network260may encompass the various telecommunications networks and service providers that collectively implement the Internet. In some embodiments, network260may also include private networks such as local area networks (LANs) or wide area networks (WANs) as well as public or private wireless networks. For example, both a given client270and provider network200may be respectively provisioned within enterprises having their own internal networks. In such an embodiment, network260may include the hardware (e.g., modems, routers, switches, load balancers, proxy servers, etc.) and software (e.g., protocol stacks, accounting software, firewall/security software, etc.)
necessary to establish a networking link between given client(s)270and the Internet as well as between the Internet and provider network200. It is noted that in some embodiments, client(s)270may communicate with provider network200using a private network rather than the public Internet. Database service210may implement request routing nodes250, in some embodiments. Request routing nodes250may receive and parse access requests, in various embodiments, in order to determine various features of the request and to authenticate, throttle and/or dispatch access requests, among other things. In some embodiments, non-relational database service210may implement control plane220to implement one or more administrative components, such as automated admin instances, which may provide a variety of visibility and/or control functions. In various embodiments, control plane220may direct the performance of different types of control plane operations among the nodes, systems, or devices implementing non-relational database service210, in some embodiments. Control plane220may provide visibility and control to system administrators via administrator console226, in some embodiments. Admin console226may allow system administrators to interact directly with non-relational database service210(and/or the underlying system). In some embodiments, the admin console226may be the primary point of visibility and control for non-relational database service210(e.g., for configuration or reconfiguration by system administrators). For example, the admin console may be implemented as a relatively thin client that provides display and control functionality to system administrators and/or other privileged users, and through which system status indicators, metadata, and/or operating parameters may be observed and/or updated. Control plane220may provide an interface or access to information stored about one or more detected control plane events, such as data backup or other management operations for a table, at non-relational database service210, in some embodiments. Storage node management224may provide resource allocation, in some embodiments, for storing additional data in tables submitted to the key-value database service210. For instance, control plane220may communicate with processing nodes to initiate the performance of various control plane operations, such as moves of multi-table partitions, splits of multi-table partitions, table updates, table deletions, and index creation, among others. In some embodiments, control plane220may include a node recovery feature or component that handles failure events for storage nodes230, and request routing nodes250(e.g., adding new nodes, removing failing or underperforming nodes, deactivating or decommissioning underutilized nodes, etc.). Various durability, resiliency, control, or other operations may be directed by control plane220. For example, storage node management224may detect split, copy, or move events for multi-table partitions at storage nodes in order to ensure that the storage nodes satisfy a minimum performance level for performing access requests. For instance, in various embodiments, there may be situations in which a partition (or a replica thereof) may need to be copied, e.g., from one storage node to another.
For example, if there are three replicas of a particular partition, each hosted on a different physical or logical machine, and one of the machines fails, the replica hosted on that machine may need to be replaced by a new copy of the partition on another machine. In another example, if a particular machine that hosts multiple partitions of one or more tables experiences heavy traffic, one of the heavily accessed partitions may be moved (using a copy operation) to a machine that is experiencing less traffic in an attempt to more evenly distribute the system workload and improve performance. In some embodiments, storage node management224may perform partition moves using a physical copying mechanism (e.g., a physical file system mechanism, such as a file copy mechanism) that copies an entire partition from one machine to another, rather than copying a snapshot of the partition data row by row. While the partition is being copied, write operations targeting the partition may be logged. During the copy operation, any logged write operations may be applied to the partition by a catch-up process at periodic intervals (e.g., at a series of checkpoints). Once the entire partition has been copied to the destination machine, any remaining logged write operations (i.e. any write operations performed since the last checkpoint) may be performed on the destination partition by a final catch-up process. Therefore, the data in the destination partition may be consistent following the completion of the partition move, in some embodiments. In this way, storage node management224can move partitions amongst storage nodes230while the partitions being moved are still “live” and able to accept access requests. In some embodiments, the partition moving process described above may be employed in partition splitting operations by storage node management224in response to the detection of a partition split event. For example, a partition may be split because it is large, e.g., when it becomes too big to fit on one machine or storage device and/or in order to keep the partition size small enough to quickly rebuild the partitions hosted on a single machine (using a large number of parallel processes) in the event of a machine failure. A partition may also be split when it becomes too “hot” (i.e. when it experiences a much greater than average amount of traffic as compared to other partitions). For example, if the workload changes suddenly and/or dramatically for a given partition, the system may be configured to react quickly to the change. In some embodiments, the partition splitting process described herein may be transparent to applications and clients/users, which may allow the data storage service to be scaled automatically (i.e. without requiring client/user intervention or initiation). In some embodiments, each database partition234may be identified by a partition ID, which may be a unique number (e.g., a GUID) assigned at the time the partition is created. A partition234may also have a version number that is incremented each time the partition goes through a reconfiguration (e.g., in response to adding or removing replicas, but not necessarily in response to a master failover). When a partition is split, two new partitions may be created, each of which may have a respective new partition ID, and the original partition ID may no longer be used, in some embodiments. In some embodiments, a partition may be split by the system using a split tool or process in response to changing conditions.
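A rough sketch of the live partition move described above is given below; the names copy_files, write_log and CATCH_UP_THRESHOLD are assumptions, and a production system would also briefly quiesce writes before the final cut-over.

CATCH_UP_THRESHOLD = 100  # assumed tail size at which to run the final catch-up

def move_partition_live(source, destination, copy_files, write_log):
    # Bulk physical copy of the whole partition (not row by row).
    copy_files(source, destination)
    checkpoint = 0
    # Periodic catch-up passes: replay writes logged since the last checkpoint.
    while len(write_log) - checkpoint > CATCH_UP_THRESHOLD:
        for op in write_log[checkpoint:]:
            destination.apply(op)
        checkpoint = len(write_log)
    # Final catch-up over the small remaining tail, after which the destination
    # is consistent and can take over serving the partition.
    for op in write_log[checkpoint:]:
        destination.apply(op)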
Split or move events may be detected by storage node management224in various ways. For example, partition size and heat, where heat may be tracked by internally measured metrics (such as IOPS), externally measured metrics (such as latency), and/or other factors may be evaluated with respect to various performance thresholds. System anomalies may also trigger split or move events (e.g., network partitions that disrupt communications between replicas of a partition in a replica group), in some embodiments. Storage node management224may detect storage node failures, or provide other anomaly control, in some embodiments. If the partition replica hosted on the storage node on which a fault or failure was detected was the master for its replica group, a new master may be elected for the replica group (e.g., from amongst remaining storage nodes in the replica group). Storage node management224may initiate creation of a replacement partition replica while the source partition replica is live (i.e. while one or more of the replicas of the partition continue to accept and service requests directed to the partition), in some embodiments. In various embodiments, the partition replica on the faulty storage node may be used as the source partition replica, or another replica for the same partition (on a working machine) may be used as the source partition replica, e.g., depending on the type and/or severity of the detected fault. Control plane220may implement table creation and management222to manage the creation (or deletion) of database tables hosted in non-relational database service210, in some embodiments. For example, a request to create a table may be submitted via administrator console226which may initiate performance of a workflow to generate appropriate system metadata (e.g., a table identifier that is unique with respect to all other tables in non-relational database service210, table performance or configuration parameters, etc.). Because tables may be stored in multi-table partitions, resource allocation for a table to be created may be avoided as multi-table partitions may be updated to handle additional data according to storage node management224, or other partition management features, in some embodiments. Table creation/management222may also implement features to handle a drop column, as indicated at223, and discussed below with regard toFIG.3. Backup management228may handle the creation of backup requests to make copies as of a version or point-in-time of a database, as backup partitions242in storage service240. In some embodiments, non-relational database service210may also implement a plurality of storage nodes230, each of which may manage one or more partitions of a database table on behalf of clients/users or on behalf of non-relational database service210, which may be stored in database storage234(on storage devices attached to storage nodes230or in network storage accessible to storage nodes230). Storage nodes230may implement item request processing232, in some embodiments. Item request processing232may perform various operations (e.g., read/get, write/update/modify/change, insert/add, or delete/remove) to access individual items stored in tables in non-relational database service210, in some embodiments.
In some embodiments, item request processing232may support operations performed as part of a transaction, including techniques such as locking items in a transaction and/or ordering requests to operate on an item as part of a transaction along with other requests according to timestamps (e.g., timestamp ordering) so that storage nodes230can accept or reject the transaction-related requests. In some embodiments, item request processing232may maintain database partitions234according to a database model (e.g., a non-relational, NoSQL, or other key-value database model). Item request processing may include processing for sub-tables, as discussed below with regard toFIG.4. In addition to dividing or otherwise distributing data (e.g., database tables) across storage nodes230in separate partitions, storage nodes230may also be used in multiple different arrangements for providing resiliency and/or durability of data as part of larger collections or groups of resources. A replica group, for example, may be composed of a number of storage nodes maintaining a replica of a particular portion of data (e.g., a partition) for the non-relational database service210. Moreover, different replica groups may utilize overlapping nodes, where a storage node230may be a member of multiple replica groups, maintaining replicas for each of those groups whose other storage node230members differ from the other replica groups. Different models, schemas or formats for storing data for database tables in non-relational database service210may be implemented, in some embodiments. For example, in some embodiments, non-relational, NoSQL, semi-structured, or other key-value data formats may be implemented. In at least some embodiments, the data model may include tables containing items that have one or more attributes. In such embodiments, each table maintained on behalf of a client/user may include one or more items, and each item may include a collection of one or more attributes. The attributes of an item may be a collection of one or more name-value pairs, in any order, in some embodiments. In some embodiments, each attribute in an item may have a name, a type, and a value. In some embodiments, the items may be managed by assigning each item a primary key value (which may include one or more attribute values), and this primary key value may also be used to uniquely identify the item. In some embodiments, a large number of attributes may be defined across the items in a table, but each item may contain a sparse set of these attributes (with the particular attributes specified for one item being unrelated to the attributes of another item in the same table), and all of the attributes may be optional except for the primary key attribute(s). In other words, the tables maintained by the non-relational database service210(and the underlying storage system) may have no pre-defined schema other than their reliance on the primary key. Metadata or other system data for tables may also be stored as part of database partitions using similar partitioning schemes and using similar indexes, in some embodiments. Database service210may provide an application programming interface (API) for requesting various operations targeting tables, indexes, items, and/or attributes maintained on behalf of storage service clients. In some embodiments, the service (and/or the underlying system) may provide both control plane APIs and data plane APIs.
The control plane APIs provided by non-relational database service210(and/or the underlying system) may be used to manipulate table-level entities, such as tables and indexes, and/or to re-configure various tables. These APIs may be called relatively infrequently (when compared to data plane APIs). In some embodiments, the control plane APIs provided by the service may be used to create tables or secondary indexes for tables at separate storage nodes, import tables, export tables, delete tables or secondary indexes, explore tables or secondary indexes (e.g., to generate various performance reports or skew reports), modify table configurations or operating parameters for tables or secondary indexes, and/or describe tables or secondary indexes. In some embodiments, control plane APIs that perform updates to table-level entries may invoke asynchronous workflows to perform a requested operation. Methods that request “description” information (e.g., via a describeTables API) may simply return the current known state of the tables or secondary indexes maintained by the service on behalf of a client/user. The data plane APIs provided by non-relational database service210(and/or the underlying system) may be used to perform item-level operations, such as requests for individual items or for multiple items in one or more tables, such as queries, batch operations, and/or scans. The APIs provided by the service described herein may support request and response parameters encoded in one or more industry-standard or proprietary data exchange formats, in different embodiments. For example, in various embodiments, requests and responses may adhere to a human-readable (e.g., text-based) data interchange standard (e.g., JavaScript Object Notation, or JSON), or may be represented using a binary encoding (which, in some cases, may be more compact than a text-based representation). In various embodiments, the system may supply default values (e.g., system-wide, user-specific, or account-specific default values) for one or more of the input parameters of the APIs described herein. Database service210may include support for some or all of the following operations on data maintained in a table (or index) by the service on behalf of a storage service client: delete (or drop) a column (as discussed above with regard toFIG.1and below with regard toFIGS.3-8), perform a transaction (inclusive of one or more operations on one or more items in one or more tables), put (or store) an item, get (or retrieve) one or more items having a specified primary key, delete an item, update the attributes in a single item, query for items using an index, and scan (e.g., list items) over the whole table, optionally filtering the items returned, or conditional variations on the operations described above that are atomically performed (e.g., conditional put, conditional get, conditional delete, conditional update, etc.). For example, the non-relational database service210(and/or underlying system) described herein may provide various data plane APIs for performing item-level operations, such as a TransactItems API, PutItem API, a GetItem (or GetItems) API, a DeleteItem API, and/or an UpdateItem API, as well as one or more index-based seek/traversal operations across multiple items in a table, such as a Query API and/or a Scan API.
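By way of a hedged example, request bodies for a table-level drop column operation and an item-level put might look like the following. The operation and parameter names are illustrative only, since the description above does not fix a wire format beyond noting that JSON or binary encodings may be used.

import json

drop_column_request = {           # control-plane style, table-level request
    "Operation": "DropColumn",
    "TableName": "Orders",
    "ColumnName": "legacy_status",
}

put_item_request = {              # data-plane, item-level request
    "Operation": "PutItem",
    "TableName": "Orders",
    "Item": {"order_id": "o-1", "total": 42},  # sparse attributes; primary key required
}

print(json.dumps(drop_column_request))  # e.g., sent as a JSON-encoded request body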
Storage service240may be a file, object-based, or other type of storage service that may be used to store backups242that may be evaluated to delete a column, as discussed below with regard toFIGS.3-6. Storage service240may implement striping, sharding, or other data distribution techniques so that different portions of a partition backup242are stored across multiple locations (e.g., at separate nodes). FIG.3is a sequence diagram illustrating dropping a column in a database service, according to some embodiments. Request router310may receive drop column request332. Request router310may forward drop column request334to control plane220. Control plane220may update the schema at storage node(s)320to remove the column, as indicated at336. In this way, storage node(s)320may no longer accept requests directed to the dropped column attribute and may instead return an error response (e.g., item not present). In other embodiments, schema information may be stored in another location. In such embodiments, a similar request or instruction may be sent (e.g., similar to336) to the other location. Control plane220may acknowledge the drop column request, as indicated at338. Request router310may then acknowledge the drop column request to the client, as indicated at340. In some embodiments, a limited number of drop column requests may be permitted. Therefore, a table may be locked against subsequent drop column requests if the permitted number is exceeded (e.g., only 1 drop column request at a time). If a table is locked against drop column requests, then a denial or other error response may be returned for the drop column request. Control plane220may then create backup partitions342of the table to be stored in storage service240. Control plane220may then scan the backup partitions for item(s) with the column, as indicated at344. A list of items to be updated may be determined and then used to send requests to delete the column from items, as indicated at346, to storage nodes320. Note that, in some embodiments (not illustrated), request router310may perform the scan of backup partitions, as indicated at344, and/or the deletion of the column from items. In order to prevent drop column processing from interfering with client application requests, different schemes for ordering and/or prioritizing operations may be implemented.FIGS.4A and4Bare logical block diagrams illustrating performance of deletion operations for a dropped column, according to some embodiments. For example, item request handling410, which may be similar to item request processing232inFIG.2, may implement different types of queues with different priority. For example, high priority queue402may be implemented to accept and perform client requests422. Low priority queue404may be implemented to handle column delete requests, as indicated at424. Request selection406may select among the respective queues according to various weighting schemes or rotations to perform selected requests426(e.g., waiting until high priority queue402is empty, or selecting from the high priority queue 9 times out of 10 according to a weighted round-robin scheme, etc.). InFIG.4B, item request handling410implements different allocations of resources for performing different types of requests. For example, client-request allocated throughput442may be implemented to provide a performance454of requests from a client, as indicated at452, with an allocated throughput (e.g., IOPs). Similarly, system request allocated throughput444may be implemented to provide a performance454of requests for column deletion, as indicated at462, with an allocated throughput (e.g., IOPs), which may be different from and/or not interfere with the throughput allocated to client requests442, in some embodiments.
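One way to realize the weighted selection between the two queues ofFIG.4A is sketched below; the function and parameter names are assumptions, and the 9-of-10 weighting follows the example above.

import random
from collections import deque

def select_next(high_q, low_q, high_weight=9, total=10):
    # Favor the client (high priority) queue, e.g. 9 times out of 10, so that
    # column-delete work makes progress without starving client traffic.
    if high_q and (not low_q or random.randrange(total) < high_weight):
        return high_q.popleft()
    if low_q:
        return low_q.popleft()
    return None

# Usage: client requests 422 go into high_q; column delete requests 424 into low_q.
high_q, low_q = deque(["client-write"]), deque(["delete-column-item-7"])
print(select_next(high_q, low_q))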
FIG.5is a logical block diagram illustrating read and write handling for a non-relational database that implements efficient drop column requests, according to some embodiments. Item request processing510, which may be similar to item request processing410and232inFIG.2, may implement both write handling520and read handling530that rely upon a table schema502in order to handle different scenarios involving a dropped column. For example, a write request550may be received at write handling520, which may determine from schema502whether or not the write is to a dropped column504in the item. If the write is to a dropped column and would be denied because of a size limitation, then, as discussed in detail below with regard toFIG.8, the write request550may be allowed, as indicated at552, in order to perform the write at the table partition540at the storage node. If the write request550is directed to a column of an item that no longer exists because of the dropped column504indication, then the write may be denied, as indicated at554, by sending an error response. Allowed writes552may have write acknowledgements sent. In another example, read handling530may utilize table schema502to determine how to handle some read requests, like read request560. For example, read request560may be targeted to an item that includes a dropped column. As the column may or may not have been deleted yet, read handling530may read the item, as indicated at562, and receive the item564from table partition540. Read handling530may then filter the item and return a filtered response566that removes the column attribute value from the response, in some embodiments. The examples of a database that implements different types of index structures for storing database data in a replica group as discussed inFIGS.2-5above have been given in regard to a non-relational database service (e.g., document database, NoSQL database, etc.). However, various other types of non-relational database systems can advantageously implement efficient drop column requests, in other embodiments.FIG.6is a high-level flowchart illustrating various methods and techniques to implement efficient drop column requests in a non-relational database, according to some embodiments. These techniques, as well as the techniques discussed with regard toFIGS.7-8, may be implemented using components or systems as described above with regard toFIGS.2-5, as well as other types of databases or storage systems, and thus the following discussion is not intended to be limiting as to the other types of systems that may implement the described techniques. As indicated at610, a request to drop a column from a table stored in a non-relational database may be received, in some embodiments. A non-relational database table may be implemented according to various non-relational schemas or formats that do not, for instance, require each item in a table (e.g., a collection of items) to store the same attributes. For those items that do store the same attributes (e.g., by a same attribute name or key), a request, such as a drop column request, may treat the same attributes in the different items as a “column” (e.g., though no such schema may be enforced).
In some embodiments, however, a schema may be defined that includes a column. In some embodiments, column values may be null or not stated. The request to drop the column may specify the column according to a name or other identifier that can be mapped to each item to determine if that item stores that column. As indicated at620, a schema for the table may be updated to filter the column from responses to requests to read from the table, in some embodiments. For example, table metadata maintained along with the data of the table may describe various aspects of the table, including data organization or schema, or may indicate that item attributes that can be associated as a column (e.g., with a same attribute name) may be filtered out of requests, as discussed in detail below with regard toFIG.7. As indicated at630, a backup of the table may be created in a separate data store from the table, in some embodiments. For example, as discussed above with regard toFIG.3, an archive or other copy of the table may be created. The backup may be associated with a version or point-in-time of the table that corresponds to when the drop column request is to be applied (e.g., at a timestamp or logical sequence number (LSN) associated with the drop column request), in some embodiments. As indicated at640, the backup may be evaluated to identify item(s) that include the column, in some embodiments. For example, the backup may be scanned item by item in order to determine whether a column value is present in an item. In some embodiments, the data may be stored in column-oriented format so that all column values (if existing) are stored together in data blocks, pages, and/or chunks of storage. When an item with an attribute associated with the column to be dropped is identified, it may be added to a list of items to be updated, in some embodiments. As indicated at650, the column may be deleted from the identified item(s) in the table, in some embodiments. For example, respective delete operations or requests may be performed to update individual items. In some embodiments, different schemes for scheduling or allocating resources to performance of delete requests in order to limit or prevent impact on client application requests may be performed, such as different priority queues and/or different throughput allocations, as discussed above with regard toFIGS.4A and4B. The non-relational database system may continue to accept requests that cause reads and writes to the database while a column is being dropped, in some embodiments.FIG.7is a high-level flowchart illustrating various methods and techniques to handle read requests for a non-relational database that implements efficient drop column requests, according to some embodiments. Such techniques may be applied during (and maybe after) deletion of a column to be dropped, in some embodiments. As indicated at710, a request may be received that causes a read to item(s) of a table in a non-relational database, in some embodiments. For example, a query, get, scan, or other type of read causing request may be received. As indicated at720, the item(s) to perform the request may be read from the table, in some embodiments. As indicated at730, a determination may be made as to whether the item(s) have a column identified by schema for deletion, in some embodiments. For example, the item attributes returned from the read may be compared with the schema, which may indicate an attribute name for a dropped column.
As the state of the table may be in transition, with some items having the column requested, and others not, the data read from the item may be updated in order to present a result consistent with a dropped column. Therefore, as indicated at740, the column indicated as dropped in items may be filtered out, in some embodiments. Then, a response to the request may be sent, as indicated at750. For those read requests that do not touch items with dropped columns, no filtering may need to be applied, as indicated by the negative exit from730. FIG.8is a high-level flowchart illustrating various methods and techniques to handle write requests for a non-relational database that implements efficient drop column requests, according to some embodiments. As indicated at810, a request to write to an item in a table of a non-relational database may be received, in some embodiments. A determination may be made as to whether the write is directed to a deleted column attribute, as indicated at820. If so, then, as indicated by the negative exit from820, the write to the requested item may be denied. If not, then as indicated at830, a determination may be made as to whether the item (regardless of the attribute targeted by the write) has a deleted column attribute. If not, then the requested write may be performed, as indicated at840. If so, then the requested write to the item may be denied, as indicated at850. In some embodiments, instead of denying requests to items with a deleted attribute, other techniques may be implemented. For example, the deleted attribute(s) may be removed and the write performed to the item. In some embodiments, size checks may be performed. For example, the total size of the item with the write applied may be used to determine whether or not a size limitation is exceeded, in some embodiments. If the item does have a deleted column attribute, then the size of the dropped column attribute may be subtracted from the total item size, in some embodiments. This modified total item size may then be evaluated with respect to the impact of the write request on the size limitation (e.g., if an item has a size of 4.5 MB and a deleted column attribute of 0.5 MB, then the total size of 4.0 MB may be combined with a write of 1.0 MB and compared with the size limitation of <=5.0 MB). In the above example, without removing the dropped column attribute value the size would exceed the size limitation (e.g., 5.5 MB is >5.0 MB), but with the modified total size, the total size with the write would be 5.0 MB, which satisfies the <=5.0 MB size limitation. If the size limitation is not exceeded, then the requested write may be performed. If the size limitation is exceeded, then the requested write to the item may be denied. As discussed above, other write handling scenarios for dropped columns may occur. For example, a column that is being dropped may be identified in a write request to change a column value. An error may be returned as the column no longer exists.
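The read filtering and size check just described can be condensed into a short sketch; the 5 MB limit matches the example above, while the attribute names and function signatures are hypothetical.

DROPPED = {"legacy_status"}       # attribute names the schema marks as dropped
SIZE_LIMIT = 5.0                  # MB, per the example above

def filtered_read(item):
    # FIG. 7 behavior: strip dropped-column attributes from read responses,
    # since background deletion may not have reached this item yet.
    return {k: v for k, v in item.items() if k not in DROPPED}

def write_allowed(item_size_mb, dropped_attr_size_mb, write_size_mb):
    # FIG. 8 size check: subtract the dropped attribute before testing the
    # limit, e.g. (4.5 - 0.5) + 1.0 = 5.0 MB, which satisfies <= 5.0 MB.
    return (item_size_mb - dropped_attr_size_mb) + write_size_mb <= SIZE_LIMIT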
The program instructions may implement the functionality described herein (e.g., the functionality of various servers and other components that implement the distributed systems described herein). The various methods as illustrated in the figures and described herein represent example embodiments of methods. The order of any method may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Embodiments to implement efficient drop column requests in a non-relational database as described herein may be executed on one or more computer systems, which may interact with various other devices. One such computer system is illustrated byFIG.9. In different embodiments, computer system1000may be any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop, notebook, or netbook computer, mainframe computer system, handheld computer, workstation, network computer, a camera, a set top box, a mobile device, a consumer device, video game console, handheld video game device, application server, storage device, a peripheral device such as a switch, modem, or router, or in general any type of computing node, compute node, computing device, or electronic device. In the illustrated embodiment, computer system1000includes one or more processors1010coupled to a system memory1020via an input/output (I/O) interface1030. Computer system1000further includes a network interface1040coupled to I/O interface1030, and one or more input/output devices1050, such as a cursor control device, keyboard, and display(s). Display(s) may include standard computer monitor(s) and/or other display systems, technologies or devices, in some embodiments. In some embodiments, it is contemplated that embodiments may be implemented using a single instance of computer system1000, while in other embodiments multiple such systems, or multiple nodes making up computer system1000, may host different portions or instances of embodiments. For example, in some embodiments some elements may be implemented via one or more nodes of computer system1000that are distinct from those nodes implementing other elements. In various embodiments, computer system1000may be a uniprocessor system including one processor1010, or a multiprocessor system including several processors1010(e.g., two, four, eight, or another suitable number). Processors1010may be any suitable processor capable of executing instructions, in some embodiments. For example, in various embodiments, processors1010may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors1010may commonly, but not necessarily, implement the same ISA. In some embodiments, at least one processor1010may be a graphics processing unit. A graphics processing unit or GPU may be considered a dedicated graphics-rendering device for a personal computer, workstation, game console or other computing or electronic device, in some embodiments. Modern GPUs may be very efficient at manipulating and displaying computer graphics, and their highly parallel structure may make them more effective than typical CPUs for a range of complex graphical algorithms. For example, a graphics processor may implement a number of graphics primitive operations in a way that makes executing them much faster than drawing directly to the screen with a host central processing unit (CPU).
In various embodiments, graphics rendering may, at least in part, be implemented by program instructions for execution on one of, or parallel execution on two or more of, such GPUs. The GPU(s) may implement one or more application programmer interfaces (APIs) that permit programmers to invoke the functionality of the GPU(s), in some embodiments. System memory1020may store program instructions1025and/or data accessible by processor1010to implement efficient drop column requests in a non-relational database, in some embodiments. In various embodiments, system memory1020may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory. In the illustrated embodiment, program instructions and data implementing desired functions, such as those described above, are shown stored within system memory1020as program instructions1025and data storage1035, respectively. In other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media or on similar media separate from system memory1020or computer system1000. A computer-accessible medium may include non-transitory storage media or memory media such as magnetic or optical media, e.g., disk or CD/DVD-ROM coupled to computer system1000via I/O interface1030. Program instructions and data stored via a computer-accessible medium may be transmitted by transmission media or signals such as electrical, electromagnetic, or digital signals, which may be conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface1040, in some embodiments. In some embodiments, I/O interface1030may be configured to coordinate I/O traffic between processor1010, system memory1020, and any peripheral devices in the device, including network interface1040or other peripheral interfaces, such as input/output devices1050. In some embodiments, I/O interface1030may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory1020) into a format suitable for use by another component (e.g., processor1010). In some embodiments, I/O interface1030may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface1030may be split into two or more separate components, such as a north bridge and a south bridge, for example. In addition, in some embodiments some or all of the functionality of I/O interface1030, such as an interface to system memory1020, may be incorporated directly into processor1010. Network interface1040may allow data to be exchanged between computer system1000and other devices attached to a network, such as other computer systems, or between nodes of computer system1000, in some embodiments. In various embodiments, network interface1040may support communication via wired or wireless general data networks, such as any suitable type of Ethernet network, for example; via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks; via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol.
Input/output devices1050may, in some embodiments, include one or more display terminals, keyboards, keypads, touchpads, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or retrieving data by one or more computer systems1000. Multiple input/output devices1050may be present in computer system1000or may be distributed on various nodes of computer system1000, in some embodiments. In some embodiments, similar input/output devices may be separate from computer system1000and may interact with one or more nodes of computer system1000through a wired or wireless connection, such as over network interface1040. As shown inFIG.9, memory1020may include program instructions1025that implement the various embodiments of the systems as described herein, such as techniques to perform efficient drop column operations and request handling, and data storage1035, comprising various data accessible by program instructions1025, in some embodiments. In some embodiments, program instructions1025may include software elements of embodiments as described herein and as illustrated in the Figures. Data storage1035may include data that may be used in embodiments. In other embodiments, other or different software elements and data may be included. Those skilled in the art will appreciate that computer system1000is merely illustrative and is not intended to limit the scope of the embodiments as described herein. In particular, the computer system and devices may include any combination of hardware or software that can perform the indicated functions, including a computer, personal computer system, desktop computer, laptop, notebook, or netbook computer, mainframe computer system, handheld computer, workstation, network computer, a camera, a set top box, a mobile device, network device, internet appliance, PDA, wireless phones, pagers, a consumer device, video game console, handheld video game device, application server, storage device, a peripheral device such as a switch, modem, or router, or in general any type of computing or electronic device. Computer system1000may also be connected to other devices that are not illustrated, or instead may operate as a stand-alone system. In addition, the functionality provided by the illustrated components may in some embodiments be combined in fewer components or distributed in additional components. Similarly, in some embodiments, the functionality of some of the illustrated components may not be provided and/or other additional functionality may be available. Those skilled in the art will also appreciate that, while various items are illustrated as being stored in memory or on storage while being used, these items or portions of them may be transferred between memory and other storage devices for purposes of memory management and data integrity. Alternatively, in other embodiments some or all of the software components may execute in memory on another device and communicate with the illustrated computer system via inter-computer communication. Some or all of the system components or data structures may also be stored (e.g., as instructions or structured data) on a computer-accessible medium or a portable article to be read by an appropriate drive, various examples of which are described above.
In some embodiments, instructions stored on a computer-readable medium separate from computer system1000may be transmitted to computer system1000via transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link. This computer-readable storage medium may be non-transitory. Accordingly, the present invention may be practiced with other computer system configurations. Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Generally speaking, a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM, non-volatile media such as RAM (e.g. SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link. The various methods as illustrated in the Figures and described herein represent example embodiments of methods. The methods may be implemented in software, hardware, or a combination thereof. The order of any method may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure. It is intended that the invention embrace all such modifications and changes and, accordingly, the above description is to be regarded in an illustrative rather than a restrictive sense.
56,548
11860836
The figures depict various embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following description that other alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein. DETAILED DESCRIPTION System Overview FIG.1shows a system environment including content management system100, collaborative content management system130, and client devices120a,120b, and120c(collectively or individually “120”). Content management system100provides functionality for sharing content items with one or more client devices120and synchronizing content items between content management system100and one or more client devices120. The content stored by content management system100can include any type of content items, such as documents, spreadsheets, collaborative content items, text files, audio files, image files, video files, webpages, executable files, binary files, placeholder files that reference other content items, etc. In some implementations, a content item can be a portion of another content item, such as an image that is included in a document. Content items can also include collections, such as folders, namespaces, playlists, albums, etc., that group other content items together. The content stored by content management system100may be organized in one configuration in folders, tables, or in other database structures (e.g., object oriented, key/value, etc.). In one embodiment, the content stored by content management system100includes content items created by using third party applications, e.g., word processors, video and image editors, database management systems, spreadsheet applications, code editors, and so forth, which are independent of content management system100. In some embodiments, content stored by content management system100includes content items, e.g., collaborative content items, created using a collaborative interface provided by collaborative content management system130. In various implementations, collaborative content items can be stored by collaborative content management system130, with content management system100, or external to content management system100. A collaborative interface can provide an interactive content item collaborative platform whereby multiple users can simultaneously create and edit collaborative content items, comment in the collaborative content items, and manage tasks within the collaborative content items. Users may create accounts at content management system100and store content thereon by sending such content from client device120to content management system100. The content can be provided by users and associated with user accounts that may have various privileges. For example, privileges can include permissions to: see content item titles, see other metadata for the content item (e.g. location data, access history, version history, creation/modification dates, comments, file hierarchies, etc.), read content item contents, modify content item metadata, modify content of a content item, comment on a content item, read comments by others on a content item, or grant or remove content item permissions for other users. Client devices120communicate with content management system100and collaborative content management system130through network110. The network may be any suitable communications network for data transmission.
In one embodiment, network110is the Internet and uses standard communications technologies and/or protocols. Thus, network110can include links using technologies such as Ethernet, 802.11, worldwide interoperability for microwave access (WiMAX), 3G, 4G, digital subscriber line (DSL), asynchronous transfer mode (ATM), InfiniBand, PCI Express Advanced Switching, etc. Similarly, the networking protocols used on network110can include multiprotocol label switching (MPLS), the transmission control protocol/Internet protocol (TCP/IP), the User Datagram Protocol (UDP), the hypertext transport protocol (HTTP), the simple mail transfer protocol (SMTP), the file transfer protocol (FTP), etc. The data exchanged over network110can be represented using technologies and/or formats including the hypertext markup language (HTML), the extensible markup language (XML), JavaScript Object Notation (JSON), etc. In addition, all or some of the links can be encrypted using conventional encryption technologies such as the secure sockets layer (SSL), transport layer security (TLS), virtual private networks (VPNs), Internet Protocol security (IPsec), etc. In another embodiment, the entities use custom and/or dedicated data communications technologies instead of, or in addition to, the ones described above. In some embodiments, content management system100and collaborative content management system130are combined into a single system. The system may include one or more servers configured to provide the functionality discussed herein for the systems100and130. Client Device FIG.2shows a block diagram of the components of a client device120according to one embodiment. Client devices120generally include devices and modules for communicating with content management system100and a user of client device120. Client device120includes display210for providing information to the user, and in certain client devices120includes a touchscreen. Client device120also includes network interface220for communicating with content management system100via network110. There are additional components that may be included in client device120but that are not shown, for example, one or more computer processors, local fixed memory (RAM and ROM), as well as optionally removable memory (e.g., SD-card), power sources, and audio-video outputs. In certain embodiments, client device120includes additional components such as camera230and location module240. Location module240determines the location of client device120, using, for example, a global positioning satellite signal, cellular tower triangulation, or other methods. Location module240may be used by client application200to obtain location data and add the location data to metadata about a content item. Client devices120maintain various types of components and modules for operating the client device and accessing content management system100. The software modules can include operating system250or a collaborative content item editor270. Collaborative content item editor270is configured for creating, viewing and modifying collaborative content items such as text documents, code files, mixed media files (e.g., text and graphics), presentations or the like. Operating system250on each device provides a local file management system and executes the various software modules such as content management system client application200and collaborative content item editor270.
A contact directory290stores information on the user's contacts, such as name, telephone numbers, company, email addresses, physical address, website URLs, and the like. Client devices120access content management system100and collaborative content management system130in a variety of ways. Client device120may access these systems through a native application or software module, such as content management system client application200. Client device120may also access content management system100through web browser260. As an alternative, the client application200may integrate access to content management system100with the local file management system provided by operating system250. When access to content management system100is integrated in the local file management system, a file organization scheme maintained at the content management system is represented at the client device120as a local file structure by operating system250in conjunction with client application200. Client application200manages access to content management system100and collaborative content management system130. Client application200includes user interface module202that generates an interface to the content accessed by client application200and is one means for performing this function. The generated interface is provided to the user by display210. Client application200may store content accessed from a content storage at content management system100in local content204. While represented here as within client application200, local content204may be stored with other data for client device120in non-volatile storage. When local content204is stored this way, the content is available to the user and other applications or modules, such as collaborative content item editor270, when client application200is not in communication with content management system100. Content access module206manages updates to local content204and communicates with content management system100to synchronize content modified by client device120with content maintained on content management system100, and is one means for performing this function. Client application200may take various forms, such as a stand-alone application, an application plug-in, or a browser extension. Content Management System FIG.3shows a block diagram of the content management system100according to one embodiment. To facilitate the various content management services, a user can create an account with content management system100. The account information can be maintained in user account database316, and is one means for performing this function. User account database316can store profile information for registered users. In some cases, the only personal information in the user profile is a username and/or email address. However, content management system100can also be configured to accept additional user information, such as password recovery information, demographics information, payment information, and other details. Each user is associated with a userID and a username. For purposes of convenience, references herein to information such as collaborative content items or other data being “associated” with a user are understood to mean an association between a collaborative content item and either of the above forms of user identifier for the user. Similarly, data processing operations on collaborative content items and users are understood to be operations performed on derivative identifiers such as collaborativeContentItemID and userIDs. 
For example, a user may be associated with a collaborative content item by storing the information linking the userID and the collaborativeContentItemID in a table, file, or other storage formats. For example, a database table organized by collaborativeContentItemIDs can include a column listing the userID of each user associated with the collaborative content item. As another example, for each userID, a file can list a set of collaborativeContentItemIDs associated with the user. As another example, a single file can list key-value pairs such as <userID, collaborativeContentItemID> representing the association between an individual user and a collaborative content item. The same types of mechanisms can be used to associate users with comments, threads, text elements, formatting attributes, and the like (a brief sketch of such an association index appears below). User account database316can also include account management information, such as account type, e.g. free or paid; usage information for each user, e.g., file usage history; maximum storage space authorized; storage space used; content storage locations; security settings; personal configuration settings; content sharing data; etc. Account management module304can be configured to update and/or obtain user account details in user account database316. Account management module304can be configured to interact with any number of other modules in content management system100. An account can be used to store content items, such as collaborative content items, audio files, video files, etc., from one or more client devices associated with the account. Content items can be shared with multiple users and/or user accounts. In some implementations, sharing a content item can include associating, using sharing module310, the content item with two or more user accounts and providing for user permissions so that a user that has authenticated into one of the associated user accounts has a specified level of access to the content item. That is, the content items can be shared across multiple client devices of varying type, capabilities, operating systems, etc. The content items can also be shared across varying types of user accounts. Individual users can be assigned different access privileges to a content item shared with them, as discussed above. In some cases, a user's permissions for a content item can be explicitly set for that user. A user's permissions can also be set based on: a type or category associated with the user (e.g., elevated permissions for administrator users or managers), the user's inclusion in a group or being identified as part of an organization (e.g., specified permissions for all members of a particular team), and/or a mechanism or context of a user's accesses to a content item (e.g., different permissions based on where the user is, what network the user is on, what type of program or API the user is accessing, whether the user clicked a link to the content item, etc.). Additionally, permissions can be set by default for users, user types/groups, or for various access mechanisms and contexts. In some implementations, shared content items can be accessible to a recipient user without requiring authentication into a user account. This can include sharing module310providing access to a content item through activation of a link associated with the content item or providing access through a globally accessible shared folder. The content can be stored in content storage318, which is one means for performing this function.
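As a minimal, hypothetical sketch of the bidirectional association index described above (the structure names are illustrative, not identifiers from user account database316):

```python
from collections import defaultdict

# Hypothetical in-memory index of <userID, collaborativeContentItemID> pairs,
# mirroring the table/file layouts described above. Keeping both directions
# makes either lookup (items for a user, users for an item) cheap.
user_to_ccis = defaultdict(set)
cci_to_users = defaultdict(set)


def associate(user_id: str, cci_id: str) -> None:
    """Record the association in both directions."""
    user_to_ccis[user_id].add(cci_id)
    cci_to_users[cci_id].add(user_id)


associate("user-42", "cci-7")
print(cci_to_users["cci-7"])  # {'user-42'}
```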
Content storage318can be a storage device, multiple storage devices, or a server. Alternatively, content storage318can be a cloud storage provider or network storage accessible via one or more communications networks. The cloud storage provider or network storage may be owned and managed by the content management system100or by a third party. In one configuration, content management system100stores the content items in the same organizational structure as they appear on the client device. However, content management system100can store the content items in its own order, arrangement, or hierarchy. Content storage318can also store metadata describing content items, content item types, and the relationship of content items to various accounts, folders, or groups. The metadata for a content item can be stored as part of the content item or can be stored separately. In one configuration, each content item stored in content storage318can be assigned a system-wide unique identifier. Content storage318can decrease the amount of storage space required by identifying duplicate files or duplicate segments of files. Instead of storing multiple copies of an identical content item, content storage318can store a single copy and then use a pointer or other mechanism to link the duplicates to the single copy. Similarly, content storage318stores files using a file version control mechanism that tracks changes to files, different versions of files (such as a diverging version tree), and a change history. The change history can include a set of changes that, when applied to the original file version, produces the changed file version. Content storage318may further decrease the amount of storage space required by deleting content items based on expiration time of the content items. An expiration time for a content item may indicate that the content item is no longer needed after the expiration time and may therefore be deleted. Content storage318may periodically scan through the content items and compare expiration time with current time. If the expiration time of a content item is earlier than the current time, content storage318may delete the content item from content storage318. Content management system100automatically synchronizes content from one or more client devices, using synchronization module312, which is one means for performing this function. The synchronization is platform agnostic. That is, the content is synchronized across multiple client devices120of varying type, capabilities, operating systems, etc. For example, client application200synchronizes, via synchronization module312at content management system100, content in client device120's file system with the content in an associated user account on system100. Client application200synchronizes any changes to content in a designated folder and its sub-folders with the synchronization module312. Such changes include new, deleted, modified, copied, or moved files or folders. Synchronization module312also provides any changes to content associated with client device120to client application200. This synchronizes the local content at client device120with the content items at content management system100. Conflict management module314determines whether there are any discrepancies between versions of a content item located at different client devices120. For example, when a content item is modified at one client device and a second client device, differing versions of the content item may exist at each client device. 
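The expiration-time cleanup described earlier in this passage reduces, in essence, to a periodic scan comparing each item's expiration time with the current time. A minimal sketch under that reading (names such as `content_items` and `expiration_time` are illustrative assumptions):

```python
import time


def purge_expired(content_items: dict) -> list:
    """Periodic scan: delete content items whose expiration time has passed.

    `content_items` maps content item IDs to metadata dicts that may carry
    an `expiration_time` epoch timestamp; items without one are kept.
    """
    now = time.time()
    expired = [cid for cid, item in content_items.items()
               if item.get("expiration_time") is not None
               and item["expiration_time"] < now]
    for cid in expired:
        del content_items[cid]
    return expired


items = {"a": {"expiration_time": time.time() - 60},    # already expired
         "b": {"expiration_time": time.time() + 3600},  # still valid
         "c": {}}                                       # no expiration set
assert purge_expired(items) == ["a"]
```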
Synchronization module312determines such versioning conflicts (e.g., differing versions of a content item modified at different client devices), for example by identifying the modification time of the content item modifications. Conflict management module314resolves the conflict between versions by any suitable means, such as by merging the versions, or by notifying the client device of the later-submitted version. A user can also view or manipulate content via a web interface generated by user interface module302. For example, the user can navigate in web browser260to a web address provided by content management system100. Changes or updates to content in content storage318made through the web interface, such as uploading a new version of a file, are synchronized back to other client devices120associated with the user's account. Multiple client devices120may be associated with a single account and files in the account are synchronized between each of the multiple client devices120. Content management system100includes communications interface300for interfacing with various client devices120, and with other content and/or service providers via an Application Programming Interface (API), which is one means for performing this function. Certain software applications access content storage318via an API on behalf of a user. For example, a software package, such as an app on a smartphone or tablet computing device, can programmatically make calls directly to content management system100, when a user provides credentials, to read, write, create, delete, share, or otherwise manipulate content. Similarly, the API can allow users to access all or part of content storage318through a web site. Content management system100can also include authenticator module306, which verifies user credentials, security tokens, API calls, specific client devices, etc., to determine whether access to requested content items is authorized, and is one means for performing this function. Authenticator module306can generate one-time use authentication tokens for a user account. Authenticator module306assigns an expiration period or date to each authentication token. In addition to sending the authentication tokens to requesting client devices, authenticator module306can store generated authentication tokens in authentication token database320. After receiving a request to validate an authentication token, authenticator module306checks authentication token database320for a matching authentication token assigned to the user. Once the authenticator module306identifies a matching authentication token, authenticator module306determines if the matching authentication token is still valid. For example, authenticator module306verifies that the authentication token has not expired and has not been marked as used or invalid. After validating an authentication token, authenticator module306may invalidate the matching authentication token, such as when the token is a single-use token. For example, authenticator module306can mark the matching authentication token as used or invalid, or delete the matching authentication token from authentication token database320(a sketch of this token lifecycle appears below). In some embodiments, content management system100includes a content item management module308for maintaining a content directory that identifies the location of each content item in content storage318, and allows client applications to request access to content items in the storage318, and which is one means for performing this function.
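A compact, hypothetical sketch of the one-time token lifecycle described above (generation with an expiration period, validation against the store, invalidation after use); a plain dictionary stands in for authentication token database320:

```python
import secrets
import time

# Hypothetical in-memory stand-in for authentication token database320.
tokens = {}


def issue_token(user_id: str, ttl_seconds: int = 3600) -> str:
    """Generate a one-time authentication token with an expiration period."""
    token = secrets.token_urlsafe(32)
    tokens[token] = {"user": user_id,
                     "expires": time.time() + ttl_seconds,
                     "used": False}
    return token


def validate_token(token: str, user_id: str) -> bool:
    """Check for a matching token assigned to the user, verify it is neither
    expired nor already used, then mark it used so it cannot be replayed."""
    entry = tokens.get(token)
    if entry is None or entry["user"] != user_id:
        return False
    if entry["used"] or entry["expires"] < time.time():
        return False
    entry["used"] = True
    return True


t = issue_token("user-42")
assert validate_token(t, "user-42") is True
assert validate_token(t, "user-42") is False  # single use: replay fails
```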
In addition to identifying the location of each content item, a content entry in the content directory can also include a content pointer that identifies the location of the content item in content storage318. For example, the content entry can include a content pointer designating the storage address of the content item in memory. In some embodiments, the content entry includes multiple content pointers that point to multiple locations, each of which contains a portion of the content item. In addition to a content path and content pointer, a content entry in some configurations also includes a user account identifier that identifies the user account that has access to the content item. In some embodiments, multiple user account identifiers can be associated with a single content entry indicating that the content item has shared access by the multiple user accounts. In another embodiment, the content item management module308consolidates content items, which may also be referred to as objects, into a batch object, which may also be referred to as a batch, and stores the batch object to content storage318. The content item management module308may receive multiple objects from clients120to store in content storage318. The content item management module308may create a batch object consolidating the objects and issue a single write request to store the batch object to content storage318. The determination of which objects are to be consolidated may be based on information associated with the objects and the batches. For example, objects with the same namespace that arrive within the same time interval (e.g., within a predefined time interval of each other) may be grouped into a batch object. Additionally, responsive to detecting that the size of a group of incoming objects reaches a size limit or that the group of objects has been waiting for an amount of time that exceeds a time limit, the content item management module may consolidate the group of objects into a batch object without adding additional objects (a brief grouping sketch appears at the end of this overview). The content item management module308may store a data structure including metadata associated with the objects and the batch objects. The metadata may contain information such as mappings that map objects to their respective batches. The metadata may additionally, or alternatively, include information describing length and location associated with the objects and the batches. Any other information describing the object and/or the batch may be included within the metadata. Further details about the metadata are described in further detail in accordance withFIG.6. For each request to access an object in content storage318, the content item management module308may first check the data structure for information such as expiration time for the object, before accessing content storage318. The content item management module308may also perform batch compaction which consolidates batch objects containing both expired and non-expired objects into one batch object. Functionalities of the content item management module308are discussed in further detail below in accordance withFIG.5. In some embodiments, the content management system100can include a mail server module322. The mail server module322can send (and receive) collaborative content items to (and from) other client devices using the collaborative content management system130. The mail server module can also be used to send and receive messages between users in the content management system.
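The grouping rule just described, same namespace and same arrival window with early close on a size limit, can be sketched as follows. The window and limit values, and all names here, are illustrative assumptions rather than parameters of content item management module308:

```python
import time
from dataclasses import dataclass, field

TIME_WINDOW_S = 600       # assumed arrival window for one batch (10 minutes)
SIZE_LIMIT = 64 * 2**20   # assumed batch size limit (64 MiB)


@dataclass
class PendingBatch:
    """Objects from one namespace accumulating toward a single write request."""
    namespace: str
    opened_at: float = field(default_factory=time.time)
    size: int = 0
    objects: list = field(default_factory=list)

    def try_add(self, payload: bytes) -> bool:
        """Add an object if the arrival window and size limit both allow it."""
        if time.time() - self.opened_at > TIME_WINDOW_S:
            return False  # window elapsed: close this batch, start a new one
        if self.size + len(payload) > SIZE_LIMIT:
            return False  # size limit reached: close without adding more
        self.objects.append(payload)
        self.size += len(payload)
        return True


batch = PendingBatch(namespace="ns-1")
assert batch.try_add(b"object-1") and batch.try_add(b"object-2")
# When try_add returns False, the batch would be written to content storage
# with a single write request and a fresh PendingBatch opened.
```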
Collaborative Content Management System FIG.4shows a block diagram of the collaborative content management system130, according to one embodiment. Collaborative content items can be files that users can create and edit using collaborative content item editor270and can contain collaborative content item elements. Collaborative content item elements may include any type of content such as text; images, animations, videos, audio, or other multi-media; tables; lists; references to external content; programming code; tasks; tags or labels; comments; or any other type of content. Collaborative content item elements can be associated with an author identifier, attributes, interaction information, comments, sharing users, etc. Collaborative content item elements can be stored as database entities, which allows for searching and retrieving the collaborative content items. As with other types of content items, collaborative content items may be shared and synchronized with multiple users and client devices120, using sharing310and synchronization312modules of content management system100. Users operate client devices120to create and edit collaborative content items, and to share collaborative content items with other users of client devices120. Changes to a collaborative content item by one client device120are propagated to other client devices120of users associated with that collaborative content item. In the embodiment ofFIG.1, collaborative content management system130is shown as separate from content management system100and can communicate with it to obtain its services. In other embodiments, collaborative content management system130is a subsystem of the component of content management system100that provides sharing and collaborative services for various types of content items. User account database316and authentication token database320from content management system100are used for accessing collaborative content management system130described herein. Collaborative content management system130can include various servers for managing access and edits to collaborative content items and for managing notifications about certain changes made to collaborative content items. Collaborative content management system130can include proxy server402, collaborative content item editor404, backend server406, collaborative content item database408, access link module410, copy generator412, collaborative content item differentiator414, settings module416, metadata module418, revision module420, notification server422, and notification database424. Proxy server402handles requests from client applications200and passes those requests to the collaborative content item editor404. Collaborative content item editor404manages application level requests for client applications200for editing and creating collaborative content items, and selectively interacts with backend servers406for processing lower level processing tasks on collaborative content items, and interfacing with collaborative content items database408as needed. Collaborative content items database408contains a plurality of database objects representing collaborative content items, comment threads, and comments. Each of the database objects can be associated with a content pointer indicating the location of each object within the CCI database408. Notification server422detects actions performed on collaborative content items that trigger notifications, creates notifications in notification database424, and sends notifications to client devices.
Client application200sends a request relating to a collaborative content item to proxy server402. Generally, a request indicates the userID (“UID”) of the user, the collaborativeContentItemID (“NID”) of the collaborative content item, and additional contextual information as appropriate, such as the text of the collaborative content item. When proxy server402receives the request, the proxy server402passes the request to the collaborative content item editor404. Proxy server402also returns a reference to the identified collaborative content item editor404to client application200, so the client application can directly communicate with the collaborative content item editor404for future requests. In an alternative embodiment, client application200initially communicates directly with a specific collaborative content item editor404assigned to the userID. When collaborative content item editor404receives a request, it determines whether the request can be executed directly or by a backend server406. When the request adds, edits, or otherwise modifies a collaborative content item, the request is handled by the collaborative content item editor404. If the request is directed to a database or index inquiry, the request is executed by a backend server406. For example, a request from client device120to view a collaborative content item or obtain a list of collaborative content items responsive to a search term is processed by backend server406. The access module410receives a request to provide a collaborative content item to a client device. In one embodiment, the access module generates an access link to the collaborative content item, for instance in response to a request by an author to share the collaborative content item. The access link can be a hyperlink including or associated with the identification information of the CCI (i.e., unique identifier, content pointer, etc.). The hyperlink can also include any type of relevant metadata within the content management system (i.e., author, recipient, time created, etc.). In one embodiment, the access module can also provide the access link to user accounts via the network110, while in other embodiments the access link can be provided or made accessible to a user account and is accessed through a user account via the client device. In one embodiment, the access link will be a hyperlink to a landing page (e.g., a webpage, a digital store front, an application login, etc.) and activating the hyperlink opens the landing page on a client device. The landing page can allow client devices not associated with a user account to create a user account and access the collaborative content item using the identification information associated with the access link. Additionally, the access link module can insert metadata into the collaborative content item, associate metadata with the collaborative content item, or access metadata associated with the collaborative content item that is requested. The access module410can also provide collaborative content items via other methods.
For example, the access module410can directly send a collaborative content item to a client device or user account, store a collaborative content item in a database accessible to the client device, interact with any module of the collaborative content management system to provide modified versions of collaborative content items (e.g., via the copy generator412, the CCI differentiator414, etc.), send a content pointer associated with the collaborative content item, send metadata associated with the collaborative content item, or use any other method of providing collaborative content items between devices in the network. The access module can also provide collaborative content items via a search of the collaborative content item database (i.e., search by a keyword associated with the collaborative content item, the title, or a metadata tag, etc.). The copy generator412can duplicate a collaborative content item. Generally, the copy generator duplicates a collaborative content item when a client device selects an access link associated with the collaborative content item. The copy generator412accesses the collaborative content item associated with the access link and creates a derivative copy of the collaborative content item for every request received. The copy generator412stores each derivative copy of the collaborative content item in the collaborative content item database408. Generally, each copy of the collaborative content item that is generated by the copy generator412is associated with both the client device from which the request was received and the user account associated with the client device requesting the copy. When the copy of the collaborative content item is generated, it can create a new unique identifier and content pointer for the copy of the collaborative content item. Additionally, the copy generator412can insert metadata into the collaborative content item, associate metadata with the copied collaborative content item, or access metadata associated with the collaborative content item that was requested to be copied. The collaborative content item differentiator414determines the difference between two collaborative content items. In one embodiment, the collaborative content item differentiator414determines the difference between two collaborative content items when a client device selects an access hyperlink and accesses a collaborative content item of which the client device has previously used the copy generator412to create a derivative copy. The content item differentiator can indicate the differences between the content elements of the compared collaborative content items. The collaborative content item differentiator414can create a collaborative content item that includes the differences between the two collaborative content items, i.e., a differential collaborative content item. In some embodiments, the collaborative content item differentiator provides the differential collaborative content item to a requesting client device120. The differentiator414can store the differential collaborative content item in the collaborative content item database408and generate identification information for the differential collaborative content item. Additionally, the differentiator414can insert metadata into the accessed and created collaborative content items, associate metadata with the accessed and created collaborative content item, or access metadata associated with the collaborative content items that were requested to be differentiated.
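For text-based collaborative content items, one plausible way to realize such a differential is a line-oriented diff; the snippet below uses Python's difflib as an illustrative stand-in, not as the differentiator414's stated mechanism:

```python
import difflib


def differential_cci(original: str, derivative: str) -> str:
    """Return a unified diff of two collaborative content items' text,
    a simple stand-in for a differential collaborative content item."""
    return "".join(difflib.unified_diff(
        original.splitlines(keepends=True),
        derivative.splitlines(keepends=True),
        fromfile="original-cci",
        tofile="derivative-cci",
    ))


print(differential_cci("alpha\nbeta\n", "alpha\ngamma\n"))
# --- original-cci
# +++ derivative-cci
# @@ -1,2 +1,2 @@
#  alpha
# -beta
# +gamma
```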
The settings and security module416can manage security during interactions between client devices120, the content management system100, and the collaborative content management system130. Additionally, the settings and security module416can manage security during interactions between modules of the collaborative content management system. For example, when a client device120attempts to interact with any module of the collaborative content management system130, the settings and security module416can manage the interaction by limiting or disallowing the interaction. Similarly, the settings and security module416can limit or disallow interactions between modules of the collaborative content management system130. Generally, the settings and security module416accesses metadata associated with the modules, systems100and130, devices120, user accounts, and collaborative content items to determine the security actions to take. Security actions can include: requiring authentication of client devices120and user accounts, requiring passwords for content items, removing metadata from collaborative content items, preventing collaborative content items from being edited, revised, saved or copied, or any other similar security action. Additionally, the settings and security module can access, add, edit or delete any type of metadata associated with any element of content management system100, collaborative content management system130, client devices120, or collaborative content items. The metadata module418manages metadata within the collaborative content management system. Generally, metadata can take three forms within the collaborative content management system: internal metadata, external metadata, and device metadata. Internal metadata is metadata within a collaborative content item, external metadata is metadata associated with a CCI but not included or stored within the CCI itself, and device metadata is associated with client devices. At any point the metadata module can manage metadata by changing, adding, or removing metadata. Some examples of internal metadata can be: identifying information within collaborative content items (e.g., email addresses, names, addresses, phone numbers, social security numbers, account or credit card numbers, etc.); metadata associated with content elements (e.g., location, time created, content element type, content element size, content element duration, etc.); comments associated with content elements (e.g., a comment giving the definition of a word in a collaborative content item and its attribution to the user account that made the comment); or any other metadata that can be contained within a collaborative content item. Some examples of external metadata can be: content tags indicating categories for the metadata; user accounts associated with a CCI (e.g., author user account, editing user account, accessing user account, etc.); historical information (e.g., previous versions, access times, edit times, author times, etc.); security settings; identifying information (e.g., unique identifier, content pointer); collaborative content management system130settings; user account settings; or any other metadata that can be associated with the collaborative content item.
Some examples of device metadata can be: device type; device connectivity; device size; device functionality; device sound and display settings; device location; user accounts associated with the device; device security settings; or any other type of metadata that can be associated with a client device120. The collaborative content item revision module420manages application level requests for client applications200for revising differential collaborative content items and selectively interacts with backend servers406for processing lower level processing tasks on collaborative content items, and interfacing with collaborative content items database408as needed. The revision module can create a revised collaborative content item that is some combination of the content elements from the differential collaborative content item. The revision module420can store the revised collaborative content item in the collaborative content item database or provide the revised collaborative content item to a client device120. Additionally, the revision module420can insert metadata into the accessed and created collaborative content items, associate metadata with the accessed and created collaborative content item, or access metadata associated with the collaborative content items that were requested to be revised. Content management system100and collaborative content management system130may be implemented using a single computer, or a network of computers, including cloud-based computer implementations. The operations of content management system100and collaborative content management system130as described herein can be controlled through either hardware or through computer programs installed in computer storage and executed by the processors of such server to perform the functions described herein. These systems include other hardware elements necessary for the operations described here, including network interfaces and protocols, input devices for data entry, and output devices for display, printing, or other presentations of data, but which are not described herein. Similarly, conventional elements, such as firewalls, load balancers, collaborative content items servers, failover servers, network management tools and so forth are not shown so as not to obscure the features of the system. Finally, the functions and operations of content management system100and collaborative content management system130are sufficiently complex as to require implementation on a computer system, and cannot be performed in the human mind simply by mental steps. Content Item Management Module FIG.5illustrates an example embodiment of content item management module308. The content item management module308includes a batch object datastore510that stores metadata associated with objects and batch objects, a batch object generation module520that generates batch objects (or batches), a batch object management module530that handles various operations associated with batch objects, a garbage collection module540that deletes expired batches and consolidates partially expired batches, an object encryption/decryption module550that encrypts and decrypts objects, and a verification module560that verifies metadata associated with objects and batches. The modules shown inFIG.5are non-limiting and are for illustrative purposes only; more or fewer modules may be used to achieve the functionality described herein. Batch object datastore510is a data structure that stores metadata associated with objects and batch objects.
In one embodiment, the batch object datastore510stores metadata associated with a batch object and the objects that the batch contains when the batch object is created. The metadata may be used to reference an object during a read operation, as an object may be located by using metadata such as a batch identifier, the location of the object in the batch, and the length of the object. The metadata may also be referenced to perform maintenance and keep track of information such as the expiration time and compliance identifier for each object and each batch object. Batch object datastore510and the exemplary metadata are discussed in further detail below. FIG.6illustrates exemplary particulars of batch object datastore510, including an example object metadata structure610and batch metadata structure620. In one embodiment, the information associated with each object or each batch object may be referred to as an entry. For example, in the object metadata structure610, the row of information associated with object 1 may be referred to as an entry for object 1. In one embodiment, the fields for object metadata structure610are as follows:

Object ID: As used herein, the term Object Identifier (ID) may refer to a unique identifier assigned by the content item management module308to identify a particular object.

Batch ID: As used herein, the term Batch Identifier (ID) may refer to a unique identifier that indicates the batch object containing the respective object.

Length: As used herein, the term length may refer to the length of the content of an object measured in a number of units. Examples of the units include but are not limited to: number of characters, bytes, megabytes, gigabytes, etc.

Offset: As used herein, the term offset may refer to the length of content in a batch object to skip before the content for the object starts. For example, as illustrated inFIG.6, object 1 and object 2 are both in batch 1. Object 1 has 0 offset units, which indicates that object 1 is located at the beginning of the batch. Because object 1 has a length of 1 unit, the content of object 2 may be stored in the batch starting from the 2ndunit. Therefore, object 2 has a 1-unit offset, indicating that the content of object 2 starts from the 2ndunit.

Expiration time: As used herein, the term expiration time may refer to a time stamp indicating that an object expires if the current time is after the time stamp. In some embodiments, instead of storing an expiration time for an object, the entry may store a time interval (e.g., a time-to-live (TTL) interval) representing the length of time between the time that the object was created and the time that the object should expire.

Compliance identifier: As used herein, the term Compliance Identifier (ID) may refer to an identifier that identifies a compliance entity for an object. A compliance entity may be a namespace or a logical grouping of objects. A compliance entity may be a compliance category that is associated with compliance requirements such as policies for managing objects and access control that restricts users from accessing or modifying objects. In one embodiment, each object is associated with a compliance identifier and objects with the same compliance identifier may be grouped into a same batch. Discussion of how objects are grouped into a same batch appears in further detail below with respect to the description of batch object generation module520.
Checksum: As used herein, the term checksum may refer to a string of letters and numbers generated from a checksum function. The checksum represents a small-sized datum derived from the content of an object for the purpose of detecting errors that may have been introduced during its transmission or storage. KEK Version: As used herein, the term key-encryption key (KEK) version may refer to the current version number of the key-encryption key (KEK). A key-encryption key is the key that encrypts encryption keys. KEKs are rotated periodically to different versions, and the field KEK version indicates the current version of the KEK. EEK: As used herein, the term encrypted encryption-key (EEK) may refer to the current version of the encrypted encryption-key (EEK). Each encrypted object is associated with an encryption key. An encryption key may be further encrypted with a KEK, which may be stored in an external secret repository. EEKs are rotated with the KEK periodically. The field EEK stores a current version of the EEK. In one embodiment, batch metadata structure620includes the following information: Batch ID: As used herein, the term Batch Identifier (ID) may refer to a unique identifier assigned by the content item management module308that identifies a particular batch. Length: As used herein, the term length may refer to the length of the batch measured in a number of units. The length of the batch may be the sum of the lengths of the objects in the batch. Compliance identifier: As used herein, the term Compliance ID (Identifier) may refer to the compliance entity for objects in the batch. In one embodiment, each batch is associated with one compliance identifier as objects grouped into the same batch have the same compliance entity. Status: As used herein, the term status may refer to a status identifier that indicates if the batch object is successfully stored to content storage318. A status “closed” indicates that the batch object is successfully stored to content storage318and may be removed from the content item management module308. An “open” status indicates that the batch is not yet fully transmitted to content storage318. A batch object may be initially assigned a status “open” when created and the status may be changed to “closed” responsive to the batch object being successfully stored to content storage318. Returning to the description ofFIG.5, the batch object generation module520identifies objects to consolidate and generates one or more batch objects containing the identified objects. In one embodiment, the batch object generation module520identifies incoming objects that are associated with a same namespace (e.g. same compliance entity) and arrive within a same time interval. In one embodiment, the batch object generation module520may assign a default time interval when creating a batch object, such as 5 minutes, 10 minutes, an hour, etc. In another embodiment, the batch object generation module520may determine a time interval based on incoming objects. For example, the batch object generation module520may assign a short time interval for objects that arrive frequently and may assign a long time interval for objects that arrive sporadically. Additionally or alternatively, the generation of each batch object may be based on a size limit and/or a wait time limit.
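Taken together, the fields above suggest a straightforward in-memory representation. The following is a minimal Python sketch, using hypothetical names that mirror object metadata structure 610 and batch metadata structure 620 of FIG. 6; it is illustrative only, not the disclosed implementation. The size/wait-time test at the end corresponds to the batching triggers just described.

```python
# Illustrative sketch only: hypothetical dataclasses mirroring the object
# metadata structure 610 and batch metadata structure 620 of FIG. 6.
from dataclasses import dataclass


@dataclass
class ObjectEntry:
    object_id: str          # unique identifier for the object
    batch_id: str           # batch object that contains this object
    length: int             # length of the object's content, in units
    offset: int             # units to skip in the batch before this object starts
    expiration_time: float  # time stamp after which the object is expired
    compliance_id: str      # compliance entity the object belongs to
    checksum: str           # error-detection datum derived from the content
    kek_version: int        # current version of the key-encryption key
    eek: bytes              # current encrypted encryption-key


@dataclass
class BatchEntry:
    batch_id: str
    length: int          # sum of the lengths of the contained objects
    compliance_id: str   # one compliance entity per batch
    status: str          # "open" until stored to content storage, then "closed"


def locate(entry: ObjectEntry) -> tuple[str, int, int]:
    """Everything a read request needs: batch ID, offset, and length."""
    return (entry.batch_id, entry.offset, entry.length)


def should_flush(pending_size: int, oldest_age_s: float,
                 size_limit: int, wait_limit_s: float) -> bool:
    """Close out the pending group once a size limit or wait-time limit is hit."""
    return pending_size >= size_limit or oldest_age_s >= wait_limit_s
```

Keeping the offset and length in each object entry is what lets a later read request address a single object inside a larger batch without retrieving the whole batch.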
For example, responsive to detecting that the size of a group of incoming objects (e.g., in terms of the number of objects and/or the total size of the objects) reaches a certain size limit or that the group of objects have been waiting for additional incoming objects longer than a certain time limit, the batch object generation module520may create a batch object containing the group of objects identified thus far and store the batch to content storage318. In another embodiment, the batch object generation module520may queue a group of incoming objects and consolidate the group of objects to a batch in a specific order based on metadata of the objects. The batch object generation module520may determine different ordering rules based on the different objects to be consolidated into a batch. The batch object generation module520may then issue a write request to content storage318to store the generated batch object. The batch object generation module520may issue a single write request to content storage318to store the batch object that contains a group of identified objects, instead of issuing a write request for each object of the group of objects. In one embodiment, the batch object generation module520may save a temporary copy of the content of the batch object in the batch object generation module520to avoid loss of information in case of potential issues that may arise while data is being transferred. The content storage318may notify the batch object generation module520if the batch object is successfully stored. Responsive to receiving confirmation that the batch object was successfully stored to content storage318, the batch object generation module520may generate and store information associated with the batch object and the objects it contains in a data structure in the batch object datastore510, such as the data structure containing object metadata structure610and batch metadata structure620illustrated inFIG.6. The data structure stores information such as the batch (i.e. field “Batch ID”) that contains the object and information such as length and location (e.g., offset) of the object within the batch, which may serve as a mapping that connects objects with their respective batches and improves efficiency when accessing objects in content storage318. Additionally, responsive to the batch being successfully stored to content storage318, the batch object generation module520may update this batch's “Status” as stored in the data structure to “closed,” indicating that the batch is stored in content storage, and subsequently delete the temporary batch object from the batch object generation module520. The batch object management module530may perform various functionalities such as managing read requests, segmenting and storing large objects, and determining an object storage system to store a batch or an object. The various functionalities are discussed in detail below. In some embodiments, the batch object management module530may receive read requests to read objects stored in content storage318. Responsive to receiving a read request, the batch object management module530may identify metadata of the batch object in order to process the read request.
In the case where the batch object management module530receives a request from a client to access an object (e.g., a read request), the batch object management module530may check the metadata to determine if the object is stored in content storage318and if the object is expired based on the field “Expiration Time.” Responsive to determining that the object is stored in content storage318and is not expired, the batch object management module530may issue a request to content storage318to read the object based on metadata associated with the object. The batch object management module530may access the object by identifying the batch ID of the batch that contains the object and locating the object in the batch using the offset and length of the object. In one embodiment, batch object management module530may send a read request to content storage318, where the read request may specify the batch ID to read from and the location (i.e. offset) in the batch to start reading from. The read request may further comprise a length to read starting from the offset. On the other hand, responsive to the object being detected as expired, the batch object management module530may refrain from accessing content storage318to avoid wasting bandwidth and time. The batch object management module530may then return a message to the content management system100indicating that the object is expired, and the message may be further passed on to the client device120through the network110. In another embodiment, the batch object management module530may segment a large object into multiple smaller objects that are within a size limit, responsive to detecting that the large object is over the size limit. The metadata associated with the large object may further include a field indicating a list of batch IDs representing the batches that each store a part of the large object. In one embodiment, the list of batch IDs may be a linked list, with pointers connecting the list of batch IDs in a certain order that represents the order of the content in the original large object. In another embodiment, content storage318may include multiple object storage systems. In such an embodiment, the batch object management module530may determine in which object storage system to store an object and/or a batch object based on parameters associated with the batch object to be stored and parameters associated with the different object storage systems. For example, the batch object parameters may include, but are not limited to, size of the objects, size of the batches, geographic location(s) of the owner/requestor(s) associated with the objects within the batch, compliance requirements associated with the batch, etc. The object storage system parameters may include, but are not limited to, capacity of the storage systems, geographic location of servers, minimum, maximum, or optimal object size supported, storage types (e.g., hard drives or solid-state drives), performance parameters (e.g., read and/or write latency), security parameters (e.g., whether the system supports encryption or not), costs associated with storing an object of a particular size on a particular storage system, etc. For example, some object storage systems may be more efficient in maintaining smaller objects (e.g. smaller than 4 MB) in larger quantities, while other object storage systems may be more efficient in maintaining larger objects. The different behaviors of different object storage systems may be attributed, for example, to different file formats that the object storage systems use to store objects.
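As a concrete illustration of this parameter-based placement, the sketch below (hypothetical names and criteria; the disclosure does not prescribe a scoring rule) filters candidate systems on hard requirements and then prefers servers in the requester's region:

```python
from dataclasses import dataclass


@dataclass
class StorageSystem:
    name: str
    region: str            # geographic location of the servers
    max_object_mb: float   # largest batch/object size the system handles well
    supports_encryption: bool


def choose_storage(batch_size_mb: float, requester_region: str,
                   needs_encryption: bool,
                   systems: list[StorageSystem]) -> StorageSystem:
    """Pick a storage system using the kinds of parameters described above."""
    candidates = [s for s in systems
                  if batch_size_mb <= s.max_object_mb
                  and (s.supports_encryption or not needs_encryption)]
    if not candidates:
        raise ValueError("no storage system satisfies the batch's requirements")
    # Prefer a system in the requester's region; closer servers transfer faster.
    candidates.sort(key=lambda s: (s.region != requester_region, s.max_object_mb))
    return candidates[0]
```

Sorting on (s.region != requester_region, s.max_object_mb) places in-region systems first and, among those, the system whose supported object size most closely fits the batch, which anticipates the geographic example that follows.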
As another example, for a client who is located in Australia, if two object storage systems are comparable in other parameters but the servers for the two object storage systems are located in Australia and the U.S., respectively, the object storage system with the servers located in Australia may be determined to be the better option to store the object because a closer distance between the object and the server may result in faster data transfer. In some embodiments, the garbage collection module540may delete expired objects and consolidate batches containing both expired and non-expired objects into a new batch. As discussed above, each object may be associated with an expiration time, which may be determined by clients120or compliance rules associated with the object's respective compliance entity. In one embodiment, expired objects are deleted from content storage318, while in another embodiment, clients120may extend the current expiration time to a later time to keep the object alive longer in content storage318. Garbage collection module540may delete a batch object responsive to detecting that all the objects in the batch are expired. In one embodiment, content storage318may store expiration time on a per-item basis. As objects are stored as batches in content storage318, the smallest unit stored in content storage318is a batch. Therefore, for batch objects stored in content storage318, an expiration time is associated with each batch object in content storage318(instead of an expiration time associated with each individual object). In some embodiments, batches stored in content storage318may be assigned or associated with a batch expiration time. For example, if all objects in the batch have the same expiration time, the batch may be assigned the same expiration time, and content storage318may automatically delete the batch at that expiration time. As another example, in some embodiments, if objects in the batch have different expiration times, the batch may be assigned a batch expiration time equal to or greater than the greatest expiration time of its objects, and content storage318may automatically delete the batch at that expiration time. Alternatively, in some embodiments, garbage collection module540may not assign a batch expiration time to the batch. Instead, garbage collection module540may periodically scan through metadata maintained in batch object datastore510, compare the current time with the expiration time, and identify expired objects. Responsive to garbage collection module540detecting that all the objects in a batch are expired, garbage collection module540may send content storage318a request to delete the batch object from content storage318, and delete the respective entries for the batch and the objects from batch object datastore510. In one embodiment where a batch object contains both expired and non-expired objects, garbage collection module540may create a new batch object containing the non-expired objects and delete the old batch object. In another embodiment, garbage collection module540may identify, within the content storage318, one or more batch objects with the same compliance ID and create a new batch object that consolidates the non-expired objects in the old batches. For example, a first batch object may include a first object that is expired and a second object that is not expired, and the first object may be associated with a compliance rule that requires removal of the first object as soon as it is expired.
In such a case, garbage collection module540may identify a second batch object that contains both expired and non-expired objects, create a new batch that consolidates the non-expired objects from the two batches, and delete the old batches, which contain the expired objects. In the illustrated example, two batches are consolidated, but any number of one or more batches may be identified and consolidated. In one embodiment, garbage collection module540may identify batches that, when consolidated, have a size that is close to the size limit of a batch object. In another embodiment, garbage collection module540may identify batches that contain objects with similar expiration times. Then, garbage collection module540may store the new batch object in content storage318and store metadata associated with the new batch object in batch object datastore510. Garbage collection module540may then send content storage318a request to delete the old batches from content storage318and delete metadata associated with the old batches from batch object datastore510. Object encryption/decryption module550encrypts and decrypts objects and rotates encryption keys periodically. In one embodiment, an object may be encrypted using an encryption key. The encryption key may be further encrypted with a key-encryption key (KEK) and, as a result, the encryption key is encrypted into an encrypted encryption-key (EEK). Object encryption/decryption module550may generate a new version of the set of KEK and EEK periodically to replace the old version, to minimize potential exposure of the encryption keys to attackers. The process of replacing an old key by periodically generating a new key may be referred to as key rotation. The current version number associated with the KEK and the current version of the EEK are stored in the metadata maintained in batch object datastore510. Verification module560checks the validity of metadata maintained in batch object datastore510by performing various checks. For example, verification module560may check if the offset and length associated with objects are valid. Referring to the example metadata inFIG.6, assume object 2 has an offset value of 0 (instead of 1); this would indicate that object 1 and object 2 overlap, because object 1 is also located in the batch with a 0-unit offset and has a length of 1 unit. As another example of invalidity in metadata, if batch 1 in batch metadata structure620had a length of 1 (instead of 2), the metadata would also be invalid, because object metadata structure610shows that both object 1 and object 2 are stored in batch 1 and the total length of both objects is 2, which contradicts the metadata indicating that batch 1 has a length of 1. Responsive to detecting invalidity in metadata, verification module560may access the object or the batch object, retrieve the correct metadata information, and update the respective metadata. FIG.7is a flow chart that illustrates an example process of storing objects as batches to content storage318. The content item (i.e. object) management system (e.g. using content item management module308) receives702objects to be stored and batch object generation module520identifies704a subset of the objects associated with a same time period (i.e. objects arriving within a same time period) and with a same namespace (e.g. compliance entity). Then the object management system (e.g. using batch object generation module520) generates706a batch object containing the subset of objects and issues708a request to content storage318to store the batch object.
The object management system (e.g. using batch object generation module520) may generate and store a data structure to batch object datastore510, where the data structure comprises the identifier of the batch object and the position (i.e. offset) of the object within the batch. FIG.8is a flow chart that illustrates an example process of accessing objects in content storage318. Upon receiving802a request to read an object, the object management system (e.g. using batch object management module530) may determine804whether the object is stored in the object storage system based on the data structure stored in batch object datastore510. Responsive to determining that the object is stored in content storage318, the object management system (e.g. using batch object management module530) may issue a read request to access the object in the object storage system.
Example Use Cases of the Content Item Management System
FIGS.9-11illustrate example use cases of various embodiments of content item management module308(which may be referred to as the object management system). FIG.9illustrates one example embodiment of the object management system (e.g. content item management module308). Object management store910may receive objects 1-3 that arrive within a same time interval and are associated with the same namespace (e.g. compliance entity). The batch object generation module520may then consolidate912the objects into batch object 1 (as shown in920) and store metadata of the batch object and objects 1-3 in the batch object datastore510. Batch object generation module520may then send a request to an object storage system940and store932the batch object in the object storage system. The object management system as illustrated in930may maintain the metadata stored in batch object datastore510and may delete922the objects if the batch object is successfully stored in the object storage system. FIG.10illustrates another example embodiment of the object management system. In this embodiment, the size of the received object 1 is larger than the size limit of a batch object. The batch object management module530may segment object 1 into multiple batch objects such as batch object 1 and batch object 2 shown in1020. The batch object management module530may then store1032batch object 1 and batch object 2 to the object storage system. Batch object management module530may store, in batch object datastore510, metadata for object 1 with a data entry such as “Batch object 1→Batch object 2,” which indicates that object 1 is segmented and stored in the order of batch 1 and then batch 2. FIG.11illustrates yet another example embodiment of the object management system. InFIG.11, multiple object storage systems such as object storage systems 1-3 are available for storage. The batch object generation module520may consolidate objects 1 and 2 into batch object 1 and consolidate objects 3 and 4 into batch object 2. Batch object management module530may then determine, for each batch object, an object storage system to store the batch. In the example illustrated inFIG.11, batch object management module530may determine to send batch object 1 to object storage system 1 and send batch object 2 to object storage system 2 for storage. The batch object datastore may store, in the data structure, metadata for each batch indicating the respective object storage system that stores the batch object.
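The flows of FIGS. 7 and 8, together with the expiration check described earlier, can be collected into one schematic sketch. The dictionary-based stand-ins below for content storage 318 and batch object datastore 510 are assumptions made for illustration, not the disclosed implementation:

```python
import time

# In-memory stand-ins for content storage 318 and batch object datastore 510;
# hypothetical structures used for illustration only.
content_storage: dict[str, bytes] = {}   # batch_id -> batch content
object_index: dict[str, dict] = {}       # object_id -> object metadata entry
batch_index: dict[str, dict] = {}        # batch_id -> batch metadata entry


def store_batch(batch_id: str, group: list[tuple[str, bytes, float]],
                compliance_id: str) -> None:
    """FIG. 7 sketch: consolidate a group of objects that share a namespace and
    arrival window into one batch, issuing one write instead of one per object."""
    offset, body = 0, b""
    batch_index[batch_id] = {"compliance_id": compliance_id, "status": "open"}
    for object_id, content, expires_at in group:
        object_index[object_id] = {
            "batch_id": batch_id, "offset": offset,
            "length": len(content), "expiration_time": expires_at,
        }
        body += content
        offset += len(content)
    content_storage[batch_id] = body            # the single write request
    batch_index[batch_id]["length"] = offset
    batch_index[batch_id]["status"] = "closed"  # batch stored successfully


def read_object(object_id: str) -> bytes:
    """FIG. 8 sketch: check expiration first, then read by batch ID + offset."""
    meta = object_index[object_id]
    if time.time() > meta["expiration_time"]:
        raise KeyError("object is expired")     # skip the storage round trip
    batch = content_storage[meta["batch_id"]]
    return batch[meta["offset"]:meta["offset"] + meta["length"]]
```

A garbage collection pass in the same style would scan object_index for expired entries, rebuild batches around the surviving objects, and delete the old batch identifiers, as described above for garbage collection module 540.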
Additional Considerations
Reference in the specification to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment. In this description, the term “module” refers to a physical computer structure of computational logic for providing the specified functionality. A module can be implemented in hardware, firmware, and/or software. With regard to software implementations of modules, it is understood by those of skill in the art that a module comprises a block of code that contains the data structure, methods, classes, header, and other code objects appropriate to execute the described functionality. Depending on the specific implementation language, a module may be a package, a class, or a component. It will be understood that any computer programming language may support equivalent structures using a different terminology than “module.” It will be understood that the named modules described herein represent one embodiment of such modules, and other embodiments may include other modules. In addition, other embodiments may lack modules described herein and/or distribute the described functionality among the modules in a different manner. Additionally, the functionalities attributed to more than one module can be incorporated into a single module. Where the modules described herein are implemented as software, the module can be implemented as a standalone program, but can also be implemented through other means, for example as part of a larger program, as a plurality of separate programs, or as one or more statically or dynamically linked libraries. In any of these software implementations, the modules are stored on the computer readable persistent storage devices of a system, loaded into memory, and executed by the one or more processors of the system's computers. The operations herein may also be performed by an apparatus. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including optical disks, CD-ROMs, read-only memories (ROMs), random access memories (RAMs), magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability. The algorithms presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description above. In addition, the present invention is not described with reference to any particular programming language.
It will be appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any references above to specific languages are provided for disclosure of enablement and best mode of the present invention. While the invention has been particularly shown and described with reference to a preferred embodiment and several alternate embodiments, it will be understood by persons skilled in the relevant art that various changes in form and details can be made therein without departing from the spirit and scope of the invention. As used herein, the word “or” refers to any possible permutation of a set of items. Moreover, claim language reciting ‘at least one of’ an element or another element refers to any possible permutation of the set of elements. Although this description includes a variety of examples and other information to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in these examples. This disclosure includes specific embodiments and implementations for illustration, but various modifications can be made without deviating from the scope of the embodiments and implementations. For example, functionality can be distributed differently or performed in components other than those identified herein. This disclosure includes the described features as non-exclusive examples of system components, physical and logical structures, and methods within its scope. Finally, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
11860837
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention. The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
DETAILED DESCRIPTION OF THE INVENTION
As described above, incident records often include different types of data (evidence) and inconsistencies sometimes exist between these types of data. Studies have shown that inconsistencies in incident records cause users (for example, law enforcement officers) to spend a large amount of time reviewing, for example, video and comparing video records to their reports to discover inconsistencies. Eliminating inconsistencies is important to ensure that cases are not dismissed or delayed and that downstream users in a chain of evidence are allowed timely access to data without having to redo work due to inconsistencies. Thus, it would be useful to automatically discover and notify users of inconsistencies in evidence to improve the accuracy and functioning of record management systems and databases. Among other things, reducing inconsistencies may reduce the amount of time users spend reviewing evidence and redoing work, which in turn may reduce the processing power and memory resources consumed by the record management system. Additionally, reducing inconsistencies increases the speed of providing accurate information. Inconsistencies may arise due to differences between witness accounts. For example, an inconsistency occurs when a distraught victim unintentionally provides an incorrect description of a suspect that differs from the description provided by other witnesses. In another example, an inconsistency occurs when an officer's incident report indicates that contraband was discovered in the back seat of a car, but video footage shows that the contraband was retrieved from the front seat of the car. In yet another example, an inconsistency occurs when an officer's incident report indicates that a suspect was wearing a red shirt, but video footage from a body-worn camera indicates that the suspect was wearing a blue shirt. The evidence, or more broadly, information or data collected is part of a digital chain of evidence (which may include various databases and data stores). As noted, inconsistencies in the evidence may be difficult to detect and resolve. Additionally, inconsistencies have different priorities based on the potential impact an inconsistency has on a case (for example, a legal case prosecuting an alleged criminal), the type of incident the inconsistency is associated with (for example, homicide as opposed to petty theft), and the like. For example, an inconsistency regarding the color of a shirt a suspect was wearing while committing a crime is a minor inconsistency, but an inconsistency regarding who began a physical assault is an important inconsistency.
When an inconsistency in the evidence in an incident record is detected, it is important that appropriate actions are taken to ensure that the inconsistency is resolved to improve accuracy, improve the speed of retrieving accurate information, and reduce potential negative consequences to a case. For example, if the inconsistency does not have a high priority, it may be desirable to allow a detective access to the incident record, but if the inconsistency has a high priority, it may be desirable to block a detective from accessing the incident record until the inconsistency is resolved. To ensure that an inconsistency is resolved, it is important to notify the appropriate users of the inconsistency, for example, users who created or uploaded the inconsistent data (the originators of the inconsistent data), users who are identified in the inconsistent data, or users who accessed the inconsistent data. Therefore, a system which not only detects inconsistencies in data included in an incident record but also determines actions to take based on the determined importance (i.e., priority) of the detected inconsistency and the appropriate users to inform to resolve the detected inconsistency would be beneficial. Embodiments described herein provide, among other things, a method and system for prioritizing and resolving inconsistencies in digital evidence. One example embodiment provides a system for prioritizing and resolving inconsistencies in digital evidence. The system includes a database containing a first type of data including electronically stored multimedia data related to an incident record and a second type of data including electronically stored first responder notes or reports related to the incident record. The system also includes an electronic computing device including an electronic processor. The electronic processor is configured to receive the first type of data and the second type of data from the database, determine an inconsistency between the first type of data and the second type of data, and determine an incident type from the incident record. The electronic processor is also configured to determine, by accessing one or both of an incident type mapping and a machine learning model using the determined incident type, whether a priority of the determined inconsistency meets an electronically stored threshold case impact level. When the priority of the inconsistency meets the stored threshold case impact level, the electronic processor is configured to take a first notification action, and when the priority of the inconsistency does not meet the stored threshold case impact level, the electronic processor is configured to take a second notification action different from the first. Another example embodiment provides a method of prioritizing and resolving inconsistencies in digital evidence. The method includes receiving, with an electronic processor, a first type of data including electronically stored multimedia data related to an incident record and a second type of data including electronically stored first responder notes or reports related to the incident record from a database.
The method also includes determining an inconsistency between the first type of data and the second type of data, determining an incident type from the incident record, and determining, by accessing one or both of an incident type mapping and a machine learning model using the determined incident type, whether a priority of the determined inconsistency meets an electronically stored threshold case impact level. The method further includes, when the priority of the inconsistency meets the stored threshold case impact level, taking a first notification action and, when the priority of the inconsistency does not meet the stored threshold case impact level, taking a second notification action different from the first. FIG.1is a block diagram of a system100for prioritizing and resolving inconsistencies in digital evidence. In the example shown, the system100includes a database105, an electronic computing device110, a first data source115and a second data source120(referred to herein collectively as the data sources115,120), and a first user device125and a second user device130(referred to herein collectively as the user devices125,130). The database105, electronic computing device110, data sources115,120, and user devices125,130are communicatively coupled via a communication network135. The communication network135is an electronic communications network including wireless and/or wired connections. The communication network135may be implemented using one or more of a variety of networks including, but not limited to, a wide area network, for example, the Internet; a local area network, for example, a Wi-Fi network; or a near-field network, for example, a Bluetooth™ network. Other types of networks, for example, a Long Term Evolution (LTE) network, a Global System for Mobile Communications (or Groupe Spécial Mobile (GSM)) network, a Code Division Multiple Access (CDMA) network, an Evolution-Data Optimized (EV-DO) network, an Enhanced Data Rates for GSM Evolution (EDGE) network, a 3G network, a 4G network, a 5G network, and combinations or derivatives thereof may also be used. It should be understood that the system100may include different numbers of user devices and that the two user devices125,130included inFIG.1are purely for illustrative purposes. It should also be understood that the system100may include different numbers of data sources and that the two data sources115,120included inFIG.1are purely for illustrative purposes. It should also be understood that the system100may include a different number of electronic computing devices than the number of electronic computing devices illustrated inFIG.1and that the functionality described herein as being performed by the electronic computing device110may be performed by a plurality of electronic computing devices. In the embodiment illustrated inFIG.1, the electronic computing device110is, for example, a server that is configured to prioritize and resolve inconsistencies in digital evidence. In the embodiment illustrated inFIG.1, the user devices125,130are electronic devices (for example, a smart telephone, a laptop computer, a desktop computer, a smart wearable, or another type of portable electronic device configured to operate as described herein). Each of the user devices is configured to send and receive information from the database105. Likewise, each of the data sources115,120is configured to send information to the database105.
A data source may be, for example, a camera, a microphone, or another type of sensor configured to collect data regarding an incident. A data source of the plurality of data sources115,120may be mobile or stationary; for example, a data source may be a body-worn camera worn by a police officer, a camera installed in a vehicle, or a camera mounted on a wall of a building. It should be noted that each of the user devices125,130may be any one of the above mentioned options regardless of which of the above mentioned options are used for the other user devices in the system100. For example, in one embodiment the first user device125may be a smart telephone while the second user device130may be a smart wearable. It should also be noted that each of the data sources115,120may be any one of the above mentioned options regardless of which of the above mentioned options are used for the other data sources in the system100. For example, the first data source115may be a camera while the second data source120may be an infrared sensor. Additionally, in some embodiments, the functionality described herein as being performed by a data source is instead performed by a user device. In some embodiments, the system100does not include data sources and instead relies on user devices to perform the functionality of data sources. FIG.2is an illustrative example of a chain of evidence150. As illustrated inFIG.2, the chain of evidence150includes a plurality of steps that begins when data (evidence) is added to a records management system and ends when a decision is made as to how long the data will be stored in the records management system. Different users access the data at different steps in the chain of evidence150. For example, police officers access the data at the entry step155and review step160but detectives access the data at the analysis step165. Inconsistencies may occur at a number of steps in the chain of evidence150. In one example, an inconsistency may occur at the entry step155when a police officer enters a report that contradicts received video evidence. In another example, an inconsistency may occur when a detective submits a report after reviewing the evidence at the analysis step165. An inconsistency at one step may be the cause of an inconsistency at another step in the chain of evidence150. For example, there may be an inconsistency between a police officer's report and video evidence. If, at a later step in the chain of evidence150, a detective relies on the police officer's report to create an investigation report, the investigation report may include the same inconsistency as the police officer's report. FIG.3is a block diagram of the first user device125included in the system100. In the example illustrated, the first user device125includes a first electronic processor200(for example, a microprocessor, application-specific integrated circuit (ASIC), or another suitable electronic device), a first memory205(a non-transitory, computer-readable storage medium), a first communication interface210(including, for example, a transceiver for communicating over one or more networks (for example, the network135)), a display device215, and an input device220.
The first memory205may include, for example, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a read only memory (ROM), a programmable read only memory (PROM), an erasable programmable read only memory (EPROM), an electrically erasable programmable read only memory (EEPROM), a Flash memory, or a combination of the foregoing. The first electronic processor200, first communication interface210, first memory205, display device215, and input device220communicate wirelessly or over one or more communication lines or buses. The display device215may be, for example, a touchscreen, a liquid crystal display (“LCD”), a light-emitting diode (“LED”) display, an organic LED (“OLED”) display, an electroluminescent display (“ELD”), and the like. The input device220may be, for example, a touchscreen (for example, as part of the display device215), a mouse, a trackpad, a microphone, a camera, or the like. It should be understood that the first user device125may include more, fewer, or different components than those components illustrated inFIG.3. For example, the first user device125, while illustrated as having only one input device, may include multiple input devices and may include an output device such as speakers. Also, it should be understood that, although not described or illustrated herein, the second user device130may include similar components and perform similar functionality as the first user device125. FIG.4is a block diagram of the database105included in the system100ofFIG.1. In the example illustrated, the database105includes a second electronic processor300(for example, a microprocessor, application-specific integrated circuit (ASIC), or another suitable electronic device), a second communication interface310(including, for example, a transceiver for communicating over one or more networks (for example, the communication network135)), and a second memory305(a non-transitory, computer-readable storage medium). The second memory305may include, for example, the types of memory described with respect to the first memory205. The second electronic processor300, second communication interface310, and second memory305communicate wirelessly or over one or more communication lines or buses. It should be understood that the database105may include more, fewer, or different components than those components illustrated inFIG.4. In the embodiment illustrated inFIG.4, the second memory305includes a first type of data315and a second type of data320. The first type of data315may be electronically stored multimedia data related to an incident record. For example, the first type of data315is video data received from a video camera (for example, a body-worn camera) associated with a first responder who witnessed the incident. The second type of data320may be electronically stored first responder notes or reports related to the incident record. For example, the second type of data320is a text document received from the user device125and created by a first responder to record their observations with regard to the incident. In some embodiments, both the first type of data315and the second type of data320may be electronically stored first responder notes or reports. It should be understood that the first type of data315and the second type of data320relate to the same incident record. 
In some embodiments, the second memory305includes numerous types of data related to numerous incident records, and the first type of data315and the second type of data320are purely for illustrative purposes. Additionally, it should be noted that each type of data may be associated with a plurality of incident records. In some embodiments, the types of data stored in the second memory305of the database105are tagged with one or more tags. Each tag includes a unique identifier for an incident record that the type of data is associated with. In some embodiments, the second electronic processor300receives an indication of the incident record when the second electronic processor300receives the data. For example, the second electronic processor300may receive, from the first user device125, the second type of data and, for example, a unique number or unique name of an incident record that the second type of data is associated with. In other embodiments, the second electronic processor300automatically determines an incident record that data is associated with. For example, the second electronic processor300may receive the first type of data from the first data source115. Based on when the first data source115captured the first type of data, a location at which the first data source115captured the first type of data, or both, the second electronic processor300determines an incident record the first type of data315is associated with and tags the first type of data315with the unique identifier of the associated incident record. In other embodiments, each incident record is associated with a location in the second memory305and data associated with an incident record is stored in the location associated with the incident record in the second memory305. FIG.5is a block diagram of the electronic computing device110included in the system100ofFIG.1. In the example illustrated, the electronic computing device110includes a third electronic processor400(for example, one or more of the electronic devices mentioned previously), a third communication interface410(including, for example, a transceiver for communicating over one or more networks (for example, the communication network135)), and a third memory405(a non-transitory, computer-readable storage medium). The third memory405may include, for example, the types of memory described with respect to the first memory205. The third electronic processor400, third communication interface410, and third memory405communicate via one or more of the mechanisms mentioned previously. It should be understood that the electronic computing device110may include more, fewer, or different components than those components illustrated inFIG.5. The third memory405illustrated inFIG.5includes an incident type mapping415and a machine learning model420. The incident type mapping415maps each type of incident to an associated priority. It should be noted that the third memory405may include multiple mappings that are used in combination with the incident type mapping415to determine the priority of an inconsistency. For example, the third memory405may include an originator mapping, a role type mapping, a resolution time mapping, a severity mapping, and an inconsistency impact mapping. In some embodiments, each of the above mentioned mappings is used alone or in combination to calculate a priority of the inconsistency using techniques such as Bayes' theorem.
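By way of illustration only, the sketch below shows how a base priority drawn from the incident type mapping 415 might be combined with a multiplier from one of the additional mappings. The base values echo examples given later in this description (petty theft 25, homicide 90; a roughly 50%-75% increase for an arresting officer), but the exact numbers and the composition rule are hypothetical:

```python
# Hypothetical incident type mapping 415: incident type -> base priority (1-100).
# The base values echo examples from this description; the rest is invented.
INCIDENT_TYPE_MAPPING = {
    "homicide": 90,
    "assault": 60,
    "vandalism": 35,
    "petty theft": 25,
}

# An additional mapping supplying a multiplier that modifies the base priority
# (here a role type mapping; an arresting officer raises priority by ~50%-75%).
ROLE_TYPE_MAPPING = {"arresting officer": 1.6, "trainee": 1.05}


def priority_from_mappings(incident_type: str, originator_role: str) -> float:
    """Combine a base priority with one multiplier; the other mappings
    (originator, resolution time, severity, impact) would compose the same way."""
    base = INCIDENT_TYPE_MAPPING.get(incident_type, 10)
    return base * ROLE_TYPE_MAPPING.get(originator_role, 1.0)
```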
The machine learning model420may use, for example, logistic regression, Bayesian multivariate linear regression, or other appropriate forms of probabilistic modeling to determine a priority associated with an inconsistency. In some embodiments, weights included in the machine learning model420are initially set by a user but are later updated based on user feedback to optimize prioritization. In some embodiments, the user feedback includes direct feedback from users who resolve the inconsistency. For example, the users may provide feedback in the form of responding to a questionnaire. In some embodiments, the user feedback includes a time a user downstream from an originator of inconsistent data in a chain of evidence spends resolving an inconsistency. It should be noted that the weights of the machine learning model420may be adjusted manually by a user, instead of or in addition to being updated based on user feedback. The incident type mapping415and the machine learning model420may be used alone or in combination to determine a priority associated with an inconsistency. For example, the incident type mapping415and the machine learning model420may each determine a priority associated with an inconsistency and when the difference between the determined priorities is greater than a predetermined threshold, the incident type mapping415and the machine learning model420recalculate the priority. FIG.6illustrates one example of a method500of prioritizing and resolving inconsistencies in digital evidence. The method500begins at block505when the third electronic processor400receives the first type of data315and the second type of data320. The first type of data315may include electronically stored multimedia data related to an incident record (for example, which may have been generated by first data source115as video from a body-worn camera and tagged with a unique identifier of the incident record). The second type of data320may include electronically stored first responder notes or reports (for example, which may have been generated by an officer using first user device125and tagged with the unique identifier of the incident record) related to the incident record from the database105. The third electronic processor400may receive data by requesting data from the database105that is associated with the incident record. Using the unique identifier associated with the incident record, the second electronic processor300retrieves from second memory305the types of data tagged with the unique identifier or stored in a memory location associated with the unique identifier and sends the types of data tagged with the unique identifier to the third electronic processor400. At block510, the third electronic processor400determines an inconsistency between the first type of data315and second type of data320. For example, the third electronic processor400may use techniques such as optical character recognition (OCR), image analysis (for example, facial recognition software, object detection software, and the like), audio analysis, and the like to detect inconsistencies between the first type of data315and the second type of data320. 
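Although the disclosure does not specify an implementation for this comparison, the step can be pictured schematically. The sketch below assumes the analysis techniques have already reduced each type of data to simple attribute-value pairs, which is an invented simplification; the examples that follow walk through the same idea in prose.

```python
def extract_report_attributes(report_text: str) -> dict[str, str]:
    """Stand-in for text analysis of first responder notes (second type of data
    320); a real system would parse free text, not key-value lines."""
    attrs = {}
    for line in report_text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            attrs[key.strip().lower()] = value.strip().lower()
    return attrs


def find_inconsistencies(video_attrs: dict[str, str],
                         report_attrs: dict[str, str]) -> list[str]:
    """Flag any attribute present in both sources whose values disagree."""
    return [key for key, value in report_attrs.items()
            if key in video_attrs and video_attrs[key] != value]


# Example: the report says the shirt was red; the footage shows blue.
video = {"shirt color": "blue", "contraband location": "front seat"}
report = extract_report_attributes(
    "Shirt color: red\nContraband location: front seat")
assert find_inconsistencies(video, report) == ["shirt color"]
```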
In one example, when the first type of data315is a video recording of a stake out (for example, captured by a dashboard camera in a police vehicle) and the second type of data320is a police officer's account of the stake out, the third electronic processor400may utilize optical character recognition to determine one or more times the police officer's account mentions an event occurring. For example, the third electronic processor400may determine a first time when a police officer recorded seeing a suspect's car and a second time when a police officer recorded seeing the suspect commit a crime, for example, exchanging money for illegal drugs. The third electronic processor400may use image analysis to determine one or more frames of the video recording captured at the first time and one or more frames of the video recording captured at the second time. If the suspect's car is not included in the one or more frames of the video recording captured at the first time or the one or more frames of the video recording captured at the second time do not show the suspect exchanging money for illegal drugs, the third electronic processor400detects an inconsistency (or determines the existence of an inconsistency). In another example, the first type of data315is a video recording of a robbery (for example, captured by a body-worn camera associated with a police officer) and the second type of data320is the police officer's account of the robbery. When the police officer's account of the robbery states that a shirt worn by a person suspected of committing the robbery is red while the video recording shows the shirt worn by the person suspected of committing the robbery as blue, the third electronic processor400detects an inconsistency. In yet another example, the first type of data315is a video recording of an assault with a deadly weapon (for example, captured by a body-worn camera associated with a police officer) and the second type of data320is the police officer's account of the assault. When the police officer's account of the assault states that a first suspect shot first while the video recording shows that a second suspect shot first, the third electronic processor400detects an inconsistency. At block515, the third electronic processor400determines an incident type from the incident record. The incident type is, for example, the type of incident associated with the incident record that the first type of data315and the second type of data320are related to. For example, the incident type may be one of a homicide, an assault, a petty theft, an act of vandalism, and the like. Different incident types are related to different priorities. In general, an inconsistency related to a high priority incident type is considered more important to address than an inconsistency related to a low priority incident type. At block520, the third electronic processor400determines, by accessing one or both of the incident type mapping415and the machine learning model420stored in the third memory405of the electronic computing device110using the determined incident type, whether an initial priority of the determined inconsistency meets an electronically stored threshold case impact level (for example, a predetermined value stored in the third memory405). In some embodiments, the initial priority associated with an inconsistency may be a number selected from a scale of one to one hundred (1-100), where one is the lowest initial priority and one hundred is the highest initial priority.
In one example, if the incident type of the incident record that the inconsistency has been determined in is petty theft, an initial priority of twenty-five (25) may be assigned to the inconsistency. In another example, if the incident type of the incident record that the inconsistency has been determined in is homicide, an initial priority of ninety (90) may be assigned to the inconsistency. In other embodiments, the initial priority associated with an inconsistency may be a level encompassing a range of values. For example, possible initial priorities for an inconsistency may be any value from 0 to 1, and a first level (the lowest initial priority) may be assigned values 0 to 0.33, a second level may be assigned values 0.34 to 0.66, and a third level (the highest initial priority) may be assigned values 0.67 to 1. In one example, the third electronic processor400may assign an inconsistency associated with an incident type of vandalism a value of 0.35 and therefore assign the inconsistency to the second level. In some embodiments, in addition to using one or both of the incident type mapping415and the machine learning model420, the third electronic processor400uses additional contextual information from the first or second data types to determine if a modified priority of the determined inconsistency meets an electronically stored threshold case impact level. The additional contextual information includes at least one selected from the group comprising an originator of the first type of data (stored in the originator mapping), an originator of the second type of data (stored in the originator mapping), a role (for example, police officer, forensic specialist, and the like) of the originator of the first type of data (stored in the role type mapping), a role of the originator of the second type of data (stored in the role type mapping), whether the first type of data, the second type of data, or both are associated with more than one incident record, when an inconsistency occurs in a chain of evidence, a resolution time of the inconsistency (stored in the resolution time mapping), a severity associated with the incident (stored in a severity mapping), and an impact of the inconsistency (stored in the inconsistency impact mapping). In some embodiments, the originator mapping specifies a multiplier for the modified priority of the inconsistency based on an originator of a type of data (for example, the first type of data315and the second type of data320). For example, if the originator of the second type of data320is a police officer who is on probation, the third electronic processor400may significantly increase the modified priority, for example, increase the modified priority by 50%-75%. In some embodiments, the role type mapping specifies a multiplier for the modified priority of the inconsistency based on a role of an originator of a type of data (for example, the first type of data315and the second type of data320). For example, the first type of data315may originate from a body-worn camera of an arresting officer. Therefore, the role of the originator of the first type of data315is arresting officer. When the role of the originator of the first type of data315is arresting officer, the third electronic processor400may significantly increase the modified priority, for example, by 50%-75%.
In another example, when a role of an originator of the second type of data320is a trainee and an inconsistency is detected in the notes of the trainee, the third electronic processor400may only slightly increase the modified priority of the determined inconsistency (for example, the third electronic processor400may increase the modified priority by 1%-10%) or not change the modified priority at all (for example, the third electronic processor400may increase the modified priority by 0%). In some embodiments, the resolution time mapping specifies a multiplier for the modified priority of the inconsistency based on the amount of time it takes users in, for example, the chain of evidence150to resolve an inconsistency. For example, the third electronic processor400significantly increases the modified priority of a determined inconsistency by, for example, 50%-100% when the inconsistency takes a user a long time to resolve (for example, when the user has to review a large quantity of video footage in detail in order to resolve an inconsistency). On the other hand, the third electronic processor400may only slightly increase the modified priority of the determined inconsistency (for example, the third electronic processor400may increase the modified priority by 1% to 10%) or not change the modified priority at all (for example, the third electronic processor400may increase the modified priority by 0%) when the inconsistency takes a user a small amount of time to resolve. In some embodiments, for each incident type, the severity mapping specifies a multiplier for the modified priority of the inconsistency based on a severity of the incident. For example, the third electronic processor400significantly increases the modified priority of a determined inconsistency by, for example, 50%-100% when the incident type is homicide and the severity of the incident is first degree homicide. On the other hand, the third electronic processor400may only slightly increase the modified priority of the determined inconsistency (for example, the third electronic processor400may increase the modified priority by 1%-10%) or not change the modified priority at all (for example, the third electronic processor400may increase the modified priority by 0%) when the incident type is homicide and the severity of the incident is third degree homicide. In some embodiments, for each incident type, the inconsistency impact mapping specifies a multiplier for the inconsistency based on the impact of the inconsistency type. Inconsistency types having a high impact on a case are defined herein as inconsistency types which affect users downstream in a chain of evidence (for example, inconsistencies that affect the work of a detective or legal counsel). In one example, for a homicide incident, the inconsistency impact mapping may state that an inconsistency regarding a suspect's clothing description has a high impact and the third electronic processor400may significantly increase the modified priority of the inconsistency by, for example, 50%-100%. In another example, for a homicide incident, the inconsistency impact mapping may state that an inconsistency regarding a suspect's gait (for example, whether a suspect was running or walking up to a house) has a low impact and the third electronic processor400may only slightly increase the modified priority (by, for example, 1%-10%) or not change the modified priority at all. 
Continuing the impact examples above, for a petty theft incident type, the inconsistency impact mapping may state that both an inconsistency regarding a suspect's clothing description and an inconsistency regarding a suspect's gait may have similarly low impact and the third electronic processor400only slightly increases the modified priority (by, for example, 1%-10%) or does not change the modified priority at all. In some embodiments, when the first type of data, the second type of data, or both are associated with more than one incident record, the third electronic processor400increases the modified priority of the determined inconsistency. For example, the third electronic processor400may increase the modified priority of the inconsistency by 20% if the first type of data315is associated with two incident records and increase the modified priority of the inconsistency by 30% if the first type of data315is associated with three incident records. The third electronic processor400may determine the number of incident records by determining the number of tags associated with the first type of data315in the database105. In some embodiments, the third electronic processor400significantly increases the modified priority of inconsistencies that occur early in a chain of evidence (for example, the chain of evidence150) and only slightly increases the modified priority of inconsistencies that occur late in the chain of evidence150(by, for example, 1%-10%) or does not change it at all. For example, for an inconsistency in evidence that occurred at the entry step155of the chain of evidence150, the third electronic processor400may increase the modified priority by, for example, 75%-100%. In contrast, for an inconsistency that occurred at analysis step165, the third electronic processor400may increase the modified priority by, for example, 1%-20%. It should be noted that, in some embodiments, instead of including multipliers, the mappings described above may include values. In some embodiments, rather than increasing the modified priority by a percentage, the modified priority may be increased by a value, for example, a number between 0 and 1. At block525, when the initial or modified priority (the priority) of the inconsistency meets or exceeds the stored threshold case impact level, the third electronic processor400takes a first notification action. At block530, when the initial or modified priority of the inconsistency does not meet the stored threshold case impact level, the third electronic processor400takes a second notification action different from the first. In one example of a notification action, the third electronic processor400adds an originator of the first type of data and an originator of the second type of data to a group communication regarding the inconsistency and sends a notification to the group communication regarding the inconsistency. In another example of a notification action, the third electronic processor400may identify a first user from one of an originator of the first type of data, an originator of the second type of data, a user in the first type of data, and a user in the second type of data. A user in the first type of data or second type of data may be, for example, a user included in video footage, a user whose name is mentioned in a report, and the like. The third electronic processor400identifies a second user as a user in the first type of data or a user in the second type of data.
The third electronic processor400adds the first user and the second user to a group communication regarding the inconsistency and sends the notification to the group communication. Group communications may be accessed by users via the user devices125,130of the system100. For example, the notification may be displayed to the originator of the first type of data315via the first user device125and displayed to the originator of the second type of data320via the second user device130. In some embodiments, the third electronic processor400takes a first notification action by sending a notification to a group communication (for example, the group communication including an originator of the first type of data315and an originator of the second type of data320) and takes a second notification action by adding a notification to a queue of inconsistency notifications associated with the incident record. For example, when an initial or modified priority of a determined inconsistency meets the stored threshold case impact level, the third electronic processor400sends a notification to the group communication and when the initial or modified priority of the determined inconsistency does not meet the stored threshold case impact level, the third electronic processor400adds the determined inconsistency to the queue of inconsistencies. The queue of inconsistencies is accessed by users when the users wish to resolve one or more inconsistencies. In some embodiments, the third electronic processor400adds the notifications to the end of the queue. In other embodiments, the third electronic processor400adds the notifications to the queue based on the priority associated with the notifications (for example, notifications with a higher priority are added to the front of the queue and notifications with a lower priority are added to the back of the queue). In some embodiments, the third electronic processor400takes a first notification action by blocking users following an originator of the first type of data and an originator of the second type of data in a chain of evidence associated with the incident record. The third electronic processor400takes a second notification action by sending a notification to a group communication (for example, the group communication including an originator of the first type of data315and an originator of the second type of data320). For example, when an initial or modified priority of a determined inconsistency does not meet the stored threshold case impact level, the third electronic processor400sends a notification to a group communication and when the initial or modified priority of the determined inconsistency meets the stored threshold case impact level, the third electronic processor400blocks users following an originator of the first type of data and an originator of the second type of data in a chain of evidence associated with the incident record. Blocking users downstream in a chain of evidence prevents users from accessing inconsistent data in the incident record that may negatively impact a case (for example, causing incorrect evidence to be presented in court). Once the inconsistency has been resolved, the blocked users' access to the first type of data and the second type of data is restored. It should be understood that, in some embodiments, there are further types of notification actions which may be taken by the third electronic processor400than are described herein.
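A priority-ordered queue of inconsistency notifications, as described above, might be sketched as follows; this is a minimal illustration using Python's heapq, not a structure prescribed by this disclosure.

    import heapq

    # Sketch of a queue in which higher-priority notifications surface at
    # the front; insertion order breaks ties among equal priorities.

    class InconsistencyQueue:
        def __init__(self):
            self._heap = []
            self._count = 0

        def add(self, priority, notification):
            # Negate the priority so that the highest priority pops first.
            heapq.heappush(self._heap, (-priority, self._count, notification))
            self._count += 1

        def next_notification(self):
            return heapq.heappop(self._heap)[2]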
As a further example, the third electronic processor400may send a notification regarding an inconsistency to an originator of the first type of data and an originator of the second type of data individually rather than send a notification to a group communication including an originator of the first type of data and an originator of the second type of data. It should also be understood that there are multiple different combinations of notification actions that may be performed by the third electronic processor400as a first notification action and a second notification action. In some embodiments, the method500may be performed by the third electronic processor400continuously, periodically, upon receiving a request from a user, or whenever there is an update to the incident record. FIG.7is a flow chart of a method600of resolving inconsistencies in digital evidence. At block605, the third electronic processor400receives data (evidence) for inclusion in an incident record. Similar to the method500, at block610, the third electronic processor400determines if there is an inconsistency between a first type of data and a second type of data included in the incident record. When an inconsistency is detected, at block615, the third electronic processor400determines an initial and/or modified priority associated with the inconsistency, in the same or a similar way as set forth with respect toFIG.6and method500. At block620, the third electronic processor400identifies users associated with the inconsistency. As described above, the identified users may include originators of the inconsistent data, users identified in the inconsistent data, and the like. At block620, the third electronic processor400is configured to identify sources of supplemental data not yet included in the incident record. At block625, the third electronic processor400sends notifications to the identified users and queries or searches the identified sources for supplementary data associated with an inconsistency. Once the supplementary data is found, it is added to the incident record. For example, the third electronic processor400may search the database105for video footage of an incident that was captured from a different perspective than the video data already included in the incident record. The supplementary data may aid users in resolving the inconsistency. At block630, the third electronic processor400receives supplemental data from at least one identified source, updated data from at least one of the notified users, or both. At block635, the third electronic processor400adds the updated data, supplemental data, or both to the incident record. In some embodiments, the third electronic processor400flags, tags, or otherwise marks the first type of data, the second type of data, or both as inconsistent in the stored record. For example, when the third electronic processor400receives updated data from the originator of the first type of data, the third electronic processor400flags the first type of data as inconsistent data. Once the third electronic processor400receives the updated data, supplemental data, or both, the third electronic processor400executes the method600again beginning at block610. After data is marked or flagged as inconsistent, it is not considered by the third electronic processor400when the third electronic processor400executes the method500.
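The flow of method600(blocks610-635) might be summarized as in the sketch below; every helper here is a hypothetical stand-in, supplied by the caller, for the corresponding block described above.

    # Schematic sketch of the method 600 loop; each helper callable is a
    # hypothetical stand-in for the corresponding block described above.

    def resolve_inconsistencies(record, detect, prioritize, identify_users,
                                identify_sources, notify, query, receive_updates):
        inconsistency = detect(record)                      # block 610
        while inconsistency is not None:
            priority = prioritize(inconsistency)            # block 615
            users = identify_users(inconsistency)           # block 620
            sources = identify_sources(record)
            notify(users, priority)                         # block 625
            supplemental = query(sources, inconsistency)
            updated = receive_updates(users)                # block 630
            record.extend(supplemental + updated)           # block 635
            inconsistency.flagged = True  # excluded from later method 500 runs
            inconsistency = detect(record)                  # re-run from block 610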
In some embodiments, when multiple inconsistencies are discovered in an incident record, the third electronic processor400determines an order for resolving the inconsistencies. The determined order is related to when in a chain of evidence (for example, the chain of evidence150) each inconsistency occurred. For example, an inconsistency in evidence that occurred at the entry step155of the chain of evidence150comes before an inconsistency that occurred at analysis step165. The third electronic processor400may provide the order to users associated with one or more of the discovered inconsistencies (for example, the third electronic processor400sends the order to a group communication including the users) so that inconsistencies earlier in the chain of evidence with the potential to impact other subsequent inconsistencies are resolved first. In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The invention is defined solely by the appended claims including any amendments made during the pendency of this application and all equivalents of those claims as issued. Moreover, in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” “has,” “having,” “includes,” “including,” “contains,” “containing” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “comprises . . . a,” “has . . . a,” “includes . . . a,” or “contains . . . a” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, contains the element. The terms “a” and “an” are defined as one or more unless explicitly stated otherwise herein. The terms “substantially,” “essentially,” “approximately,” “about” or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1% and in another embodiment within 0.5%. The term “coupled” as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is “configured” in a certain way is configured in at least that way, but may also be configured in ways that are not listed.
It will be appreciated that some embodiments may be comprised of one or more generic or specialized processors (or “processing devices”) such as microprocessors, digital signal processors, customized processors and field programmable gate arrays (FPGAs) and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus described herein. Alternatively, some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic. Of course, a combination of the two approaches could be used. Moreover, an embodiment can be implemented as a computer-readable storage medium having computer readable code stored thereon for programming a computer (for example, comprising a processor) to perform a method as described and claimed herein. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory) and a Flash memory. Further, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein will be readily capable of generating such software instructions and programs and ICs with minimal experimentation. The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
DETAILED DESCRIPTION OF THE EMBODIMENTS With reference to the accompanying drawings, exemplary embodiments of the present application are described below, which include various details of the embodiments of the present application to facilitate understanding and should be considered as merely exemplary. Therefore, those of ordinary skill in the art should recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the application. Also, for clarity and conciseness, descriptions of well-known functions and structures are omitted in the following descriptions. FIG.1is a flowchart of a data labeling method according to an embodiment of the present application. As shown inFIG.1, the method may include the following steps. S11, sampling a data source according to an evaluation task for the data source to obtain sampled data. In the embodiment of the present application, the data source may include a plurality of types, such as a data warehouse, a File Transfer Protocol (FTP) system or a Hadoop Distributed File System (HDFS). The data warehouse can include structured data, a knowledge graph, and the like. The FTP system or HDFS can include a data set such as a log and offline data and the like. A unified management can be performed on multiple data sources that need to be evaluated. An evaluation task can be created for a data source that needs to be evaluated. The evaluation task may include various parameters required for an evaluation, such as accuracy rate, repetition rate, attributes to be evaluated, and the number of pieces of data to be evaluated. The data source can be sampled, at a task device such as a Product Manager/Project Manager (PM) device, according to the evaluation task to obtain the sampled data. For example, if the data source includes 10,000 pieces of data and 1,000 pieces of data thereof need to be evaluated, 1,000 pieces of sampled data shall be sampled from the data source. In one embodiment, the step S11includes: the data source is sampled according to the evaluation task for the data source and the address information of the data source, to obtain the sampled data. Exemplarily, the address information of the data source can be included in the evaluation task, or it can be sent to the task device separately outside of the evaluation task. The task device can quickly and accurately find the data source according to the address information, and the sampled data can be obtained by sampling the data source. S12, generating a labeling task from the sampled data. In an embodiment of the present application, a plurality of labeling tasks can be generated from the sampled data. The number of pieces of sampled data included in each labeling task may be the same or different. For example, 5 labeling tasks are generated from 1,000 pieces of sampled data, one labeling task for every 200 pieces of data. As another example, 3 labeling tasks are generated from 1,000 pieces of sampled data, and the 3 labeling tasks respectively include 300, 330, and 370 pieces of data. S13, sending the labeling task to a labeling device. In an embodiment of the present application, there may be a plurality of labeling devices. The working status of each of the labeling devices can be used as a basis for deciding which labeling task is to be sent to each labeling device. Alternatively, the working status of each of the labeling devices can be consulted in advance during the process of generating the labeling tasks.
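Step S12 might be implemented as a simple partition of the sampled data, as in the following sketch; the equal-size chunking policy is an illustrative assumption, since the tasks may also differ in size, as in the 300/330/370 example above.

    # Sketch of step S12: partitioning sampled data into labeling tasks.

    def generate_labeling_tasks(sampled_data, task_size):
        # Split the sampled data into labeling tasks of task_size pieces each
        # (the final task may be smaller if the data does not divide evenly).
        return [sampled_data[i:i + task_size]
                for i in range(0, len(sampled_data), task_size)]

    # Example: 1,000 pieces of sampled data yield 5 labeling tasks of 200 each.
    tasks = generate_labeling_tasks(list(range(1000)), 200)
    assert len(tasks) == 5 and all(len(t) == 200 for t in tasks)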
Sending different labeling tasks to the plurality of labeling devices, respectively, is beneficial for increasing the labeling speed. For example, 3 generated labeling tasks are respectively sent to 3 labeling devices. As another example, 10 generated labeling tasks are respectively sent to 5 labeling devices, two labeling tasks per labeling device. In one embodiment, the step S13includes: sending the labeling task to the labeling device via a server. For example, the task device sends each labeling task to the server, and the server evaluates the task status of each labeling device and decides which labeling device each labeling task is to be sent to. S14, receiving a labeled result of the labeling task from the labeling device. In the embodiment of the present application, after receiving the labeling task, the labeling device may label various pieces of data in the labeling task. The labeling method can differ for different evaluation parameters, such as repetition rate, accuracy rate, and the like. Data to be labeled and a labeling page of the data can be displayed at the labeling device. A marker can operate on the labeling page and record the labeled result of each piece of data in the labeling task. The labeled result includes, but is not limited to, a text, an image, a video, and other things that are associated with the evaluation parameter. For example, data A may need to be labeled with information about whether it is accurate, the position on the labeling page where it is recorded as inaccurate, and, if needed, a retrieved screenshot providing an accurate interpretation of data A. The labeled result can further include information about the labeling device and the marker, the labeling time, and the like. In one embodiment, the step S14includes the step of receiving the labeled result of the labeling task from the labeling device via the server. For example, the labeling device sends the labeled result of a labeling task to the server first, and then the server sends the labeled result to the task device that initiates the labeling task. Transmitting the labeling tasks and labeled results via the server facilitates uniform management and rational distribution of the labeling tasks and labeled results. In an embodiment of the present application, an automatic evaluation of data can be implemented by using the evaluation task. As compared with manual evaluation, such automatic evaluation is helpful to improve the evaluation efficiency and reduce the evaluation time. By using the evaluation task to sample the data source, the amount of data to be processed can be reduced. And since the sampled data is random, accurate labeled results can be obtained. Further, since a unified management can be performed on the data source and the data label evaluation process, the processes are centralized and the management costs are reduced. In addition, the parameters to be labeled can be configured uniformly through the evaluation task to unify evaluation standards. In an embodiment, as shown inFIG.2, the method may further include the following steps: S15, recording the labeled result of the labeling task into the evaluation task. The received labeled result of each labeling task can be recorded in the evaluation task that generates these labeling tasks for subsequent unified viewing and management.
In an embodiment, the method further includes: S21, generating the evaluation task for the data source, wherein the evaluation task includes an evaluation index. Exemplarily, the evaluation task includes various parameters required for the evaluation, such as accuracy rate, repetition rate, which attributes are to be evaluated, how many pieces of data are to be evaluated, and the like. Those parameters can also be referred to as evaluation indexes. For different evaluation indexes, the sampling rule to be used, such as a reservoir method or a cluster method, can be preset. When performing an evaluation task, the preset sampling rule can be automatically called according to the evaluation index in the evaluation task so as to initiate the sampling. In an embodiment, as shown inFIG.3, the method further includes the following steps: S31, receiving an inspection result and the address information of a first data source which has passed a first inspection. S32, performing a second inspection on the first data source to use the first data source which has passed the second inspection as the data source, and generating an evaluation task for the data source. Before the evaluation task is generated, a first inspection can be performed on the data source in advance by an inspector, such as a Research and Development (RD) personnel. The first inspection includes, but is not limited to, inspections of obvious typographical errors, formatting errors and the like in the data. The first inspection can be performed offline or online. For example, the RD personnel extracts a small amount of data from a data source offline. After it is inspected to be accurate, the result of the first inspection is communicated to a PM personnel. The PM personnel performs a second inspection on the data source at the task device based on the result of the first inspection. If the result of the second inspection is accurate, it can be determined that the second inspection is passed. As another example, the RD personnel extracts a small amount of data from a data source at the inspection device. After it is inspected to be accurate, the result of the first inspection is sent to the task device by the server. A PM personnel performs a second inspection on the data source at the task device based on the result of the first inspection. If the result of the second inspection is accurate, it can be determined that the second inspection is passed. Unqualified data can be filtered out through multiple inspections, such that unnecessary data labeling processes are reduced. In an embodiment, as shown inFIG.2andFIG.3, the method further includes the following steps: S16, evaluating the received labeled result to obtain an evaluation result for the labeled result. S17, sending the evaluation result for the labeled result to a server. S18, receiving an analysis result from the server, wherein the analysis result comprises labeled results of labeling tasks belonging to the same evaluation task and/or evaluation results for labeled results of labeling tasks belonging to the same evaluation task. The evaluation result may include the acceptance or modification of the labeled result. Analyzing the labeled results and/or evaluation results via the server is helpful for summarizing the overall quality of the labels. For example, if there are 10,000 pieces of sampled data and 9,000 of them are accurate data, it can be concluded by analysis that the accuracy rate of the data source is 90%.
The accuracy rate of the data source that is obtained through this evaluation can be displayed at the task device. In the embodiment of the present application, after the task device has received the labeled results from the labeling device via the server, the task device may evaluate each labeled result. For example, the PM personnel can see, at the task device, the labeled result of each of the labeling tasks divided from each evaluation task. Then, the PM personnel can check whether these labeled results are accurate or reasonable. It is possible to give an evaluation result for each labeled result. If the labeled results are accurate, the PM personnel can submit the evaluation result of each of the labeled results to the server at the task device. If a labeled result is inaccurate or unreasonable, the PM personnel can also return the labeled result to the labeling device for re-labelling. FIG.4is a block diagram of a data labeling apparatus according to an embodiment of the present application. The apparatus may include:
a sampling module41, configured to sample a data source according to an evaluation task for the data source to obtain sampled data;
a first generating module42, configured to generate a labeling task from the sampled data;
a first sending module43, configured to send the labeling task to a labeling device; and
a first receiving module44, configured to receive a labeled result of the labeling task from the labeling device.
In an implementation, as shown inFIG.5, the apparatus further includes:
a recording module45, configured to record the labeled result of the labeling task into the evaluation task.
In an implementation, the apparatus further includes:
a second generating module51, configured to generate an evaluation task for the data source, wherein the evaluation task includes an evaluation index.
In an implementation, the apparatus further includes:
a second receiving module52, configured to receive an inspection result and address information of a first data source which has passed a first inspection; and
a third generating module53, configured to perform a second inspection on the first data source to use the first data source which has passed the second inspection as the data source, and generate an evaluation task for the data source.
In an implementation, the sampling module41is configured to:
sample the data source according to the evaluation task for the data source and the address information of the data source, to obtain the sampled data.
In an implementation, the apparatus further includes:
an evaluating module46, configured to evaluate the received labeled result to obtain an evaluation result for each labeled result;
a second sending module47, configured to send the evaluation result for the labeled result to a server; and
a third receiving module48, configured to receive an analysis result from the server, wherein the analysis result comprises labeled results of labeling tasks belonging to the same evaluation task and/or evaluation results for labeling tasks belonging to the same evaluation task.
In an implementation, the first sending module43is configured to send the labeling task to the labeling device via a server; and the first receiving module44is configured to receive the labeled result of the labeling task from the labeling device via the server. In this embodiment of the present application, the functions of the modules in each device refer to the corresponding description of the above-mentioned method and thus the description thereof is omitted herein.
FIG.6is a block diagram of a data labeling system according to an embodiment of the present application. The data labeling system may include:
a task device61, configured to execute the data labeling method according to any one of the embodiments; and
at least one labeling device62, configured to receive a labeling task from the task device, to label data included in the labeling task, and to send a labeled result of the labeling task to the task device.
In an implementation, as shown inFIG.7, the system further includes:
a server63, configured to receive an evaluation result for each labeled result from the task device; to analyze labeled results of labeling tasks belonging to the same evaluation task to obtain an analysis result, wherein the analysis result includes the labeled results of the labeling tasks belonging to the same evaluation task and/or evaluation results for the labeled results belonging to the same evaluation task; and to send the analysis result to the task device.
In an implementation, the labeling device62is further configured to preload and cache pages of a plurality of pieces of data to be labeled. The page of the first piece of data can be shown on the labeling interface first. After the first piece of data has been labeled, the labeling interface switches to the page of the second piece of data and preloads the pages of the subsequent pieces of data. In addition, a page can be reloaded and the top cached page(s) can be deleted, so as to reduce the impact of the page caching on the performance of the device. In an implementation, the system may further include an inspection device, which is configured to perform a first inspection on a first data source, and to send the inspection result of the first inspection and the address information of the first data source which has passed the first inspection to the task device. The task device performs a second inspection on the first data source based on the result of the first inspection. After the first data source passes the second inspection, the first data source can be used as the data source for evaluation and an evaluation task of the data source is generated. In the system of the embodiment of the present application, the functions of each component may refer to corresponding descriptions in the foregoing method, and thus the description thereof is omitted herein. In an application example, a quality evaluation platform may include the following parts: data source management, evaluation item management, data sampling, labeling task, statistical report, and so on. The quality evaluation platform can evaluate the labels on data. The functions of each part are introduced below. 1. Data Source Management: When performing data source management, a unified management can be performed on various currently evaluated data sources, such that repeated delivery of data can be avoided, data redundancy can be reduced, and overall process efficiency can be improved. Platform users such as an RD personnel or a PM personnel can import various data sources to the platform for unified management at a client device. Data source types include, but are not limited to, various types of data warehouses (such as HBASE) and offline data (such as FTP systems or HDFS) and so on. The data warehouse can include structured data, a knowledge graph, and so on, and the FTP system or HDFS can include a data set such as a log and offline data and the like.
In the process of data source management, the data source or the data thereof can be deleted, added, and updated and so on, so that distributed data can be effectively managed, as shown inFIG.8. For example, when creating a data source, information like the identification (ID) of the data source, the name of the data source, the person in charge, the data type, the address of the data source, the status, the updated time, the description, the operation permission and the like can be recorded. In addition, multiple data sources can be managed through a data source list. 2. Evaluation Item Management: For the selected data source, the PM personnel selects an index to be evaluated at the task device, and configures the related sampling method and evaluation type. As shown inFIG.9, the evaluation index may include accuracy rate, repetition rate, and other types. When performing evaluation item management, operations like editing, deleting, and adding can be performed on the evaluation items. Examples of functions of evaluation item management are as follows:
1.) Creating a new evaluation item. For example, when configuring an evaluation item, the name of the evaluation item, the person in charge, the demander(s), etc. can be determined, the data source to be evaluated, the scenario, the marker, etc. can be selected, and the evaluation index(es) required by the data can be confirmed. Furthermore, a routine cycle of the evaluation task can also be set; for example, data sampling can be automatically performed every month/week, and an evaluation task can be initiated.
2.) Configuring an evaluation index. For example, when configuring an index item, the attribute and number of samples can be selected according to a specific index such as repetition rate, accuracy rate, low quality rate, recall rate, and the like as the basis for performing the sampling and evaluation. One data source can be configured with multiple evaluation indexes at the same time. When creating the index item, the creation can be deemed complete if information like a task type, task command, attribute, and number of samples has been filled in or selected. For example, if the task type has been selected as “evaluation sampling”, the task command has been selected as “random SPO repetition rate”, the attribute has been filled with “name”, and the number of samples has been filled with “500”, then the creation of this index is deemed complete.
3. Data Sampling: As shown inFIG.10, a sampling task can be initiated according to a sampling rule (or a sampling method), and sampled data can be output for labeling. Sampling rules include, but are not limited to, a reservoir method, a cluster method, or the like. The sampling rule can be predefined according to the data type and the data evaluation index. The sampling rule for different evaluation indexes may be the same or different. For example, different sampling types are set for accuracy rate and repetition rate, and each sampling type has a corresponding sampling method. When performing a data sampling, sampling operations, such as editing and initiating the sampling, can be performed. 4. Labeling Task Management: A labeling task is generated based on an evaluation item configuration, and the status of the labeling task is recorded. Further, the ID of the labeling task, a data source, an evaluation item, a labeling index, a start time, an end time, an operation, and the like can also be recorded.
After the evaluation task has been created, data sampling is initiated. In addition, the labeling task is generated using the sampled data, and the labeling task is issued to the labeling device. Markers can see their labeling tasks at the labeling device. 5. Data Labeling: After the labeling task of the data has been issued, the marker undertakes the task at the labeling device and starts to label the data. For example, a PM personnel can distribute the labeling tasks according to the amount of labeling tasks at the task device, and send a certain number of labeling tasks to each labeling device. The labeling device sends the labeled results to the task device. The PM personnel confirms the labeled results at the task device. According to the methods for evaluating different evaluation indexes, such as accuracy rate and repetition rate and the like, the template and style of an evaluation page can be different. In order to improve the labeling speed, the interaction on the evaluation page can be optimized. For example, automatic Uniform Resource Locator (URL) splicing, page preloading, and other manners can be used to improve the speed and efficiency of evaluation. 5.1 The URL can be automatically spliced according to the configuration to open an evaluation reference page. For example, if the accuracy of a movie data “Movie T-Release Time-2019-11-01” needs to be evaluated, a retrieved page for a marker can be automatically loaded according to the rules as shown in the table below, to save time on repeated mechanical operations.

S = Movie T, P = Release Time
Baidu: https://www.baidu.com/aaa={{S + P}}
Website 1: https://www.bbbbb.com/bbb=1002&q={{S}}
Website 2: https://cccc.com/ccc={{S}}

Link addresses in the table above are only exemplary representations, and are not specific to any particular web page. 5.2 Not only can the reference page of the entity in the current evaluation be loaded, but the evaluation reference pages of the subsequent entities can also be preloaded. In this way, the page loading speed can be greatly increased, the waiting time for page loading when switching to a next entity can be reduced, and the evaluation speed can be greatly increased. When the labeling task is opened for the first time, the pages of the first 10 entities to be evaluated (pages 1 to 10) are preloaded and cached, and the page of the first entity is displayed. When the first evaluation is completed and the display switches to the second evaluation, the 11th entity evaluation page is preloaded, and so on, until the page of the 20th entity has been cached. Any subsequent loading of an entity evaluation page will cause the headmost cached page to be deleted, and the maximum number of cached pages shall not exceed 20, so as to reduce the impact of page caching on the performance of the PC. 6. Statistical Report: Statistics of the evaluation results are summarized and a report is displayed. FIG.11shows the overall process of data evaluation using the quality evaluation platform of this example. The evaluation process can involve multiple roles, for example: PM, RD, Quality Assurance (QA), and marker. In a data source preparation stage, an RD personnel (through the inspection device) and a PM personnel (through the task device) can prepare a data source through the data source management function. For example, data of the HBASE data warehouse is routinely sent out of the warehouse. HBASE files out of the warehouse are stored in the server of the platform for easy sampling.
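The URL splicing of section 5.1 and the bounded page cache of section 5.2 might look like the following sketch; the template syntax follows the table above, while the class and function names are hypothetical.

    from collections import OrderedDict

    # Sketch of URL splicing from a configured template (section 5.1) and a
    # bounded preload cache that deletes the headmost page once more than
    # 20 pages are cached (section 5.2).

    def splice_url(template, s, p):
        return template.replace("{{S + P}}", s + p).replace("{{S}}", s)

    url = splice_url("https://www.baidu.com/aaa={{S + P}}",
                     "Movie T", "Release Time")

    class PageCache:
        MAX_PAGES = 20

        def __init__(self):
            self._pages = OrderedDict()

        def preload(self, entity_id, page):
            self._pages[entity_id] = page
            if len(self._pages) > self.MAX_PAGES:
                self._pages.popitem(last=False)  # delete the headmost cached page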
Before initiating a new evaluation task, it should be determined whether the data version/update time of the database is consistent. If they are not consistent, data of the HBASE data warehouse is sent out of the warehouse again. The QA personnel can supervise and manage the process of data source preparation. For example, as shown inFIG.12, the RD personnel extracts 10 pieces of data from 100,000 pieces of data for inspection, and sends the inspection results to the task device. The PM personnel performs a secondary inspection of the inspection results at the task device. If this inspection is passed, the PM personnel initiates an evaluation task at the task device (for example, parameters in the evaluation task include: 10,000 pieces of data to be sampled). During an evaluation task initiation stage, the PM personnel can select an index for the evaluation task, modify the configuration or routine, and initiate an evaluation task through the evaluation item management function. In this process, the PM personnel can refer to the inspection results and relevant suggestions from the RD personnel. For example, as shown inFIG.12, according to the parameter in the evaluation task, 10,000 pieces of data are sampled from 100,000 pieces of data. In a sampling task stage, the evaluation task is performed: the data source is sampled according to a certain sampling rule to obtain sampled data, and a new labeling task is created from the sampled data. Different indexes may correspond to the same sampling rule. At this stage, if the automatic initiation of the evaluation task fails, the related personnel such as the PM and the RD may be notified by email to reinitiate the evaluation task. For example, as shown inFIG.12, the 10,000 pieces of sampled data are divided into two labeling tasks, and the labeling task1and labeling task2each include 5000 pieces of data. During a labeling task stage, a labeling task association template can be called at the task device to distribute a labeling task to each labeling device. After a labeling task has been completed at a labeling device, it can be sent to the task device for acceptance and modification by the PM personnel. After it has been accepted and modified, the labeling task can be submitted to the server. For example, as shown inFIG.12, the labeling task1is assigned to the labeling device mark1for labeling, and the labeling task2is assigned to the labeling device mark2for labeling. Finally, the labeled results are summarized. During an analysis task stage, the server can automatically calculate each index based on the labeled result and the evaluation result and generate a report. The server can return the generated report to the relevant task device. The PM personnel can see the report on the task device. The server can also send the generated report to the relevant inspection device. The RD personnel can see the report on the inspection device. The data quality evaluation platform in this example of the application is used to manage the quality evaluation process for data like a knowledge graph. Through such a platform method, the evaluation process can be standardized, the threshold for use is lowered, and the cost of manual operation is reduced, so that evaluation efficiency is improved. According to an embodiment of the present application, the present application further provides an electronic apparatus and a readable storage medium.
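As one concrete possibility for the preset sampling rule named earlier (the reservoir method), the following sketch draws a uniform sample from a stream of unknown length; this is standard Algorithm R, offered here as an illustration rather than the platform's prescribed implementation.

    import random

    # Reservoir sampling (Algorithm R): draws k items uniformly at random
    # from a stream, keeping only k items in memory at any time.

    def reservoir_sample(stream, k):
        reservoir = []
        for i, item in enumerate(stream):
            if i < k:
                reservoir.append(item)
            else:
                j = random.randint(0, i)  # inclusive of both endpoints
                if j < k:
                    reservoir[j] = item
        return reservoir

    # Example mirroring the workflow above: sample 10,000 pieces of data
    # from a 100,000-piece data source.
    sampled = reservoir_sample(range(100000), 10000)
    assert len(sampled) == 10000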
As shown inFIG.13, it is a block diagram of an electronic apparatus for the data labeling method according to an embodiment of the present application. The electronic apparatus is intended to represent various forms of digital computers, such as laptop computers, desktop computers, workbenches, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. The electronic apparatus may also represent various forms of mobile devices, such as personal digital processing devices, cellular phones, intelligent phones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementation of the application described and/or required herein. As shown inFIG.13, the electronic apparatus includes: one or more processors901, a memory902, and interfaces for connecting various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and can be mounted on a common motherboard or otherwise installed as required. The processor may process instructions for execution within the electronic apparatus, including instructions stored in or on the memory to display graphical information for a graphical user interface (GUI) on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses can be used with multiple memories, if desired. Similarly, multiple electronic apparatus can be connected, each providing some of the necessary operations (for example, as a server array, a group of blade servers, or a multiprocessor system). A processor901is taken as an example inFIG.13. The memory902is a non-transitory computer-readable storage medium provided by the present application. The memory stores instructions executable by at least one processor, so that the at least one processor executes the data labeling method provided in the present application. The non-transitory computer-readable storage medium of the present application stores computer instructions, which are used to cause a computer to execute the data labeling method provided by the present application. As a non-transitory computer-readable storage medium, the memory902can be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the data labeling method in the embodiments of the present application (for example, the sampling module41, the first generating module42, the first sending module43, and the first receiving module44shown inFIG.4). The processor901executes various functional applications and data processing of the server by running non-transitory software programs, instructions, and modules stored in the memory902, that is, the data labeling method in the foregoing method embodiments is implemented. The memory902may include a storage program area and a storage data area, where the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data created according to the use of the electronic apparatus for the data labeling method, etc.
In addition, the memory902may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory902may optionally include a memory remotely located relative to the processor901, and these remote memories may be connected to the electronic apparatus for the data labeling method through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof. The electronic apparatus for the data labeling method may further include an input device903and an output device904. The processor901, the memory902, the input device903, and the output device904may be connected through a bus or in other manners. InFIG.13, the connection through the bus is taken as an example. The input device903can receive inputted numeric or character information, and generate key signal inputs related to user settings and function control of the electronic apparatus for the data labeling method; the input device may be, for example, a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, a joystick, or another input device. The output device904may include a display device, an auxiliary lighting device (for example, an LED), a haptic feedback device (for example, a vibration motor), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some embodiments, the display device may be a touch screen. Various implementations of the systems and technologies described herein can be implemented in digital electronic circuit systems, integrated circuit systems, application specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a dedicated or general-purpose programmable processor that may receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit the data and instructions to the storage system, the at least one input device, and the at least one output device. These computing programs (also known as programs, software, software applications, or code) include machine instructions of a programmable processor and can be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus, and/or device (for example, magnetic disks, optical disks, memories, and programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor.
In order to provide interaction with the user, the systems and techniques described herein may be implemented on a computer having a display device (for example, a CRT (Cathode Ray Tube) or LCD (liquid crystal display) monitor) for displaying information to the user, and a keyboard and pointing device (such as a mouse or trackball) through which the user can provide input to the computer. Other kinds of devices may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (for example, visual feedback, auditory feedback, or haptic feedback), and input from the user may be received in any form (including acoustic input, voice input, or tactile input). The systems and technologies described herein can be implemented in a computing system that includes background components (for example, as a data server), a computing system that includes middleware components (for example, an application server), or a computing system that includes front-end components (for example, a user computer with a graphical user interface or a web browser, through which the user can interact with the implementation of the systems and technologies described herein), or a computing system that includes any combination of such background components, middleware components, or front-end components. The components of the system may be interconnected by any form or medium of digital data communication (such as a communication network). Examples of communication networks include: a local area network (LAN), a wide area network (WAN), and the Internet. Computer systems can include clients and servers. The client and server are generally remote from each other and typically interact through a communication network. The client-server relationship is generated by computer programs running on the respective computers and having a client-server relationship with each other. According to the technical solution of the embodiment of the present application, by using the evaluation task, an automatic evaluation of data can be implemented, and the evaluation efficiency is improved. By using the evaluation task to sample the data source, the amount of data to be processed can be reduced. And since the sampled data is random, accurate labeled results can be obtained. It should be understood that the various forms of processes shown above can be used to reorder, add, or delete steps. For example, the steps described in this application can be executed in parallel, sequentially, or in different orders. As long as the desired results of the technical solutions disclosed in this application can be achieved, there is no limitation herein. The foregoing specific implementation manners do not constitute a limitation on the protection scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement and improvement made within the spirit and principle of this application shall be included in the protection scope of this application.
DETAILED DESCRIPTION In the following description, various embodiments will be described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the embodiments. However, it will also be apparent to one skilled in the art that the embodiments may be practiced without the specific details. Furthermore, well-known features may be omitted or simplified in order not to obscure the embodiment being described. Embodiments of the present disclosure provide techniques for tuning external invocations utilizing weight-based parameter resampling. Consider an example in which a computer system manages aspects of a data pipeline related to invocation of remote functions to process data. These remote functions may be deployed (e.g., and/or executed) within a distributed environment (e.g., a cloud computing environment) that is external to the computer system, for example, connected via a network. The computer system may receive a plurality of data records (e.g., respectively corresponding to text fields from blog posts) that are intended for data enrichment based on analysis performed by a remote function of the distributed environment. To mitigate the initialization cost involved with invoking the remote function, the computer system may determine to perform invocations in batches of data records. In this example, the optimal batch size for efficient data processing may vary over time, for example, related to network and/or computing resource availability at a given time. Accordingly, the computer system may be tasked with tuning the respective batch size parameter for one or more batches to be efficiently processed with each new invocation (e.g., iteration), based on information learned from the previous invocation. For example, the computer system may determine a set of potential parameter values (e.g., batch size values, such as ten data records, twenty data records, etc.) associated with the batch size parameter. The computer system may then partition the plurality of data records into a plurality of samples, so that each record is assigned to a particular sample, and whereby the sample size of a sample corresponds to a particular batch size parameter value of the set of potential parameter values. The computer system may then assign weights to each of the batch size parameter values. The weight of a batch size parameter value may be associated with a probability (e.g., 30%) of selecting a sample of the particular batch size value (e.g., ten data records). The computer system may then select a first sample from the plurality of samples based on the weight for the batch size of the first sample. The computer system may then execute a first external invocation by invoking the remote function, whereby the first sample (e.g., among other samples similarly selected for the first external invocation) is included as input to the remote function. The computer system may determine feedback data associated with the performance of the first external invocation. For example, the feedback data may indicate whether the first sample (and/or other samples) was successfully processed (e.g., enriched) and/or the respective response times for each sample. The computer system may then adjust weights of the set of potential batch size values based on the feedback data. 
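The partitioning step described above might be sketched as follows; the candidate batch sizes and the policy of drawing sizes at random are illustrative assumptions (the disclosure only requires that each sample's size correspond to one of the potential parameter values).

    import random

    # Sketch of partitioning pending data records into samples whose sizes
    # are drawn from the set of potential batch size parameter values.

    CANDIDATE_BATCH_SIZES = [10, 20, 30, 40, 50]

    def partition_into_samples(records):
        samples = []
        i = 0
        while i < len(records):
            size = random.choice(CANDIDATE_BATCH_SIZES)
            samples.append(records[i:i + size])
            i += size
        return samples

    # Example: every record lands in exactly one sample.
    samples = partition_into_samples(list(range(1000)))
    assert sum(len(s) for s in samples) == 1000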
The computer system may then select a second sample (e.g., having a particular batch size) to be processed as part of a second external invocation, whereby the selection is based in part on the adjusted weights. Accordingly, the computer system may proceed with a series of external invocations as more batches are input into the data pipeline, each invocation adjusting weights based on feedback from the previous iteration, until the full set of data records is processed. In at least this way, the computer system may coordinate efficient processing of data records via the data pipeline while also quickly tuning the process to adapt to changing conditions (e.g., fluctuating network and/or computing resource availability) over time. To further illustrate, consider a scenario in which a cloud services provider (CSP) provides services to one or more customers including, for example, a social media service (SMS). The SMS customer (e.g., an online service provider) may receive a large number of user inputs (e.g., online posts) each day from users of the service. In this example, each post may correspond to a data record (e.g., a row) of one or more data fields (e.g., text fields). Accordingly, the SMS may determine to have a large number of data records processed. Each data record may vary, for example, according to a total number of bytes (e.g., and/or bits), a number of text characters per data field, a number of non-empty data fields, etc. A computer system may be provisioned (e.g., by the CSP) to the SMS (e.g., on premises of the SMS), whereby the computer system may be tasked with coordinating enrichment (and/or other processing) of data records received by the SMS. For example, the computer system may coordinate invocation of a remote function to be executed by a remote server(s) of the CSP. In this example, the remote server may utilize specialized hardware optimized for text analysis and/or include a trained machine learning (ML) model (e.g., a neural network). The ML model (and/or other suitable algorithms) may be executed upon invocation of the remote function, whereby, upon receiving one or more data records from the computer system via the external invocation, the ML model may analyze text from a particular data field of the data record, determine supplementary information from the data record (e.g., a subject matter of a particular string of text of the data record), enrich the data record by associating the supplementary information with the particular data field of the data record, and then provide the enriched data record back to the computer system in a response message. It should be understood that any suitable data enrichment (and/or processing via external invocations) may be performed according to techniques described herein, for example, including text, image, and/or video analysis and/or augmentation. As described above, the remote server may provide one or more remote functions that may be respectively invoked by the computer system via an external invocation (e.g., a remote function call over a network and/or a suitable data bus), whereby the computer system may provide one or more data records as input to the remote function. In some embodiments, the remote server may enable the computer system to batch a subset of data records as part of an external invocation, for example, to mitigate against initialization costs per external invocation.
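The value of batching can be seen with a toy cost model; the fixed initialization cost and per-record cost below are invented numbers, chosen only to illustrate the amortization:

    # Assumed cost model: each invocation pays a fixed initialization cost
    # plus a per-record processing cost.
    INIT_COST_S = 0.5
    PER_RECORD_S = 0.01

    def invocation_cost(batch_size):
        return INIT_COST_S + PER_RECORD_S * batch_size

    # One batch of 600 records pays the initialization cost once:
    print(invocation_cost(600))        # 6.5 seconds
    # Sixty batches of 10 records pay it sixty times:
    print(60 * invocation_cost(10))    # 36.0 seconds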
In some embodiments, the computer system may provide a plurality of batches (e.g., which may also/alternatively be described as “samples”) as part of a particular external invocation. In some embodiments, a particular sample may be associated with one or more parameters that are related to providing the particular sample as input for the external invocation. In this example, a particular parameter may correspond to a batch size (e.g., a sample size), which may indicate a number of data records included in the batch. As described further herein, any suitable number and/or type of parameters may be associated with a particular sample and/or an external invocation of the remote function. This may include, for example, a batch size (e.g., a number of data records per batch), a time interval between transmission of samples of a particular external invocation, a number of bits (e.g., and/or bytes) associated with a particular sample, etc. To efficiently coordinate the data enrichment process, the computer system may perform operations that enable the computer system to automatically adjust (e.g., tune) one or more parameters per external invocation. As described herein, these parameter adjustments may enable the computer system to more optimally execute external invocations, while taking into account a variety of factors that may (or may not) change over time. These factors may include, but are not limited to, network resource availability, computing resource availability, cloud provider policies (e.g., payload size limits, request rates), etc. Utilizing the batch size parameter as a representative example parameter that may be automatically tuned (e.g., among samples) between invocations, the computer system may determine a set of potential parameter values for a batch size (e.g., ten data records per batch, twenty data records, thirty data records, etc.). In some embodiments, this set may be a fixed (and/or discrete) set of values. In some embodiments, a number of values of this set of values may be referred to as a population size. For example, in this case, the population size may be five (e.g., ten, twenty, thirty, forty, and fifty data records, respectively corresponding to members of the population). Continuing with the above illustration, the computer system may then determine a plurality of samples. As described above, each sample may correspond to a particular parameter value (e.g., a particular batch size) and may be generated from one or more data records of the plurality of data records. In some embodiments, the computer system may assign weights to each of the batch size values, whereby a weight may be associated with a probability of selecting a sample of a particular batch size. For example, a batch size of ten rows may be assigned a weight of 0.30 (e.g., 30%), a batch size of twenty rows may be assigned a weight of 0.20 (e.g., 20%), etc. In some embodiments, the initial weights (e.g., prior to a first external invocation) may be set to substantially similar (e.g., the same) values across the set of parameter values. This may indicate that, initially, the computer system does not have data indicating which parameter value (e.g., which particular batch size) is likely to perform better, given the current network conditions and/or computing resource availability of the remote server. 
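In code, the fixed population of candidate batch sizes and its uniform initial weighting might be represented as follows (a sketch; the names are illustrative):

    # Fixed, discrete population of batch-size parameter values (population size: 5).
    POPULATION = [10, 20, 30, 40, 50]

    # Initial weights form a uniform probability distribution, reflecting that
    # no feedback has been observed yet.
    weights = {value: 1.0 / len(POPULATION) for value in POPULATION}
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights act as probabilities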
The computer system may then select a set of samples (e.g., five batches) for a first external invocation (e.g., a first iteration) based on the initial weights, and then execute the first external invocation. For example, the computer system may invoke the remote function in parallel (e.g., for increased processing efficiency) for each sample of the set of samples for the first iteration. In some embodiments, the computer system may invoke the remote function serially for each sample of the set of samples. It should be understood that, in some embodiments described herein, an external invocation may alternatively be referred to as an iteration, and may include parallel and/or serial execution of one or more remote functions (e.g., and/or calls to the same remote function) for respective sample processing as part of the iteration. In some embodiments, the number of samples in the set may be determined by the computer system according to any suitable method (e.g., based on the total number of data records to be processed and/or administratively configured). Continuing with the illustration, the computer system may receive feedback data associated with a level of performance of the first external invocation. For example, the feedback data may be associated with whether a particular sample of the first iteration was successfully processed or whether the remote function returned a failure code. In another example, the feedback data may be associated with a response time interval, indicating how long it took for the particular batch to be processed and/or the data enrichment results to be received. The computer system may analyze (e.g., measure) the feedback data for each of the samples of the first iteration, and then adjust the weights of the plurality of parameter values according to the measurements. For example, suppose that a batch size of ten data records had a response time of one second (e.g., in this case, the fastest turnaround time), while a batch size of fifty records consistently experienced a failure for the given iteration (e.g., due to exceeding payload policy limits for the particular remote server). In this example, the computer system may increase the weight of the parameter value corresponding to ten data records, and/or correspondingly decrease the weight of the parameter value corresponding to fifty data records. The computer system may similarly adjust the weights of the different parameter values based on the measurements (e.g., respective sample response times) from the previous iteration. The computer system may then select a new set of samples for a second external invocation (e.g., a second iteration) based on the updated weights. In some embodiments, the new set of samples may include a similar (e.g., same) number of samples as the previous iteration, but may contain samples with a different distribution of batch sizes from the previous iteration, reflecting the adjusted weights for the population of parameter values. For example, continuing with the illustration above, the computer system may select an increased number of samples that each have batch sizes of ten data records, while selecting a decreased number of samples that have batch sizes of fifty data records (e.g., based on the relative adjustments in weights).
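One plausible way to issue an iteration's samples in parallel and collect the per-sample feedback is with a thread pool; in this sketch, invoke_remote again stands in for the remote call:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def invoke_remote(sample):
        """Stand-in for the remote enrichment function."""
        time.sleep(0.001 * len(sample))
        return True

    def measure(sample):
        """Invoke the remote function for one sample and record feedback."""
        start = time.monotonic()
        try:
            ok = invoke_remote(sample)
        except Exception:
            ok = False                       # treat exceptions as failures
        return {"size": len(sample), "ok": ok,
                "seconds": time.monotonic() - start}

    def run_iteration(samples):
        # One remote call per sample, issued concurrently for the iteration.
        with ThreadPoolExecutor(max_workers=len(samples)) as pool:
            return list(pool.map(measure, samples))

    feedback = run_iteration([[0] * n for n in (10, 20, 30, 40, 50)])

The resulting feedback records would then drive the weight adjustments just described.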
In some embodiments, because the selection (e.g., re-sampling) of samples for the second external invocation (and/or other external invocations) is done with replacement, samples with the same batch size may be selected more than once (e.g., out of a given population of batch sizes). For example, if the batch size of ten records performed significantly better than other batch sizes for a previous iteration, the corresponding weight may be increased such that the sampling for the next iteration may include a higher number of samples with ten records each. Accordingly, the re-sampling may produce a posterior probability distribution in accordance with the updated weights of the parameter values. The computer system may then execute the second external invocation with the new set of samples. The computer system may repeat these iterations until the plurality of data records has been enriched. In some embodiments, one or more additional samples may be optionally processed as part of an iteration (e.g., an external invocation). These one or more additional samples may alternatively be referred to herein as “exploratory samples,” and may optionally supplement (e.g., be processed together with, in the same iteration) the set of samples that were sampled with replacement according to the weights, as described above. For example, suppose that with respect to the second iteration (e.g., the second external invocation) described above, the weight adjustments resulted in the weight for the batch size parameter value of fifty records being reduced to a small number (e.g., close to zero), corresponding to a small probability that batch sizes with this parameter value may be selected. In this case, the re-sampling with replacement process may select a new set of samples in which no (or few) batches of fifty data records are selected. Accordingly, there is a possibility that, in the absence of mitigating action by the system, the weight for the fifty data record batch size may remain perpetually low, and thus that batch size may be continuously excluded (e.g., “starved”) from being selected. To mitigate against the possibility that the process described remains in a local minimum (e.g., precluded from exploring new optimum weights for parameter values), the system may select the one or more additional (“exploratory”) samples independently of the current values of the weights. For example, in some embodiments, a number of additional samples may be selected as a percentage (e.g., a fixed percentage, such as 20%) of the population size. For example, in a case where the percentage is 20% and the population is five (e.g., five parameter values, in the current example), the number of exploratory samples per iteration may be set to one. In some embodiments, the additional samples may be selected according to a uniform probability distribution. In some embodiments, the additional samples may be selected whereby the respective parameter values (e.g., batch sizes) of the additional samples are not represented in the plurality of samples that was selected based on the adjusted weights. Any suitable method may be used to select the exploratory samples. In this way, the system may continuously explore new optimal parameter values, while mitigating the possibility of starving other values from being selected for exploration. Embodiments of the present disclosure provide several technical advantages over conventional systems.
For example, as described herein, distributed data pipelines often have a highly unpredictable nature, whereby network resource availability, computing resource availability, and/or cloud provider policies may be continuously changing. In these cases, choosing optimal performance parameters (e.g., how many data records to batch per external invocation) may be difficult to achieve efficiently via conventional approaches. For example, some of these approaches may not generalize well enough to tune parameters whose optimal values fluctuate regularly. Some conventional approaches may require a significant amount of time and/or computing resources to learn an optimal policy. Also, some approaches may require increased system complexity, for example, storing learned parameters over multiple iterations. Techniques described herein provide an efficient way to continuously adapt to the stochastic nature of a cloud computing environment, where conditions may be regularly changing. For example, conducting measurements according to techniques described herein may be done in parallel (e.g., within a distributed environment context). Also, techniques may enable the system to efficiently adjust weights per iteration to be used in a next iteration. Accordingly, the reaction time required to recover from faults (e.g., where a particular batch size may fail) and/or adapt to slower response times may be significantly reduced. For example, the system may quickly adjust the weights and select batch sizes in a subsequent iteration that are less likely to produce faults and/or more likely to produce faster response times. In another example, techniques described herein efficiently scale to handle multi-variate parameter tuning problems. For example, in some embodiments, the system may use a fixed population size that corresponds to the one or more parameters being measured. Accordingly, the system may efficiently scale (e.g., constant time) to handle additional parameters being measured. The system may also be able to handle a wider variety of parameters for which the optimal values may, respectively, be regularly fluctuating. In yet another example, techniques described herein may reduce system complexity at least in part because the techniques may not necessitate storage of learned parameters. For example, weights can be adjusted with each iteration based on results from the previous iteration, without requiring a complex process for storing and/or learning from extensive historical data. Furthermore, as described herein, techniques may enable the system to continually explore new optimal values, based in part on using exploratory samples to mitigate the risk of remaining in a local minimum (e.g., whereby some population members may be perpetually “starved” from being selected). For clarity of illustration, embodiments described herein may typically refer to automatically tuning parameters of external invocations within a distributed system context, whereby a computer system conducts an external invocation over a physical network (e.g., over the Internet) to invoke a remote function on a separate device (e.g., a remote server). However, embodiments should not be construed to be so limited. For example, the remote function may be deployed in a virtual computing environment.
In one example, the remote function may be deployed on the same physical device as the virtual machine that invokes it, but may execute on a different virtual machine (and/or a different virtual private cloud, utilizing a virtual network). In another example, embodiments described herein may typically refer to an example parameter that corresponds to a batch size (e.g., a number of data records (e.g., rows) for a given sample). However, any suitable parameter may be utilized to perform techniques herein, including, but not limited to, a request rate, a payload size per sample, a number of concurrent requests, etc. FIG.1is a simplified block diagram illustrating an example environment for tuning external invocations utilizing weight-based parameter resampling, according to some embodiments. In diagram100ofFIG.1, several elements are depicted, including a data store102, a computer system101, a network106, a cloud computing provider A108, and a cloud computing provider B110. In this example, computer system101may be tasked with coordinating processing (e.g., enrichment) of data records. For example, as described further herein, the computer system101may receive a plurality of data records from the data store102, which may (or may not) be associated with the computer system101. In this example, the computer system101and/or data store102may be associated with a service provider (e.g., an online service provider, such as an SMS). The service provider may regularly receive input from users (e.g., text inputs, images, videos, audio files, etc.) as online posts to a website, which are subsequently stored as data records in the data store102. In some embodiments, each data record may be associated with a user profile of a user of the service provider. The computer system101may then manage/execute a sequence of iterations (e.g., external invocations) of one or more remote functions that are deployed to a cloud services provider (e.g., cloud computing provider A108and/or cloud computing provider B110). For each external invocation, one or more samples (which may alternatively be referred to as “batches,” each batch including one or more data records) may be included as input to be processed (e.g., enriched with supplementary information). In some embodiments, the enriched data records may be returned in a response message(s) to the computer system101. As described herein, the computer system101may measure feedback data from each iteration (e.g., response times, failures to respond, etc.), and then automatically adjust (e.g., tune) parameters, for example, by adjusting weights associated with parameter values of the one or more parameters. These adjusted weights may then be used to coordinate the next iteration (e.g., including coordinating the selection of samples). This automatic tuning of parameters may be performed over successive iterations until the plurality of data records has been enriched. Accordingly, embodiments may enable a more efficient end-to-end process for implementing a data processing pipeline. Turning toFIG.1in further detail, in some embodiments, the data store102may correspond to any suitable computing system that maintains (e.g., stores) data records. For example, the data store102may include a database112of data records. The data store102may receive user input (e.g., based on online posts to the website of the service provider) and then store the user input (e.g., text, images, videos, and/or any suitable input) as a data record of the data store102.
For example, as depicted inFIG.1, database112contains a plurality of data records (e.g., 1 through N), with data record114(e.g., “Record 1,” with value “ABCD . . . ”) corresponding to a representative data record. It should be understood that a data record may include any suitable number of data fields, data types per data field, and/or utilize any suitable format (e.g., plain-text, compressed, suitable encryption format, etc.). Also, the data records may be received from any suitable source, including, but not limited to, user input via an online website post, manual text entry, bulk upload of data records to the data store102, etc. In some embodiments, the computer system101may be any suitable computing device. For example, computer system101may be a physical device (e.g., a server computer). In some embodiments, the computer system101may correspond to a virtual machine, for example, executing in a cloud computing environment. As described above, in some embodiments, the computer system101may (or may not) be associated with the data store102. For example, the computer system101and/or data store102may both be associated with a service provider (e.g., an online service provider, such as an SMS). In some embodiments, the computer system101may include the data store102as a component of the computer system101. Any suitable association between the computer system101and the data store102may be utilized to perform techniques herein. For example, as user input is received and stored as data records into the data store102, the computer system101may retrieve data records from the data store102(e.g., from database112) and then manage a process for enriching the data records. In some embodiments, data records may be received and/or processed (e.g., enriched) according to any suitable cadence. For example, in some embodiments, the data store102may continuously receive new user input, whereby the computer system101receives (e.g., retrieves) data records from the data store102as new records arrive, and then queues the data records for processing (e.g., data enrichment by an external source, such as cloud computing provider A108or B110). In some embodiments, the computer system101may receive a plurality of data records (e.g., a large set of data records, such as a billion data records) from the data store102. For example, the plurality of data records may be received once per day (and/or week, month, etc.). In any case, as described further herein, the computer system101may then determine how to efficiently batch received data records so as to achieve more optimal processing times via a sequence of external invocations. In some embodiments, network106may include any suitable communication path or channel such as, for instance, a wire or cable, fiber optics, a telephone line, a cellular link, a radio frequency (RF) link, a WAN or LAN network, the Internet, or any other suitable medium. The network106may include any one or a combination of many different types of networks, such as cable networks, the Internet, wireless networks, cellular networks, and other private and/or public networks. In some embodiments, network106may include and/or be connected to a virtual network, as described further herein.
In some embodiments, for example, in a case where the computer system101corresponds to a virtual machine, the virtual machine may execute an external invocation (e.g., a remote function call) over a virtual network, which may involve the same (or different) physical computing resources as the physical device upon which the virtual machine is executing. In some embodiments, cloud computing provider A108and cloud computing provider B110respectively correspond to representative remote computing devices (e.g., clusters of servers in the cloud) that are external (e.g., physically and/or virtually) to the computer system101, for example, being accessed via the network106. In some embodiments, a cloud computing provider operates within a distributed computing environment. In some embodiments, a particular cloud computing provider (and/or the associated distributed environment) may have different (or similar) characteristics when compared to other cloud computing providers. For example, using cloud computing provider A108as a representative example, cloud computing provider A may operate a particular set of hardware and/or software (for example, specialized hardware, such as specialized Graphics Processing Units (GPUs) and/or Tensor Processing Units (TPUs)), which may be different from (or similar to) cloud computing provider B110. In some embodiments, cloud computing provider A may have different (or similar) networking and/or computing resources available, compared to other computing providers that are connected (e.g., via network106) to computer system101. For example, at a particular period in time, cloud computing provider A may be receiving a large number of data records from one or more computer systems (e.g., multiple customers) for data enrichment, in which case the network106connection to cloud computing provider A may be congested and/or the available computing resources of cloud computing provider A108may be temporarily limited (e.g., compared to cloud computing provider B110). In some embodiments, cloud computing provider A108may implement a different set of one or more policies from other cloud computing providers. These policies may correspond to, for example, different limits on payload (e.g., batch) size, different request rates (e.g., a rate at which new batches of data records are received), etc. It should be understood that any suitable policies may be associated with a distributed environment. In some embodiments, a remote function that is executed by an external computing device (e.g., a cloud computing provider) may perform any suitable data processing of data records. For example, in the case of data enrichment, representative cloud computing provider A108may receive a data record from computer system101. In this example, the data record may contain a text field. The remote function may execute a machine learning model (and/or other suitable algorithm) to analyze the text field to determine, for example, a subject matter of the text field, information about the user who authored the text, etc. In another example, the remote function may analyze features of an image (and/or video) to determine characteristics of the image, such as the identity of a person in the image or other object recognition results. It should be understood that any suitable types of data processing (e.g., and/or data enrichment) may be performed on data records via external invocations according to techniques described herein.
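As a toy stand-in for such an enrichment function, the keyword table below merely substitutes for the trained ML model, and the field names are assumptions of the sketch:

    def enrich_record(record):
        """Toy 'remote function': tag a text field with a guessed subject matter."""
        text = record.get("text", "").lower()
        # A real deployment might run a trained ML model here; a keyword
        # table stands in for it in this sketch.
        topics = {"goal": "sports", "election": "politics", "recipe": "cooking"}
        subject = next((t for k, t in topics.items() if k in text), "unknown")
        return {**record, "subject": subject}

    enrich_record({"text": "A last-minute goal won the match"})
    # returns {'text': ..., 'subject': 'sports'}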
In some embodiments, there may exist a plurality of parameters (e.g., many parameters) that may be associated with the computer system101(e.g., as a representative computer system) executing (e.g., invoking) an external invocation to call a remote function that executes externally to the computer system101(e.g., executed by a particular cloud computing provider, such as cloud computing provider A108). Techniques described herein may tune parameters to achieve efficient processing time for an external invocation of a remote function, while accommodating for the stochastic nature of cloud computing (e.g., variable network conditions, variable computing resource availability, various types of data records, etc.). For example, one type of parameter may correspond to a batch size (e.g., a number of data records of a batch of data records). For example, during a first time interval, network conditions (and/or computing resource availability) may be such that the computer system101may transmit a larger batch (e.g., sixty data records) that will be processed more efficiently than six batches of ten data records each. This may be due in part to avoiding initialization costs per batch. However, at a later time interval, during which time there may be fewer computing resources available (e.g., due to higher customer demand), the larger batch size may produce slower response times (e.g., slower data enrichment). This may be due, for example, to increased computing resources being allocated for garbage collection and/or process elimination, due in part to the higher batch size. In another example, a service provider may have a limit on the number of bits (e.g., bytes) that may be transmitted within a single batch, which limit may be adjusted at different times (e.g., unknown in advance to the computer system101). In this example, a first time interval may allow a batch to have a size of up to 6 megabytes, while a later time interval (that may (or may not) be known in advance) may allow a size of only up to 1 megabyte. Accordingly, while a larger batch byte size may be more efficient (e.g., optimal) during the earlier time interval, the optimal parameter value for this parameter may change (e.g., unpredictably) over time. To further illustrate a technique for automatically tuning external invocations to account for the stochastic nature of cloud computing, usingFIG.1for illustration, consider a scenario in which the computer system101intends to execute a plurality of external invocations that are collectively intended to enrich a plurality of data records. In this example, as described herein, the computer system101receives the plurality of data records (e.g., from the database112), which may be a daily set of data records to be processed. In some embodiments, the computer system101may further determine one or more parameters116(e.g., a plurality of parameters) associated with executing external invocations for processing the plurality of data records. It should be understood that any suitable one or more parameters may be adjusted and/or affect performance (e.g., response time) when processing one or more samples of an external invocation.
As indicated by parameters116, this may include, for example, a parameter associated with a number of rows (e.g., a number of data records, corresponding to a batch size of a sample), a parameter associated with a time interval from when the last batch was transmitted (e.g., a time interval between a transmission of a first sample and a previous transmission of another sample of a particular external invocation), a number of bits (e.g., and/or bytes) associated with a sample, etc. In this illustration, suppose that the computer system101determines to tune weights for parameter values for a particular parameter118(e.g., the batch size) of the plurality of parameters116over a series of external invocations involving a plurality of samples. Although, in this example, the auto-tuning involves adjustments to weights that are respectively associated with parameter values of the particular parameter118, embodiments should not be construed to be so limited. For example, as described further herein, the auto-tuning may involve tuning weights (e.g., which may include multi-variate weights) of parameter values for other (e.g., additional) parameters of the plurality of parameters116. Continuing with the illustration, the computer system101may determine a plurality of parameter values120that are potential values of the particular parameter118(e.g., batch size). For example, as depicted in diagram100, the plurality of parameter values120may include parameter value122(e.g., corresponding to ten data records, as a representative example), among other parameter values (e.g., twenty, thirty, forty, fifty, and sixty data records, as depicted inFIG.1). In some embodiments, the plurality of parameter values for a given parameter of the plurality of parameters116may be a fixed, unique, and/or discrete set (e.g., a fixed population) for a given sequence of external invocations. In some embodiments, the computer system101may further determine respective weights124of the parameter values120. For example, as depicted in diagram100, the weights124include weight126(e.g., 0.25, associated with the parameter value122of ten data records), another weight associated with the parameter value of twenty data records (e.g., 0.3), and so on. In some embodiments, the weights (e.g., which may be normalized) may add up to substantially 1.0. In some embodiments, as described further below, a particular weight (e.g., weight126) may be associated with a probability of selecting a batch (e.g., also referred to herein as a “sample”) having the parameter value122associated with the particular weight. In some embodiments, initial weights (e.g., for an initial external invocation) may be set to a uniform probability distribution, as described further herein. In some embodiments, the computer system101may determine a plurality of samples from the plurality of data records. In some embodiments, each sample may correspond to a batch that is associated with (and/or includes at least a portion of) one or more data records (e.g., rows). In some embodiments, each sample of the plurality of samples may be assigned to one of the plurality of parameter values120. For example, suppose that the number of the plurality of data records corresponds to 1000 data records. The computer system101may determine a first subset of batches that have 10 data records each (e.g., corresponding to the parameter value122of ten data records).
UsingFIG.1for illustration, the first subset of batches may include representative batch A128, which may be associated with parameter value122and include data record114(e.g., previously received from the database112) among the ten data records (e.g., rows) of the batch. In another example, a second subset of batches may be associated with the parameter value (e.g., batch size) of 20 data records, with batch B130corresponding to a representative batch of the second subset of batches. In yet another example, a third subset of batches may have fifty data records each (e.g., corresponding to another parameter value of the set of parameter values120), with batch E132corresponding to a representative batch. In some embodiments, the total number of data records across all the batches may correspond to the plurality of data records that is intended for data enrichment. In some embodiments, the computer system101may determine the plurality of samples in advance (e.g., prior to executing a first invocation). In some embodiments, the computer system101may determine new samples over time, for example, based on feedback data determined from previous external invocations used to process other data records of the plurality of data records. In some embodiments, existing batches may be combined with other batches and/or otherwise re-constituted to form another batch size, depending in part on feedback data received from external invocations. As described further herein, the computer system101may proceed to enrich the data records included within the plurality of samples by executing one or more external invocations, and tuning weights associated with parameters (e.g., parameter values) of each sample and/or external invocation based on feedback data determined from previous external invocations. For example, the computer system101may select a first sample (e.g., batch A128with parameter value122, having ten data records) from the plurality of samples for processing via a first external invocation (e.g., invocation A104). The selection of the first sample may be performed based on a first weight (e.g., weight126) that is associated with the parameter value122of ten data records. As described further herein, it should be understood that a plurality of samples may be processed as part of an external invocation (which may otherwise be referred to as an “iteration”). For example, as depicted inFIG.1, the first external invocation (e.g., invocation A104) may include at least batch A128, batch B130, and batch E132. Although, in this case, each sample depicted has a different parameter value (e.g., batch size), it should be understood that any suitable combination of samples with respective parameter values may be selected for a given iteration, in accordance with a probability distribution indicated by the respective weights. For example, as described further herein (e.g., in reference toFIG.2), the sampling may be performed with replacement (e.g., of population members, such as parameter values), whereby a given iteration may include multiple samples having the same (or different) respective batch sizes. Continuing with the illustration, the computer system101may then execute the first external invocation via the network106, thus invoking a remote function to be executed by a cloud computing provider (e.g., cloud computing provider A108).
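The selection step just described, in which an iteration's batch sizes are drawn with replacement in proportion to the current weights and matching batches are carved off the queue of records, might look like the following sketch (the names and example weights are illustrative):

    import random

    POPULATION = [10, 20, 30, 40, 50]
    weights = {10: 0.25, 20: 0.30, 30: 0.20, 40: 0.15, 50: 0.10}  # example values

    def select_batch_sizes(n_samples):
        """Sample with replacement: the same size may be drawn more than once."""
        return random.choices(POPULATION,
                              weights=[weights[v] for v in POPULATION],
                              k=n_samples)

    def take_batch(records, size):
        """Carve the next batch of the given size off the work queue."""
        return records[:size], records[size:]

    sizes = select_batch_sizes(5)   # e.g., [20, 10, 20, 30, 10]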
The cloud computing provider may enrich the data records of the sample(s) of the first external invocation and then transmit a response message including the enriched data (and/or any other suitable response data). In some embodiments, a response message may be transmitted according to any suitable cadence. For example, a response message may be transmitted per enriched sample and/or per external invocation. In any case, the computer system101may determine feedback data associated with a level of performance of the first external invocation. This feedback data may be determined via any suitable method, including, for example, measuring response time intervals, receiving an indication that a sample was successfully processed, determining that no response was received, etc. Upon determining the feedback data, the computer system101may adjust weights124for the parameter values120of the parameter118(e.g., corresponding to batch size). Although, inFIG.1, only the parameter values and weights for parameter118are depicted, it should be understood that the computer system101may be configured to determine adjusted weights for parameter values of multiple respective parameters. In some embodiments, a particular weight may be associated with multiple parameters (e.g., a multi-variate weight), and thus, the computer system101may be enabled to automatically tune selection of samples that are associated with a plurality of parameters. As described herein, although the weights for the parameter values120in this example (e.g., for this particular external invocation) have a non-uniform distribution, in some embodiments, the weights may be initially configured according to a uniform probability distribution. This initial weighting may be used to select samples for an initial external invocation of the series of external invocations (e.g., before feedback data is determined and utilized for adjusting the weights). Continuing with the illustration, the computer system101may then select one or more samples to be processed via execution of a second external invocation. In some embodiments, the number of samples selected for a given external invocation may be the same as (or different from) that of other external invocations. In some embodiments, the selection of samples for the second external invocation may be based in part on the adjusted weights of the parameter values120. Accordingly, a different set of samples with respective parameter values120(e.g., batch sizes) may be selected, whereby batch sizes associated with the higher weights may be selected more frequently. The computer system101may similarly proceed with invoking a sequence of external invocations (e.g., iterations), while efficiently tuning parameters following each iteration. In at least this way, techniques described herein may enable the computer system101to efficiently search for optimal parameter values from iteration to iteration, until the full plurality of data records has been processed (e.g., enriched). FIG.2is a simplified flow diagram illustrating an example technique for tuning external invocations utilizing weight-based parameter resampling, according to some embodiments. The process200is an example high-level process for a system (e.g., computer system205, which may be similar to computer system101ofFIG.1) that may tune one or more parameters over a sequence of external invocations.
In some embodiments, the process200may be performed in the context of a distributed environment, similar to that described in reference toFIG.1.FIG.2depicts example states that correspond to blocks of the process200. Also,FIG.2includes elements that may be similar to those depicted in reference toFIG.1. For example, table201may include a plurality of parameter values (e.g., similar to one or more parameter values of parameter values120). Also, network207may be similar to network106and remote server209may be similar to a cloud computing provider (e.g., cloud computing provider A108or cloud computing provider B110). Turning to the process200in further detail, at block202, the system (e.g., computer system205) may determine initial weights for a plurality of parameter values according to a uniform probability distribution. For example, using the elements ofFIG.2for illustration, the system may initially receive a plurality of data records (e.g., from data store102ofFIG.1). In this example, the data records may be respectively intended for data enrichment (e.g., analyzing text in each data record to determine supplemental information (e.g., a subject matter) from the text). The system may determine to execute one or more of a sequence of external invocations to the remote server209(e.g., via the network207). Each external invocation (e.g., iteration) may be associated with one or more batches (e.g., samples), whereby each sample includes a particular number of data records. As described herein, a remote function of the remote server209may be responsible for performing the data enrichment for each data record of each batch. The particular number of data records that is selected for a given sample may be expressed as a parameter value of a batch size parameter. In some embodiments, the system may generate a number of batches of various batch sizes in advance (e.g., partitioning the plurality of data records into various batches). In some embodiments, batches may be generated dynamically throughout process200. As described herein, in some embodiments, the response time for processing a particular sample may be based at least in part on the particular batch size selected for a particular sample. Furthermore, the optimal batch size may vary over time, for example, as network conditions (e.g., of network207) and/or computing resource availability (e.g., of remote server209) change over time. Continuing with block202, the computer system205may not yet have invoked any external invocations over the plurality of data records, and thus, may not yet have determined feedback data (e.g., measurements) related to response times. Accordingly, because the computer system205does not yet possess information indicating which sample size is more likely to be processed with a shorter (e.g., more optimal) response time, the computer system205may, at block202, set initial weights for the plurality of parameter values of table201according to a uniform probability distribution. For example, as depicted in table201, each of the population of parameter values (e.g., 10, 20, 30, 40, and 50) may be initially set to the same weight value (e.g., 20%), whereby the population in this case is five members (e.g., five parameter values). In this example, as introduced above, each parameter value may correspond to a potential batch size for the batch size parameter.
Accordingly, in some embodiments, for the initial external invocation, a probability of selecting a sample with a particular batch size (e.g., parameter value) may be similar (e.g., the same) relative to other samples. To further illustrate the uniform probability distribution associated with choosing a first set of samples for the initial external invocation, consider graph312ofFIG.3. Graph312depicts a prior probability distribution (e.g., prior to an external invocation, such as the initial external invocation) that corresponds, in this case, to a uniform probability distribution. For example, for each of the values (e.g., 10, 20, 30, 40, and 50) along the X-axis, there is a similar (e.g., same) probability of that parameter value (e.g., batch size) being selected, as reflected by plots along the Y-axis. Continuing with the illustration ofFIG.2, at block204, the system may invoke (e.g., execute) an external invocation. For example, upon determining the uniform probability distribution at block202, the system may determine (e.g., generate) a plurality of samples from the plurality of data records, as described above. In one example, the system may partition the plurality of data records to generate a first number of batches each having a size of 10 data records (e.g., a first parameter value of the table201), a second number of batches each having a size of 20 data records (e.g., a second parameter value of the table201), and so forth, until a suitable number of batches of various sizes are generated (e.g., representative of the population of batch sizes). It should be understood that each sample may be assigned to one of the plurality of parameter values of table201. The system may then select a first plurality of samples203(e.g., a subset of samples) from the plurality of samples, whereby the selection is performed according to the initial (e.g., uniform) probability distribution. In this example, the first plurality of samples203for the first external invocation includes a sample from each member of the population (e.g., a first sample with 10 data records, a second sample with 20 data records, and so forth). In this example, since it is the first external invocation, no exploration sample (described further herein) may be selected (e.g., represented by empty brackets (“[ ]”)), in part because each population member (e.g., sample size) already has representation within the first plurality of samples203. In some embodiments, the number of batches (e.g., samples) included for an external invocation (e.g., the number of batches for the first plurality of samples203) may be any suitable number. In this example, the first plurality of samples203includes five batches, each of different batch sizes. In some embodiments, the number of batches included for an external invocation may be similar (e.g., the same) across multiple iterations. Upon selecting the first plurality of samples203for the first external invocation, the computer system205may execute the first external invocation according to any suitable method. For example, the computer system205may invoke a remote function (e.g., executing on the remote server209) via the network207. In some embodiments, the computer system205may invoke the remote function in parallel, for example, requesting the remote function of the remote server209to process samples of the first plurality of samples in parallel. In some embodiments, this may enable the computer system205to conduct several measurements in parallel.
This may further reduce the reaction time of recovery from faults (e.g., failures of the remote function to successfully return results) and/or recovery from slow response times. For example, as described further herein, the computer system205may more efficiently learn which population members (e.g., batch sizes) have performed better recently, and then adjust the weights for the respective parameter values accordingly for the next external invocation (e.g., involving a second plurality of samples). In some embodiments, the computer system205may execute the external invocation in any suitable manner. For example, the computer system205may invoke the remote function by inputting samples from the first plurality of samples in serial order. In some embodiments, as the remote server209receives and processes samples, the remote server209may return any suitable results (e.g., enriched data records, success/failure codes, etc.) to the computer system205. At block206, the computer system may perform measurements to obtain feedback data, whereby the feedback data may be used to update weights of parameter values of the table201. For example, continuing from block204, for each of the samples of the first plurality of samples203of the first external invocation, the computer system205may determine if the respective sample was successfully processed (e.g., enriched) via the remote function, or if the remote function did not execute successfully. For example, the remote server209may transmit a response code (e.g., “Success” or “Failure”) as feedback data, indicating a processing status for one or more data records of a particular sample. In another example, the computer system205may determine feedback data that corresponds to a time interval between invocation of a remote function for a particular sample and the time that a response (e.g., including a response code and/or enriched data records) is received from the remote server209. This time interval may correspond to feedback data that is indicative of a response time for a particular invocation of a particular sample. Any suitable feedback data may be utilized to perform techniques described herein. In some embodiments, the computer system205may associate the feedback data for a particular sample with a particular parameter value associated with the particular sample. The computer system205may then update a weight corresponding to the particular parameter value based in part on the feedback data. UsingFIG.3for further illustration of the operations of block206, the computer system may maintain (and/or update) a table301. As depicted, table301includes four columns: a batch size column302(e.g., corresponding to discrete parameter values of the batch size parameter, as an example parameter), a response time column304(e.g., in seconds), a non-normalized importance weight column306, and a normalized importance weight column308. The columns of table301may reflect feedback data and associated updated weights for particular parameter values based on the respective feedback data. For example, as described above, suppose that the computer system205invokes each of the samples of the first plurality of samples203(of the first external invocation) in parallel.
In this example, a first sample (with a batch size of ten data records) has a response time of five seconds, a second sample (with a batch size of twenty data records) has a response time of four seconds, a third sample (with a batch size of thirty data records) has a response time of three seconds, a fourth sample (with a batch size of forty data records) has a response time of ten seconds, and a fifth sample (with a batch size of fifty data records) is not successfully enriched. Each of these response times may correspond to a type of feedback (e.g., measurement) data that is received and/or observed by the computer system205. The computer system205may then update (e.g., adjust) the weights of the parameter values of table201. For example, the computer system205may determine an updated weight (e.g., a non-normalized importance weight) for a particular parameter value based in part on the response time. In this example, the batch size of thirty data records has the lowest (e.g., best) response time of the parameter values, and therefore the non-normalized importance weight of the respective parameter value (e.g., a batch size of thirty data records) may be adjusted to 0.8 (e.g., the highest among the parameter values). In another example, because the batch size of fifty data records did not return a successful result (e.g., returning a “Failure” code), the corresponding weight in column306may be reduced to a much smaller value (1.0×10⁻⁵). Accordingly, the weights for the other parameter values may be similarly adjusted. In some embodiments, a weight may be associated with an importance of a particular parameter value, whereby the importance is in accordance with the value (e.g., desirability) of the particular parameter value being selected. Thus, an adjusted weight that is increased from a previous value may reflect an increased importance of the associated parameter value. As described herein, this increased weight may correspond to an increased probability of subsequently selecting a sample associated with the particular parameter value. Likewise, an adjusted weight that is decreased from a previous value may reflect a decreased importance of that parameter value and/or a decreased probability of selecting a sample associated with the particular parameter value. In some embodiments, any suitable method (e.g., algorithm) may be used to determine a non-normalized importance weight. In one example, the non-normalized importance weight may be determined as a function of the previous response time, batch size, and/or other attributes of the batch. In some embodiments, upon determining the values for the non-normalized importance weight column306, the computer system may determine normalized importance weights. In some embodiments, the normalized importance weights may be determined according to any suitable method (e.g., a Gaussian probability function). For example, as depicted inFIG.3, the normalized importance weight column308may be determined such that the weights of the respective parameter values of column302may sum to substantially (e.g., approximately) 1.0. Although the example described above depicts weights being adjusted based on feedback data that corresponds to response times (e.g., for enriched batch results being returned to the computer system205), embodiments should not be construed to be so limited. For example, other feedback data may include whether the enriched data records are returned (e.g., from the remote server computer209) in a single batch or across multiple batches.
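Before turning to further feedback variations, a hedged reconstruction of the table301update may be useful; the inverse-response-time scoring below (scaled so that the best-performing batch size lands at 0.8, matching the figure) is only one plausible choice, since the disclosure leaves the exact function open:

    FAILURE_WEIGHT = 1.0e-5   # near-zero importance for failed batch sizes

    # Feedback from the first iteration (per table301): seconds, or None on failure.
    response_times = {10: 5.0, 20: 4.0, 30: 3.0, 40: 10.0, 50: None}

    def importance(seconds):
        """Non-normalized importance: faster responses earn larger weights."""
        if seconds is None:
            return FAILURE_WEIGHT
        return 2.4 / seconds      # assumed scale: 3 s (the best) maps to 0.8

    raw = {size: importance(t) for size, t in response_times.items()}
    total = sum(raw.values())
    weights = {size: w / total for size, w in raw.items()}  # normalized, sums to ~1.0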
As another example of alternative feedback data, the measurements may indicate which remote server (e.g., among a distributed cluster of servers) the enriched data is received from. In any case, the initial weights of table201may be adjusted according to any suitable feedback data. Furthermore, althoughFIG.3depicts both non-normalized weights and subsequently determined normalized weights (e.g., according to a Gaussian probability function), it should be understood that the adjusted weights may be determined in any suitable fashion. Continuing with the illustration of process200ofFIG.2, at block208, the computer system205may perform re-sampling with replacement. Turning back toFIG.3to illustrate the operations of block208, the sampling wheel310may represent a mechanism for selecting samples (e.g., re-sampling) for executing a subsequent external invocation (e.g., a second external invocation). For example, as described above, table301may include updated weights for each of the parameter values (e.g., batch sizes) for the batch size parameter, and, in this example, the weights may be normalized (e.g., see column308of table301). Accordingly, the sampling wheel310may reflect the normalized updated weights of column308. In some embodiments, the sampling (e.g., and/or re-sampling) may be performed with replacement, as illustrated via the wheel310. In some embodiments, a “spin” of the sampling wheel310may represent a selection event (e.g., a random selection) according to the updated weights of column308, whereby the weights (e.g., portions/areas of the wheel310) correspond to respective probabilities for selecting a particular parameter value of table301. In this illustration, the portion (e.g., area) of the sampling wheel310that is adjacent to the marker309after the wheel spins (e.g., upon completion of the selection event) may be selected as the particular parameter value (e.g., the batch size) for a particular selection event. In some embodiments, the system may perform a number of selection events until the number of samples (e.g., five samples, similar to the initial iteration) for the next iteration (e.g., a second plurality of samples) has been selected. As depicted inFIG.2, the second plurality of samples211includes five samples. It should be understood that, based in part on the updated weights from the operations of block206, the parameter values for the samples selected for this next iteration may be different from the previous iteration. For example, the second plurality of samples211contains a first sample of ten data records, a second and third sample of twenty data records each, and a fourth and fifth sample of thirty data records each. It should be understood that the two samples of twenty data records each (and the two samples of thirty data records each) may be selected based in part on conducting the re-sampling process with replacement (e.g., being able to select the same parameter value more than one time for a given iteration). In at least this way, the system may determine which parameter values performed better in the previous iteration, and then optimize for selecting better-performing parameter values for the subsequent iteration. In some embodiments, the method of process200does not require the system to learn (e.g., via machine learning) tuning parameters, nor to store learned parameters (e.g., over multiple iterations).
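The sampling wheel310amounts to classic roulette-wheel selection over the normalized weights; a minimal sketch (the weight values shown are illustrative):

    import random
    from itertools import accumulate

    def spin_wheel(values, weights):
        """One 'spin': return the value whose weight segment holds the marker."""
        marker = random.random() * sum(weights)        # landing spot of marker309
        for value, edge in zip(values, accumulate(weights)):
            if marker < edge:
                return value
        return values[-1]                              # guard against rounding

    values = [10, 20, 30, 40, 50]
    weights = [0.38, 0.28, 0.23, 0.11, 0.00001]        # example normalized weights
    next_sizes = [spin_wheel(values, weights) for _ in range(5)]  # with replacement

Note that each spin consults only the current weights; no history from earlier iterations is stored or replayed.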
Expanding on the efficiency point above, the system may not perform computations involving data from multiple iterations to tune the selection of parameters for a subsequent iteration. Instead, in some embodiments, the system may utilize the learned weights from the immediately previous iteration to be used in a subsequent iteration. In this way, techniques described herein may be performed more efficiently (e.g., using fewer computing resources) and/or may better scale to handle multi-variate parameter tuning scenarios with more complexity. At block210, the system may select an exploration sample. In some embodiments, an exploration sample may optionally be selected, at least in part to ensure that the system continues to explore new, potentially optimal parameter values (e.g., avoiding becoming stationary in a local minimum). In some embodiments, any suitable number and/or percentage of exploration samples may be selected. For example, in some embodiments, a fixed percentage of uniformly drawn random parameter values may be selected. In some embodiments, this fixed percentage may be included within (and/or in addition to) the second plurality of samples211as part of the second iteration (e.g., a second external invocation). For example, continuing with the above illustration, note that the second plurality of samples211does not contain any samples of batch size forty or fifty. This is due in part to the adjusted weights for the respective parameter values being significantly lowered, based on the feedback data from the initial iteration. Accordingly, if only the second plurality of samples211is transmitted during the second iteration, the system may not receive direct feedback data to indicate whether the parameter values of batch size forty and/or fifty have become more important since the previous iteration (and/or whether the weights for the respective parameter values should be readjusted (e.g., increased)). To mitigate the possibility of remaining stationary in a local minimum, and to enable the system to explore new optimal parameter values, the system may select one or more exploration samples. In the example of process200, the system may select a number of exploration samples from the plurality of samples (e.g., generated at block202) that is 20% of the population size (e.g., in this case, five parameter values). Accordingly, the system may select one exploration sample, which may correspond to exploration sample213. In this example, exploration sample213has a batch size of fifty. In at least this way, the system may utilize the exploration sample(s) to explore new, potentially more optimal parameter values at each iteration. Although the example of process200involves selecting exploration samples based on a fixed percentage of uniformly drawn parameter values, embodiments should not be construed to be so limited. For example, instead of selecting a fixed percentage of parameter values of a population, the percentage of parameter values may be adjusted between iterations. For example, if a particular parameter value has a large weight that exceeds a particular threshold (e.g., potentially crowding out other parameter values from being explored), the system may increase the percentage of parameter values selected for a subsequent iteration (e.g., as a percentage of the population). In another example, the system may determine to select only parameter values that are not already represented within a particular iteration's plurality of samples.
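A minimal sketch of the exploration-sample selection of block210follows, assuming the fixed 20% figure from the example above; the function name and the uniform-draw strategy are illustrative rather than prescriptive.

import random

# Illustrative sketch of the exploration-sample selection of block 210:
# a fixed fraction of the population is drawn uniformly, independent of
# the learned weights. The 20% figure follows the example above.

def select_exploration_samples(population, fraction=0.2, rng=random):
    """Uniformly draw a fixed fraction of the population as exploration samples."""
    count = max(1, int(len(population) * fraction))
    return [rng.choice(population) for _ in range(count)]

population = [10, 20, 30, 40, 50]              # potential batch sizes
print(select_exploration_samples(population))  # e.g., [50]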
It should be understood that any suitable method may be used to determine which samples and/or how many samples will be selected as exploration samples. Continuing with process200, at block212, upon the completion of re-sampling (e.g., with replacement), the system may determine a posterior probability distribution. In some embodiments, the posterior probability distribution may be in accordance with (and/or reflective of) the updated normalized importance weights of column308of table301that were used to select the new plurality of samples for the next iteration. For example, upon selecting the second plurality of samples211at block208, the posterior probability distribution of selected samples may be determined, as reflected by graph314ofFIG.3. In graph314, the posterior probability distribution reflects that samples with batch sizes ten, twenty, and thirty were selected, while batch sizes forty and fifty were not selected (e.g., as part of the second plurality of samples211). In some embodiments, the posterior probability distribution may (or may not) incorporate the zero or more exploration samples (e.g., exploration sample213) that are included within a particular iteration. In some embodiments, the posterior probability distribution may be used as an input to determine subsequently adjusted weights. In some embodiments, the posterior probability distribution may be optionally determined, for example, for auditing purposes. The system may then repeat the steps of process200to execute a sequence of one or more subsequent iterations, until the total plurality of data records is processed (e.g., enriched). For example, as depicted inFIG.2, the computer system205may execute the second iteration at block204(e.g., repeating one or more operations of block204), this time invoking the remote function of the remote server209to process the second plurality of samples211and the exploration sample213(e.g., which collectively represent samples215included in the second iteration). The computer system205may then repeat the measurement, re-sampling, and/or invocation steps over one or more iterations. For each iteration, the computer system205may update (e.g., tune) weights for parameter values based on measurements from the previous iteration. In this way, the system205may continuously adapt to changing conditions (e.g., changing network conditions, changing computing resource availability, and/or changing policies). Although the illustration ofFIG.2involves tuning a particular parameter (e.g., batch size), embodiments should not be construed to be so limited. For example, as described in reference toFIG.1, a plurality of parameters may respectively have parameter values that may be tunable (e.g., via adjustable weights). In some embodiments, the system may maintain multi-dimensional weights respectively associated with particular combinations of parameter values across multiple parameters. For example, a multi-dimensional weight may be represented by a vector of values, whereby each value of the vector corresponds to a particular parameter value of a different tunable parameter. The system may adjust one or more values of the weight vector (e.g., across multiple vectors) for each iteration. Accordingly, the system may then select samples based at least in part on adjusted multi-variate weight vectors per iteration.
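For the multi-parameter variant described above, one possible (purely illustrative) representation keys a weight to each combination of parameter values, i.e., to each parameter-value vector. The second parameter (an inter-sample latency) and its value range in the sketch below are assumptions introduced only for illustration.

import itertools
import random

# Illustrative sketch of the multi-parameter case: a weight is maintained
# per combination (vector) of parameter values, and combinations are
# sampled jointly with replacement. The latency parameter and its value
# range are assumptions introduced only for this sketch.

batch_sizes = [10, 20, 30, 40, 50]
latencies_ms = [0, 50, 100]

# Start from a uniform weight over every (batch_size, latency) combination.
combinations = list(itertools.product(batch_sizes, latencies_ms))
weights = {combo: 1.0 / len(combinations) for combo in combinations}

def sample_combinations(weights, num_samples, rng=random):
    """Jointly sample parameter-value vectors according to their weights."""
    combos = list(weights.keys())
    probabilities = [weights[combo] for combo in combos]
    return rng.choices(combos, weights=probabilities, k=num_samples)

print(sample_combinations(weights, num_samples=5))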
FIG.4is another simplified block diagram illustrating an example technique for tuning external invocations utilizing weight-based parameter resampling, according to some embodiments. In this illustration ofFIG.4, table400depicts a sequence of external invocations (e.g., iterations) that are executed to process a plurality of data records. Each iteration cycle may correspond to a series of one or more operations that are similar to process200ofFIG.2. In the example ofFIG.4, the computer system (e.g., computer system101ofFIG.1, and/or computer system205ofFIG.2) may be tasked with enriching 1,000 data records (e.g., received from data store102) via invoking the sequence of external invocations. In this example, similar to the illustrations ofFIGS.2and3, a single parameter (e.g., batch size) is repeatedly and automatically tuned over the course of the sequence of iterations. Also, in this example, the population of potential parameter values for the batch size parameter has five members (e.g., a batch size of 10, 20, 30, 40, or 50). As described in reference toFIG.2, in some embodiments, the computer system may partition the plurality of 1,000 data records into a plurality of samples (e.g., batches of various batch sizes, the batch size for each batch being one of the population of batch sizes). For example, some batches may have a size of 10 data records, 20 data records, and so on. In this illustration, the number of exploration samples selected for each iteration, following the initial external invocation, may be set to 20% of the population (e.g., one sample). Table400includes column402(e.g., identifying each iteration in the sequence), column404(e.g., identifying a plurality of samples (e.g., including exploration samples) for each iteration), and column406(e.g., identifying a total number of data records (e.g., rows) enriched per iteration). Turning to the sequence of external invocations of table400in further detail, the computer system may execute an initial iteration408to invoke an initial external invocation. For example, similar to as described in reference to block202ofFIG.2, the computer system may first determine a uniform probability distribution of weights for batch size parameter values. For this illustration, the system determines that the number of sample sizes (e.g., apart from potential exploration samples) to be executed per iteration is five. For example, the plurality of sample sizes selected for the first external invocation includes 10, 20, 30, 40, and 50, which is in accordance with the uniform probability distribution. For the initial iteration, there are no exploration samples included. In some embodiments, this may be due in part to each population member already having representation within the plurality of sample sizes. In some embodiments, an exploration sample may be selected for the initial iteration. Continuing with the initial iteration408, a remote server (e.g., remote server209) may process each of the batch sizes and return the enriched data records to the computer system (e.g., 150 rows processed total, as reflected via column406). The computer system may measure feedback data for each sample (e.g., each sample size) processed. In this example, the system may determine that the batch size of 10 data records is associated with the fastest response times. The system then adjusts weights for batch size parameter values, as described herein.
For example, the weight for the “10 data records” parameter value may be increased, relative to other population members. Accordingly, the computer system may re-sample with replacement according to the updated weights. In this example, as reflected via column404for iteration410(e.g., the second iteration), the system may select a second plurality of five samples: three samples have a batch size of 10 data records, and two samples have a batch size of 40 data records. These selections may reflect a posterior probability distribution in accordance with the weight of the 10 (and 40) data records batch size having increased, relative to other weights. Since the second plurality of samples does not include representation from the other batch sizes, the system may select an exploration sample (e.g., from other batch sizes of the population of potential batch sizes). In this case, the system selects a batch size of 20 for the exploration sample. The system then executes the second iteration410(e.g., the second external invocation) that includes both the second plurality of samples and the exploration sample, for a total enrichment of 130 data records (e.g., of the original 1,000 data records that require enrichment). In some embodiments, the computer system may proceed similarly for the third iteration412and fourth iteration414. Note that, for each of those iterations, a different (e.g., random) exploration sample may be selected (e.g., 30 (for iteration412) and 40 (for iteration414)), for example, according to a uniform probability distribution. In this way, the system may ensure that parameter values that may have lower weights (e.g., due to previous slower response times) may still be explored as potential values that deserve renewed importance for a subsequent iteration (e.g., due to a change in network conditions, remote server policy, computing resource availability, etc.). At the end of the fourth iteration414, the computer system detects, based on feedback data from the exploration sample, that the batch size of 40 data records is now performing fastest, and increases the associated weight accordingly. Thus, for the fifth iteration416, three samples are included that each have a batch size of 40 data records. As depicted via table400, the batch size of 40 records continues to perform well for the sixth iteration418and the seventh iteration420, with more of the selected samples having a batch size of 40 data records for each iteration. For the eighth (e.g., final) iteration422, the system may transmit the last remaining data records of the 1,000 total data records to be processed. For example, the computer system may perform re-sampling with replacement, following the seventh iteration420, to determine new sample sizes (e.g., 40, 40, 40, 40, and 40), with an exploration sample size of 10. However, there may only be 20 data records left to process. Accordingly, the system may transmit the last 20 data records in any suitable fashion (e.g., a batch size of 20 data records, two batches of 10 data records, etc.). Similar to as described in reference toFIGS.2and3,FIG.4describes optimizing a single parameter corresponding to batch size. However, embodiments should not be construed to be so limited. For example, multiple parameters may be tuned over each iteration, including, but not limited to, batch size, latency (e.g., between sample invocations), bit/byte size of a data record, etc.
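To tie the iterations of table400together, the following is a minimal end-to-end sketch, returning for simplicity to the single batch-size parameter. The function invoke_remote_function is a hypothetical stand-in that merely simulates a response time; the inverse-response-time re-weighting mirrors the earlier sketches and is likewise only illustrative.

import random

# Illustrative end-to-end sketch of a sequence of iterations like table 400.
# invoke_remote_function is a hypothetical stand-in that simulates a
# response time; the re-weighting heuristic is only illustrative.

POPULATION = [10, 20, 30, 40, 50]

def invoke_remote_function(batch):
    return random.uniform(1.0, 10.0)   # simulated response time in seconds

def run_all_iterations(records, samples_per_iteration=5, explore_fraction=0.2):
    weights = {value: 1.0 / len(POPULATION) for value in POPULATION}
    remaining = list(records)
    while remaining:
        # Re-sample batch sizes with replacement, then add exploration draws.
        sizes = random.choices(POPULATION,
                               weights=[weights[v] for v in POPULATION],
                               k=samples_per_iteration)
        sizes += [random.choice(POPULATION)
                  for _ in range(int(len(POPULATION) * explore_fraction))]
        feedback = {}
        for size in sizes:
            size = min(size, len(remaining))   # final iteration may be short
            if size == 0:
                break
            batch, remaining = remaining[:size], remaining[size:]
            feedback[size] = invoke_remote_function(batch)
        # Faster observed response times earn larger (normalized) weights.
        for value, response_time in feedback.items():
            if value in weights:
                weights[value] = 1.0 / response_time
        total = sum(weights.values())
        weights = {value: weight / total for value, weight in weights.items()}

run_all_iterations(["record-%d" % i for i in range(1000)])

Under these assumptions, the loop terminates once all records have been consumed, with any final, shorter batch handled by clipping the sampled size, analogous to the final iteration422described above.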
It should also be understood that, utilizing the techniques described herein, the system may automatically tune parameter values for one or more parameters for each new iteration, thus quickly adapting to changing conditions, while remaining resource efficient when determining how to tune the parameters. FIG.5is another simplified flow diagram illustrating an example process for tuning external invocations utilizing weight-based parameter resampling, according to some embodiments. In some embodiments, process500ofFIG.5may be performed by any suitable computer system (e.g., computer system101ofFIG.1). Process500is illustrated as a logical flow diagram, each operation of which represents a sequence of operations that can be implemented in hardware, computer instructions, or a combination thereof. In the context of computer instructions, the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes. Additionally, some, any, or all of the processes may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, or combinations thereof. As noted above, the code may be stored on a computer-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. The computer-readable storage medium is non-transitory. At block502, a computer system may receive a data record of a plurality of data records intended for processing via an external invocation. In some embodiments, the data record may correspond to any suitable data record (e.g., a user input, such as a text message, a blog post, a video, an audio recording, an image, etc.). In some embodiments, the data record may be processed in any suitable way, including, but not limited to, data enrichment that involves determining supplementary information from the data record. In some embodiments, the external invocation may involve invoking a remote function of a computer system executing within a distributed environment (e.g., in the cloud). At block504, the computer system may determine potential parameter values of a particular parameter associated with executing external invocations. In some embodiments, the particular parameter may correspond to any suitable tunable parameter, whereby adjustments of the particular parameter may be associated with a success rate and/or rate of efficiency (e.g., response time) when invoking the remote function to process the data record. In one example, the particular parameter may correspond to a batch size of a batch of user inputs (e.g., a batch of text messages). The batch size may indicate a number of a subset of the plurality of data records that corresponds to the batch.
In some embodiments, the plurality of parameter values may be a discrete and/or unique set of potential values for the particular parameter (e.g., different batch sizes). In some embodiments, the plurality of parameter values may be fixed (e.g., a fixed population size) between external invocations used to process the plurality of data records. In some embodiments, one or more operations of block504may be similar to block202ofFIG.2. In some embodiments, the particular parameter may be one of a plurality of parameters, for example, associated with a number of data records of a sample, a time interval between transmission of samples of an external invocation, a number of bits associated with a sample, and/or any suitable type of parameter. At block506, the computer system may determine a first sample of a plurality of samples. In some embodiments, the first sample includes at least the data record and corresponds to a particular batch of user inputs intended for data enrichment. In some embodiments, the first sample is associated with a first parameter value of the potential values of the particular parameter, whereby each sample of the plurality of samples is assigned to one of the plurality of parameter values. In some embodiments, the plurality of samples may be generated from the plurality of data records, for example, by grouping subsets of data records into different samples, whereby each sample may have a particular batch size that is one of the plurality of parameter values. In some embodiments, one or more operations of block506may be similar to block202and/or block204ofFIG.2. At block508, the computer system may determine respective weights of parameter values of the plurality of parameter values. In some embodiments, a first weight of the first parameter value may be associated with a probability of selecting a sample having the first parameter value (e.g., a particular batch size). In some embodiments, one or more operations of block508may be similar to as described in reference toFIGS.2and3(e.g., related to determining weights of parameter values). In some embodiments, for example, in a case where there are multiple parameters, a weight (e.g., a weight vector) may be associated with multiple parameters (e.g., a particular combination of parameter values), as described herein. At block510, the computer system may select the first sample for processing via a first external invocation of the external invocations based at least in part on the first weight. In some embodiments, the first sample may be one of a first plurality of samples that is a subset of the plurality of samples. For example, the first plurality of samples may include the first sample and a second sample. In some embodiments, the selection of samples for inclusion among the first plurality of samples is performed with replacement. For example, in some embodiments, the second sample is also associated with the first parameter value of the population of parameter values of the particular parameter. In some embodiments, the first external invocation corresponds to an initial external invocation that is used to process a first portion of the plurality of data records. In this case, the respective weights may be determined according to a uniform probability distribution. At block512, the computer system may determine feedback data associated with a level of performance of the first external invocation.
In some embodiments, the level of performance is associated with a response time interval between transmission of the first sample to a server computer (e.g., for invoking a remote function) and receipt of a response to the transmission that includes enriched data from the server computer. In some embodiments, one or more operations of block512may be similar to as described in reference toFIGS.2and3(e.g., block206ofFIG.2). In some embodiments, the requests (e.g., invocations) for processing samples of the first external invocation may be transmitted in parallel to the server computer that is configured to process requests in parallel. In some embodiments, being able to invoke parallel requests and conduct measurements in parallel may decrease the time required to recover from faults (e.g., if a sample fails to be processed) and/or quickly improve processing times for subsequent iterations. In some embodiments, the samples may be transmitted and processed serially. At block514, the computer system may adjust weights of the parameter values of the particular parameter based at least in part on the feedback data. In some embodiments, one or more operations of block514may be similar to as described in reference toFIGS.2and3(e.g., block206ofFIG.2). For example, the computer system may measure response times for samples of different batch sizes, and then adjust the weights based on response times. At block516, the computer system may select a second sample of the plurality of samples to be processed via execution of a second external invocation. In some embodiments, the second sample may be selected based at least in part on a second weight associated with a second parameter value of the second sample, whereby the selection is performed based in part on the adjustment of weights of the parameter values. In some embodiments, similar to as described above, the second sample may be one of a second plurality of samples of the plurality of samples. In some embodiments, the system may optionally select one or more additional samples (e.g., exploratory samples), whereby both the second plurality of samples and the one or more additional samples are processed via the second external invocation. In some embodiments, the number of the one or more additional samples may be determined as a percentage (e.g., a fixed percentage) of the population size (e.g., the number of parameter values). In some embodiments, the additional samples may be selected independently from respective weights of parameter values of the plurality of parameter values. For example, the exploratory samples may be selected based on a uniform probability distribution instead of adjusted weights. In some embodiments, as described herein, the system may execute the sequence of iterations (e.g., the first external invocation, the second external invocation, and so on) to process the full plurality of data records. In some embodiments, the system may continuously tune parameters per iteration, based on measurements from the previous iteration. In at least this way, techniques described herein enable the system to quickly adjust to changing conditions (e.g., recovery from outages and/or slower processing times), reduce system complexity (e.g., simplifying the parameter tuning process), and efficiently scale to solve multi-variate parameter tuning problems.
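As one possible illustration of transmitting the samples of a single external invocation in parallel and measuring per-sample response times (the feedback data of block512), consider the following sketch. Here, enrich_batch is a hypothetical placeholder for the remote function call, and a thread pool stands in for whatever parallel transport the system actually uses.

import time
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch of transmitting the samples of one external
# invocation in parallel and measuring per-sample response times.
# enrich_batch is a hypothetical placeholder for the remote function call.

def enrich_batch(batch):
    time.sleep(0.01 * len(batch))      # stand-in for network and server time
    return [record + "-enriched" for record in batch]

def timed_invocation(batch):
    start = time.monotonic()
    result = enrich_batch(batch)
    return len(batch), time.monotonic() - start, result

batches = [["r"] * size for size in (10, 20, 30, 40, 50)]
with ThreadPoolExecutor(max_workers=len(batches)) as pool:
    for size, elapsed, _ in pool.map(timed_invocation, batches):
        print("batch size %d: %.3f s" % (size, elapsed))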
The term cloud service is generally used to refer to a service that is made available by a cloud services provider (CSP) to users or customers on demand (e.g., via a subscription model) using systems and infrastructure (cloud infrastructure) provided by the CSP. Typically, the servers and systems that make up the CSP's infrastructure are separate from the customer's own on-premise servers and systems. Customers can thus avail themselves of cloud services provided by the CSP without having to purchase separate hardware and software resources for the services. Cloud services are designed to provide a subscribing customer easy, scalable access to applications and computing resources without the customer having to invest in procuring the infrastructure that is used for providing the services. There are several cloud service providers that offer various types of cloud services. There are various different types or models of cloud services including Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), Infrastructure-as-a-Service (IaaS), and others. A customer can subscribe to one or more cloud services provided by a CSP. The customer can be any entity such as an individual, an organization, an enterprise, and the like. When a customer subscribes to or registers for a service provided by a CSP, a tenancy or an account is created for that customer. The customer can then, via this account, access the subscribed-to one or more cloud resources associated with the account. As noted above, infrastructure as a service (IaaS) is one particular type of cloud computing. IaaS can be configured to provide virtualized computing resources over a public network (e.g., the Internet). In an IaaS model, a cloud computing provider can host the infrastructure components (e.g., servers, storage devices, network nodes (e.g., hardware), deployment software, platform virtualization (e.g., a hypervisor layer), or the like). In some cases, an IaaS provider may also supply a variety of services to accompany those infrastructure components (e.g., billing, monitoring, logging, load balancing and clustering, etc.). Thus, as these services may be policy-driven, IaaS users may be able to implement policies to drive load balancing to maintain application availability and performance. In some instances, IaaS customers may access resources and services through a wide area network (WAN), such as the Internet, and can use the cloud provider's services to install the remaining elements of an application stack. For example, the user can log in to the IaaS platform to create virtual machines (VMs), install operating systems (OSs) on each VM, deploy middleware such as databases, create storage buckets for workloads and backups, and even install enterprise software into that VM. Customers can then use the provider's services to perform various functions, including balancing network traffic, troubleshooting application issues, monitoring performance, managing disaster recovery, etc. In most cases, a cloud computing model will require the participation of a cloud provider. The cloud provider may, but need not be, a third-party service that specializes in providing (e.g., offering, renting, selling) IaaS. An entity might also opt to deploy a private cloud, becoming its own provider of infrastructure services. In some examples, IaaS deployment is the process of putting a new application, or a new version of an application, onto a prepared application server or the like. 
It may also include the process of preparing the server (e.g., installing libraries, daemons, etc.). This is often managed by the cloud provider, below the hypervisor layer (e.g., the servers, storage, network hardware, and virtualization). Thus, the customer may be responsible for handling the operating system (OS), middleware, and/or application deployment (e.g., on self-service virtual machines that can be spun up on demand, or the like). In some examples, IaaS provisioning may refer to acquiring computers or virtual hosts for use, and even installing needed libraries or services on them. In most cases, deployment does not include provisioning, and the provisioning may need to be performed first. In some cases, there are two different challenges for IaaS provisioning. First, there is the initial challenge of provisioning the initial set of infrastructure before anything is running. Second, there is the challenge of evolving the existing infrastructure (e.g., adding new services, changing services, removing services, etc.) once everything has been provisioned. In some cases, these two challenges may be addressed by enabling the configuration of the infrastructure to be defined declaratively. In other words, the infrastructure (e.g., what components are needed and how they interact) can be defined by one or more configuration files. Thus, the overall topology of the infrastructure (e.g., what resources depend on which, and how they each work together) can be described declaratively. In some instances, once the topology is defined, a workflow can be generated that creates and/or manages the different components described in the configuration files. In some examples, an infrastructure may have many interconnected elements. For example, there may be one or more virtual private clouds (VPCs) (e.g., a potentially on-demand pool of configurable and/or shared computing resources), also known as a core network. In some examples, there may also be one or more inbound/outbound traffic group rules provisioned to define how the inbound and/or outbound traffic of the network will be set up and one or more virtual machines (VMs). Other infrastructure elements may also be provisioned, such as a load balancer, a database, or the like. As more and more infrastructure elements are desired and/or added, the infrastructure may incrementally evolve. In some instances, continuous deployment techniques may be employed to enable deployment of infrastructure code across various virtual computing environments. Additionally, the described techniques can enable infrastructure management within these environments. In some examples, service teams can write code that is desired to be deployed to one or more, but often many, different production environments (e.g., across various different geographic locations, sometimes spanning the entire world). However, in some examples, the infrastructure on which the code will be deployed must first be set up. In some instances, the provisioning can be done manually, a provisioning tool may be utilized to provision the resources, and/or deployment tools may be utilized to deploy the code once the infrastructure is provisioned. FIG.6is a block diagram600illustrating an example pattern of an IaaS architecture, according to at least one embodiment. Service operators602can be communicatively coupled to a secure host tenancy604that can include a virtual cloud network (VCN)606and a secure host subnet608.
In some examples, the service operators602may be using one or more client computing devices, which may be portable handheld devices (e.g., an iPhone®, cellular telephone, an iPad®, computing tablet, a personal digital assistant (PDA)) or wearable devices (e.g., a Google Glass® head mounted display), running software such as Microsoft Windows Mobile®, and/or a variety of mobile operating systems such as iOS, Windows Phone, Android, BlackBerry 8, Palm OS, and the like, and being Internet, e-mail, short message service, Blackberry®, or other communication protocol enabled. Alternatively, the client computing devices can be general purpose personal computers including, by way of example, personal computers and/or laptop computers running various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems. The client computing devices can be workstation computers running any of a variety of commercially-available UNIX® or UNIX-like operating systems, including without limitation the variety of GNU/Linux operating systems, such as for example, Google Chrome OS. Alternatively, or in addition, client computing devices may be any other electronic device, such as a thin-client computer, an Internet-enabled gaming system (e.g., a Microsoft Xbox gaming console with or without a Kinect® gesture input device), and/or a personal messaging device, capable of communicating over a network that can access the VCN606and/or the Internet. The VCN606can include a local peering gateway (LPG)610that can be communicatively coupled to a secure shell (SSH) VCN612via an LPG610contained in the SSH VCN612. The SSH VCN612can include an SSH subnet614, and the SSH VCN612can be communicatively coupled to a control plane VCN616via the LPG610contained in the control plane VCN616. Also, the SSH VCN612can be communicatively coupled to a data plane VCN618via an LPG610. The control plane VCN616and the data plane VCN618can be contained in a service tenancy619that can be owned and/or operated by the IaaS provider. The control plane VCN616can include a control plane demilitarized zone (DMZ) tier620that acts as a perimeter network (e.g., portions of a corporate network between the corporate intranet and external networks). The DMZ-based servers may have restricted responsibilities and help keep breaches contained. Additionally, the DMZ tier620can include one or more load balancer (LB) subnet(s)622, a control plane app tier624that can include app subnet(s)626, a control plane data tier628that can include database (DB) subnet(s)630(e.g., frontend DB subnet(s) and/or backend DB subnet(s)). The LB subnet(s)622contained in the control plane DMZ tier620can be communicatively coupled to the app subnet(s)626contained in the control plane app tier624and an Internet gateway634that can be contained in the control plane VCN616, and the app subnet(s)626can be communicatively coupled to the DB subnet(s)630contained in the control plane data tier628and a service gateway636and a network address translation (NAT) gateway638. The control plane VCN616can include the service gateway636and the NAT gateway638. The control plane VCN616can include a data plane mirror app tier640that can include app subnet(s)626. The app subnet(s)626contained in the data plane mirror app tier640can include a virtual network interface controller (VNIC)642that can execute a compute instance644. 
The compute instance644can communicatively couple the app subnet(s)626of the data plane mirror app tier640to app subnet(s)626that can be contained in a data plane app tier646. The data plane VCN618can include the data plane app tier646, a data plane DMZ tier648, and a data plane data tier650. The data plane DMZ tier648can include LB subnet(s)622that can be communicatively coupled to the app subnet(s)626of the data plane app tier646and the Internet gateway634of the data plane VCN618. The app subnet(s)626can be communicatively coupled to the service gateway636of the data plane VCN618and the NAT gateway638of the data plane VCN618. The data plane data tier650can also include the DB subnet(s)630that can be communicatively coupled to the app subnet(s)626of the data plane app tier646. The Internet gateway634of the control plane VCN616and of the data plane VCN618can be communicatively coupled to a metadata management service652that can be communicatively coupled to public Internet654. Public Internet654can be communicatively coupled to the NAT gateway638of the control plane VCN616and of the data plane VCN618. The service gateway636of the control plane VCN616and of the data plane VCN618can be communicatively coupled to cloud services656. In some examples, the service gateway636of the control plane VCN616or of the data plane VCN618can make application programming interface (API) calls to cloud services656without going through public Internet654. The API calls to cloud services656from the service gateway636can be one-way: the service gateway636can make API calls to cloud services656, and cloud services656can send requested data to the service gateway636. But cloud services656may not initiate API calls to the service gateway636. In some examples, the secure host tenancy604can be directly connected to the service tenancy619, which may be otherwise isolated. The secure host subnet608can communicate with the SSH subnet614through an LPG610that may enable two-way communication over an otherwise isolated system. Connecting the secure host subnet608to the SSH subnet614may give the secure host subnet608access to other entities within the service tenancy619. The control plane VCN616may allow users of the service tenancy619to set up or otherwise provision desired resources. Desired resources provisioned in the control plane VCN616may be deployed or otherwise used in the data plane VCN618. In some examples, the control plane VCN616can be isolated from the data plane VCN618, and the data plane mirror app tier640of the control plane VCN616can communicate with the data plane app tier646of the data plane VCN618via VNICs642that can be contained in the data plane mirror app tier640and the data plane app tier646. In some examples, users of the system, or customers, can make requests, for example create, read, update, or delete (CRUD) operations, through public Internet654that can communicate the requests to the metadata management service652. The metadata management service652can communicate the request to the control plane VCN616through the Internet gateway634. The request can be received by the LB subnet(s)622contained in the control plane DMZ tier620. The LB subnet(s)622may determine that the request is valid, and in response to this determination, the LB subnet(s)622can transmit the request to app subnet(s)626contained in the control plane app tier624.
If the request is validated and requires a call to public Internet654, the call to public Internet654may be transmitted to the NAT gateway638that can make the call to public Internet654. Memory that may be desired to be stored by the request can be stored in the DB subnet(s)630. In some examples, the data plane mirror app tier640can facilitate direct communication between the control plane VCN616and the data plane VCN618. For example, changes, updates, or other suitable modifications to configuration may be desired to be applied to the resources contained in the data plane VCN618. Via a VNIC642, the control plane VCN616can directly communicate with, and can thereby execute the changes, updates, or other suitable modifications to configuration to, resources contained in the data plane VCN618. In some embodiments, the control plane VCN616and the data plane VCN618can be contained in the service tenancy619. In this case, the user, or the customer, of the system may not own or operate either the control plane VCN616or the data plane VCN618. Instead, the IaaS provider may own or operate the control plane VCN616and the data plane VCN618, both of which may be contained in the service tenancy619. This embodiment can enable isolation of networks that may prevent users or customers from interacting with other users', or other customers', resources. Also, this embodiment may allow users or customers of the system to store databases privately without needing to rely on public Internet654, which may not have a desired level of threat prevention, for storage. In other embodiments, the LB subnet(s)622contained in the control plane VCN616can be configured to receive a signal from the service gateway636. In this embodiment, the control plane VCN616and the data plane VCN618may be configured to be called by a customer of the IaaS provider without calling public Internet654. Customers of the IaaS provider may desire this embodiment since database(s) that the customers use may be controlled by the IaaS provider and may be stored on the service tenancy619, which may be isolated from public Internet654. FIG.7is a block diagram700illustrating another example pattern of an IaaS architecture, according to at least one embodiment. Service operators702(e.g. service operators602ofFIG.6) can be communicatively coupled to a secure host tenancy704(e.g. the secure host tenancy604ofFIG.6) that can include a virtual cloud network (VCN)706(e.g. the VCN606ofFIG.6) and a secure host subnet708(e.g. the secure host subnet608ofFIG.6). The VCN706can include a local peering gateway (LPG)710(e.g. the LPG610ofFIG.6) that can be communicatively coupled to a secure shell (SSH) VCN712(e.g. the SSH VCN612ofFIG.6) via an LPG610contained in the SSH VCN712. The SSH VCN712can include an SSH subnet714(e.g. the SSH subnet614ofFIG.6), and the SSH VCN712can be communicatively coupled to a control plane VCN716(e.g. the control plane VCN616ofFIG.6) via an LPG710contained in the control plane VCN716. The control plane VCN716can be contained in a service tenancy719(e.g. the service tenancy619ofFIG.6), and the data plane VCN718(e.g. the data plane VCN618ofFIG.6) can be contained in a customer tenancy721that may be owned or operated by users, or customers, of the system. The control plane VCN716can include a control plane DMZ tier720(e.g. the control plane DMZ tier620ofFIG.6) that can include LB subnet(s)722(e.g. LB subnet(s)622ofFIG.6), a control plane app tier724(e.g. the control plane app tier624ofFIG.6) that can include app subnet(s)726(e.g. 
app subnet(s)626ofFIG.6), a control plane data tier728(e.g. the control plane data tier628ofFIG.6) that can include database (DB) subnet(s)730(e.g. similar to DB subnet(s)630ofFIG.6). The LB subnet(s)722contained in the control plane DMZ tier720can be communicatively coupled to the app subnet(s)726contained in the control plane app tier724and an Internet gateway734(e.g. the Internet gateway634ofFIG.6) that can be contained in the control plane VCN716, and the app subnet(s)726can be communicatively coupled to the DB subnet(s)730contained in the control plane data tier728and a service gateway736(e.g. the service gateway ofFIG.6) and a network address translation (NAT) gateway738(e.g. the NAT gateway638ofFIG.6). The control plane VCN716can include the service gateway736and the NAT gateway738. The control plane VCN716can include a data plane mirror app tier740(e.g. the data plane mirror app tier640ofFIG.6) that can include app subnet(s)726. The app subnet(s)726contained in the data plane mirror app tier740can include a virtual network interface controller (VNIC)742(e.g. the VNIC642ofFIG.6) that can execute a compute instance744(e.g. similar to the compute instance644ofFIG.6). The compute instance744can facilitate communication between the app subnet(s)726of the data plane mirror app tier740and the app subnet(s)726that can be contained in a data plane app tier746(e.g. the data plane app tier646ofFIG.6) via the VNIC742contained in the data plane mirror app tier740and the VNIC742contained in the data plane app tier746. The Internet gateway734contained in the control plane VCN716can be communicatively coupled to a metadata management service752(e.g. the metadata management service652ofFIG.6) that can be communicatively coupled to public Internet754(e.g. public Internet654ofFIG.6). Public Internet754can be communicatively coupled to the NAT gateway738contained in the control plane VCN716. The service gateway736contained in the control plane VCN716can be communicatively coupled to cloud services756(e.g. cloud services656ofFIG.6). In some examples, the data plane VCN718can be contained in the customer tenancy721. In this case, the IaaS provider may provide the control plane VCN716for each customer, and the IaaS provider may, for each customer, set up a unique compute instance744that is contained in the service tenancy719. Each compute instance744may allow communication between the control plane VCN716, contained in the service tenancy719, and the data plane VCN718that is contained in the customer tenancy721. The compute instance744may allow resources, that are provisioned in the control plane VCN716that is contained in the service tenancy719, to be deployed or otherwise used in the data plane VCN718that is contained in the customer tenancy721. In other examples, the customer of the IaaS provider may have databases that live in the customer tenancy721. In this example, the control plane VCN716can include the data plane mirror app tier740that can include app subnet(s)726. The data plane mirror app tier740can have access to the data plane VCN718, but the data plane mirror app tier740may not live in the data plane VCN718. That is, the data plane mirror app tier740may have access to the customer tenancy721, but the data plane mirror app tier740may not exist in the data plane VCN718or be owned or operated by the customer of the IaaS provider.
The data plane mirror app tier740may be configured to make calls to the data plane VCN718but may not be configured to make calls to any entity contained in the control plane VCN716. The customer may desire to deploy or otherwise use resources in the data plane VCN718that are provisioned in the control plane VCN716, and the data plane mirror app tier740can facilitate the desired deployment, or other usage of resources, of the customer. In some embodiments, the customer of the IaaS provider can apply filters to the data plane VCN718. In this embodiment, the customer can determine what the data plane VCN718can access, and the customer may restrict access to public Internet754from the data plane VCN718. The IaaS provider may not be able to apply filters or otherwise control access of the data plane VCN718to any outside networks or databases. Applying filters and controls by the customer onto the data plane VCN718, contained in the customer tenancy721, can help isolate the data plane VCN718from other customers and from public Internet754. In some embodiments, cloud services756can be called by the service gateway736to access services that may not exist on public Internet754, on the control plane VCN716, or on the data plane VCN718. The connection between cloud services756and the control plane VCN716or the data plane VCN718may not be live or continuous. Cloud services756may exist on a different network owned or operated by the IaaS provider. Cloud services756may be configured to receive calls from the service gateway736and may be configured to not receive calls from public Internet754. Some cloud services756may be isolated from other cloud services756, and the control plane VCN716may be isolated from cloud services756that may not be in the same region as the control plane VCN716. For example, the control plane VCN716may be located in “Region 1,” and cloud service “Deployment 6,” may be located in Region 1 and in “Region 2.” If a call to Deployment 6 is made by the service gateway736contained in the control plane VCN716located in Region 1, the call may be transmitted to Deployment 6 in Region 1. In this example, the control plane VCN716, or Deployment 6 in Region 1, may not be communicatively coupled to, or otherwise in communication with, Deployment 6 in Region 2. FIG.8is a block diagram800illustrating another example pattern of an IaaS architecture, according to at least one embodiment. Service operators802(e.g. service operators602ofFIG.6) can be communicatively coupled to a secure host tenancy804(e.g. the secure host tenancy604ofFIG.6) that can include a virtual cloud network (VCN)806(e.g. the VCN606ofFIG.6) and a secure host subnet808(e.g. the secure host subnet608ofFIG.6). The VCN806can include an LPG810(e.g. the LPG610ofFIG.6) that can be communicatively coupled to an SSH VCN812(e.g. the SSH VCN612ofFIG.6) via an LPG810contained in the SSH VCN812. The SSH VCN812can include an SSH subnet814(e.g. the SSH subnet614ofFIG.6), and the SSH VCN812can be communicatively coupled to a control plane VCN816(e.g. the control plane VCN616ofFIG.6) via an LPG810contained in the control plane VCN816and to a data plane VCN818(e.g. the data plane618ofFIG.6) via an LPG810contained in the data plane VCN818. The control plane VCN816and the data plane VCN818can be contained in a service tenancy819(e.g. the service tenancy619ofFIG.6). The control plane VCN816can include a control plane DMZ tier820(e.g. the control plane DMZ tier620ofFIG.6) that can include load balancer (LB) subnet(s)822(e.g. 
LB subnet(s)622ofFIG.6), a control plane app tier824(e.g. the control plane app tier624ofFIG.6) that can include app subnet(s)826(e.g. similar to app subnet(s)626ofFIG.6), a control plane data tier828(e.g. the control plane data tier628ofFIG.6) that can include DB subnet(s)830. The LB subnet(s)822contained in the control plane DMZ tier820can be communicatively coupled to the app subnet(s)826contained in the control plane app tier824and to an Internet gateway834(e.g. the Internet gateway634ofFIG.6) that can be contained in the control plane VCN816, and the app subnet(s)826can be communicatively coupled to the DB subnet(s)830contained in the control plane data tier828and to a service gateway836(e.g. the service gateway ofFIG.6) and a network address translation (NAT) gateway838(e.g. the NAT gateway638ofFIG.6). The control plane VCN816can include the service gateway836and the NAT gateway838. The data plane VCN818can include a data plane app tier846(e.g. the data plane app tier646ofFIG.6), a data plane DMZ tier848(e.g. the data plane DMZ tier648ofFIG.6), and a data plane data tier850(e.g. the data plane data tier650ofFIG.6). The data plane DMZ tier848can include LB subnet(s)822that can be communicatively coupled to trusted app subnet(s)860and untrusted app subnet(s)862of the data plane app tier846and the Internet gateway834contained in the data plane VCN818. The trusted app subnet(s)860can be communicatively coupled to the service gateway836contained in the data plane VCN818, the NAT gateway838contained in the data plane VCN818, and DB subnet(s)830contained in the data plane data tier850. The untrusted app subnet(s)862can be communicatively coupled to the service gateway836contained in the data plane VCN818and DB subnet(s)830contained in the data plane data tier850. The data plane data tier850can include DB subnet(s)830that can be communicatively coupled to the service gateway836contained in the data plane VCN818. The untrusted app subnet(s)862can include one or more primary VNICs864(1)-(N) that can be communicatively coupled to tenant virtual machines (VMs)866(1)-(N). Each tenant VM866(1)-(N) can be communicatively coupled to a respective app subnet867(1)-(N) that can be contained in respective container egress VCNs868(1)-(N) that can be contained in respective customer tenancies870(1)-(N). Respective secondary VNICs872(1)-(N) can facilitate communication between the untrusted app subnet(s)862contained in the data plane VCN818and the app subnet contained in the container egress VCNs868(1)-(N). Each container egress VCN868(1)-(N) can include a NAT gateway838that can be communicatively coupled to public Internet854(e.g. public Internet654ofFIG.6). The Internet gateway834contained in the control plane VCN816and contained in the data plane VCN818can be communicatively coupled to a metadata management service852(e.g. the metadata management system652ofFIG.6) that can be communicatively coupled to public Internet854. Public Internet854can be communicatively coupled to the NAT gateway838contained in the control plane VCN816and contained in the data plane VCN818. The service gateway836contained in the control plane VCN816and contained in the data plane VCN818can be communicatively coupled to cloud services856. In some embodiments, the data plane VCN818can be integrated with customer tenancies870. This integration can be useful or desirable for customers of the IaaS provider in some cases, such as a case in which support is desired when executing code.
The customer may provide code to run that may be destructive, may communicate with other customer resources, or may otherwise cause undesirable effects. In response to this, the IaaS provider may determine whether to run code given to the IaaS provider by the customer. In some examples, the customer of the IaaS provider may grant temporary network access to the IaaS provider and request a function to be attached to the data plane app tier846. Code to run the function may be executed in the VMs866(1)-(N), and the code may not be configured to run anywhere else on the data plane VCN818. Each VM866(1)-(N) may be connected to one customer tenancy870. Respective containers871(1)-(N) contained in the VMs866(1)-(N) may be configured to run the code. In this case, there can be a dual isolation (e.g., the containers871(1)-(N) running code, where the containers871(1)-(N) may be contained in at least the VM866(1)-(N) that are contained in the untrusted app subnet(s)862), which may help prevent incorrect or otherwise undesirable code from damaging the network of the IaaS provider or from damaging a network of a different customer. The containers871(1)-(N) may be communicatively coupled to the customer tenancy870and may be configured to transmit or receive data from the customer tenancy870. The containers871(1)-(N) may not be configured to transmit or receive data from any other entity in the data plane VCN818. Upon completion of running the code, the IaaS provider may kill or otherwise dispose of the containers871(1)-(N). In some embodiments, the trusted app subnet(s)860may run code that may be owned or operated by the IaaS provider. In this embodiment, the trusted app subnet(s)860may be communicatively coupled to the DB subnet(s)830and be configured to execute CRUD operations in the DB subnet(s)830. The untrusted app subnet(s)862may be communicatively coupled to the DB subnet(s)830, but in this embodiment, the untrusted app subnet(s) may be configured to execute read operations in the DB subnet(s)830. The containers871(1)-(N) that can be contained in the VM866(1)-(N) of each customer and that may run code from the customer may not be communicatively coupled with the DB subnet(s)830. In other embodiments, the control plane VCN816and the data plane VCN818may not be directly communicatively coupled. In this embodiment, there may be no direct communication between the control plane VCN816and the data plane VCN818. However, communication can occur indirectly through at least one method. An LPG810may be established by the IaaS provider that can facilitate communication between the control plane VCN816and the data plane VCN818. In another example, the control plane VCN816or the data plane VCN818can make a call to cloud services856via the service gateway836. For example, a call to cloud services856from the control plane VCN816can include a request for a service that can communicate with the data plane VCN818. FIG.9is a block diagram900illustrating another example pattern of an IaaS architecture, according to at least one embodiment. Service operators902(e.g. service operators602ofFIG.6) can be communicatively coupled to a secure host tenancy904(e.g. the secure host tenancy604ofFIG.6) that can include a virtual cloud network (VCN)906(e.g. the VCN606ofFIG.6) and a secure host subnet908(e.g. the secure host subnet608ofFIG.6). The VCN906can include an LPG910(e.g. the LPG610ofFIG.6) that can be communicatively coupled to an SSH VCN912(e.g. the SSH VCN612ofFIG.6) via an LPG910contained in the SSH VCN912.
The SSH VCN912can include an SSH subnet914(e.g. the SSH subnet614ofFIG.6), and the SSH VCN912can be communicatively coupled to a control plane VCN916(e.g. the control plane VCN616ofFIG.6) via an LPG910contained in the control plane VCN916and to a data plane VCN918(e.g. the data plane618ofFIG.6) via an LPG910contained in the data plane VCN918. The control plane VCN916and the data plane VCN918can be contained in a service tenancy919(e.g. the service tenancy619ofFIG.6). The control plane VCN916can include a control plane DMZ tier920(e.g. the control plane DMZ tier620ofFIG.6) that can include LB subnet(s)922(e.g. LB subnet(s)622ofFIG.6), a control plane app tier924(e.g. the control plane app tier624ofFIG.6) that can include app subnet(s)926(e.g. app subnet(s)626ofFIG.6), a control plane data tier928(e.g. the control plane data tier628ofFIG.6) that can include DB subnet(s)930(e.g. DB subnet(s)830ofFIG.8). The LB subnet(s)922contained in the control plane DMZ tier920can be communicatively coupled to the app subnet(s)926contained in the control plane app tier924and to an Internet gateway934(e.g. the Internet gateway634ofFIG.6) that can be contained in the control plane VCN916, and the app subnet(s)926can be communicatively coupled to the DB subnet(s)930contained in the control plane data tier928and to a service gateway936(e.g. the service gateway ofFIG.6) and a network address translation (NAT) gateway938(e.g. the NAT gateway638ofFIG.6). The control plane VCN916can include the service gateway936and the NAT gateway938. The data plane VCN918can include a data plane app tier946(e.g. the data plane app tier646ofFIG.6), a data plane DMZ tier948(e.g. the data plane DMZ tier648ofFIG.6), and a data plane data tier950(e.g. the data plane data tier650ofFIG.6). The data plane DMZ tier948can include LB subnet(s)922that can be communicatively coupled to trusted app subnet(s)960(e.g. trusted app subnet(s)860ofFIG.8) and untrusted app subnet(s)962(e.g. untrusted app subnet(s)862ofFIG.8) of the data plane app tier946and the Internet gateway934contained in the data plane VCN918. The trusted app subnet(s)960can be communicatively coupled to the service gateway936contained in the data plane VCN918, the NAT gateway938contained in the data plane VCN918, and DB subnet(s)930contained in the data plane data tier950. The untrusted app subnet(s)962can be communicatively coupled to the service gateway936contained in the data plane VCN918and DB subnet(s)930contained in the data plane data tier950. The data plane data tier950can include DB subnet(s)930that can be communicatively coupled to the service gateway936contained in the data plane VCN918. The untrusted app subnet(s)962can include primary VNICs964(1)-(N) that can be communicatively coupled to tenant virtual machines (VMs)966(1)-(N) residing within the untrusted app subnet(s)962. Each tenant VM966(1)-(N) can run code in a respective container967(1)-(N), and be communicatively coupled to an app subnet926that can be contained in a data plane app tier946that can be contained in a container egress VCN968. Respective secondary VNICs972(1)-(N) can facilitate communication between the untrusted app subnet(s)962contained in the data plane VCN918and the app subnet contained in the container egress VCN968. The container egress VCN can include a NAT gateway938that can be communicatively coupled to public Internet954(e.g. public Internet654ofFIG.6). 
The Internet gateway934contained in the control plane VCN916and contained in the data plane VCN918can be communicatively coupled to a metadata management service952(e.g. the metadata management system652ofFIG.6) that can be communicatively coupled to public Internet954. Public Internet954can be communicatively coupled to the NAT gateway938contained in the control plane VCN916and contained in the data plane VCN918. The service gateway936contained in the control plane VCN916and contained in the data plane VCN918can be communicatively coupled to cloud services956.

In some examples, the pattern illustrated by the architecture of block diagram900ofFIG.9may be considered an exception to the pattern illustrated by the architecture of block diagram800ofFIG.8and may be desirable for a customer of the IaaS provider if the IaaS provider cannot directly communicate with the customer (e.g., a disconnected region). The respective containers967(1)-(N) that are contained in the VMs966(1)-(N) for each customer can be accessed in real-time by the customer. The containers967(1)-(N) may be configured to make calls to respective secondary VNICs972(1)-(N) contained in app subnet(s)926of the data plane app tier946that can be contained in the container egress VCN968. The secondary VNICs972(1)-(N) can transmit the calls to the NAT gateway938that may transmit the calls to public Internet954. In this example, the containers967(1)-(N) that can be accessed in real-time by the customer can be isolated from the control plane VCN916and can be isolated from other entities contained in the data plane VCN918. The containers967(1)-(N) may also be isolated from resources from other customers.

In other examples, the customer can use the containers967(1)-(N) to call cloud services956. In this example, the customer may run code in the containers967(1)-(N) that requests a service from cloud services956. The containers967(1)-(N) can transmit this request to the secondary VNICs972(1)-(N) that can transmit the request to the NAT gateway that can transmit the request to public Internet954. Public Internet954can transmit the request to LB subnet(s)922contained in the control plane VCN916via the Internet gateway934. In response to determining the request is valid, the LB subnet(s) can transmit the request to app subnet(s)926that can transmit the request to cloud services956via the service gateway936.

It should be appreciated that IaaS architectures600,700,800,900depicted in the figures may have other components than those depicted. Further, the embodiments shown in the figures are only some examples of a cloud infrastructure system that may incorporate an embodiment of the disclosure. In some other embodiments, the IaaS systems may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration or arrangement of components.

In certain embodiments, the IaaS systems described herein may include a suite of applications, middleware, and database service offerings that are delivered to a customer in a self-service, subscription-based, elastically scalable, reliable, highly available, and secure manner. An example of such an IaaS system is the Oracle Cloud Infrastructure (OCI) provided by the present assignee.

FIG.10illustrates an example computer system1000, in which various embodiments may be implemented. The system1000may be used to implement any of the computer systems described above.
As shown in the figure, computer system1000includes a processing unit1004that communicates with a number of peripheral subsystems via a bus subsystem1002. These peripheral subsystems may include a processing acceleration unit1006, an I/O subsystem1008, a storage subsystem1018and a communications subsystem1024. Storage subsystem1018includes tangible computer-readable storage media1022and a system memory1010. Bus subsystem1002provides a mechanism for letting the various components and subsystems of computer system1000communicate with each other as intended. Although bus subsystem1002is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple buses. Bus subsystem1002may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. For example, such architectures may include an Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, which can be implemented as a Mezzanine bus manufactured to the IEEE P1386.1 standard. Processing unit1004, which can be implemented as one or more integrated circuits (e.g., a conventional microprocessor or microcontroller), controls the operation of computer system1000. One or more processors may be included in processing unit1004. These processors may include single core or multicore processors. In certain embodiments, processing unit1004may be implemented as one or more independent processing units1032and/or1034with single or multicore processors included in each processing unit. In other embodiments, processing unit1004may also be implemented as a quad-core processing unit formed by integrating two dual-core processors into a single chip. In various embodiments, processing unit1004can execute a variety of programs in response to program code and can maintain multiple concurrently executing programs or processes. At any given time, some or all of the program code to be executed can be resident in processor(s)1004and/or in storage subsystem1018. Through suitable programming, processor(s)1004can provide various functionalities described above. Computer system1000may additionally include a processing acceleration unit1006, which can include a digital signal processor (DSP), a special-purpose processor, and/or the like. I/O subsystem1008may include user interface input devices and user interface output devices. User interface input devices may include a keyboard, pointing devices such as a mouse or trackball, a touchpad or touch screen incorporated into a display, a scroll wheel, a click wheel, a dial, a button, a switch, a keypad, audio input devices with voice command recognition systems, microphones, and other types of input devices. User interface input devices may include, for example, motion sensing and/or gesture recognition devices such as the Microsoft Kinect® motion sensor that enables users to control and interact with an input device, such as the Microsoft Xbox® 360 game controller, through a natural user interface using gestures and spoken commands. 
User interface input devices may also include eye gesture recognition devices such as the Google Glass® blink detector that detects eye activity (e.g., ‘blinking’ while taking pictures and/or making a menu selection) from users and transforms the eye gestures as input into an input device (e.g., Google Glass®). Additionally, user interface input devices may include voice recognition sensing devices that enable users to interact with voice recognition systems (e.g., Siri® navigator), through voice commands.

User interface input devices may also include, without limitation, three dimensional (3D) mice, joysticks or pointing sticks, gamepads and graphic tablets, and audio/visual devices such as speakers, digital cameras, digital camcorders, portable media players, webcams, image scanners, fingerprint scanners, barcode readers, 3D scanners, 3D printers, laser rangefinders, and eye gaze tracking devices. Additionally, user interface input devices may include, for example, medical imaging input devices such as computed tomography, magnetic resonance imaging, positron emission tomography, and medical ultrasonography devices. User interface input devices may also include, for example, audio input devices such as MIDI keyboards, digital musical instruments and the like.

User interface output devices may include a display subsystem, indicator lights, or non-visual displays such as audio output devices, etc. The display subsystem may be a cathode ray tube (CRT), a flat-panel device, such as that using a liquid crystal display (LCD) or plasma display, a projection device, a touch screen, and the like. In general, use of the term “output device” is intended to include all possible types of devices and mechanisms for outputting information from computer system1000to a user or other computer. For example, user interface output devices may include, without limitation, a variety of display devices that visually convey text, graphics and audio/video information such as monitors, printers, speakers, headphones, automotive navigation systems, plotters, voice output devices, and modems.

Computer system1000may comprise a storage subsystem1018that comprises software elements, shown as being currently located within a system memory1010. System memory1010may store program instructions that are loadable and executable on processing unit1004, as well as data generated during the execution of these programs. Depending on the configuration and type of computer system1000, system memory1010may be volatile (such as random access memory (RAM)) and/or non-volatile (such as read-only memory (ROM), flash memory, etc.). The RAM typically contains data and/or program modules that are immediately accessible to and/or presently being operated and executed by processing unit1004. In some implementations, system memory1010may include multiple different types of memory, such as static random access memory (SRAM) or dynamic random access memory (DRAM). In some implementations, a basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within computer system1000, such as during start-up, may typically be stored in the ROM. By way of example, and not limitation, system memory1010also illustrates application programs1012, which may include client applications, Web browsers, mid-tier applications, relational database management systems (RDBMS), etc., program data1014, and an operating system1016.
By way of example, operating system1016may include various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems, a variety of commercially-available UNIX® or UNIX-like operating systems (including without limitation the variety of GNU/Linux operating systems, the Google Chrome® OS, and the like) and/or mobile operating systems such as iOS, Windows® Phone, Android® OS, BlackBerry® 10 OS, and Palm® OS operating systems.

Storage subsystem1018may also provide a tangible computer-readable storage medium for storing the basic programming and data constructs that provide the functionality of some embodiments. Software (programs, code modules, instructions) that when executed by a processor provide the functionality described above may be stored in storage subsystem1018. These software modules or instructions may be executed by processing unit1004. Storage subsystem1018may also provide a repository for storing data used in accordance with the present disclosure.

Storage subsystem1018may also include a computer-readable storage media reader1020that can further be connected to computer-readable storage media1022. Together and, optionally, in combination with system memory1010, computer-readable storage media1022may comprehensively represent remote, local, fixed, and/or removable storage devices plus storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information.

Computer-readable storage media1022containing code, or portions of code, can also include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information. This can include tangible computer-readable storage media such as RAM, ROM, electronically erasable programmable ROM (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disk (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible computer readable media. This can also include nontangible computer-readable media, such as data signals, data transmissions, or any other medium which can be used to transmit the desired information and which can be accessed by computing system1000.

By way of example, computer-readable storage media1022may include a hard disk drive that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk, and an optical disk drive that reads from or writes to a removable, nonvolatile optical disk such as a CD ROM, DVD, and Blu-Ray® disk, or other optical media. Computer-readable storage media1022may include, but is not limited to, Zip® drives, flash memory cards, universal serial bus (USB) flash drives, secure digital (SD) cards, DVD disks, digital video tape, and the like. Computer-readable storage media1022may also include solid-state drives (SSD) based on non-volatile memory such as flash-memory based SSDs, enterprise flash drives, solid state ROM, and the like, SSDs based on volatile memory such as solid state RAM, dynamic RAM, static RAM, DRAM-based SSDs, magnetoresistive RAM (MRAM) SSDs, and hybrid SSDs that use a combination of DRAM and flash memory based SSDs.
The disk drives and their associated computer-readable media may provide non-volatile storage of computer-readable instructions, data structures, program modules, and other data for computer system1000.

Communications subsystem1024provides an interface to other computer systems and networks. Communications subsystem1024serves as an interface for receiving data from and transmitting data to other systems from computer system1000. For example, communications subsystem1024may enable computer system1000to connect to one or more devices via the Internet. In some embodiments, communications subsystem1024can include radio frequency (RF) transceiver components for accessing wireless voice and/or data networks (e.g., using cellular telephone technology, advanced data network technology, such as 3G, 4G or EDGE (enhanced data rates for global evolution), WiFi (IEEE 802.11 family standards), or other mobile communication technologies, or any combination thereof), global positioning system (GPS) receiver components, and/or other components. In some embodiments, communications subsystem1024can provide wired network connectivity (e.g., Ethernet) in addition to or instead of a wireless interface.

In some embodiments, communications subsystem1024may also receive input communication in the form of structured and/or unstructured data feeds1026, event streams1028, event updates1030, and the like on behalf of one or more users who may use computer system1000. By way of example, communications subsystem1024may be configured to receive data feeds1026in real-time from users of social networks and/or other communication services such as Twitter® feeds, Facebook® updates, web feeds such as Rich Site Summary (RSS) feeds, and/or real-time updates from one or more third party information sources. Additionally, communications subsystem1024may also be configured to receive data in the form of continuous data streams, which may include event streams1028of real-time events and/or event updates1030, that may be continuous or unbounded in nature with no explicit end. Examples of applications that generate continuous data may include, for example, sensor data applications, financial tickers, network performance measuring tools (e.g. network monitoring and traffic management applications), clickstream analysis tools, automobile traffic monitoring, and the like. Communications subsystem1024may also be configured to output the structured and/or unstructured data feeds1026, event streams1028, event updates1030, and the like to one or more databases that may be in communication with one or more streaming data source computers coupled to computer system1000.

Computer system1000can be one of various types, including a handheld portable device (e.g., an iPhone® cellular phone, an iPad® computing tablet, a PDA), a wearable device (e.g., a Google Glass® head mounted display), a PC, a workstation, a mainframe, a kiosk, a server rack, or any other data processing system. Due to the ever-changing nature of computers and networks, the description of computer system1000depicted in the figure is intended only as a specific example. Many other configurations having more or fewer components than the system depicted in the figure are possible. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, firmware, software (including applets), or a combination. Further, connection to other computing devices, such as network input/output devices, may be employed.
Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments. Although specific embodiments have been described, various modifications, alterations, alternative constructions, and equivalents are also encompassed within the scope of the disclosure.

Embodiments are not restricted to operation within certain specific data processing environments, but are free to operate within a plurality of data processing environments. Additionally, although embodiments have been described using a particular series of transactions and steps, it should be apparent to those skilled in the art that the scope of the present disclosure is not limited to the described series of transactions and steps. Various features and aspects of the above-described embodiments may be used individually or jointly.

Further, while embodiments have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also within the scope of the present disclosure. Embodiments may be implemented only in hardware, or only in software, or using combinations thereof. The various processes described herein can be implemented on the same processor or different processors in any combination. Accordingly, where components or modules are described as being configured to perform certain operations, such configuration can be accomplished, e.g., by designing electronic circuits to perform the operation, by programming programmable electronic circuits (such as microprocessors) to perform the operation, or any combination thereof. Processes can communicate using a variety of techniques, including but not limited to conventional techniques for inter-process communication, and different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times.

The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that additions, subtractions, deletions, and other modifications and changes may be made thereunto without departing from the broader spirit and scope as set forth in the claims. Thus, although specific disclosure embodiments have been described, these are not intended to be limiting. Various modifications and equivalents are within the scope of the following claims.

The use of the terms “a” and “an” and “the” and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. The term “connected” is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein.
All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments and does not pose a limitation on the scope of the disclosure unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the disclosure. Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is intended to be understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present. Preferred embodiments of this disclosure are described herein, including the best mode known for carrying out the disclosure. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. Those of ordinary skill should be able to employ such variations as appropriate and the disclosure may be practiced otherwise than as specifically described herein. Accordingly, this disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein. All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein. In the foregoing specification, aspects of the disclosure are described with reference to specific embodiments thereof, but those skilled in the art will recognize that the disclosure is not limited thereto. Various features and aspects of the above-described disclosure may be used individually or jointly. Further, embodiments can be utilized in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive.
Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements. The figures are not necessarily to scale, and the size of some parts may be exaggerated to more clearly illustrate the example shown. Moreover, the drawings provide examples and/or implementations consistent with the description; however, the description is not limited to the examples and/or implementations provided in the drawings.

DETAILED DESCRIPTION

In the present disclosure, use of the term “a,” “an,” or “the” is intended to include the plural forms as well, unless the context clearly indicates otherwise. Also, the term “includes,” “including,” “comprises,” “comprising,” “have,” or “having” when used in this disclosure specifies the presence of the stated elements, but does not preclude the presence or addition of other elements.

In some examples, data deduplication is accomplished by computing a fingerprint of an incoming data value that is to be stored (written) into a data storage system. A “data storage system” can include a storage device or a collection of storage devices. A data storage system can include a data storage array, a data storage appliance, and so forth. A data storage system may also include storage controller(s) that manage(s) access of the storage device(s). A “data value” can refer to any portion of data that can be separately identified in the data storage system. In some cases, a data value can refer to a chunk, a collection of chunks, or any other portion of data.

A “controller” can refer to a hardware processing circuit, which can include any or some combination of a microprocessor, a core of a multi-core microprocessor, a microcontroller, a programmable integrated circuit, a programmable gate array, a digital signal processor, or another hardware processing circuit. Alternatively, a “controller” can refer to a combination of a hardware processing circuit and machine-readable instructions (software and/or firmware) executable on the hardware processing circuit.

A “fingerprint” refers to a value derived by applying a function on the content of a data value (where the “content” can include the entirety or a subset of the content of the data value). An example of the function that can be applied includes a hash function that produces a hash value based on the incoming data value. Examples of hash functions include cryptographic hash functions such as the Secure Hash Algorithm 2 (SHA-2) hash functions, e.g., SHA-224, SHA-256, SHA-384, etc. In other examples, other types of hash functions or other types of fingerprint functions may be employed.

Fingerprints represent data values stored in the data storage system. Full fingerprints uniquely identify respective data values (the difference between full fingerprints and partial fingerprints is discussed further below). A fingerprint computed for an incoming data value can be compared to fingerprints stored in a fingerprint index, which is used for data deduplication. The fingerprint index maps fingerprints for data values to storage location indicators of the data values. A “storage location indicator” can refer to any information that provides an indication of a storage location of a data value in a persistent storage. The persistent storage can be implemented using one or multiple persistent (e.g., nonvolatile) storage device(s), such as disk-based storage device(s) (e.g., hard disk drive(s) (HDDs)), solid state device(s) (SSDs) such as flash storage device(s), or the like, or a combination thereof.
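For illustration only, the fingerprint computation described above might be sketched in Python as follows; the choice of SHA-256 (one of the SHA-2 functions named above) and the sample inputs are assumptions made for the example, not details fixed by this description.

```python
import hashlib

def full_fingerprint(data_value: bytes) -> bytes:
    # Apply a SHA-2 hash function (here SHA-256) to the content of a
    # data value; the digest serves as the full fingerprint.
    return hashlib.sha256(data_value).digest()

# Identical data values yield identical fingerprints, which is the
# property that fingerprint-based deduplication relies on.
assert full_fingerprint(b"chunk A") == full_fingerprint(b"chunk A")
assert full_fingerprint(b"chunk A") != full_fingerprint(b"chunk B")
```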
In some examples, the fingerprint index maps fingerprints to sequential block numbers (SBNs). An SBN is an example of a storage location indicator referred to above. An SBN is useable to determine where a data value is physically stored in a persistent storage. However, in some examples, the SBN does not actually identify the physical location, but rather, the SBN can be used to derive a physical address or other value that identifies a physical location. During a data deduplication operation performed for an incoming data value, a match between a fingerprint of the incoming data value with a fingerprint in the fingerprint index indicates that the incoming data value may be a duplicate of a data value already stored in the data storage system. If the incoming data value is a duplicate of an already stored data value, instead of storing the duplicative incoming data value, a reference count stored in the data storage system can be incremented to indicate the number of instances of the data value that have been received. As the data storage system fills up with data values, the size of the fingerprint index stored in the persistent storage increases. Keeping a large fingerprint index up to date can be costly in terms of resource usage. A data storage system can include a number of different types of storage for storing data, including a persistent storage, a non-volatile random access memory (NVRAM), and a volatile memory. A persistent storage can be implemented using relatively low cost storage device(s), such as disk-based storage device(s), solid-state storage device(s), and so forth. The persistent storage can have a relatively large storage capacity, but can have a relatively slow access speed. An NVRAM can be implemented using electrically erasable programmable read-only memory (EEPROM) device(s). In other examples, an NVRAM can be implemented using battery-backed dynamic random access memory (DRAM) device(s) or battery-backed static random access memory (SRAM) device(s). The NVRAM is randomly accessible (both readable and writeable) on a page or byte basis (in other words, a page or byte of the NVRAM is individually accessible in response to a request, without retrieving another page or byte of the NVRAM). A page or byte has a smaller size than a physical block used in another type of storage device, such as a solid-state storage device (e.g., a flash memory device). A solid-state storage device can be written to on a block basis; in other words, a write to the solid-state storage device would write an entire physical block, rather than to a portion less than the size of the physical block. Generally, an NVRAM can be relatively expensive (e.g., more expensive than a solid-state storage device or a disk-based storage device), and thus an NVRAM included in the data storage system may have a relatively small size. A volatile memory can be implemented using DRAM device(s), SRAM device(s), or any other type of memory where data stored in the memory is lost if power were removed from the memory. This is contrasted with the persistent storage or the NVRAM, which can maintain stored data even if power were removed from the persistent storage or the NVRAM. In some examples, to improve performance of a data storage system when performing data deduplication, updates for incoming data values (that are part of writes) are added to a B-tree index, which can be stored in the NVRAM. 
The B-tree index stored in the NVRAM is considered a cache index, since the B-tree index is stored in a memory having a faster access speed than the persistent storage. Such a cache index is referred to as a “B-tree cache index.” A B-tree cache index includes nodes arranged in a hierarchical manner. Leaf nodes of the B-tree cache index include update entries that map fingerprints to storage location indicators (e.g., SBNs). Intermediate nodes of the B-tree cache index are used to find a matching entry of the B-tree cache index based on a fingerprint. The B-tree cache index can quickly grow in size as the quantity of incoming data values (and thus corresponding update entries) increases. If a large portion of the NVRAM is consumed by the B-tree cache index, then the NVRAM may not be available for other services performed by the data storage system. Moreover, merging update entries for the B-tree cache index from the NVRAM into the B-tree cache index can be expensive in terms of consumption of processing resources. In addition, performing a read of the B-tree cache index involves performing a binary search of the hierarchical nodes of the B-tree cache index, which can also be expensive in terms of consumption of processing resources. In accordance with some implementations of the present disclosure, instead of implementing a cache index for a fingerprint index as a B-tree, the cache index can instead be implemented using a log structured hash table. In addition, in some examples, the cache index implemented as the log structured hash table can be stored in a persistent cache memory that is separate from the NVRAM. The persistent cache memory can be implemented using solid-state storage device(s), such as flash memory device(s), for example. A “persistent cache memory” is a cache memory that maintains data stored in the cache memory even when power is removed from the cache memory. Also, by storing updates to the cache index in a volatile memory instead of the NVRAM, expensive and limited storage space of the NVRAM can be made available to other processes of a system. Although reference is made to implementing the persistent cache memory with solid-state storage device(s) in some examples, it is noted that the persistent cache memory can be implemented using other types of memory device(s) in other examples. In alternative examples, the cache index implemented as the log structured hash table can be stored in the NVRAM instead of in a separate persistent cache memory. In such alternative examples, the persistent cache memory is the NVRAM. In the ensuing discussion, the cache index implemented as the log structured hash table is referred to as a cached fingerprint index. The fingerprint index stored in the persistent storage is referred to as a persistent fingerprint index. Note that the persistent fingerprint index may also be in the form of a log structured hash table. In accordance with some implementations of the present disclosure, in response to incoming data values, fingerprint index updates for the fingerprint index are created and merged, in a merge operation, to the persistent fingerprint index in the persistent storage. As part of the merge operation, the fingerprint index updates are also mirrored to leaf blocks in the cached fingerprint index in the persistent cache memory, and further, an indirect block is updated that contains references to the leaf blocks in the cached fingerprint index that are used to receive the fingerprint index updates. 
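As a rough sketch of the merge-and-mirror flow just outlined, the following Python fragment applies one batch of fingerprint index updates to a persistent index, mirrors the same batch to a cached index, and records a cached leaf-block location in an indirect block. The plain dicts and lists, and the cache_location argument, are illustrative stand-ins rather than the actual on-media structures.

```python
def merge_updates(updates, persistent_index, cached_index,
                  indirect_block, cache_location):
    # Merge the batch in ascending fingerprint order into the
    # persistent index, mirroring each entry into the cached index.
    for fp, sbn in sorted(updates):
        persistent_index[fp] = sbn
        cached_index[fp] = sbn
    # Record where the updated cached leaf block lives in the
    # persistent cache memory.
    indirect_block.append(cache_location)

indirect = []
merge_updates([(0x2A, 7), (0x11, 3)], {}, {}, indirect, cache_location=0)
```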
A fingerprint index includes different types of blocks, including leaf blocks and indirect blocks. A leaf block of a fingerprint index contains fingerprint index entries, where each fingerprint index entry maps a fingerprint for a data value to a storage location indicator (e.g., SBN) of the data value. The storage location indicator can be used to determine a storage location of the data value stored by the persistent storage. An indirect block does not contain any fingerprint index entries; instead, an indirect block contains location information (“references”) relating to locations of leaf blocks. In some examples, a log structured hash table stores entries of a fingerprint index (cached fingerprint index or persistent fingerprint index) in a series of leaf blocks, with each leaf block including a number of buckets. Each leaf block of the fingerprint index is a unit of the log structured hash table and is uniquely identifiable using a block identifier (discussed further below). Each bucket is a partition of a leaf block, and is uniquely identifiable using a bucket identifier (discussed further below). A leaf block can include a number of buckets. Each bucket can in turn store multiple fingerprint index entries. As the log structured hash table grows in size, additional blocks (leaf blocks and indirect blocks) are appended to the log structured hash table. The blocks of the log structured hash table are part of a log structured file system (LFS) according to some examples. In the log structured hash table, the fingerprint index entries of the fingerprint index are sorted in order of values of the fingerprints of the fingerprint index entries. Unlike in a B-tree, a log structured hash table is not arranged as a hierarchical tree structure, but rather includes a sequence of portions (buckets) containing the fingerprint index entries in sorted order (in ascending or descending order of fingerprint values, for example). In some examples, the buckets can be included in leaf blocks, which can in turn be included in segments. The leaf blocks and buckets can be variably sized. In some examples, searching a log structured hash table does not involve a binary search (as would be the case with a B-tree), which can allow the search of a fingerprint index to be more efficient than searching a B-tree. Rather, a log structured hash table is randomly accessible, in that a request to access the log structured hash table can retrieve an entry of the log structured hash table without having to perform a binary search through hierarchical nodes. Arranging a fingerprint index as a log structured hash table can reduce usage of memory, reduce network bandwidth consumption of a data storage system, reduce consumption of processing resources, and so forth, as compared to using a B-tree fingerprint index. FIG.1shows an example of a data storage system102that includes a volatile memory104, an NVRAM150, a persistent cache memory152, and a persistent storage112. Although a specific arrangement of components is shown inFIG.1, it is noted that in other examples, the data storage system102can include a different arrangement of components. The data storage system102also includes a storage controller103that includes various engines, including a deduplication engine118, a merge engine109, a recovery engine170, and a garbage collector engine180. Although specific engines are depicted in the example ofFIG.1, the storage controller103can include fewer or more engines in other examples. 
Each engine can refer to a portion of a hardware processing circuit of the storage controller103, or alternatively, can refer to machine-readable instructions (software and/or firmware stored on at least one machine-readable storage medium) executable by the hardware processing circuit of the storage controller103. Also, in other examples, some of the engines may be separate from the storage controller103. In accordance with some implementations of the present disclosure, a persistent fingerprint index110(in the form of a log structured hash table) is stored in the persistent storage112, and a cached fingerprint index154is stored in the persistent cache memory152. The cached fingerprint index154is also in the form of a log structured hash table, and includes a portion of the persistent fingerprint index110. As incoming data values114(of write requests) are received by the data storage system102, fingerprint index updates can be created for the incoming data values114. The write requests can be received from a requester device (or multiple requester devices) that is (are) coupled to the data storage system102over a network, such as a local area network (LAN), a wide area network (WAN), a storage area network (SAN), and so forth. A requester device can refer to a server computer, a desktop computer, a notebook computer, a tablet computer, a smartphone, or any other type of electronic device. After data deduplication applied by the deduplication engine118, data values of the write requests can be written to a data store156in the persistent storage112. A “fingerprint index update” can refer to update information for the fingerprint index for an incoming data value that is to be stored in the data storage system102. For example, a fingerprint index update for an incoming data value can include a fingerprint (e.g., a hash value) computed for the incoming data value, and a storage location indicator (e.g., an SBN) for the data incoming value. The ensuing discussion refers to SBNs used by the fingerprint indexes110,154. It is noted that techniques or mechanisms according to some examples can be used with other types of storage location indicators in the fingerprint indexes110,154. Each fingerprint index update can be temporarily stored in a buffer. The volatile memory104includes an active update buffer106and a synchronization buffer108(referred to as a “sync buffer”). Although just one active update buffer106and one sync buffer108are shown inFIG.1, in other examples, the data storage system102can include multiple active update buffers106and/or multiple sync buffers108. More generally, in other examples, the data storage system102can have a different arrangement. For example, instead of including both an active update buffer106and a sync buffer108, just one buffer can be used. The active update buffer106is used to receive fingerprint index updates140corresponding to incoming data values114. The sync buffer108also stores fingerprint index updates. Fingerprint index updates in the sync buffer108are merged by the merge engine109with the persistent fingerprint index110in the persistent storage112of the data storage system102. The roles of the active update buffer106and the sync buffer108can change over time. The active update buffer106is to receive fingerprint index updates140while a merge of the fingerprint index updates140in the sync buffer108to the persistent fingerprint index110can proceed. 
Once the merge of the fingerprint index updates140in the sync buffer108to the persistent fingerprint index110is completed, the roles of the active update buffer106and the sync buffer108can switch when the active update buffer106is filled with fingerprint index updates140past a threshold (e.g., the number of fingerprint index updates140in the active update buffer106exceeds a threshold number or a threshold percentage of the storage capacity of the update buffer106). The switching of the roles of the buffers106and108causes the buffer that was previously designated the active update buffer to become the sync buffer, and the buffer that was previously designated the sync buffer to become the active update buffer.
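The role switching described above amounts to a double-buffering scheme; a minimal sketch follows, assuming list-based buffers and an arbitrary fill threshold, neither of which is specified by this description.

```python
class UpdateBuffers:
    def __init__(self, threshold: int):
        self.active = []            # receives new fingerprint index updates
        self.sync = []              # drained by the merge operation
        self.threshold = threshold  # illustrative fill threshold

    def add(self, update, merge_done: bool):
        self.active.append(update)
        # Swap roles only after the previous merge has completed and the
        # active buffer has filled past the threshold.
        if merge_done and len(self.active) >= self.threshold:
            self.active, self.sync = self.sync, self.active
```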
With further reference toFIG.2, further details are depicted for the persistent fingerprint index110that is arranged as a log structured hash table according to some examples. The log structured hash table of the cached fingerprint index154can have a similar arrangement. In the example shown inFIG.2, the log structured hash table includes log segments202,204,206,208, and210. A log segment of the log structured hash table that contains the persistent fingerprint index110can have a fixed size in some examples, such as 32 megabytes (MB) or some other size. Each log segment can store a leaf block (or multiple leaf blocks) and/or an indirect block (or multiple indirect blocks). A leaf block (e.g., any of leaf blocks204-1in the log segment204) stores fingerprint index entries. A “fingerprint index entry” includes a fingerprint and a corresponding SBN. An indirect block (e.g.,204-2in the log segment204) contains references to leaf blocks. A “reference” to a leaf block can include an indication of a location (e.g., a physical address) in the persistent storage112where the leaf block is stored.

In some examples, a leaf block that stores buckets containing fingerprint index entries can have a nominal size of 16 kilobytes (KB). A leaf block can grow in size up to 32 KB (an example leaf block maximum size) to accommodate more fingerprint index entries if the leaf block becomes full. In some examples, an indirect block can be 5 megabytes (MB) in size or another size. Although specific size values have been specified for leaf and indirect blocks, it is noted that in other examples, leaf and indirect blocks can have other sizes. As leaf blocks are added to the persistent fingerprint index110and a corresponding number of block references are added to an indirect block, the indirect block can become full (i.e., filled with block references to leaf blocks such that the indirect block does not have sufficient space to receive additional block references); when this occurs, another indirect block can be created and added to a log segment.

As further shown inFIG.2, the log structured hash table that contains the persistent fingerprint index110grows in size by appending additional blocks in the direction indicated by arrow212(e.g., the additional blocks can be appended to an end of the log structured hash table). The log structured hash table further includes header information214that identifies locations of indirect blocks (if present) within each log segment. Leaf blocks and indirect blocks can mix within a same log segment. The location information (in the header information214) for each indirect block identifies a log segment (within which the indirect block is contained) and an offset within the log segment in which the indirect block is contained.

As further shown inFIG.2, the log segment204includes multiple leaf blocks204-1. For example, the multiple leaf blocks204-1include M leaf blocks, including block 0, block 1, block 2, . . . , block M−1, where M≥2. Each leaf block further includes P buckets, including bucket 0, bucket 1, . . . , bucket P−1, where P≥2. Each bucket includes a number of fingerprint index entries, where each fingerprint index entry associates a fingerprint with a storage location indicator. Each fingerprint index entry includes a key-value pair, where the key is the fingerprint and the value is the corresponding SBN. A leaf block also includes block header information216, which includes information indicating the size of the leaf block, information indicating a number of fingerprint index entries in each of the P buckets of the leaf block, and other information.

The merge engine109performs a merge operation to merge fingerprint update entries in the sync buffer108with the leaf blocks of the persistent fingerprint index110. Each fingerprint update entry in the sync buffer108can be in the form of a key-value pair (where the key is a fingerprint and the value is the corresponding SBN). The merge operation merges key-value pairs of the sync buffer108into the persistent fingerprint index110in sorted order (e.g., ascending order of fingerprints) of the key-value pairs in the sync buffer108. For example, during the merge operation, the merge engine109can pass through the sync buffer108in order from a key-value pair with the lowest key (fingerprint) value to the key-value pair with the highest key (fingerprint) value. As a result of merging key-value pairs from the sync buffer108in sorted order into the persistent fingerprint index110, the resultant key-value pairs in the persistent fingerprint index110are also sorted by key (fingerprint) value.

If a log segment of the persistent fingerprint index110already contains existing key-value pairs prior to the merge operation, then the existing key-value pairs can be retrieved from the log segment into a staging area (not shown) of the memory104, and the key-value pairs of the sync buffer108to be merged are also moved to the staging area. The existing key-value pairs of the log segment are merged with the key-value pairs of the sync buffer108in sorted order of key (fingerprint) values, and the sorted key-value pairs are then written from the staging area to the log segment of the persistent fingerprint index110.
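The sorted merge into a log segment can be illustrated with a short Python sketch; the (fingerprint, SBN) tuples stand in for the key-value pairs described above.

```python
import heapq

def merge_segment(existing, updates):
    # `existing` holds a segment's key-value pairs, already in ascending
    # fingerprint order; the sync-buffer updates are sorted first, and
    # heapq.merge streams both inputs so the result stays sorted.
    return list(heapq.merge(existing, sorted(updates)))

merged = merge_segment([(1, 10), (5, 11)], [(7, 13), (3, 12)])
# -> [(1, 10), (3, 12), (5, 11), (7, 13)]
```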
Data Deduplication

The deduplication engine118of the storage controller103performs data deduplication for the incoming data values114. To perform data deduplication, the deduplication engine118uses a fingerprint index and a location index116(stored in the persistent storage112). For each incoming data value114, the deduplication engine118first attempts to perform a lookup of the cached fingerprint index154. If the corresponding fingerprint for the incoming data value114is present in the cached fingerprint index154, then the entry of the cached fingerprint index154can be used for data deduplication. However, if the corresponding fingerprint for the incoming data value114is not present in the cached fingerprint index154, then the deduplication engine118accesses the corresponding entry from the persistent fingerprint index110to perform data deduplication. Note that although the location index116is shown as stored in the persistent storage112, in some cases, portions of the location index116may be retrieved into the memory104for faster lookup.

In examples where the persistent storage112includes a storage disk, the location index116is referred to as a “disk index.” The location index116may be in the form of a B-tree index, or can have a different format in other examples. As noted above, a fingerprint index (e.g., the cached fingerprint index154or the persistent fingerprint index110) maps fingerprints to SBNs. More specifically, the fingerprint index maps partial fingerprints to SBNs. Partial fingerprints are discussed further below.

In some examples, the location index116maps SBNs to corresponding physical locations, such as physical addresses (ADDR) of the persistent storage112. More specifically, each entry of multiple entries117(e.g., leaf nodes of a B-tree storing the location index116) maps an SBN to a respective physical location (e.g., physical address, ADDR) as well as to a full fingerprint, Full FP (explained further below). Thus, given a fingerprint of an incoming data value, if a lookup of the fingerprint index (cached fingerprint index154or persistent fingerprint index110) using the given fingerprint produces a match to an entry of the fingerprint index, then that match produces an SBN corresponding to the given fingerprint. The SBN is then used to look up the location index116, which maps the SBN to a corresponding identifier of a physical location (e.g., a physical address) of a data value.

A partial fingerprint stored by the fingerprint index110or154includes a portion (i.e., less than an entirety) of a full fingerprint computed by applying a fingerprint function on the content of a data value. For example, a partial fingerprint can include a partial hash value that includes a portion of a full hash value (such as a subset of the bits that make up the full hash value). The bits that make up the partial hash value can be the least significant bits of the bits that make up the full hash value. As shown inFIG.1, the persistent fingerprint index110includes multiple entries111, where each entry111maps a partial fingerprint (Partial FP) to a respective SBN. Similarly, the cached fingerprint index154includes multiple entries155, where each entry155maps a partial fingerprint (Partial FP) to a respective SBN. The entries111,155of the respective persistent/cached fingerprint indexes110,154are included in the buckets of the log structured hash tables discussed above.

In some examples, a lookup of the fingerprint index154or110is a lookup of a partial fingerprint computed based on an incoming data value114. In such examples, a match of the partial fingerprint in the fingerprint index is not conclusive regarding whether or not a duplicative data value is already stored in the data store156. Because a partial fingerprint is used by the fingerprint index154or110, potentially multiple different data values can produce the same partial fingerprint. In such examples, to confirm that the matching entry155or111of the fingerprint index154or110(that matches a partial fingerprint of a given incoming data value114) actually corresponds to a duplicate of the given incoming data value114, the SBN of the matching entry155or111of the fingerprint index154or110is used to retrieve the corresponding entry117of the location index116, where the retrieved entry117of the location index116maps the SBN to a physical location of the given incoming data value114and the full fingerprint of the given incoming data value114.
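Deriving a partial fingerprint from a full fingerprint, as described above, can be sketched as follows; the 32-bit width is an assumption for illustration, since this description does not fix a particular number of bits.

```python
PARTIAL_BITS = 32  # illustrative width, not specified by this description

def partial_fingerprint(full_fp: bytes) -> int:
    # Keep only the least significant bits of the full hash value.
    return int.from_bytes(full_fp, "big") & ((1 << PARTIAL_BITS) - 1)
```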
The deduplication engine118is able to determine, based on the full fingerprint from the location index116, whether or not the data storage system102actually contains a duplicate of the given incoming data value114. More specifically, the deduplication engine118compares the full fingerprint computed for the given incoming data value114to the full fingerprint retrieved from the location index116. In such examples, if the full fingerprints match, then the deduplication engine118can make a determination that a duplicate of the given incoming data value114is already stored (in the data store156) in the data storage system102. As a result, the deduplication engine118can decide to not write the given incoming data value114to the persistent storage112, but instead, can update a count of the number of instances of the data value (sharing the matching full fingerprint) that have been received.

On the other hand, if the full fingerprint computed for the given incoming data value114does not match the full fingerprint retrieved from the location index116, then that indicates that the data storage system102does not store a duplicate of the given incoming data value114. As a result, the given incoming data value114is written to the data store156of the persistent storage112. In addition, the deduplication engine118produces a fingerprint index update140for the given incoming data value. Note that a fingerprint index update140is not produced for an incoming data value that is duplicative of a data value already stored at the persistent storage112.

As shown inFIG.1, the deduplication engine118includes a data value hasher120. The data value hasher120can be implemented using a portion of the hardware processing circuit of the deduplication engine118, or alternatively, can include machine-readable instructions executable by a hardware processing circuit of the storage controller103. Although the data value hasher120is shown as being part of the deduplication engine118, it is noted that in other examples, the data value hasher120can be separate from the deduplication engine118. A fingerprint produced by the data value hasher120can include a hash value. In other examples, a different type of fingerprint generator can be used to generate another type of a fingerprint. The data value hasher120produces both a full fingerprint (e.g., a full hash value) that is to be stored in the location index116and a partial fingerprint (e.g., a partial hash value) that is to be stored in the fingerprint indexes154,110.
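The overall deduplication decision can be summarized in the following hedged sketch, in which a partial-fingerprint match is treated only as a hint and is confirmed against the full fingerprint retrieved from the location index; the dict layouts are illustrative assumptions.

```python
def deduplicate(partial_fp, full_fp, fingerprint_index, location_index,
                ref_counts):
    sbn = fingerprint_index.get(partial_fp)  # partial match is only a hint
    if sbn is not None:
        _addr, stored_full_fp = location_index[sbn]
        if stored_full_fp == full_fp:        # confirmed duplicate
            ref_counts[sbn] += 1             # count the instance, skip the write
            return sbn
    # Not a duplicate: the caller stores the value and emits a
    # fingerprint index update.
    return None
```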
Merge of Fingerprint Index Updates

FIG.3is a flow diagram of a merge process300performed by the storage controller103according to some examples. The storage controller103receives (at302) incoming data values114. Based on the incoming data values114, the storage controller103produces fingerprint index updates140(FIG.1) that are added (at304) to the active update buffer106in the memory104, according to some examples. Specifically, for the incoming data values114, the data value hasher120produces respective partial fingerprints (as well as respective full fingerprints), where the partial fingerprints are included in the fingerprint index updates140. Each fingerprint index update140further contains the corresponding SBN. In examples where both the active update buffer106and the sync buffer108are used, the active update buffer106can be switched to become the sync buffer108at some point, such as in response to the active update buffer106becoming full.

The merge engine109of the storage controller103merges (at306), in a merge operation, the fingerprint index entries in the sync buffer108into the persistent fingerprint index110. The merging of the fingerprint index updates into the persistent fingerprint index110can include adding fingerprint index updates to existing leaf block(s) of the persistent fingerprint index110and/or creating a new leaf block in the persistent fingerprint index110for adding fingerprint index entries. A new leaf block is created to add fingerprint index entries that have fingerprint values that do not correspond to any existing leaf block of the persistent fingerprint index110. Note that each existing leaf block of the persistent fingerprint index110is to store fingerprint index entries containing fingerprint values that span from a leading fingerprint value to an ending fingerprint value.

As part of the merge operation that merges the fingerprint index updates into the persistent fingerprint index110, the merge engine109mirrors (at308) the fingerprint index updates into the cached fingerprint index154in the persistent cache memory152, and further adds (at310), to an indirect block142(FIG.1), location information relating to a leaf block (a cached leaf block of the cached fingerprint index154) into which the fingerprint index updates are copied. The indirect block142is a “shadow” indirect block that refers to a location of a leaf block in the cached fingerprint index154in the persistent cache memory152. In accordance with some implementations of the present disclosure, the location information is added to the indirect block142instead of to a B-tree cache index. The indirect block142is cached in the memory104, and can be a cached version of an indirect block162in the persistent cache memory152.

Using techniques or mechanisms in which fingerprint index updates are mirrored to the cached fingerprint index154and locations of cached leaf blocks are added to the indirect block142in the volatile memory104, a B-tree cache index in the NVRAM150does not have to be used, such that the fingerprint index updates and corresponding location information do not have to be added to such a B-tree cache index. The mirroring of the fingerprint index updates into the cached fingerprint index154refers to copying the fingerprint index updates to existing leaf block(s) and/or creating a new leaf block to accommodate fingerprint index updates, which mirrors updates performed at the persistent fingerprint index110.

The location information relating to the cached leaf block is location information that indicates the location in the persistent cache memory152where the leaf block is stored. InFIG.1, the location information relating to a leaf block is in the form of a block reference144. A block reference144can include a physical address of a location in the persistent cache memory152where a leaf block of the cached fingerprint index154is stored. In some examples, when a fingerprint index update mirrored (at308) to the cached fingerprint index154causes a new leaf block to be created in the cached fingerprint index154(which mirrors the creation of a new leaf block in the persistent fingerprint index110), the location information (block reference144) of the new leaf block is added (at310) to the indirect block142. According to some implementations of the present disclosure, the updating of the persistent fingerprint index110and the cached fingerprint index154with the fingerprint index updates is a shared update.
The shared update allows for the cached fingerprint index154to be maintained in a consistent state with respect to the persistent fingerprint index110. Moreover, in some examples, the shared update ensures that fingerprint index updates that are merged with the persistent fingerprint index110and copied to the cached fingerprint index154are committed together. The merge operation that merges the fingerprint index updates can be considered a transaction that is either committed in its entirety or is aborted if the transaction cannot finish. Committing fingerprint index updates of a merge operation can refer to marking the updates as being successfully persistently stored into the persistent fingerprint index110and the cached fingerprint index154. In this manner, updates to both the persistent fingerprint index110and the cached fingerprint index154are committed together or not at all. By adding, to the indirect block142, location information relating to leaf block(s) of the cached fingerprint index154into which the fingerprint index updates are copied as part of the merge operation, the location information relating to the leaf block(s) does not have to be added as a separate entry in the NVRAM150(FIG.1). As a result, NVRAM space is not consumed for purposes of merging fingerprint index updates to the persistent fingerprint index110. Also, by mirroring the fingerprint index updates of the merge operation into the cached fingerprint index154, separate update and sync buffers do not have to be provided in the memory104for the cached fingerprint index154. Not including separate update and sync buffers for the cached fingerprint index154reduces the amount of the memory104consumed for the merge operation, and also the amount of processing involved in performing fingerprint index maintenance. Lookups for Data Deduplication FIG.4is a flow diagram of a lookup process400to perform a lookup of the cached fingerprint index154during a deduplication operation performed by the deduplication engine118(FIG.1). The deduplication operation is performed for an incoming data value114, for determining whether or not a duplicate of the incoming data value114is already stored in the persistent storage112and thus does not have to be stored again. The lookup process400determines (at402) whether a corresponding leaf block (of the cached fingerprint index154) for the incoming data value114is in the memory104. This determination is based on computing a block identifier (for a leaf block) based on the partial fingerprint computed from the incoming data value114. In some examples, the block identifier of a leaf block is computed according to Eq. 1:

Block Identifier = Bucket Identifier / Number of Buckets per Block. (Eq. 1)

The bucket identifier is computed according to Eq. 2:

Bucket Identifier = fp / S, (Eq. 2)

where fp represents the partial fingerprint value of the incoming data value114, and S represents a bucket span that is the distance between a first key (a first fingerprint value) and a last key (a last fingerprint value) in a bucket. The block identifier can be used as an index into the indirect block142(FIG.1), where the index points to an entry in the indirect block142. The entry indexed by the block identifier contains a block reference144(FIG.1) that indicates a location of the corresponding leaf block in the cached fingerprint index154. Although specific equations are used for computing the block identifier in some examples, it is noted that the block identifier can be computed in a different way in other examples.
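As a worked example of Eqs. 1 and 2 (assuming integer division and illustrative parameter values that are not specified above):

    fp = 123_456              # partial fingerprint value of the incoming data value
    S = 1_000                 # bucket span (first key to last key in a bucket)
    buckets_per_block = 16    # assumed number of buckets per leaf block

    bucket_id = fp // S                         # Eq. 2: 123_456 // 1_000 = 123
    block_id = bucket_id // buckets_per_block   # Eq. 1: 123 // 16 = 7
    # block_id then serves as the index into the indirect block142, whose entry
    # holds the block reference144 for the corresponding cached leaf block.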
The determination of whether the leaf block corresponding to the incoming data value114is in the memory104can be based on whether the memory104contains the leaf block having the block identifier computed from the partial fingerprint of the incoming data value114. If the corresponding leaf block is stored in the memory104(this leaf block is referred to as an “in-memory leaf block”), then the lookup process400can retrieve (at404) a respective entry for the incoming data value114from the in-memory leaf block, for purposes of performing data deduplication. The retrieved entry contains a mapping between the partial fingerprint of the incoming data value114and a corresponding SBN, which can be used to perform a lookup of the location index116(FIG.1) to retrieve the corresponding full fingerprint used for data deduplication as explained further above. If the corresponding leaf block is not in the memory104, then the lookup process400accesses (at406) the indirect block142in the memory104to find the location of the corresponding leaf block. The block identifier computed from the partial fingerprint of the incoming data value114is used as an index into the indirect block142, to retrieve a respective block reference144, as discussed above. The retrieved block reference144is used to locate the corresponding leaf block contained in the cached fingerprint index154stored in the persistent cache memory152. The lookup process400copies (at408) the corresponding leaf block located using the retrieved block reference144into the memory104. The lookup process400then retrieves (at404) the respective entry for the incoming data value114from the in-memory leaf block, for purposes of performing data deduplication. The access of the cached fingerprint index154based on use of the indirect block142in the memory104as discussed above can be performed more quickly than would be the case in accessing a B-tree cache index, which would involve a binary search of the B-tree cache index to find the corresponding leaf block. In examples where there are multiple indirect blocks, some of the indirect blocks162(FIG.1) may be stored in the persistent cache memory152, and may not be stored in the memory104. For example, the memory104may be used to cache some number of indirect blocks (where the number can be 1 or greater than 1). As the indirect block142in the memory104becomes full (i.e., the indirect block142is filled with block references144so the indirect block142can no longer accept more block references), the indirect block142is written to the persistent cache memory152, and a new indirect block is created and stored in the memory104. In cases where the corresponding leaf block is not in the memory104, and the indirect block142is also not in the memory104, the lookup process400can access indirect block location information160stored in the NVRAM150, for example. The indirect block location information160specifies the locations of respective indirect blocks162that are stored in the persistent cache memory152. The indirect block location information160can map ranges of block identifiers to respective indirect blocks162. The block identifier generated based on the partial fingerprint for an incoming data value114can be used to determine which of the indirect blocks162is relevant for the incoming data value114, and the relevant indirect block162can be retrieved from the persistent cache memory152into the memory104.
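The full lookup path of FIG.4, including the NVRAM fallback for locating an indirect block, can be sketched as follows; all objects and method names here are hypothetical stand-ins for the structures described above:

    S = 1_000                 # assumed bucket span
    BUCKETS_PER_BLOCK = 16    # assumed buckets per leaf block

    def lookup_sbn(partial_fp, in_memory_leafs, indirect_blocks, cache_memory, nvram):
        block_id = (partial_fp // S) // BUCKETS_PER_BLOCK
        leaf = in_memory_leafs.get(block_id)
        if leaf is None:
            # Find the indirect block covering this block identifier; if it is not
            # in memory, use the indirect block location information in NVRAM to
            # retrieve it from the persistent cache memory.
            indirect = indirect_blocks.get_for(block_id)
            if indirect is None:
                indirect = cache_memory.read_indirect(nvram.locate_indirect(block_id))
            # Use the block reference to copy the cached leaf block into memory.
            leaf = cache_memory.read_leaf(indirect[block_id])
            in_memory_leafs[block_id] = leaf
        return leaf.get(partial_fp)  # maps the partial fingerprint to an SBN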
Recovery from Crash During Merge During a merge operation in which fingerprint index updates are merged to the persistent fingerprint index110(and also mirrored to the cached fingerprint index154), a crash may occur that can result in the merge operation not completing. A crash may occur during the merge operation as a result of a hardware error, an error during execution of machine-readable instructions, a communication error, and so forth. As noted above, as part of the merge operation, updates are made to the indirect block142in the memory104. The updates to the indirect block142may not be persistently stored in the data storage system102. As a result, the crash may cause the content of the indirect block142in the memory104to be lost. If the indirect block142in the memory104is lost, then the storage controller103is unable to determine which leaf blocks of the cached fingerprint index were updated as a result of a merge operation (but not yet committed). Prior to the crash, fingerprint index updates merged to the persistent fingerprint index110and mirrored to the cached fingerprint index154may not have been committed. To perform recovery from the crash, information stored in the NVRAM150can be used to determine which section(s) of the cached fingerprint index154has (have) not yet been committed. The information stored in the NVRAM150that is used to recover from a crash during a merge operation includes section information164. The section information164contains section identifiers166that identify respective sections of the cached fingerprint index154. A “section” of the cached fingerprint index154can be larger in size than a leaf block—for example, a section may include multiple leaf blocks. In some examples, a section can be the size of a log segment as shown inFIG.2(or may be larger than a log segment). In some examples, commitment of fingerprint index updates is performed at the granularity of a section (i.e., the entire section is committed or not at all). The section information164also includes status indicators168associated with the respective section identifiers166. The status indicator168associated with a corresponding section identifier indicates whether or not the corresponding section of the cached fingerprint index154has been committed. For example, if the status indicator168has a first value (e.g., “1” or a different value), then that indicates that the corresponding section of the cached fingerprint index154has been committed. If the status indicator168has a different second value (e.g., “0” or another value), then that indicates that the corresponding section of the cached fingerprint index154has not been committed. The size of the section information164containing the section identifiers and corresponding status indicators is relatively small as compared to the sections of the cached fingerprint index154. Thus, storing the section information164in the NVRAM150does not consume a lot of storage space of the NVRAM150. FIG.5is a flow diagram of a recovery process500that can be performed by the recovery engine170of the storage controller103. The recovery process500is initiated by the recovery engine170in response to a crash. The crash may be indicated by a crash indicator stored in the NVRAM150, or the crash indicator can be provided by another entity of the data storage system102, such as an operating system (OS), a basic input/output system (BIOS), and so forth.
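The section information164can be pictured as a small mapping from section identifiers166to status indicators168; the following sketch (with assumed field values) shows how the uncommitted section(s) would be identified:

    # Section information kept in NVRAM: section identifier -> status indicator,
    # where 1 indicates the section has been committed and 0 indicates it has not.
    section_info = {0: 1, 1: 1, 2: 0, 3: 0}

    def uncommitted_sections(section_info):
        # Sections whose fingerprint index updates were merged and mirrored but
        # not yet committed; these drive the recovery described below.
        return [sid for sid, status in section_info.items() if status == 0]

    print(uncommitted_sections(section_info))  # [2, 3]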
The recovery process500accesses (at502) the section information164to identify section(s) of the cached fingerprint index154that has (have) not been committed. The corresponding section(s) of the persistent fingerprint index110would also not have been committed, since as discussed further above, commitment of fingerprint index updates involves commitment of both updates to the persistent fingerprint index110as well as the mirrored updates to the cached fingerprint index154. For the identified section(s) of the cached fingerprint index154that has (have) not been committed, the recovery process500commits (at504) the identified section(s) to the persistent fingerprint index110and the cached fingerprint index154. Committing the identified section(s) to the persistent fingerprint index110and the cached fingerprint index154refers to writing the fingerprint index entries of the identified section(s) to the persistent fingerprint index110and the cached fingerprint index154and marking such written fingerprint index entries as persistently stored. A crash may cause loss of fingerprint index updates in the active update buffer106and the sync buffer108. In some examples, fingerprint index updates in the active update buffer106and the sync buffer108are not recovered. Since the fingerprint index updates in the active update buffer106and the sync buffer108are used for purposes of data deduplication, not recovering the fingerprint index updates in the active update buffer106and the sync buffer108may result in the data deduplication not identifying all the duplicate instances of data values, such that some duplicate data values may be stored into the persistent storage112. However, less than ideal data deduplication may be acceptable in some scenarios, and may not have any noticeable impact on the overall performance of the data storage system102. Garbage Collection Garbage collection can be performed to remove older sections of the cached fingerprint index154from the persistent cache memory152to make room for additional fingerprint index updates to be added to the cached fingerprint index154. The garbage collector engine180of the storage controller103can perform garbage collection. The garbage collector engine180can track the frequency of use of respective leaf blocks of the cached fingerprint index154. The frequency of use of each leaf block of the cached fingerprint index154can be indicated in block usage information172, which can be stored in the memory104, for example. The block usage information172includes leaf block identifiers174and respective use indicators176. A use indicator176can be set to different values to indicate respective different frequencies of use of the corresponding leaf block of the cached fingerprint index154. For example, a first value of the use indicator may indicate that the corresponding leaf block has a first frequency of use, a second value of the use indicator may indicate that the corresponding leaf block has a second frequency of use that is greater than the first frequency of use, a third value of the use indicator may indicate that the corresponding leaf block has a third frequency of use that is greater than the second frequency of use, and so forth. As leaf blocks of the cached fingerprint index154are accessed, the storage controller103can update the respective use indicators176in the block usage information172. FIG.6is a flow diagram of a garbage collection process600according to some examples.
The garbage collection process600may be performed by the garbage collector engine180, for example. The garbage collection process600may be triggered in response to the cached fingerprint index154having grown to a size that exceeds a specified threshold, such as a threshold percentage of the persistent cache memory152. In response to the triggering event, the garbage collection process600can decide which leaf block(s) of the cached fingerprint index154to remove from the persistent cache memory152. The garbage collection process600accesses (at602) the block usage information172to decide which leaf blocks are less frequently used than other leaf blocks. Based on the block usage information172, the garbage collection process600identifies (at604) the leaf block(s) that is (are) less frequently used, for removal. This identification is based on comparing values of the use indicators of the respective leaf blocks of the cached fingerprint index154. The number of leaf block(s) identified can be dependent upon how much space in the persistent cache memory152is to be freed up. The identified leaf block(s) can be part of a given section of the cached fingerprint index154. The garbage collection process600can move (at606) the remaining leaf block(s) of the given section to a new section of the cached fingerprint index154. The remaining leaf block(s) of the given section refers to the leaf block(s) other than the identified leaf block(s) that is (are) less frequently used. The garbage collection process600marks (at608) the given section for removal, such that the given section can be removed from the persistent cache memory152at the next opportunity (such as during an idle time period of the data storage system102). Further Examples FIG.7is a block diagram of a non-transitory machine-readable or computer-readable storage medium700storing machine-readable instructions that upon execution cause a system (e.g., the data storage system102or another computing system having a computer or multiple computers) to perform various tasks. The machine-readable instructions include data deduplication instructions702to perform data deduplication using a deduplication fingerprint index in a hash data structure including a plurality of blocks, where a block of the plurality of blocks includes fingerprints computed based on content of respective data values. As used here, a “hash data structure” can refer to a data structure that stores keys (e.g., the partial fingerprints) in a sorted order (e.g., ascending order, descending order, etc.). An example of the hash data structure is a log based hash table. The machine-readable instructions include merge instructions704to merge, in a merge operation, updates for the deduplication fingerprint index to the hash data structure stored in a persistent storage. The machine-readable instructions include mirroring and indirect block update instructions706to, as part of the merge operation, mirror the updates to a cached copy of the hash data structure in a cache memory, and update, in an indirect block, information regarding locations of blocks in the cached copy of the hash data structure. In some examples, the cache memory is a persistent cache memory (e.g.,152inFIG.1). In some examples, the indirect block is stored in a volatile memory (e.g.,104inFIG.1).
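Returning to the garbage collection flow of FIG.6, the selection step can be sketched as follows, assuming the block usage information is a mapping from leaf block identifier174to a numeric use indicator176:

    def select_blocks_for_removal(block_usage, num_to_remove):
        # Sort leaf blocks by their use indicator (lower value = less frequently
        # used) and pick the least-used blocks to free persistent cache memory.
        by_use = sorted(block_usage, key=block_usage.get)
        return by_use[:num_to_remove]

    # Example: leaf blocks 7 and 3 have the lowest use indicators and are
    # identified for removal; the remaining blocks of the section would be moved
    # to a new section before the section is marked for removal.
    usage = {3: 2, 7: 1, 9: 5, 12: 4}
    print(select_blocks_for_removal(usage, 2))  # [7, 3]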
In some examples, the machine-readable instructions are to further, as part of the data deduplication, retrieve, from the cached copy of the hash data structure, a block of the plurality of blocks using the location information in the indirect block, obtain a fingerprint for an incoming data value using the retrieved block, and use the obtained fingerprint to perform data deduplication of the incoming data value. In some examples, the machine-readable instructions are to perform data deduplication for an incoming data value by determining whether an indirect block is stored in a volatile memory, and in response to determining that the indirect block is not stored in the volatile memory, copying the indirect block from the cache memory into the volatile memory. In some examples, the machine-readable instructions are to, responsive to a crash during the merge operation, determine which section of the cached copy of the hash data structure has not been committed, and perform a recovery of the section. In some examples, the machine-readable instructions are to identify a first subset of blocks of a plurality of blocks in the cached copy of the hash data structure that is used less than a second subset of blocks of the plurality of blocks in the cached copy of the hash data structure, and perform garbage collection on the first subset of blocks. FIG.8is a block diagram of a data storage system800according to some examples. The data storage system800includes a persistent storage802to store a persistent fingerprint index804having a hash data structure, and a persistent cache memory806to store a cached fingerprint index808having a hash data structure. The data storage system800includes a storage controller810to perform various tasks. The tasks include a fingerprint computation task812to compute fingerprints based on incoming data values for storing in the data storage system800. The tasks further include a fingerprint index update production task814to produce fingerprint index updates based on the computed fingerprints. The tasks further include a merge task816to perform a merge operation to merge the fingerprint index updates to the persistent fingerprint index. The tasks further include a fingerprint index update mirroring and indirect block update task818to, as part of the merge operation, mirror the fingerprint index updates to the cached fingerprint index, and add, to an indirect block, location information of leaf blocks of the cached fingerprint index to which the fingerprint index updates have been added, wherein the location information specifies locations of the leaf blocks in the persistent cache memory. FIG.9is a flow diagram of a process900according to some examples, which may be performed by a storage controller (e.g.,103inFIG.1). The process900includes computing (at902) fingerprints based on incoming data values for storing in a data storage system. The process900includes performing (at904) data deduplication for the incoming data values, and producing (at906) fingerprint index updates for a subset of the incoming data values, the subset including data values that are not duplicative of data values already stored in a persistent storage, and the fingerprint index updates computed based on the fingerprints of incoming data values in the subset. The process900includes performing (at908) a merge operation to merge the fingerprint index updates to a persistent fingerprint index stored in the persistent storage, the persistent fingerprint index having a hash data structure.
The process900includes, as part of the merge operation, mirroring (at910) the fingerprint index updates to a cached fingerprint index stored in a cache memory, and adding (at912), to an indirect block, location information of leaf blocks of the cached fingerprint index to which the fingerprint index updates have been added, wherein the location information specifies locations of the leaf blocks in the cache memory, and the cached fingerprint index has a hash data structure. A storage medium (e.g.,700inFIG.7) can include any or some combination of the following: a semiconductor memory device such as a dynamic or static random access memory (a DRAM or SRAM), an erasable and programmable read-only memory (EPROM), an electrically erasable and programmable read-only memory (EEPROM) and flash memory or other type of non-volatile memory device; a magnetic disk such as a fixed, floppy and removable disk; another magnetic medium including tape; an optical medium such as a compact disk (CD) or a digital video disk (DVD); or another type of storage device. Note that the instructions discussed above can be provided on one computer-readable or machine-readable storage medium, or alternatively, can be provided on multiple computer-readable or machine-readable storage media distributed in a large system having possibly plural nodes. Such computer-readable or machine-readable storage medium or media is (are) considered to be part of an article (or article of manufacture). An article or article of manufacture can refer to any manufactured single component or multiple components. The storage medium or media can be located either in the machine running the machine-readable instructions, or located at a remote site from which machine-readable instructions can be downloaded over a network for execution. In the foregoing description, numerous details are set forth to provide an understanding of the subject disclosed herein. However, implementations may be practiced without some of these details. Other implementations may include modifications and variations from the details discussed above. It is intended that the appended claims cover such modifications and variations.
Like reference symbols in the various drawings indicate like elements. DETAILED DESCRIPTION Implementations of the present disclosure are directed to using versioned tables for online import of content. More particularly, implementations of the present disclosure are directed to using versioned tables to provide multiple content versions for content of tables in a shared container during import of content to a multi-tenant database system. Implementations can include actions of setting a session variable of each of a plurality of tenants to a first timestamp, importing, after the first timestamp, a first set of content to a shared container within a database system, during importing, each tenant in the plurality of tenants accessing pre-import data stored in the shared container based on the session variable being set to the first timestamp, and after importing the first set of content to the shared container, un-setting, at a second timestamp, the session variable of each of the plurality of tenants from the first timestamp, after the second timestamp, each tenant in the plurality of tenants accessing post-import data stored in the shared container. FIG.1depicts an example architecture100in accordance with implementations of the present disclosure. In the depicted example, the example architecture100includes one or more client devices102, a server system104and a network106. The server system104includes one or more server devices108. In the depicted example, respective users110interact with the client devices102. In an example context, a user110can include a user who interacts with an application that is hosted by the server system104. In another example context, a user110can include a user who interacts with the server system104, as described in further detail herein. In some examples, the client devices102can communicate with one or more of the server devices108over the network106. In some examples, the client device102can include any appropriate type of computing device such as a desktop computer, a laptop computer, a handheld computer, a tablet computer, a personal digital assistant (PDA), a cellular telephone, a network appliance, a camera, a smart phone, an enhanced general packet radio service (EGPRS) mobile phone, a media player, a navigation device, an email device, a game console, or an appropriate combination of any two or more of these devices or other data processing devices. In some implementations, the network106can include a large computer network, such as a local area network (LAN), a wide area network (WAN), the Internet, a cellular network, a telephone network (e.g., PSTN) or an appropriate combination thereof connecting any number of communication devices, mobile computing devices, fixed computing devices and server systems. In some implementations, each server device108includes at least one server and at least one data store. In the example ofFIG.1, the server devices108are intended to represent various forms of servers including, but not limited to, a web server, an application server, a proxy server, a network server, and/or a server pool. In general, server systems accept requests for application services and provide such services to any number of client devices, e.g., the client devices102, over the network106. In some implementations, one or more data stores of the server system104store one or more databases. In some examples, a database can be provided as an in-memory database.
In some examples, an in-memory database is a database management system that uses main memory for data storage. In some examples, main memory includes random access memory (RAM) that communicates with one or more processors, e.g., central processing units (CPUs), over a memory bus. An in-memory database can be contrasted with database management systems that employ a disk storage mechanism. In some examples, in-memory databases are faster than disk storage databases, because internal optimization algorithms can be simpler and execute fewer CPU instructions, e.g., require reduced CPU consumption. In some examples, accessing data in an in-memory database eliminates seek time when querying the data, which provides faster and more predictable performance than disk-storage databases. Implementations of the present disclosure are described in further detail herein with reference to an example context. The example context includes business applications that are executed in a client-server architecture, such as the example architecture100ofFIG.1. In some examples, business applications can be provided in a business suite that includes two or more business applications. Example business applications can include an enterprise resource planning (ERP) application, a customer relationship management (CRM) application, a supply chain management (SCM) application, and a product lifecycle management (PLM) application. It is contemplated, however, that implementations of the present disclosure can be realized in any appropriate context (e.g., healthcare applications). Referring again toFIG.1, and in the example context, one or more applications can be hosted by the server system104. A user110can interact with an application using the client device102. More specifically, a session can be established between the client device102and one or more server devices108, during which session the user110is able to interact with one or more applications hosted on the server system104. The one or more applications can enable the user to interact with data stored in one or more databases. In some examples, interactions can result in data being stored to the database, deleted from the database, and/or edited within the database. As introduced above, a multi-tenancy architecture can include instances of a software application that run on one or more servers and that serve multiple tenants. A tenant is an entity (e.g., a customer of the software vendor) having multiple users that share a common access to a software instance. In a multi-tenant architecture, the software application can be designed to provide every tenant a dedicated share of an instance of the application. This can include tenant-specific data, configuration, user management, and tenant-specific functionality. More particularly, in a multi-tenancy architecture, resources can be shared between applications from different tenants. Shared resources can include, for example, vendor code, application documentation, and central runtime and configuration data. Multi-tenancy can enable improved use of shared resources between multiple application instances, across tenants, which can reduce disk storage and processing requirements, among other advantages. Multi-tenancy can enable centralized software change management for events such as patching or software upgrades.
For example, a database system that is accessed by the application instances includes a shared container that stores shared data, also referred to as shared content, and respective tenant containers, each tenant container storing tenant-specific data for a respective tenant. Example shared content can include, without limitation, report sources, dictionary metadata, help short-texts, and the like, which are the same for and accessed by all tenants. By storing shared content in a shared container, memory is conserved (e.g., as opposed to providing a copy of the shared content in each tenant container). During production use, content is sometimes deployed to the database system. For example, content can be shared content that is deployed to (imported to) a shared container (e.g., shared database). In some examples, content can also include content that is deployed to each tenant container (e.g., tenant database). However, previous approaches to deploying content cause disruptions to the tenants. For example, previous approaches require views in each tenant to be dropped and created or altered to facilitate tenant access to the new shared content. These actions are executed during the import of the remaining tenant parts through execution of data definition language (DDL) statements. However, execution of the DDL statements results in disruption of running database transactions in the tenant(s). To illustrate such issues, a non-limiting example can be considered, in which a first tenant (T1) and a second tenant (T2) each access a shared container. In this example, the shared container stores a table that is named TAB #1. Although a single table is depicted for simplified illustration, it is contemplated that a shared container can store hundreds, even thousands of shared tables. As described herein, each table can include a versioned table, reading from which can be based on a timestamp setting. Being shared, TAB #1 is made read-only using a view, named TAB in this example, in each of the respective tenant containers (e.g., a tenant container of T1, a tenant container of T2). Also in this example, each of T1 and T2 has access to a tenant-specific table, named TABA, that is stored locally in the respective tenant containers. That is, T1 has TABA stored in its tenant container and T2 has TABA stored in its tenant container. Being tenant-specific, T1 and T2 each have read and write access to TABA in the respective tenant containers. Hence, TABA of T1 can store content that is different from TABA of T2. For example, T1 and T2 can each extend their respective TABA to modify TABA as originally provisioned (e.g., add fields). In a previous approach to deploying content, an import process (e.g., executed by a deploy tool) includes creating a clone of TAB #1 in the shared container. For example, TAB #1 is copied and the copy is named TAB #2. The new shared content is deployed to TAB #2. For each tenant, the view TAB to TAB #1 is dropped, a view TAB to TAB #2 is created, and new content is deployed to TABA. However, during import (deployment), production use of each of the tenants continues, which can cause errors. For example, the drop/create of view TAB in a tenant container can cause a structured query language (SQL) error. In further detail, a first transaction TX1can be executing and can implicate reading of multiple tables and writing to some tables.
In this example, the first transaction TX1begins executing with auto-commit off within a tenant container, and can include:

TX1: SELECT * from TAB

Here, the first transaction TX1reads data from the view TAB, which can include:

SELECT * FROM SHARED.SHARED.TAB #1

In short, the view TAB reads from the shared table TAB #1 in the shared container. Because the first transaction TX1reads from the view TAB, the first transaction TX1, in effect, reads from the shared table TAB #1 in the shared container. The first transaction TX1is successful and results in reading a snapshot of the shared table. A second transaction TX2is executed as part of a process running an import of content to the tenant. The second transaction TX2can include, for example:

DROP VIEW TAB
CREATE VIEW TAB AS SELECT * FROM SHARED.SHARED.TAB #2

Accordingly, the view TAB is now set to read from TAB #2, which includes the cloned and imported content. However, the dependent views of the newly created view must be validated by the database system before use to ensure consistency (e.g., if any field selected by the dependent view is no longer available in the newly created view). As part of on-going production use, the first transaction TX1executes again, but prior to all dependent views being validated after the view TAB is re-created (i.e., the view TAB now reading from TAB #2). As a result, the first transaction TX1fails: within the transaction, the previous definition of TAB (reading from TAB #1) was used, and snapshots/caches in the tenant of TAB #1 content cannot be updated with further content because the definition of the view TAB has been dropped (i.e., the previous view TAB (to TAB #1) was dropped). Further, the content of TAB #2 is different from TAB #1. This all can result in a SQL error (e.g., that dependent views are invalid, or that data cannot be read from TAB #1). In view of the foregoing, and as introduced above, implementations of the present disclosure are directed to using versioned tables for online import of content. More particularly, implementations of the present disclosure are directed to using versioned tables to provide multiple content versions for content of tables in a shared container during import of content to a multi-tenant database system. In the context of the present disclosure, online import refers to import of new content (e.g., shared content, tenant-specific content) to a database system during production use of the database system. That is, the database system is not taken offline to execute the import. This can occur, for example, in instances of patches being deployed to the database system, which can be required to be deployed while the database system is online and in production use (e.g., emergency patch procedure). In accordance with implementations of the present disclosure, and as described in further detail herein, versioned tables are used to enable transition from old content to new content on a transaction-by-transaction basis, such that each transaction executes on consistent content. That is, in terms of the data that a transaction executes on, the transaction has a consistent start and a consistent finish. The switch to new content is done at a transaction end individually for every transaction. In this manner, implementations of the present disclosure avoid a hard switch to newly deployed content across all transactions, which can result in errors (e.g., SQL errors introduced above).
In further detail, each versioned table provides a respective content version for the content of the tables in the shared container. In some implementations, a versioned table can, upon a select, provide data that corresponds to a first timestamp (ts1) and, using a different configuration of the select or session, can upon a second select provide data that corresponds to a second timestamp (ts2). The first timestamp corresponds to shared content before the import begins, which can be referred to as the pre-import version of shared content. In this manner, tenant transactions are not disrupted by the import to the shared container. Also, the import to the tenant(s) happens after the import to the shared container, and reading content of two different versions can lead to unexpected behavior. The second timestamp corresponds to completion of the import (i.e., both to the shared container and the tenant containers). Accordingly, the second timestamp corresponds to shared content after the import is complete, which can be referred to as the post-import version of shared content. As described in further detail herein, each of the tenants is configured to read content associated with a session variable indicating a time. In some examples, the session variable is un-set, which indicates that transactions are to read the “latest” (i.e., most recent) data from the shared content. In accordance with implementations of the present disclosure, the session variable is set to ensure that transactions only access data of the pre-import shared content during execution of the import process to the shared container. That is, for example, the session variable is set to the first timestamp indicating that “latest” is the first timestamp (i.e., only data on or before the first timestamp is accessed). In this manner, transactions starting before the import begins, or starting while the import is ongoing, only access the pre-import version of shared content. At completion of the import, the session variable is un-set, which again indicates that transactions are to read the “latest” (i.e., most recent) data from the shared content. At this point, the “latest” data is that of the post-import version of shared content. FIGS.2A-2Cdepict an example progression of online import using versioned tables in accordance with implementations of the present disclosure. With particular reference toFIG.2A, an example system200is depicted and is a simplified, non-limiting representation of a multi-tenant system that includes application servers202,204, and a database system206. In the example ofFIG.2, the database system206includes a shared container208and tenant containers210,212. The tenant container210corresponds to a first tenant (Tenant1) and the tenant container212corresponds to a second tenant (Tenant2). Although a single shared container is depicted, it is contemplated that implementations of the present disclosure can be realized in database systems having multiple shared containers. Also, while two tenant containers are depicted, it is contemplated that implementations of the present disclosure can be realized in database systems having any appropriate number of tenant containers. In some examples, each of the application servers202,204executes a respective instance of a multi-tenant application for a respective tenant.
For example, the application server202executes an application instance for the first tenant and communicates with (e.g., issues queries to) the tenant container210, and the application server204executes an application instance for the second tenant and communicates with (e.g., issues queries to) the tenant container212. In some examples, queries issued by the application instances implicate shared content and/or tenant-specific content. With continued reference toFIG.2A, the shared container208stores shared content220. In the example ofFIG.2A, the shared content220is stored in a table having the name TAB #1, which is configured to be a versioned table, as discussed above. In this manner, the data of the shared content220can be read for any given timestamp. That is, for example, it can be specified to read the content version at a first timestamp before the import of new content or to read the content version at a second timestamp after the import of the delta. The timestamp indicates when the data was added to, or last modified in, the shared content220. The tenant container210includes a view222and tenant-specific content224, and the tenant container212includes a view226and tenant-specific content228. Each of the view222and the view226is a view to the shared content220. Each of the tenant-specific content224,228is stored in respective tables (e.g., with table name TABA). The example ofFIG.2Adepicts operation of the system200prior to import of content to the shared container208and/or the tenant containers210,212. In some examples, a session variable that indicates a time associated with shared content that is accessible is un-set, which results in sessions accessing the most recent (i.e., the latest) data in the shared content220. In some examples, a session is established between an application server (e.g., the application server202, the application server204) and a database system (e.g., the database system206) and, during the session, the application transmits queries to and receives results from the database system (e.g., as part of one or more transactions). Each session is associated with a session variable, as described herein. An example session variable is provided as:

TEMPORAL_SYSTEM_TIME_AS_OF

which indicates a time associated with shared content that is accessible to transactions. Prior to triggering an import, the session variable, for sessions across all tenants, is un-set, which, by default, indicates that the most recent (i.e., the latest) data in the shared content220is accessible. In some implementations, a management component (not depicted) can be used to manage the application. An example management component is Service Provider Cockpit (SPC) provided by SAP SE of Walldorf, Germany. In general, the management component enables lifecycle management of applications, which can include, for example, orchestrating deployments of new content. In some examples, the management component (e.g., in response to prompting by a user) determines a cluster that the content is to be deployed to. In some examples, a cluster includes one or more shared containers and a set of tenant containers that read from the shared container(s), which the content is deployed to. A current timestamp (<utc_ts1>) is determined (e.g., as the first timestamp (ts1)). The management component loops over all tenants in the cluster and, for each tenant, calls the tenant and passes the first timestamp to prompt the tenant to set the session variable to the first timestamp.
For example:

SET [SESSION] ‘TEMPORAL_SYSTEM_TIME_AS_OF’=‘<utc_ts1>’

In response, the tenant sets its session variable to the first timestamp. For example, the respective application server sets the session variable to the first timestamp. In this manner, any executing transactions (i.e., transactions in progress prior to setting the session variable) access the “latest” data in the shared content, which in any case is data that was included in the shared content at or prior to the first timestamp. Similarly, any transactions that begin after the first timestamp will only access data that was included in the shared content at or prior to the first timestamp. In some examples, each application server also invalidates a prepared statement cache, which ensures that any transactions that begin after the first timestamp will only access data that was included in the shared content at or prior to the first timestamp. In some examples, the prepared statement cache stores statements (e.g., SQL statements) of previously submitted transactions. In some implementations, the management component waits a pre-determined amount of time, after which all sessions in the cluster have set the session variable. The pre-determined amount of time can be a maximum-runtime for a session that is set within the system. Accordingly, upon expiration of the pre-determined amount of time, the session variable is determined to be set for all sessions in the cluster. In some implementations, each application server is configured to notify the management component as to setting of the session variable. Accordingly, upon reporting of all application servers in the cluster, the session variable is determined to be set for all sessions in the cluster. After the session variable is set for all sessions, all tenants read from the shared container with the first timestamp, and all subsequent changes to the shared container are not visible to the tenants until the session variable is again modified. After the session variable is set for all sessions, the management component calls a deployment tool to deploy the new content to the shared container. An example deployment tool includes R3trans provided by SAP SE. In some examples, the call includes or otherwise indicates the content that is to be imported (deployed) to the shared container, as well as the shared container. In response to the call, the deployment tool imports the content directly to the implicated table(s) within the shared container. That is, instead of cloning tables and adding the content to cloned tables, the content is directly added to the existing tables. In some examples, the deployment tool records which tables have received content as well as a timestamp of when the tables received the content. This deployment is depicted inFIG.2B, which includes a delta230and a deployment tool232. Here, the delta230represents a post-import version of the shared content220(i.e., content that is imported after <utc_ts1>). In response to the call from the management component, the deployment tool232imports the content directly to the shared content220(i.e., TAB #1 in the example ofFIG.2B), the imported content being represented by the delta230. At this state, the shared container208includes the delta (i.e., content change), but the tenant containers only read the pre-import version of the shared content (i.e., data associated with a timestamp that is equal to or earlier than <utc_ts1>).
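The effect of the session variable on tenant reads can be illustrated with a short sketch. It assumes a database supporting the session variable described above and is issued here from Python through a generic DB-API connection; the exact statement syntax varies by database, and building SQL with string formatting is for illustration only:

    def read_shared_content(conn, as_of_ts=None):
        cur = conn.cursor()
        if as_of_ts is not None:
            # While the import runs, the session is pinned to the first timestamp,
            # so selects see only the pre-import version of the shared content.
            cur.execute(f"SET SESSION 'TEMPORAL_SYSTEM_TIME_AS_OF' = '{as_of_ts}'")
        else:
            # After the import completes, the variable is un-set and selects see
            # the latest (post-import) version of the shared content.
            cur.execute("UNSET SESSION 'TEMPORAL_SYSTEM_TIME_AS_OF'")
        # The view TAB reads from the versioned shared table TAB #1.
        cur.execute("SELECT * FROM TAB")
        return cur.fetchall()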
In some implementations, the delta230(i.e., the post-import version of the shared content220) is made visible to the tenant containers210,212by un-setting the session variable for each of the tenants. For example, the session variable is un-set by deleting the first timestamp from the configuration. In some examples, the management component loops over all tenants in the cluster and, for each tenant, calls the tenant to prompt the tenant to un-set the session variable. For example:

UNSET [SESSION] ‘TEMPORAL_SYSTEM_TIME_AS_OF’

In response, the tenant unsets its session variable. For example, the respective application server unsets the session variable. In this manner, any executing transactions (i.e., transactions in progress prior to un-setting the session variable) access the “latest” data in the shared content, which in any case is data that was included in the shared content at or prior to a second timestamp (i.e., the time at which the session variable is un-set). Similarly, any transactions that begin after the second timestamp will only access data that was included in the shared content after the second timestamp (i.e., the post-import version of the shared content). In some examples, each application server also invalidates a prepared statement cache, which ensures that any transactions that begin after the second timestamp will only access data that was included in the shared content after the second timestamp. In some implementations, the session variable is un-set at the tenants prior to and/or during deployment of content to the tenants (e.g., to the tenant-specific tables). In this manner, deployment of content to tenants can be coordinated with respective tenants. That is, the deployment of content to tenants need not be synchronous. For example, each tenant can determine when content is to be deployed to their respective tenant container. This is depicted inFIG.2C, which includes the views222,226reading from the delta230and the deployment tool232importing content to the tenant container210and the tenant container212(e.g., at different times). FIG.3depicts an example timeline300of online import using versioned tables in accordance with implementations of the present disclosure. The example timeline300includes a first timestamp (ts1), a second timestamp (ts2), and transactions of multiple tenants (Tenant 1, Tenant 2, Tenant 3). In the example ofFIG.3, prior to executing deployment, the session variable of all tenants is set to the first timestamp (e.g., <utc_ts1>). That is, after the first timestamp, the session variable is set for any tenant that does not have an executing transaction (e.g., Tenant 2), and is set for tenants after executing transactions complete (e.g., Tenant 1, Tenant 3). As described in detail herein, after all tenants set the session variable, online import of content to the shared content is executed. This can last until the second timestamp, at which point, the session variable of all tenants is un-set, as described in detail herein. That is, after the second timestamp, the session variable is un-set for any tenant that does not have an executing transaction (e.g., Tenant 2), and is un-set for tenants after executing transactions complete (e.g., Tenant 1, Tenant 3). More particularly, any tenants having transactions that begin prior to the second timestamp and are still executing will have their session variable unset after the executing transactions complete.
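Putting these pieces together, the sequence orchestrated by the management component can be sketched end to end; the method names on the tenant and deployment tool objects are assumptions standing in for the calls described above:

    import time
    from datetime import datetime, timezone

    def online_import(tenants, deploy_tool, shared_content, tenant_content,
                      max_session_runtime):
        # 1. Pin every tenant's sessions to the current timestamp (first timestamp).
        ts1 = datetime.now(timezone.utc).isoformat()
        for tenant in tenants:
            tenant.set_session_variable("TEMPORAL_SYSTEM_TIME_AS_OF", ts1)

        # 2. Wait until all sessions have picked up the variable (e.g., one maximum
        #    session runtime), then import directly into the shared tables.
        time.sleep(max_session_runtime)
        deploy_tool.deploy_to_shared_container(shared_content)

        # 3. Un-set the variable so new transactions read the post-import version.
        for tenant in tenants:
            tenant.unset_session_variable("TEMPORAL_SYSTEM_TIME_AS_OF")

        # 4. Deploy tenant-specific content, coordinated per tenant.
        for tenant in tenants:
            deploy_tool.deploy_to_tenant_container(tenant, tenant_content)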
FIG.4depicts an example process400that can be executed in accordance with implementations of the present disclosure. The example process400can be executed by one or more computer-executable programs. A call is transmitted to application servers to set a session variable (402). For example, and as described herein, a management component (e.g., in response to prompting by a user) determines a cluster that content is to be deployed to, the cluster including specified application servers. The management component loops over all tenants in the cluster and, for each tenant, calls the tenant and passes a first timestamp to prompt the tenant to set the session variable to the first timestamp. In response, each tenant (application server of each tenant) sets its session variable to the first timestamp. It is determined whether the session variable of all application servers has been set (404). In some examples, and as described herein, the management component waits a pre-determined amount of time, after which all sessions in the cluster have set the session variable. The pre-determined amount of time can be a maximum-runtime for a session that is set within the system. Accordingly, upon expiration of the pre-determined amount of time, the session variable is determined to be set for all sessions in the cluster. In some examples, and as described in detail herein, each application server is configured to notify the management component as to setting of the session variable. Accordingly, upon reporting of all application servers in the cluster, the session variable is determined to be set for all sessions in the cluster. After the session variable is set for all sessions, all tenants read from the shared container with the first timestamp, and all subsequent changes to the shared container are not visible to the tenants until the session variable is again modified. A first set of content is imported to a shared container (406). For example, and as described in detail herein, after the session variable is set for all sessions, the management component calls a deployment tool to deploy the new content to the shared container. In response to the call, the deployment tool imports the content directly to the implicated table(s) within the shared container. That is, instead of cloning tables and adding the content to cloned tables, the content is directly added to the existing tables. In some examples, the deployment tool records which tables have received content as well as a timestamp of when the tables received the content. It is determined whether the import is complete (408). For example, the deployment tool can provide an indication to the management component, the indication indicating that the import is complete. Upon completion of the import, a call is transmitted to the application servers to unset the session variable (410). For example, the session variable is un-set by deleting the first timestamp from the configuration. In some examples, the management component loops over all tenants in the cluster and, for each tenant, calls the tenant to prompt the tenant to un-set the session variable. A second set of content is imported to tenant containers (412). For example, tenant-specific content is imported to one or more tenant containers. Referring now toFIG.5, a schematic diagram of an example computing system500is provided. The system500can be used for the operations described in association with the implementations described herein.
For example, the system500may be included in any or all of the server components discussed herein. The system500includes a processor510, a memory520, a storage device530, and an input/output device540. The components510,520,530,540are interconnected using a system bus550. The processor510is capable of processing instructions for execution within the system500. In one implementation, the processor510is a single-threaded processor. In another implementation, the processor510is a multi-threaded processor. The processor510is capable of processing instructions stored in the memory520or on the storage device530to display graphical information for a user interface on the input/output device540. The memory520stores information within the system500. In one implementation, the memory520is a computer-readable medium. In one implementation, the memory520is a volatile memory unit. In another implementation, the memory520is a non-volatile memory unit. The storage device530is capable of providing mass storage for the system500. In one implementation, the storage device530is a computer-readable medium. In various implementations, the storage device530may be a floppy disk device, a hard disk device, an optical disk device, or a tape device. The input/output device540provides input/output operations for the system500. In some implementations, the input/output device540includes a keyboard and/or pointing device. In some implementations, the input/output device540includes a display unit for displaying graphical user interfaces. The features described can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The apparatus can be implemented in a computer program product tangibly embodied in an information carrier (e.g., in a machine-readable storage device, for execution by a programmable processor), and method steps can be performed by a programmable processor executing a program of instructions to perform functions of the described implementations by operating on input data and generating output. The described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors of any kind of computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. Elements of a computer can include a processor for executing instructions and one or more memories for storing instructions and data. 
Generally, a computer can also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits). To provide for interaction with a user, the features can be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer. The features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, for example, a LAN, a WAN, and the computers and networks forming the Internet. The computer system can include clients and servers. A client and server are generally remote from each other and typically interact through a network, such as the described one. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. A number of implementations of the present disclosure have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the present disclosure. Accordingly, other implementations are within the scope of the following claims.
11860842
DETAILED DESCRIPTION Reference will now be made in detail to specific example embodiments for carrying out the inventive subject matter. Embodiments may be practiced without some or all of these details. It will be understood that the foregoing disclosure is not intended to limit the scope of the claims to the described embodiments. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the scope of the disclosure as defined by the appended claims. In addition, well known features may not have been described in detail to avoid unnecessarily obscuring the subject matter. Example embodiments involve a system and methods for identifying valuable view item pages (VIPs) to improve (e.g., increase) traffic from a search engine (e.g., GOOGLE™, BING™) to a site linked to the VIP (e.g., a linked site, EBAY™). The system and methods provide an improvement over existing systems, which do nothing to identify or select valuable VIPs for use in driving traffic from display sites. The system and methods described herein improve upon such systems by predicting the probability of future traffic for a given product based on a number of product level factors as input variables, and identifying a selection of VIPs corresponding to the products with the highest probability of future traffic, in order to maximize the natural search traffic driven to a linked site of the corresponding VIP (e.g., EBAY). The probability of future traffic for a given item is determined by building a machine learned (ML) model. An ML model makes data-driven predictions and decisions based on a set of inputs. In some example embodiments, the ML model is generated through gradient boosted machine (GBM) learning techniques. GBM is a machine learning technique for regression and classification, which produces a prediction model in the form of an ensemble of weak prediction models (e.g., a decision tree). The ML model is trained and tested by providing sample data as input data and a target variable (e.g., a natural search traffic value). The ML model may then make correlations between the sample data and the target variable output. The ML model calculates a probability of future traffic for a given product (or set of products) based on input variables that include item level factors and previous search engine optimization (SEO) performance metrics for a VIP that corresponds to a product or set of products. Previous SEO performance metrics include, for example, natural search traffic, view count, listing type (e.g., good till canceled, auction format, Dutch auction, buy it now, etc.), rank weighted impressions count, meta category, watch count, predicted quality, median price of similar products, price, number of listings by the seller of the product, time on site, seller feedback score, bounce count (e.g., number of sessions in which a user left a site from the entrance page without interacting with the page), previous bounce count, quantity of the product, condition, and quantity sold. In some embodiments, the natural search traffic through the VIPs is used as a target variable by the ML model to calculate the probability of future traffic based on a GBM technique. The natural search traffic is the frequency with which the product is returned as a natural search result, as opposed to being presented as an advertisement. Based on the natural search traffic calculated by the ML model, the system is configured to make an index or no-index decision for a product (or set of products).
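As a concrete illustration of how such a model could be built, the following is a minimal sketch assuming scikit-learn's GradientBoostingClassifier as the GBM implementation. The feature columns mirror a few of the item level factors listed above, but the column names, data source, and train/test split are illustrative assumptions, not the patented implementation.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Illustrative item level factors drawn from the list above; the real
# feature set and its source are implementation details not shown here.
FEATURES = ["view_count", "price", "watch_count", "seller_feedback_score",
            "bounce_count", "quantity_sold", "time_on_site", "num_listings"]

def train_traffic_model(items: pd.DataFrame) -> GradientBoostingClassifier:
    """Train a GBM to predict whether a product's VIP will draw natural search traffic."""
    X = items[FEATURES]
    # Target variable: did the product draw any natural search traffic?
    y = (items["natural_search_traffic"] > 0).astype(int)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    model = GradientBoostingClassifier()  # ensemble of weak decision trees
    model.fit(X_train, y_train)
    print(f"holdout accuracy: {model.score(X_test, y_test):.3f}")
    return model

def predict_future_traffic(model, items: pd.DataFrame) -> pd.Series:
    """Probability of future natural search traffic for each product."""
    return pd.Series(model.predict_proba(items[FEATURES])[:, 1], index=items.index)
```

The per-product probability returned by predict_future_traffic is then the basis for the index or no-index decision.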
Having calculated the probability of future traffic for a set of VIPs, and made index and no-index decisions for a product (or set of products), the ML model identifies one or more VIPs from among the set of VIPs to maximize traffic to a linked site (e.g., of the VIP), based on the index and no-index decisions. For example, the system may access a VIP of a product which has been categorized as "index," and insert an HTML meta tag within the VIP indicating that the VIP is an index page. Meta tags are used by search engines to determine whether a page is to be returned as a search result or not. A web page (e.g., a product listing page, a VIP) which includes an "index" meta tag would therefore be retrieved and included among a set of search results, while a page that includes a "no-index" tag would not be shown in a set of search results. Thus, by including the appropriate tag (index or no-index), whether or not a page (e.g., a VIP) may be returned among a set of search results may be specified. FIG.1is an example embodiment of a high-level client-server-based network architecture100. A networked system102, in the example forms of a network-based publication or payment system, provides server-side functionality via a network104(e.g., the Internet or wide area network (WAN)) to one or more client devices110.FIG.1illustrates, for example, a web client112(e.g., a browser, such as the Internet Explorer® browser developed by Microsoft® Corporation of Redmond, Wash. State), client application(s)114, and a VIP locator application116executing on the client device110. The client device110may comprise, but is not limited to, a wearable device, mobile phone, desktop computer, laptop, portable digital assistant (PDA), smart phone, tablet, ultra-book, netbook, multi-processor system, microprocessor-based or programmable consumer electronics, game console, set-top box, or any other communication device that a user may utilize to access the networked system102. In some embodiments, the client device110comprises a display module (not shown) to display information (e.g., in the form of user interfaces). In further embodiments, the client device110comprises one or more of touch screens, accelerometers, gyroscopes, cameras, microphones, global positioning system (GPS) devices, and so forth. The client device110may be a device of a user that is used to perform a transaction involving digital items within the networked system102. In one embodiment, the networked system102is a network-based publication system that responds to requests for product listings, publishes publications comprising item listings of products available on the network-based publication system, and manages payments for these transactions. One or more portions of the network104may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the public switched telephone network (PSTN), a cellular telephone network, a wireless network, a WiFi network, a WiMax network, another type of network, or a combination of two or more such networks. The client device110may include one or more client applications114(also referred to as "apps") such as, but not limited to, a web browser, messaging application, electronic mail (email) application, an e-commerce site application (also referred to as a marketplace application), and the like.
In some embodiments, if the e-commerce site application is included in the client device110, then the client application(s)114are configured to locally provide the user interface and at least some of the functionalities, with the client application(s)114configured to communicate with the networked system102, on an as needed basis, for data or processing capabilities not locally available (e.g., access to a database of items available for sale, to authenticate a user, to verify a method of payment). Conversely, if the e-commerce site application is not included in the client device110, the client device110may use its web browser to access the e-commerce site (or a variant thereof) hosted on the networked system102. One or more users106may be a person, a machine, or other means of interacting with the client device110. In example embodiments, the user106is not part of the network architecture100, but may interact with the network architecture100via the client device110or other means. For instance, the user106provides input (e.g., touch screen input or alphanumeric input) to the client device110and the input is communicated to the networked system102via the network104. In this instance, the networked system102, in response to receiving the input from the user106, communicates information to the client device110via the network104to be presented to the user106. In this way, the user106can interact with the networked system102using the client device110. An application program interface (API) server120and a web server122are coupled to, and provide programmatic and web interfaces respectively to, one or more application servers140. The application server(s)140may host one or more publication systems142and payment systems144, each of which may comprise one or more modules or applications and each of which may be embodied as hardware, software, firmware, or any combination thereof. The application server(s)140are, in turn, shown to be coupled to one or more database servers124that facilitate access to one or more information storage repositories or database(s)126. In an example embodiment, the database(s)126are storage devices that store information to be posted (e.g., publications or listings) to the publication system(s)142. The database(s)126may also store digital item information in accordance with example embodiments. Additionally, a third party application132, executing on third party server(s)130, is shown as having programmatic access to the networked system102via the programmatic interface provided by the API server120. For example, the third party application132, utilizing information retrieved from the networked system102, supports one or more features or functions on a website hosted by the third party. The third party website, for example, provides one or more promotional, publication, marketplace, or payment functions that are supported by the relevant applications of the networked system102. The publication system(s)142provides a number of publication functions and services to the users106that access the networked system102. The payment system(s)144likewise provides a number of functions to perform or facilitate payments and transactions. While the publication system(s)142and payment system(s)144are shown inFIG.1to both form part of the networked system102, it will be appreciated that, in alternative embodiments, each system142and144may form part of a payment service that is separate and distinct from the networked system102.
In some embodiments, the payment system(s)144may form part of the publication system(s)142. A selective indexing system150provides functionality operable to calculate a probability of future traffic for a given product and, based on the probability, select a set of view item pages corresponding to the products with the highest probability of future traffic. For example, the selective indexing system150accesses a set of items, generates an ML model, and based on the ML model, predicts the probability of future traffic for items from among a set of items. In some example embodiments, the selective indexing system150is a part of the publication system(s)142. Further, while the client-server-based network architecture100shown inFIG.1employs a client-server architecture, the present inventive subject matter is of course not limited to such an architecture, and could equally well find application in a distributed, or peer-to-peer, architecture system, for example. The various publication system(s)142, payment system(s)144, and selective indexing system150could also be implemented as standalone software programs, which do not necessarily have networking capabilities. The web client112may access the various publication and payment systems142and144via the web interface supported by the web server122. Similarly, the VIP locator application116accesses the various services and functions provided by the publication and payment systems142and144via the programmatic interface provided by the API server120. The VIP locator application116may, for example, generate an ML model to enable users to predict the probability of future traffic for a particular item from among a set of items, and select view item pages corresponding to the items with the greatest probability of future traffic for display at a display site through the networked system102in an off-line manner, and to perform batch-mode communications between the VIP locator application116and the networked system102. FIG.2is a block diagram illustrating components of the selective indexing system150that configure the selective indexing system150to identify valuable view item pages for selective indexing, according to some example embodiments. The selective indexing system150is shown as including a data collection module202, a modeling module204, an indexing module206, and an item page selection module208, all configured to communicate with each other (e.g., via a bus, shared memory, or a switch). Any one or more of these modules202-208may be implemented using one or more processors210(e.g., by configuring such one or more processors210to perform functions described for that module) and hence may include one or more of the processors210. Any one or more of the modules202-208described may be implemented using hardware alone (e.g., one or more of the processors210of a machine) or a combination of hardware and software. For example, any described module of the selective indexing system150may physically include an arrangement of one or more of the processors210(e.g., a subset of or among the one or more processors of the machine) configured to perform the operations described herein for that module. As another example, any module of the selective indexing system150may include software, hardware, or both, that configure an arrangement of one or more processors210(e.g., among the one or more processors of the machine) to perform the operations described herein for that module.
Accordingly, different modules of the selective indexing system150may include and configure different arrangements of such processors210or a single arrangement of such processors210at different points in time. Moreover, any two or more modules of the selective indexing system150may be combined into a single module, and the functions described herein for a single module may be subdivided among multiple modules. Furthermore, according to various example embodiments, modules described herein as being implemented within a single machine, database, or device may be distributed across multiple machines, databases, or devices. FIG.3is a flowchart illustrating operations of the selective indexing system150in performing a method300of identifying valuable view item pages based on indexed items, according to some example embodiments. As shown inFIG.3, one or more operations302,304,306,308, and310may be performed as part (e.g., a precursor task, a subroutine, or a portion) of the method300, according to some example embodiments. In operation302, the data collection module202of the selective indexing system150accesses a database (e.g., database(s)126) to collect input variables associated with a product (or set of products). The input variables include item level factors. For example, the item level factors that are gathered include natural search traffic of the product (i.e., frequency with which the product is returned as a natural search result, as opposed to being presented as an advertisement), a view count of the product (i.e., how many "clicks" a listing of the product gets), price, number of listings of the product (i.e., within a network based publication system), bounce count, number of unique sellers offering the product, quantity of the product sold, duration of time that the product has been on a site, as well as search engine optimization factors, such as keywords in the title of a publication that includes a listing of the product. In some example embodiments, the item level factors may be located within a local database (e.g., database(s)126), or at a third party server130. For example, the data collection module202may retrieve the item level factors as raw, unedited data from a dump (e.g., a database dump). In operation304, the modeling module204generates a machine learned (ML) model data object based on the input variables using Gradient Boosted Machine (GBM) modeling techniques, to calculate a probability of future traffic for the product. Gradient boosting is a machine learning technique for regression and classification problems, which produces a prediction model in the form of an ensemble of weak prediction models (e.g., decision trees). Gradient boosting combines weak learners into a single strong learner, in an iterative fashion. In some example embodiments, the modeling module204may additionally use the previous SEO performance of the product in calculating the probability. In operation306, the modeling module204trains the ML model based on the input variables gathered by the collection module202. Supervised learning techniques may be applied to train the ML model using the input variables. In a supervised learning scenario, the model is provided with example inputs (e.g., the input variables) and their desired output (e.g., a target variable), with the goal being to map inputs to desired outputs.
The collection module202may access and monitor a sample set of products (within the databases126, or at a third party server130) over a predetermined period of time (e.g., 3 months) to identify a search traffic value of the sample set of products. The sample set of products may comprise a portion of products with a zero search traffic value (no traffic at all), and a portion with a search traffic value greater than zero (any amount of traffic). Target variables may then be determined based on the search traffic values of the sample set of products, and be applied to the ML model. The modeling module204provides the ML model with the input variables and the natural search traffic of the products as the target variable. Based on the input variables and the target variable, the modeling module204trains the ML model to maximize the probability of driving natural search to a linked site by identifying the item level factors (within the input variables) that correspond to an increase in traffic. For example, the modeling module204may identify a set of item level factors that correspond to a net increase in natural search traffic. In operation308, the indexing module206assigns the product (from among a set of products) an index status (e.g., "index," or "no-index") based on the ML model. For example, the indexing module206may index the product as "index," or "no-index" based on the probability of natural search traffic calculated by the ML model. In some example embodiments, the indexing module206receives a threshold natural search traffic value, and indexes the products based on the corresponding natural search traffic values and item level factors of the products. For example, the threshold natural search traffic value may be based on the natural search traffic values of the sample set of products monitored by the collection module202. In further example embodiments, the indexing module206may receive a value indicating a natural search traffic of the set of products, and apply the natural search traffic of the set of products as the threshold value. In operation310, the item page selection module208selects valuable item pages (VIPs) based on the indexed products. The VIPs may correspond to the products having the greatest probability of natural search traffic (based on the natural search traffic value probability calculated based on the ML model). In operation312, the item page selection module208assigns an "index" HTML tag to the VIPs. For example, the item page selection module208may simply tag the VIP with an "index" HTML tag, wherein the "index" HTML tag causes the VIP to be displayed among a set of search results by a search engine. Pages not identified by the item page selection module208are tagged with a "no-index" HTML tag in order to prevent search engines from retrieving the pages for search requests.
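Operations 308 through 312 can be sketched as a simple thresholding and tagging step. In the sketch below, the threshold value, the page templating, and the use of the standard "robots" meta tag are assumptions layered on the description above, not the patented implementation.

```python
def assign_index_status(probability: float, threshold: float = 0.5) -> str:
    # (308) Assign the product an index status based on its predicted
    # probability of natural search traffic; the threshold is illustrative.
    return "index" if probability >= threshold else "no-index"

def tag_vip_html(vip_html: str, status: str) -> str:
    # (312) Insert the meta tag search engines use to decide whether the
    # page may be returned among a set of search results. "noindex" is the
    # standard robots meta-tag value for excluding a page.
    content = "index" if status == "index" else "noindex"
    meta = f'<meta name="robots" content="{content}">'
    return vip_html.replace("<head>", f"<head>{meta}", 1)
```

For example, tag_vip_html(page, assign_index_status(0.82)) would mark a high-probability VIP as retrievable by search engines, while a low-probability page would receive the noindex tag and be excluded from search results.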
FIG.4is a graph400illustrating how performance of a machine learned model varies with decision threshold (e.g., threshold value). As shown inFIG.4, the x-axis402corresponds to the threshold value calculated by the modeling module204, and the y-axis404corresponds to a percentage (e.g., of gain or loss). As indicated by the graph400, as the threshold value increases, the percentage along the y-axis404increases. In some embodiments, in order to validate the performance of the ML model, the system creates a validation data set. The validation data set is created by unbiased random sampling of products that are distinct from (i.e., not a part of) the sample set of products. In order to measure the performance of the ML model, validation metrics are considered, including a decision threshold and a correlation between predicted and actual traffic. For a given decision threshold, the following values may be retrieved from the validation dataset: a percentage of total VIP reduction; a percentage of traffic loss due to the reduction; a percentage of false negatives; a percentage of bought item loss; and a percentage of GBM loss. FIG.5is a diagram illustrating a process flow of a method500of identifying and indexing a valuable view item page, according to some example embodiments. The method500ofFIG.5is described with reference to interactions between the selective indexing system150and the database126. The database126may store a corpus of view item pages. Modules, Components and Logic Certain embodiments are described herein as including logic or a number of components, modules, or mechanisms. Modules may constitute either software modules (e.g., code embodied on a machine-readable medium or in a transmission signal) or hardware modules. A hardware module is a tangible unit capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein. In various embodiments, a hardware module may be implemented mechanically or electronically. For example, a hardware module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations. Accordingly, the term "hardware module" should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired) or temporarily configured (e.g., programmed) to operate in a certain manner and/or to perform certain operations described herein. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where the hardware modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware modules at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time. Hardware modules can provide information to, and receive information from, other hardware modules.
Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple of such hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses that connect the hardware modules). In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation, and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules. Similarly, the methods described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment or as a server farm), while in other embodiments the processors may be distributed across a number of locations. The one or more processors may also operate to support performance of the relevant operations in a "cloud computing" environment or as a "software as a service" (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., Application Program Interfaces (APIs)). Electronic Apparatus and System Example embodiments may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. Example embodiments may be implemented using a computer program product, e.g., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable medium for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, subroutine, or other unit suitable for use in a computing environment.
A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network. In example embodiments, operations may be performed by one or more programmable processors executing a computer program to perform functions by operating on input data and generating output. Method operations can also be performed by, and apparatus of example embodiments may be implemented as, special purpose logic circuitry, e.g., a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In embodiments deploying a programmable computing system, it will be appreciated that both hardware and software architectures merit consideration. Specifically, it will be appreciated that the choice of whether to implement certain functionality in permanently configured hardware (e.g., an ASIC), in temporarily configured hardware (e.g., a combination of software and a programmable processor), or a combination of permanently and temporarily configured hardware may be a design choice. Below are set out hardware (e.g., machine) and software architectures that may be deployed, in various example embodiments. Example Machine Architecture and Machine-Readable Medium FIG.5is a block diagram illustrating components of a machine500, according to some example embodiments, able to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically,FIG.5shows a diagrammatic representation of the machine500in the example form of a computer system, within which instructions516(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine500to perform any one or more of the methodologies discussed herein may be executed. Additionally, or alternatively, the instructions may implement the modules ofFIG.2. The instructions transform the general, non-programmed machine into a specially configured machine programmed to carry out the described and illustrated functions in the manner described. In alternative embodiments, the machine500operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine500may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine500may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions516, sequentially or otherwise, that specify actions to be taken by the machine500.
Further, while only a single machine500is illustrated, the term "machine" shall also be taken to include a collection of machines500that individually or jointly execute the instructions516to perform any one or more of the methodologies discussed herein. The machine500includes processors510, memory530, and I/O components550, which may be configured to communicate with each other such as via a bus502. In an example embodiment, the processors510(e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, processor512and processor514that may execute instructions516. The term "processor" is intended to include a multi-core processor that may comprise two or more independent processors (sometimes referred to as "cores") that may execute instructions contemporaneously. AlthoughFIG.5shows multiple processors, the machine500may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The memory/storage530may include a memory532, such as a main memory, or other memory storage, and a storage unit536, both accessible to the processors510such as via the bus502. The storage unit536and memory532store the instructions516embodying any one or more of the methodologies or functions described herein. The instructions516may also reside, completely or partially, within the memory532, within the storage unit536, within at least one of the processors510(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine500. Accordingly, the memory532, the storage unit536, and the memory of processors510are examples of machine-readable media. As used herein, "machine-readable medium" means a device able to store instructions and data temporarily or permanently and may include, but is not limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, optical media, magnetic media, cache memory, other types of storage (e.g., Electrically Erasable Programmable Read-Only Memory (EEPROM)) and/or any suitable combination thereof. The term "machine-readable medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions516. The term "machine-readable medium" shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions (e.g., instructions516) for execution by a machine (e.g., machine500), such that the instructions, when executed by one or more processors of the machine500(e.g., processors510), cause the machine500to perform any one or more of the methodologies described herein. Accordingly, a "machine-readable medium" refers to a single storage apparatus or device, as well as "cloud-based" storage systems or storage networks that include multiple storage apparatus or devices. The term "machine-readable medium" excludes transitory signals per se.
The I/O components550may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components550that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components550may include many other components that are not shown inFIG.5. The I/O components550are grouped according to functionality merely for simplifying the following discussion and the grouping is in no way limiting. In various example embodiments, the I/O components550may include output components552and input components554. The output components552may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components554may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further example embodiments, the I/O components550may include biometric components556, motion components558, environmental components560, or position components562among a wide array of other components. For example, the biometric components556may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram based identification), and the like. The motion components558may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environmental components560may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment.
The position components562may include location sensor components (e.g., a Global Positioning System (GPS) receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components550may include communication components564operable to couple the machine500to a network580or devices570via coupling582and coupling572respectively. For example, the communication components564may include a network interface component or other suitable device to interface with the network580. In further examples, communication components564may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices570may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a Universal Serial Bus (USB)). Moreover, the communication components564may detect identifiers or include components operable to detect identifiers. For example, the communication components564may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components564, such as location via Internet Protocol (IP) geo-location, location via Wi-Fi® signal triangulation, location via detecting a NFC beacon signal that may indicate a particular location, and so forth. Transmission Medium In various example embodiments, one or more portions of the network580may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, the network580or a portion of the network580may include a wireless or cellular network and the coupling582may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other type of cellular or wireless coupling.
In this example, the coupling582may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard setting organizations, other long range protocols, or other data transfer technology. The instructions516may be transmitted or received over the network580using a transmission medium via a network interface device (e.g., a network interface component included in the communication components564) and utilizing any one of a number of well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions516may be transmitted or received using a transmission medium via the coupling572(e.g., a peer-to-peer coupling) to devices570. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions516for execution by the machine500, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software. Language Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Although an overview of the inventive subject matter has been described with reference to specific example embodiments, various modifications and changes may be made to these embodiments without departing from the broader scope of embodiments of the present disclosure. Such embodiments of the inventive subject matter may be referred to herein, individually or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single disclosure or inventive concept if more than one is, in fact, disclosed. The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. As used herein, the term “or” may be construed in either an inclusive or exclusive sense. 
Moreover, plural instances may be provided for resources, operations, or structures described herein as a single instance. Additionally, boundaries between various resources, operations, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present disclosure as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
11860843
DETAILED DESCRIPTION OF THE EMBODIMENTS The technical solution according to the embodiments of the present disclosure will be described clearly and completely as follows in conjunction with the drawings. It is apparent that the described embodiments are only a few rather than all of the embodiments according to the present disclosure. Any other embodiments obtained by those skilled in the art based on the embodiments in the present disclosure without any creative work fall within the scope of the present disclosure. A real or virtual object is expressed and described by data, and such expression and description have different scales, such as zooming in and out of data display, and the resolution of a screen. Electronic screens (i.e., view windows) have different resolutions. When data is displayed in a view window, if an image rendered by the data is small in the view window, data representing details will be rendered onto the same pixels, since the resolution of the view window is limited; if the image rendered by the data is large in the view window, the details will be displayed. That is, a larger number of pixels filled (rendered) by the data indicates a higher resolution and a smaller scale of the data, and a smaller number of pixels filled (rendered) by the data indicates a lower resolution and a larger scale of the data. Therefore, data displaying is characterized by multiple scales and multiple resolutions. However, this multi-scale characteristic of the data is not recorded in current indexes, data management, and storage, and all of the data is read, transmitted, and displayed, resulting in a bottleneck in network transmission and a bottleneck in data rendering. If self-adaptive simplification is performed on the data, a calculation bottleneck is created at the server; the same applies to data analysis and calculation. Therefore, the above technical bottleneck problem is solved with an index according to the present disclosure. Data in the embodiments of the present disclosure includes, but is not limited to, two-dimensional data, three-dimensional data, and multi-dimensional data. Reference is made toFIG.1, which is a flow chart of a method for processing data according to an embodiment of the present disclosure. The method includes following steps S11to S13. In step S11, a scale of data is set. The data includes, but is not limited to, a macro data set, data describing an object (such as surface data), coordinate data (such as coordinate points constituting the surface data), and a micro data bit (such as data in each of the data bits constituting the coordinate data). The scale of the data includes, but is not limited to, a spatial scale of the data and a time scale of the data. The scale of the data is a variable representing how macroscopic or microscopic the data is. Data at a larger scale is more macroscopic than data at a smaller scale, while data at a smaller scale is more microscopic than data at a larger scale. The scale of the data includes, but is not limited to, a scale of observed data, such as a magnification ratio of spatial data when the spatial data is displayed on a computer. A spatial scale includes, but is not limited to, a resolution of the spatial data. The set scale of the data includes, but is not limited to, a preset scale, a temporarily set scale, or a scale calculated in data processing. In step S12, a relationship between the data is analyzed and calculated based on the scale of the data.
The relationship between the data includes, but is not limited to, a spatial relationship, a time relationship, and a scale relationship. The spatial relationship between data shows a simple feature or a complex feature at different scales. For example, the spatial relationship between data at a large scale is simply a coincidence relationship or a non-coincidence relationship, and the spatial relationship between non-coincident data is simply an adjacent relationship or a separate relationship. In step S13, the data is processed based on the relationship, with a processing method corresponding to a set processing type. Processing the data based on the relationship includes, but is not limited to: establishing an index for the data, and storing, reading, transmitting, displaying, and analyzing the data, based on the relationships between data at different scales; and/or storing, reading, transmitting, displaying, and analyzing the index of the data, based on the different scales and the relationships between the data. Reference is made toFIG.2, which is a flow chart of a method for managing and storing data according to an embodiment of the present disclosure. The method includes following steps S21to S23. In step S21, a scale of data is set. An initial scale for data management is set based on the simplicity of the relationship between data at different scales. A scale at which the data is coincident is set as the initial scale for data management. The data coincidence includes, but is not limited to, all of the data forming one coincidence, the data forming multiple coincidences, and multiple coordinate points of one piece of data forming a coincidence. In step S22, a relationship between the data is analyzed and calculated based on the scale of the data. The relationship includes, but is not limited to, a coincidence relationship, an inclusion relationship, an intersection relationship, a tangency relationship, and a separation relationship. In step S23, the data is managed and stored based on the analyzed and calculated relationship between the data. The data between which the relationship is coincidence is used as a data management unit. A new scale of the data is set based on the initial scale for data management. The relationship between the data is analyzed and calculated based on the new scale of the data. If the relationship between the data is not just the coincidence relationship, but further includes a complex relationship such as the intersection relationship, the tangency relationship, the separation relationship, or the inclusion relationship, then the data is used as a data storage unit at the new scale. Data which meets a data management condition and between which the relationship is still the coincidence relationship is used as a data management unit at the new scale; then another new scale of the data is set, and the above processing is repeated. The data management condition includes, but is not limited to, one or a combination of a number of data, an amount of data, and a range of data. The range of data includes, but is not limited to, a display range of the data at the new scale.
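The recursion of steps S21 to S23 can be sketched as follows. This is a simplified illustration, not the disclosed implementation: points are two-dimensional, the coincidence test is grid snapping at the current scale, and the data management condition is a minimum unit size; all three are assumptions chosen for brevity.

```python
from collections import defaultdict

def snap(point, scale):
    """Coincidence test: two points coincide at a scale if they snap to the
    same grid cell. Grid snapping is a simplifying assumption."""
    x, y = point
    return (round(x / scale), round(y / scale))

def build_units(points, scale, min_count=2):
    """Group data whose relationship at this scale is coincidence into
    management units (step S21/S22); recurse at a finer scale while the
    management condition (here: unit size) is still met (step S23)."""
    units = defaultdict(list)
    for p in points:
        units[snap(p, scale)].append(p)
    tree = {}
    for cell, members in units.items():
        if len(members) >= min_count and scale > 1e-6:
            # Still coincident and worth managing: refine at a new, finer scale.
            tree[cell] = build_units(members, scale / 2, min_count)
        else:
            # Relationship is no longer simple coincidence: keep as a storage unit.
            tree[cell] = members
    return tree
```

Each dict level of the returned tree corresponds to one scale (one index level); groups that stop coinciding at the finer scale become storage units, matching step S23.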
Reference is made toFIG.3, which is a flow chart of a method for establishing an index and analyzing and managing data according to an embodiment of the present disclosure. The method includes following steps S31to S33. In step S31, a scale of data is set. The set scale of the data includes a scale in any case, such as a preset scale, a temporarily set scale, or a scale calculated in data processing. A level of an index is determined based on the set scale of the data. The data is managed and retrieved based on the determined level of the index. In step S32, a relationship between the data is analyzed and calculated based on the scale of the data. The relationship between the data includes, but is not limited to, a spatial relationship, a time relationship, and a scale relationship. The spatial relationship between data shows a simple feature or a complex feature at different scales. For example, the spatial relationship between data at a large scale is simply a coincidence relationship or a non-coincidence relationship, and the spatial relationship between non-coincident data is a simple relationship such as an adjacent relationship or a separate relationship. In step S33, an index is established and the data is analyzed and managed based on the analyzed and calculated relationship between the data. After performing the above two steps, the data may be processed based on the relationship with a processing method corresponding to a set processing type. For example, step S33shows a way of processing data based on the relationship. The specific implementation process of steps S31and S32may refer to the embodiment shown inFIG.1. In this step, an index establishment condition may be determined based on a specific situation. Cases of determining the index establishment condition based on a specific situation include: a case of setting a condition and a case of not setting a condition. For example, it is unnecessary to set the index establishment condition in establishing an index for data in each of the data bits constituting coordinate data. The index establishment condition includes, but is not limited to, one or a combination of a certain range of data, a certain amount of data, and a certain number of data. The range of data includes, but is not limited to, a range of the data or a range calculated based on the set scale of the data. Based on a preset index establishment condition, a next level of the index is continuously established for data meeting the preset index establishment condition and meeting a relationship between indexed data. A next level of the index is not established for data not meeting the index establishment condition and not meeting the relationship between indexed data. A next level of the index is continuously established for data which meets the index establishment condition and between which the relationship analyzed and calculated based on the set scale of the data is coincidence. The next level of the index corresponds to a scale of data at a next level. The scale of the next level of data is set with, but is not limited to, a direct or an indirect method, such as calculation based on the scale of the current level, or external assignment. A next level of the index is continuously established for data meeting the index establishment condition and meeting the relationship between the indexed data, and the next level of the index is managed with an index item for index management in the index. A managed index can be retrieved with the index item for index management. The data not meeting the index establishment condition and not meeting the relationship between the indexed data is managed with an index item for data management in the index. The data can be retrieved with the index item for data management.
A next level of the index is continuously established for data which meets the index establishment condition and of which the coordinate points, analyzed and calculated based on the set scale of the data, coincide into one point.

Based on the set scale of the data, a unified index is established for any one or a combination of data including, but not limited to, data of a macro data set, data describing or expressing an object, and data in a micro data bit (such as 21.345, where 4 is the data in the second data bit after the decimal point). The unified index is used for data analysis and management. The data analysis and management includes any one or a combination of storage, query, reading, transmission, display, analysis, and spatial relationship calculation of data.

The relationship between the data is analyzed and calculated based on the scale, the index is established based on the relationship between the data corresponding to the scale, and then the data is analyzed and managed based on the index, thereby solving technical problems regarding function and performance, such as the storage, transmission, display, and analysis of data.

Based on the embodiment shown in FIG. 3, after setting the scale of the data, analyzing and calculating the relationship between the data based on the scale of the data, and establishing the index and analyzing and managing the data based on the analyzed and calculated relationship between the data, subsequent processing may further be performed based on the established index.

A method for managing and storing data based on an established index is provided in an embodiment of the present disclosure. The method includes: managing and storing data based on an established index. The specific process includes: using an index item for data management in the index as a unit for managing and storing data. The unit for managing and storing data may be used for managing and storing data in various forms, including storing a data block, storing a record, or storing a file. In this step, the index item for data management in the index is used as an object for managing and storing data, and the object for managing and storing data may be managed and stored in various forms, such as a data block, a storage record, or a storage file.

Reference is made to FIG. 4, which is a flow chart of a method for displaying data according to an embodiment of the present disclosure. The method includes the following steps S61 to S63.

In step S61, a scale of data is determined based on a scale of displayed data. An index of the data is established based on different scales, and the different scales reflect the macro and micro characteristics of the data. The scale of the displayed data also reflects the macro and micro characteristics of the data. A specific scale of the data may directly correspond to, or be calculated from, the scale of the displayed data; that is, the scale of the data is either directly determined based on the display scale of the data or calculated based on the display scale of the data.

In step S62, to-be-displayed data is retrieved by an index of the data based on the scale of the data. In the embodiment, the index of the data is established with the method according to the embodiments shown in FIGS. 1 to 3.
The retrieving of to-be-displayed data by an index of the data based on the scale of the data includes one or multiple of the following ways:

(1) using a retrieved index item for index management as the display data for displaying, that is, at the scale of data corresponding to the display scale of the data, the data managed by the index item is displayed as a point;

(2) using the part of the data corresponding to the scale of the data in the retrieved data as the display data for displaying, that is, for the retrieved data at the scale of the data corresponding to the display scale of the data, part of the data can be displayed while other data cannot be displayed due to the coincidence between the data, and only the part of the data which can be displayed is used as the display data;

(3) using the data in part of the data bits corresponding to the scale of the data in the retrieved data as the display data for displaying, that is, for the retrieved data at the scale of the data corresponding to the display scale of the data, the data in a part of the data bits can be displayed, and only the data in the part of the data bits which can be displayed is used as the display data;

(4) using the retrieved data as the display data for displaying; and

(5) using one or multiple pieces of the retrieved data to replace other data having a coincidence relationship as the display data for displaying, that is, for the retrieved data at the scale of the data corresponding to the display scale of the data, only a part of the data can be displayed due to the coincidence relationship between the data, and thus only the data which can be displayed is used as the display data for displaying.

In step S63, the to-be-displayed data is read for data displaying. The reading includes reading from any data storage device, such as a memory or a hard disk. Based on step S62, the index data which can replace the to-be-displayed data, together with the data which can be displayed at the corresponding scale, is read for displaying.

A method for analyzing and calculating data is provided in an embodiment of the present disclosure. In the embodiment, the analysis and calculation include an analysis and calculation related to a scale, and an analysis and calculation unrelated to a scale. The analysis and calculation include, but are not limited to, spatial relationship calculation, aggregation analysis, or thermal map analysis of data. The analysis and calculation related to the scale are performed based on an index and data corresponding to the scale. FIG. 5 shows a flow chart of the method. The method includes the following steps S71 to S72.

In step S71, an analysis and calculation is performed based on an index of data. In the embodiment, the index of the data is established with the method according to the embodiments shown in FIGS. 1 to 3. The analysis and calculation are performed based on a spatial relationship of index data in the index, at different scales. It is determined, by performing analysis and calculation based on the index, that the relationship between the data includes, but is not limited to: there is certainly a spatial relationship, there is certainly no spatial relationship, or there may be a spatial relationship. If it is determined by analysis and calculation on the index that there is a spatial relationship of separation, inclusion, or intersection between data, then there is certainly the spatial relationship of separation, inclusion, or intersection between the data.
If it is determined by analysis and calculation on the index that there is a spatial relationship of coincidence or tangency between data, there may be a spatial relationship of coincidence, tangency, separation, or inclusion, and it is required to perform further analysis and calculation based on finer data to confirm the relationship between the data.

The analysis and calculation include, but are not limited to, performing the analysis and calculation at one or more of multiple parts of a system, such as a client and a server. The analysis and calculation include, but are not limited to, performing the analysis and calculation based on the index and data at a macro scale at a non-data server, such as a client part or an edge part of the system, and performing the analysis and calculation by using data at a micro scale and original data at a data storage end and a data service end.

In step S72, the analysis and calculation are performed based on data at different scales in the data. Firstly, the analysis and calculation are performed based on data at a certain scale corresponding to a small amount of data and a small amount of calculation. If the analysis and calculation require that there certainly be a certain relationship between data, and if the result obtained by performing the analysis and calculation based on data at a certain scale corresponding to a small amount of data and a small amount of calculation includes a result of possibly having the certain relationship, it is required to perform further analysis and calculation based on more data at a finer (that is, more microscopic) scale in the data possibly having the certain relationship, until it is determined by analysis and calculation that there is certainly a certain spatial relationship between the data. If the result obtained by performing the analysis and calculation based on data at a certain scale corresponding to a small amount of data and a small amount of calculation is that there is a spatial relationship of separation, inclusion, or intersection between data, then there is certainly the spatial relationship of separation, inclusion, or intersection between the data. If the result obtained by performing the analysis and calculation based on data at a certain scale corresponding to a small amount of data and a small amount of calculation is that there is a spatial relationship of coincidence or tangency between data, there may be a spatial relationship of coincidence, tangency, separation, or inclusion between the data, and it is required to perform further analysis and calculation based on finer data to confirm the relationship between the data.

Reference is made to FIG. 6, which is a flow chart of a method for progressively transmitting data according to an embodiment of the present disclosure. The method includes the following steps S81 to S83.

In step S81, an incremental data request is sent if it is required to request incremental data. The request includes a scale parameter. In the embodiment, the method for progressively transmitting data is applied to a request sender. When determining that it is required to request incremental data, the request sender sends the incremental data request to a request receiver, and the request includes a scale parameter. If the request sender does not store a previously cached index, the requested scale parameter includes, but is not limited to, a current scale parameter.
If the request sender stores a previously cached index, the requested scale parameter includes, but is not limited to, the current scale parameter and a scale parameter corresponding to the previously cached index data. If the request sender stores a previously cached index and previously cached data, the requested scale parameter includes, but is not limited to, the current scale parameter and a scale parameter corresponding to the previously cached data.

An index item in the index for managing finer and more microscopic data corresponds to a higher level of the scale parameter. Finer data and data having a higher resolution correspond to a higher level of the scale parameter.

In the case that the request sender caches previously cached data, the method for progressively transmitting data includes the following steps. A current scale parameter is determined. A highest-level scale parameter corresponding to the previously cached data is obtained. A relationship between the highest-level scale parameter corresponding to the previously cached data and the current scale parameter is determined, to determine whether it is required to request data; the subsequent steps are performed if it is required to request the data, or the process is ended if it is unnecessary to request data. The incremental data request is then sent. The request includes, but is not limited to, the current scale parameter and the highest-level scale parameter corresponding to the previously cached data, which together are called the requested scale parameter. The previously cached data includes index data corresponding to a scale in the previously cached index and data corresponding to a scale in the previously cached data.

Then, the incremental data is obtained based on the requested scale parameter. This step includes at least the following two ways, given in steps S82 and S83.

In step S82, the incremental data obtained by performing analysis based on the requested scale parameter and a scale of the index is received, where the incremental data is incremental data of the index. If previously cached index data exists, the received incremental data is inserted into the previously cached index data; if the previously cached index data does not exist, the received incremental data is stored as cached data.

In step S83, the incremental data obtained by performing analysis based on the requested scale parameter and a scale of the data is received, where the incremental data is incremental data of the data. If previously cached data exists, the received incremental data is inserted into the previously cached data; if the previously cached data does not exist, the received incremental data is stored as cached data.

In the embodiment, step S82 or step S83 may be performed separately to obtain the incremental data, or the two steps may both be performed to obtain the incremental data. In the embodiment, the index of the data is established with the method according to the embodiments shown in FIGS. 1 to 3.
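A minimal Python sketch of the sender-side flow of steps S81 to S83 follows. It is an illustration under assumed names (the disclosure specifies no API); in particular, send_request stands in for whatever transport the system uses, and scale parameters are modeled as integer levels where higher means finer:

class ProgressiveClient:
    def __init__(self):
        self.cached_index = {}       # scale level -> cached index data
        self.cached_data = {}        # scale level -> cached data

    def request_increment(self, current_scale):
        cached_top = max(self.cached_data, default=None)
        # Finer data corresponds to a higher-level scale parameter; if the
        # cache already covers the current scale, no request is needed.
        if cached_top is not None and cached_top >= current_scale:
            return
        request = {
            "current_scale": current_scale,
            "cached_scale": cached_top,   # None when nothing is cached
        }
        reply = send_request(request)     # assumed transport call
        # Step S82: increment of the index; step S83: increment of the data.
        for scale, items in reply.get("index_increment", {}).items():
            self.cached_index.setdefault(scale, []).extend(items)
        for scale, items in reply.get("data_increment", {}).items():
            self.cached_data.setdefault(scale, []).extend(items)

Inserting each received increment into the cache keyed by scale level reflects the reconstruction described above: either increment may arrive alone, or both together.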
With the method for progressively transmitting data according to the embodiment, the requested scale parameter is included in the incremental data request sent by the request sender, so that the request receiver can obtain the incremental data by performing analysis based on the requested scale parameter. This ensures that the obtained incremental data can be displayed without loss, reduces the amount of data transmitted, and improves the data transmission efficiency.

A method for progressively transmitting data is provided according to another embodiment of the present disclosure. The method is applied to a receiver for receiving an incremental data request. The method includes the following steps. An incremental data request sent from a request sender is received, where the incremental data request includes a requested scale parameter. Analysis is performed based on the requested scale parameter and a scale of an index, to determine index data meeting an incremental condition in the index as incremental data; and/or analysis is performed based on the requested scale parameter and a scale of data, to determine data meeting the incremental condition in the data as the incremental data. Finally, the incremental data is sent to the request sender.

In implementation, the process of obtaining the incremental data requested by the incremental data request is shown in FIG. 7, and includes the following steps S91 to S95.

In step S91, an incremental data request sent from a request sender is received. The request includes a requested scale parameter. If the requested scale parameter included in the request includes a previously cached scale parameter, this indicates that the request sender stores previously cached data.

In step S92, analysis is performed on the index and the data, based on a current scale parameter in the requested scale parameter, to obtain a current analysis result. Data in the index at a scale corresponding to the current scale parameter is used as the current analysis result, and data in the data at a scale corresponding to the current scale parameter is used as the current analysis result. If the requested scale parameter includes the previously cached scale parameter, step S93 is performed; if the requested scale parameter does not include the previously cached scale parameter, step S94 is performed.

In step S93, analysis is performed on the index and the data based on the previously cached scale parameter in the requested scale parameter, to obtain a previous analysis result. The determining of data meeting the incremental condition in the current analysis result as the incremental data includes: determining the data which is in the current analysis result and not in the previous analysis result as the incremental data.

In step S94, the data of the current analysis result is used as the incremental data. That is, if the requested scale parameter does not include the previously cached scale parameter, the data of the current analysis result is used as the incremental data.

In step S95, the incremental data is sent to the request sender. The current scale parameter is used as a current highest-level scale parameter of the incremental data. If the request sender caches the previously cached data, it is required to insert the received incremental data into the previously cached data to realize data reconstruction, and the reconstructed data is used as the current cached data.
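The receiver side of steps S91 to S95 can be sketched in the same illustrative style as the client sketch above (the no-cache case handled next in the text is step S94). All names and shapes here are assumptions; the increment is simply the difference between the analysis at the current scale and the analysis at the previously cached scale:

def handle_increment_request(store, request):
    # Step S92: analysis at the current scale parameter.
    current = analyze(store, request["current_scale"])
    if request.get("cached_scale") is not None:
        # Step S93: data in the current result but not in the previous
        # result is the increment.
        previous = set(analyze(store, request["cached_scale"]))
        increment = [item for item in current if item not in previous]
    else:
        # Step S94: with no previously cached scale parameter, the whole
        # current analysis result is the increment.
        increment = current
    # Step S95: the current scale parameter becomes the highest-level scale
    # parameter of the increment.
    return {"increment": increment,
            "highest_scale": request["current_scale"]}

def analyze(store, scale):
    # Illustrative stand-in: collect everything stored at levels up to the
    # requested scale parameter (a real system would analyze both the index
    # and the data, per steps S82 and S83).
    return [item for s, items in sorted(store.items()) if s <= scale
            for item in items]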
If the request sender does not cache the previously cached data, the request receiver, after receiving the incremental data request, performs analysis on the data based on the current scale parameter in the requested scale parameter, and the obtained data is the incremental data. After receiving the incremental data, the request sender caches the incremental data as the previously cached data, for subsequent processing of the progressive transmission. In the embodiment, the index of the data is established with the method according to the embodiments shown in FIGS. 1 to 3.

An apparatus for processing data is provided in an embodiment of the present disclosure. FIG. 8 shows a structure of the apparatus. The apparatus includes: a scale setting unit 101, a data analysis unit 102, and a data processing unit 103. The scale setting unit 101 is configured to determine a scale of data. The scale of the data includes, but is not limited to, a preset scale, a temporarily set scale, and a scale calculated in data processing. The data analysis unit 102 is configured to analyze and calculate a relationship between the data based on the scale of the data. The data processing unit 103 is configured to process the data based on the relationship with a processing method corresponding to a set processing type. Only a preferred implementation of the apparatus for processing data is provided in the embodiment. For the specific operating process of the apparatus, reference may be made to any one of the processes shown in FIGS. 1 to 7, which is not repeated herein.

An apparatus for establishing an index and analyzing and managing data is provided in an embodiment of the present disclosure. FIG. 9 shows a structure of the apparatus. The apparatus includes: a scale setting unit 111, a data analysis unit 112, an index generation unit 113, and an analysis and management unit 114. The scale setting unit 111 is configured to determine a scale of data. The scale of the data includes, but is not limited to, a preset scale, a temporarily set scale, and a scale calculated in data processing. The data analysis unit 112 is configured to analyze and calculate a relationship between the data based on the scale of the data. The index generation unit 113 is configured to establish an index and to analyze and manage the data based on the analyzed and calculated relationship between the data. The analysis and management unit 114 is configured to analyze and manage the data in storage, display, analysis and calculation, and progressive transmission, based on the index. Alternatively, the index generation unit 113 may be configured to establish an index and to analyze and manage the data based on the analyzed and calculated relationship between the data. Only a preferred implementation of the apparatus for establishing an index and analyzing and managing data is provided in the embodiment. For the specific operating processes of the apparatus, reference may be made to any one of the processes shown in FIGS. 3 to 7, which is not repeated herein.

An apparatus for managing and storing data is provided in an embodiment of the present disclosure. FIG. 10 shows a structure of the apparatus. The apparatus includes: a scale setting unit 121, a data analysis unit 122, and a management and storage unit 123. The scale setting unit 121 is configured to determine a scale of data. The scale of the data includes, but is not limited to, a preset scale, a temporarily set scale, and a scale calculated in data processing. The data analysis unit 122 is configured to analyze and calculate a relationship between the data based on the scale of the data.
The management and storage unit 123 is configured to manage and store the data based on the analyzed and calculated relationship between the data. Only a preferred implementation of the apparatus for managing and storing data is provided in the embodiment. For the specific operating processes of the apparatus, reference may be made to the process shown in FIG. 2, which is not repeated herein.

An apparatus for managing and storing data is further provided in another embodiment of the present disclosure. The apparatus includes a management and storage unit. The management and storage unit may be used in conjunction with the apparatus for establishing an index and analyzing and managing data shown in FIG. 9, to manage and store the data based on the index of the data obtained by the apparatus in FIG. 9. For the specific operating processes of the apparatus, reference may be made to the operating processes in the method embodiments, which are not repeated herein.

An apparatus for displaying data is provided in an embodiment of the present disclosure. The apparatus includes: a scale determination unit, a retrieval unit, and a data reading unit. The scale determination unit is configured to determine a scale of data based on a scale of displayed data. The retrieval unit is configured to retrieve to-be-displayed data by an index of the data based on the scale of the data. The data reading unit is configured to read the to-be-displayed data for data displaying. In the embodiment, the index of the data is generated by the apparatus for establishing an index and analyzing and managing data shown in FIG. 9. For the specific operating processes of the apparatus for displaying data, reference may be made to the embodiment shown in FIG. 4, which is not repeated herein.

An apparatus for analyzing and calculating data is further provided in an embodiment of the present disclosure. The apparatus includes an analysis and calculation unit. The analysis and calculation unit is configured to perform analysis and calculation based on an index of data. The analysis and calculation unit may be used in conjunction with the apparatus for establishing an index and analyzing and managing data shown in FIG. 9, to perform analysis and calculation based on the index of the data obtained by the apparatus in FIG. 9. For the specific operating processes of the apparatus, reference may be made to the operating processes in the method embodiments, which are not repeated herein.

An apparatus for progressively transmitting data is further provided in an embodiment of the present disclosure, applied to an incremental data request sender. The apparatus includes: an incremental data request sending unit and an incremental data receiving unit. The incremental data request sending unit is configured to send an incremental data request if it is required to request incremental data, where the request includes a requested scale parameter. The incremental data receiving unit is configured to receive incremental data obtained by performing analysis based on the requested scale parameter and a scale of an index, where the incremental data is incremental data of the index, and/or to receive incremental data obtained by performing analysis based on the requested scale parameter and a scale of data, where the incremental data is incremental data of the data. In the embodiment, the index of the data is generated by the apparatus for establishing an index and analyzing and managing data shown in FIG. 9. For the specific operating processes of the apparatus, reference may be made to the embodiment shown in FIG. 6, which is not repeated herein.
In addition, an apparatus for progressively transmitting data is further provided in another embodiment of the present disclosure, applied to an incremental data receiver. The apparatus includes: an incremental data request receiving unit, an incremental data determination unit, and an incremental data sending unit. The incremental data request receiving unit is configured to receive an incremental data request sent from a request sender, where the incremental data request includes a requested scale parameter. The incremental data determination unit is configured to perform analysis based on the requested scale parameter and a scale of an index, to determine index data meeting an incremental condition in the index as incremental data, and/or to perform analysis based on the requested scale parameter and a scale of data, to determine data meeting the incremental condition in the data as the incremental data. The incremental data sending unit is configured to send the incremental data to the request sender. In the embodiment, the index of the data is generated by the apparatus for establishing an index and analyzing and managing data shown in FIG. 9. For the specific operating processes of the apparatus, reference may be made to the embodiment shown in FIG. 7, which is not repeated herein.

The method and the apparatus for processing data according to the present disclosure may be arranged in a computer, in a mobile phone, or in other equipment.

The embodiments in this specification are described in a progressive way, each of which emphasizes its differences from the others, and the same or similar parts among the embodiments may be referred to each other. Since the apparatus disclosed in the embodiments corresponds to the method therein, the description thereof is relatively simple, and for relevant matters reference may be made to the description of the method.

It may be appreciated by those skilled in the art that the units and algorithm steps in each example described in conjunction with the embodiments disclosed herein can be realized by electronic hardware, computer software, or a combination thereof. In order to clearly illustrate the interchangeability of the hardware and the software, the steps and composition of each embodiment have been described above generally in terms of functions. Whether a function is executed in hardware or in software depends on the particular application and the design constraints of the technical solution. Those skilled in the art may use different methods for each particular application to realize the described function, and this is not considered to be beyond the scope of the present disclosure.

The steps of the methods or algorithms described in conjunction with the embodiments of the present disclosure can be implemented with hardware, software modules executed by a processor, or a combination thereof. The software modules may reside in a Random Access Memory (RAM), an internal memory, a Read Only Memory (ROM), an Electrically Programmable ROM, an Electrically-Erasable Programmable ROM, a register, a hard disk, a removable disk drive, a CD-ROM, or any other form of storage medium well known in the technical field.

With the above descriptions of the disclosed embodiments, those skilled in the art may implement or use the present disclosure. Various modifications to the embodiments are apparent to those skilled in the art. The general principles suggested herein can be implemented in other embodiments without departing from the spirit or scope of the disclosure.
Therefore, the present disclosure should not be limited to the embodiments disclosed herein, but is to be accorded the widest scope consistent with the principles and the novel features disclosed herein.
11860844
DESCRIPTION OF EXAMPLE EMBODIMENTS

In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the present invention.

The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings.

It will be appreciated that, for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.

Because the illustrated embodiments of the present invention may, for the most part, be implemented using electronic components and circuits known to those skilled in the art, details will not be explained in any greater extent than considered necessary for the understanding and appreciation of the underlying concepts of the present invention and in order not to obfuscate or distract from the teachings of the present invention.

Any reference in the specification to a method should be applied mutatis mutandis to a device or system capable of executing the method and/or to a non-transitory computer readable medium that stores instructions for executing the method. Any reference in the specification to a system or device should be applied mutatis mutandis to a method that may be executed by the system, and/or may be applied mutatis mutandis to a non-transitory computer readable medium that stores instructions executable by the system. Any reference in the specification to a non-transitory computer readable medium should be applied mutatis mutandis to a device or system capable of executing instructions stored in the non-transitory computer readable medium and/or may be applied mutatis mutandis to a method for executing the instructions.

Any combination of any module or unit listed in any of the figures, any part of the specification, and/or any claims may be provided.

The specification and/or drawings may refer to a processor. The processor may be a processing circuitry. The processing circuitry may be implemented as a central processing unit (CPU), and/or one or more other integrated circuits such as application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), full-custom integrated circuits, etc., or a combination of such integrated circuits.

Any combination of any steps of any method illustrated in the specification and/or drawings may be provided. Any combination of any subject matter of any of the claims may be provided. Any combination of the systems, units, components, processors, and sensors illustrated in the specification and/or drawings may be provided.

The terms compressing and encoding are used in an interchangeable manner.
A fingerprint filter (FF) is an example of a management data structure (MDS) that may store fingerprints and run IDs.

There may be provided a method, a system, and a computer readable medium for managing LSM-trees stored in a non-volatile memory, such as an SSD memory, in an efficient manner.

There may be provided a method that scales the false positive rate for fingerprints in a proper manner. There may be provided a method that efficiently keeps the run IDs within the FF up-to-date, for example without the extra read to storage to check whether an entry exists before a write and, if so, to update its run ID. There may be provided a combination of both methods.

For simplicity of explanation, most of the following text will refer to a solution. An example of the solution is referred to as Chucky, a Huffman coded key-value store. It should be noted that the Huffman coding may be replaced by another variable length code.

To scale false positives, it was found that the run IDs are extremely compressible. The reason is that their distribution is approximately geometric, meaning that entries with run IDs of larger levels are exponentially more common than entries with run IDs of smaller levels. This allows encoding larger runs with fewer bits and smaller runs with more bits. The saved space can be dedicated to the fingerprints to keep them large as the data grows.

For scaling updates, it has been found that the run IDs can be opportunistically updated during merge operations while the target entries are brought to memory. Hence, the run IDs can be kept up-to-date without introducing any additional storage I/Os.

Chucky has been found to scale memory and storage I/Os at the same time. It achieves this by replacing the BFs by a single FF with compressed run IDs that are updated during merge operations. The following text will illustrate examples of run ID compression using Huffman coding, while identifying and addressing the resulting challenges: (1) how to align fingerprints and compressed run IDs within the FF's buckets, and (2) how to encode and decode run IDs efficiently. Chucky may use the bits saved through compression to keep the fingerprints large and to thereby guarantee a scalable false positive rate as the data grows. Chucky can fit with any FF; only as an example, it is illustrated how to tailor Chucky to an FF such as a Cuckoo filter.

In the specification it is shown how to replace the BFs by an FF with auxiliary run IDs that are kept up-to-date opportunistically while merging. The run ID is auxiliary in the sense that each FF entry includes both a run ID and a fingerprint. In the specification it is shown that run IDs are extremely compressible, and we study how to minimize their size using Huffman coding. In the specification it is shown how to align compressed run IDs and fingerprints within FF buckets to achieve good space utilization. In the specification it is shown how to encode and decode run IDs efficiently. In the specification it is shown how to integrate Chucky with a Cuckoo filter. In the specification it is shown experimentally that Chucky scales in terms of memory I/Os and storage I/Os at the same time.

FIG. 1 includes graphs 11 and 12 that illustrate a comparison between the performance of Chucky and prior art solutions.
The specification illustrates that the run IDs are extremely compressible by analyzing their information theoretical entropy, and that the entropy can be further reduced, thus enabling more compressibility, by sorting a series of run IDs and assigning them a single code. The specification illustrates a compression variant called Multinomial Huffman that assigns a Huffman code to a bucket based on the probability of a given combination of run IDs coinciding in the bucket.

In the specification it is shown that compressed run IDs introduce the problem of bucket overflows, and two techniques are introduced to address it, namely Variable Minimally Bounded Fingerprints and Leftover Huffman, an approach that assigns codes based on the leftover space in the bucket after the fingerprints. In the specification it is shown how to support updates of duplicate entries to the LSM-tree without causing infinite recursion chains in the Cuckoo filter. Chucky can be generalized across a number of widely used LSM-tree designs suited for different application workloads. In the specification it is shown how to recover the Cuckoo filter after a power failure.

An LSM-tree consists of multiple levels of exponentially increasing capacities. Level 0 is an in-memory buffer (and/or resides in a first layer of the storage) while all other levels are in storage. The application inserts key-value pairs into the buffer. When the buffer reaches capacity, its contents get flushed as a sorted array, called a run, into Level 1 in storage.

There are various merging policies that can be implemented. The first one is referred to as sequential merging, in which, whenever a given Level i reaches capacity, its runs get merged into Level i+1. Level i+1 may then replace Level i and be treated as a modified Level i. To merge runs, their entries are brought from storage to memory to be sort-merged and then written back to storage as a new run. The number of levels L is log_T(N), where T is the capacity ratio between any two adjacent levels and N is the ratio between the overall data size and the in-memory buffer's size.

Another merging policy, referred to as multiple level merging, includes merging runs of multiple levels at once. This may occur for various reasons, for example when predicting that a certain merge will lead a certain level to be almost filled. This merging policy may be referred to as predictive merging.

Table 0 lists terms used to describe the LSM-tree throughout this description:

TABLE 0
Term  Definition
L     Number of LSM-tree levels
T     LSM-tree size ratio
N     Data size to buffer size ratio
K     Merge triggering threshold for Levels 1 to L−1
Z     Merge triggering threshold for Level L
H     Number of Bloom filter hash functions
R     Expected point read I/O cost
M     Filtering memory budget

Updates and deletes are performed out-of-place by inserting a key-value entry with the updated value into the buffer (for a delete, the value is a tombstone). Whenever two runs get merged while containing two entries with the same key, the older entry is discarded as the newer entry supersedes it. In order to always find the most recent version of an entry, an application read traverses the runs from youngest to oldest across the levels and terminates when it finds the first entry with a matching key. If its value is a tombstone, the read returns a negative result to the application.
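As a concrete illustration of the read path just described, the following Python sketch (an assumption-laden simplification, not code from the disclosure) traverses runs from youngest to oldest and treats a tombstone as a negative result:

TOMBSTONE = object()   # sentinel value standing in for a delete marker

def lsm_get(buffer, levels, key):
    """`buffer` is the in-memory Level 0 dict; `levels` is a list of levels,
    each a list of runs ordered youngest first; each run is a dict."""
    if key in buffer:
        value = buffer[key]
        return None if value is TOMBSTONE else value
    for level in levels:
        for run in level:                 # youngest run within the level first
            if key in run:
                value = run[key]
                # The first matching version wins; a tombstone is a negative.
                return None if value is TOMBSTONE else value
    return None                           # key was never written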
For every run in storage, there is an array of fence pointers in memory that contains the min/max key of every data block and thereby allows finding the relevant block within a run with one storage I/O.

The LSM-tree design space spans many variants that favor different application workloads. The most common two are Leveling and Tiering (used by default in RocksDB and Cassandra, respectively). This is illustrated in FIG. 2. With Leveling, merging is performed greedily within each level (i.e., as soon as a new run comes in). As a result, there is at most one run per level, and every entry gets merged on average about T/2 times within each level. With Tiering, merging is performed lazily within each level (i.e., only when the level fills up). As a result, there are at most about T runs per level, and every entry gets merged once across each of the levels. Leveling is more read and space optimized while Tiering is more write-optimized. The size ratio T can be varied to fine-tune this trade-off.

FIG. 2 also illustrates Lazy-Leveling, a hybrid that uses Leveling at the largest level and Tiering at all smaller levels to offer favorable trade-offs in-between (i.e., for space-sensitive write-heavy applications with mostly point reads). The recent Dostoevsky framework generalizes these three variants using two parameters: (1) a threshold Z for the number of runs at the largest level before a merge is triggered, and (2) a threshold K for the number of runs at each of the smaller levels before a merge is triggered. FIG. 2 and Tables 1 and 2 show how to set these parameters to assume each of the three designs.

TABLE 1
Blocked Bloom filters memory I/O
                     Leveling    Lazy-Leveling    Tiering
Probe cost           O(L)        O(L · T)         O(L · T)
Construction cost    O(L · T)    O(L + T)         O(L)

TABLE 2
Bloom filters false positive rate complexities
           Leveling               Lazy-Leveling            Tiering
Uniform    O(2^(−M·ln(2)) · L)    O(2^(−M·ln(2)) · L·T)    O(2^(−M·ln(2)) · L·T)
Optimal    O(2^(−M·ln(2)))        O(2^(−M·ln(2)))          O(2^(−M·ln(2)) · T)

Equation (1) denotes A_i as the maximum number of runs at Level i, and A as the maximum number of runs in the system, with respect to these parameters:

A_i = K for i between 1 and L−1, and A_i = Z for i = L;  A = Σ_{i=1..L} A_i = (L−1)·K + Z   (1)

Chucky can be built, for example, on top of Dostoevsky to be able to span multiple LSM-tree variants that can accommodate diverse workloads. While some designs such as HBase and Cassandra merge entire runs at a time, others such as RocksDB partition each run into multiple files called Sorted String Tables (SSTs) and merge at the granularity of SSTs. This grants finer control of how merge overheads are scheduled in space and time, though it increases write-amplification. For ease of exposition, the specification illustrates merging as though it occurs at the granularity of runs, though this work is also applicable to designs that rely on SSTs for merging. We use RocksDB's dynamic level size adaptation technique, which sets the capacities of Levels 1 to L−1 based on the number of entries at the largest level in order to restrict storage space-amplification. We assume preemptive merging, whereby we detect when Levels 1 to i are near capacity and merge their runs all at once, as opposed to having the merge recursively trickle across the levels and result in more write-amplification.

Bloom Filters

Each run in the LSM-tree has a corresponding in-memory Bloom filter (BF), which is a space-efficient probabilistic data structure used to test whether a key is a member of a set.
All Bloom filters are persisted in storage to be recoverable in case of system failure. A BF is an array of bits with h hash functions. Every inserted key is mapped, using each of the hash functions, to h random bits, setting them from 0 to 1 or keeping them set to 1. Checking for the existence of a key requires examining its h bits. If any of them is set to 0, we have a negative. If all are set to 1, we have either a true or a false positive. The false positive rate (FPR) is 2^(−M·ln(2)), where M is the number of bits per entry. As M increases, the probability of bit collisions decreases and so the FPR drops. In KV-stores in industry (e.g., RocksDB), the number of bits per entry is typically set to ten.

A BF does not support deletes (i.e., by resetting bits back to 0), as this could lead to false negatives. For this reason, a new BF is created from scratch for a new run that results from a merge. A BF entails h memory I/Os for an insertion as well as for a positive query. For a negative query, it entails on average two memory I/Os, since about 50% of the bits are set to zero and so the expected number of bits checked before incurring a zero is two.

To optimize memory I/Os, a blocked BF has been proposed: an array of contiguous BFs, each the size of a cache line. A key is inserted by first hashing it to one of the constituent BFs and then inserting the key into it. This entails only one memory I/O for any insertion or query. The trade-off is a slight FPR increase. RocksDB recently switched from standard to blocked BFs. We use both approaches as baselines here, and we focus more on blocked BFs as they are the tougher competition.

For an LSM-tree with blocked BFs, an application query costs at most O(K·(L−1)+Z) memory I/Os (i.e., one to the filter of each run). On the other hand, an application update costs O(T/K·(L−1)+T/Z) amortized memory I/Os (the average number of times an entry gets merged and thus inserted into a new BF). Table 1 summarizes these costs for each of the LSM-tree variants. We observe that both cost metrics increase with the number of levels L and thus with the data size. Second, we observe an inverse relationship between these metrics: the greedier the LSM-tree's merging is set to be (i.e., either by changing the merge policy or by fine-tuning the size ratio), the more probe cost decreases, as there are fewer BFs, while construction cost increases, as the BFs get rebuilt more greedily. Hence, it is impossible to improve on one of these metrics without degrading the other. FIG. 1 conceptually illustrates this relationship.

KV-stores in industry set a uniform number of bits per entry to BFs at all levels. This approach, however, was recently identified as sub-optimal. The optimal approach is to reallocate about 1 bit per entry from the largest level and to use it to assign linearly more bits per entry to filters at smaller levels. While this slightly increases the largest level's FPR, it exponentially decreases the FPRs at smaller levels, such that the overall sum of FPRs is smaller. Equations (2) and (3) express the FPR with both approaches:

FPR_uniform = 2^(−M·ln(2)) · (K·(L−1) + Z)   (2)

FPR_optimal = 2^(−M·ln(2)) · Z^((T−1)/T) · K^(1/T) · T^(T/(T−1)) / (T−1)   (3)

The intuition for Equation (2) is that as the data grows, the FPR increases, as there are more runs and thus more BFs across which false positives can occur. On the other hand, Equation (3) states that with the optimal approach, the relationship between memory and FPR is independent of the number of levels and thus of the data size.
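The reconstructed Equations (2) and (3) are easy to sanity-check numerically. The following Python snippet (an illustration, not part of the disclosure) evaluates both expressions; note that the uniform FPR grows with L while the optimal FPR does not depend on L at all:

import math

def fpr_uniform(M, K, Z, L):
    # Equation (2): one BF per run, K*(L-1)+Z runs in total.
    return 2 ** (-M * math.log(2)) * (K * (L - 1) + Z)

def fpr_optimal(M, K, Z, T):
    # Equation (3): independent of the number of levels L.
    return (2 ** (-M * math.log(2))
            * Z ** ((T - 1) / T) * K ** (1 / T)
            * T ** (T / (T - 1)) / (T - 1))

# With M = 10 bits per entry (the typical industry setting mentioned above),
# a Leveled tree (K = Z = 1) with size ratio T = 10:
for L in (3, 5, 7):
    print(L, fpr_uniform(10, 1, 1, L), fpr_optimal(10, 1, 1, 10))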
The reason is that as the LSM-tree grows, smaller levels are assigned exponentially smaller FPRs, thus causing the sum of FPRs to converge. The corresponding FPR complexities are summarized in Table 2 and visualized conceptually in FIG. 1. While the primary goal is to improve on the BFs' memory bandwidth, the FPR scalability of the optimal BF approach must also at least be matched to be competitive across all performance fronts.

Fingerprint filters (FFs) are a family of data structures that have recently emerged as an alternative to Bloom filters. At its core, an FF is a compact hash table that stores fingerprints of keys, where a fingerprint is a string of F bits derived by hashing a key. To test for set membership, the FF hashes a key in question to a bucket and compares its fingerprint to all fingerprints in the bucket. If there is a match, we have a positive. An FF cannot return a false negative, and it returns a false positive with a probability of at least 2^(−F). The fingerprint size F controls a trade-off between accuracy and space. The various FFs that have been proposed differ in their collision resolution methods, which swap entries across buckets to resolve collisions. For example, the Cuckoo filter uses a variant of Cuckoo hashing, while the Quotient filter uses a variant of linear probing. While different collision resolution methods give different FFs nuanced performance and space properties, all FFs to date share a common set of desirable properties with respect to our problem. First, they support queries and updates in practically constant time for a memory footprint similar to Bloom filters. Second, unlike Bloom filters, FFs support storing updatable auxiliary data for each entry alongside its fingerprint. These capabilities allow replacing an LSM-tree's multiple Bloom filters with a single FF that maps from data entries to the runs in which they reside in the LSM-tree. Such a design promises to allow finding an entry's target run with a small and constant number of memory I/Os, unlike Bloom filters, which require at least one memory I/O across numerous filters. Despite this promise, two challenges arise with this approach. The first is how to keep the run IDs up-to-date as entries get merged across the LSM-tree. The second is how to keep the size of the run IDs modest as the data size grows.

Case-Study

The recent SlimDB system is the first to integrate an LSM-tree with an FF. As such, it provides an interesting case-study and baseline with respect to meeting the above two challenges. To keep the run IDs within the FF up-to-date, SlimDB performs a read I/O to storage for each application update to check whether the entry exists and, if so, to update its run ID within the FF. This involves a substantial overhead in terms of storage I/Os, specifically for applications that perform blind writes. Second, SlimDB represents the run IDs using binary encoding. Each run ID therefore comprises log2(K·(L−1)+Z) bits to identify all runs uniquely. Hence, more bits are needed as the number of levels L grows. This is not a problem for SlimDB, as it is designed for systems with a less constrained memory budget. In fact, SlimDB uses additional memory to prevent false positives altogether by storing the full keys of colliding fingerprints in memory. SlimDB also proposes a novel fence pointers format. In contrast, the focus here is on applications with a tighter budget of M bits per entry, where M is a non-increasing small constant.
Under this constraint, Equation (4) denotes the FPR over a single entry with respect to the number of bits per entry M and the run ID size D:

FPR ≥ 2^(−F) = 2^(−M+D)   (4)

By plugging the run ID size in for D, the lower bound simplifies to 2^(−M)·(K·(L−1)+Z), meaning the FPR increases with the number of levels as the run IDs steal bits from the fingerprints.

Chucky is an LSM-based KV-store that scales memory and storage I/Os at the same time. It achieves this by replacing the Bloom filters with a fingerprint filter and innovating along two areas. Chucky keeps the run IDs within the FF up-to-date opportunistically during merge operations at no additional storage I/O cost. Moreover, it allows run IDs to be inherited across merge operations to obviate FF updates and thereby reduce memory I/Os. In this way, Chucky both scales and decouples the costs of updating and querying the FF, as shown in FIG. 1. Chucky may compress run IDs to prevent their size from increasing and taking bits from the fingerprints as the data grows. Thus, Chucky scales the FPR and thereby storage I/Os, as shown in FIG. 1. For both generality and ease of exposition, the details of the FF's collision resolution method are abstracted for now.

FIG. 3 illustrates the architecture of Chucky, which uses a management data structure (MDS) to map each physical entry in the LSM-tree to one MDS entry that may include a fingerprint and a run ID. The figure also illustrates the query and update workflows with solid and dashed lines, respectively. In FIG. 3, keys k_1, k_2 and k_3 reside across various runs but happen to be mapped by the FF's hash function to the same FF bucket. Keys k_2 and k_3 have a colliding fingerprint Y while key k_1 has a different fingerprint X. The application queries key k_3, and so we reach the bucket shown in the figure and traverse its fingerprints, those belonging to younger runs first (i.e., to find the most recent version of the entry). For Run 1, we have a negative, as the fingerprint is different. For Run 2, we have a false positive, leading to a wasted storage I/O. For Run 3, we have a true positive, and so the target entry is returned to the application.

Whenever the LSM-tree's buffer flushes a new batch of application updates to storage, Chucky adds an FF entry for each key in the batch (including for tombstones). For example, consider entry k_1 in FIG. 3, for which there is originally one version at Run 3. A new version of this entry is then flushed to storage as a part of Run 1. As a result, Chucky adds a new FF entry to account for this updated version. This leads to temporary space-amplification (SA), which is later resolved through merging while entries are brought to memory to be sort-merged. This SA is modest, since the LSM-tree's exponential structure restricts the average number of versions per entry (e.g., T/(T−1)<2 with Leveling or Lazy-Leveling). In fact, BFs exhibit exactly the same memory SA, since each version of an entry across different runs' BFs takes up M bits per entry.

For every obsolete entry identified and discarded while merging runs, Chucky removes the corresponding entry from the FF. For every other entry, Chucky updates its run ID to the ID of the new run being created. Hence, Chucky maintains the FF's run IDs without requiring any additional storage I/Os. Furthermore, Chucky allows run IDs to be inherited across merge operations to obviate FF updates and save memory I/Os. It does this by setting the run ID of the j-th oldest run at Level i of the LSM-tree to (i−1)·K+j.
Thus, the run IDs range from 1 to A, where A is the number of runs (from Equation (1)). Effectively, this means that an entry's run ID only changes when the entry is merged into a new level, but not when a given entry stays at the same level after a merge. For example, in FIG. 3, when merging Runs 1, 2 and 3 into a new run at Level 3, the new run also gets assigned a run ID of 3. During the merge operations, entry k_1's older version is identified and removed from the FF, and the run IDs of entry k_2 and of the new version of entry k_1 are updated to 3. However, entry k_3's run ID is kept the same, since the new run inherits the older Run 3's ID.

An application query probes the FF once, while an update accesses it L amortized times (once for each time the updated entry moves into a new level). Table 3 summarizes these properties. Relative to the memory I/O complexities of BFs in Table 1, Chucky reduces querying cost to a constant. Furthermore, it cheapens update cost for greedier merge policies and thereby decouples the memory I/O costs of queries and updates. In this way, Chucky dominates Bloom filters in terms of memory bandwidth.

TABLE 3
Chucky's invocation complexities
                      Leveling    Lazy-Leveling    Tiering
Application query     O(1)        O(1)             O(1)
Application update    O(L)        O(L)             O(L)

TABLE 4
FPR bounds without run ID compression
           Leveling        Lazy-Leveling     Tiering
Uniform    O(2^(−M) · L)   O(2^(−M) · L·T)   O(2^(−M) · L·T)
Optimal    O(2^(−M))       O(2^(−M))         O(2^(−M) · T)

As seen earlier, binary encoded run IDs within FF buckets grow with the data size, thus taking bits from the fingerprints and increasing the false positive rate. To prevent this problem, we now explore in detail how to keep run IDs as small as possible using compression.

Run IDs are extremely compressible because they follow an approximately geometric probability distribution. This is formalized in Equation (5), which denotes p_i as the fraction of user data at Level i of the LSM-tree:

p_i = ((T−1) / T^(L−i)) · (T^(L−1) / (T^L − 1)) ≈ ((T−1)/T) · (1 / T^(L−i))   (5)

A run with ID j resides at Level ⌈j/K⌉ of the LSM-tree. Its frequency is therefore that level's probability p_⌈j/K⌉ (from Equation (5)) divided by the number of runs at that level, A_⌈j/K⌉ (from Equation (1)). Thus, f_j is denoted as the frequency of the j-th run ID in Equation (6):

f_j = p_⌈j/K⌉ / A_⌈j/K⌉   (6)

These probabilities decrease exponentially for runs at smaller levels. Hence, it is possible to represent larger runs' IDs with few bits and smaller runs' IDs with more bits. Since smaller runs' IDs are exponentially less frequent, the average number of bits used to represent a run ID stays small.

To establish a limit on how much run IDs can be compressed, we derive their Shannon entropy, which represents a lower bound on the average number of bits needed to represent items within a given probability distribution. We do so in Equation (7) by stating the definition of entropy over the different run IDs' probabilities, plugging in Equations (1) and (5) for A_i and p_i, respectively, and simplifying. Interestingly, the entropy converges to a constant that is independent of the number of levels and hence does not grow with the data size. The intuition is that the exponential decrease in run ID probabilities for smaller levels trumps the fact that run IDs at smaller levels require more bits to represent uniquely.

H = Σ_{j=1..A, A→∞} −f_j · log2(f_j) = log2( Z^((T−1)/T) · K^(1/T) · T^(T/(T−1)) / (T−1) )   (7)

By plugging Equation (7) in as the run ID length D of Equation (4), we obtain the FPR bounds in Table 4. These bounds hold for any FF for which the number of fingerprints checked per lookup is a small constant (i.e., all FFs to date in practice).
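To make Equations (5) to (7) concrete, the following Python sketch (illustrative only) computes the run ID frequencies for a given configuration and shows the entropy converging, as L grows, to the constant of Equation (7):

import math

def run_id_frequencies(T, K, Z, L):
    A_i = lambda i: Z if i == L else K                       # Equation (1)
    # Exact form of Equation (5): p_i = (T-1)/T^(L-i) * T^(L-1)/(T^L - 1).
    p = lambda i: ((T - 1) / T ** (L - i)) * (T ** (L - 1) / (T ** L - 1))
    freqs = []
    for i in range(1, L + 1):
        freqs += [p(i) / A_i(i)] * A_i(i)                    # Equation (6)
    return freqs

def entropy(freqs):
    return -sum(f * math.log2(f) for f in freqs)             # Equation (7)

# With T=5, K=4, Z=1, the entropy approaches
# log2(Z^((T-1)/T) * K^(1/T) * T^(T/(T-1)) / (T-1)), about 1.30 bits:
for L in (2, 4, 8, 16):
    print(L, round(entropy(run_id_frequencies(5, 4, 1, L)), 4))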
The fact that these bounds are lower than those in Table 2 for optimal BFs reaffirms the approach; an FF with compressed run IDs may be able to match or even improve on BFs in terms of FPR. The following text shows how to do this in practice.

To compress the run IDs in practice, Huffman coding is used. The Huffman encoder takes as input the run IDs along with their probabilities (from Equation (6)). As output, it returns a binary code to represent each run ID, whereby more frequent run IDs are assigned shorter codes. It does so by creating a binary tree from the run IDs by connecting the least probable run IDs first as subtrees. A run ID's ultimate code length corresponds to its depth in the resulting tree.

FIG. 4 illustrates a Lazy-Leveled LSM-tree (with parameters T=5, K=4, Z=1) with labeled run IDs, each with a corresponding frequency from Equation (6). These run IDs and their frequencies are fed into a Huffman encoder to obtain the Huffman tree shown alongside. The code for a run is given by concatenating the tree's edge labels on the path from the root node to the given run ID's leaf node. For instance, the codes for run IDs 4, 8 and 9 are 011011, 010 and 1, respectively.

With Huffman coding, no code is a prefix of another code. This property allows for unique decoding of an input bit stream by traversing the Huffman tree starting at the root until a leaf is reached, outputting the run ID at the given leaf, and then restarting at the root. For example, the input bit stream 11001 gets uniquely decoded into run IDs 9, 9 and 7 based on the Huffman tree in FIG. 4. This property allows all run IDs within a bucket to be uniquely decoded without the need for delimiting symbols.

The encoded run IDs' size is measured using their average code length (ACL) as defined in Equation (8), where l_j is the code length assigned to the j-th run:

ACL = Σ_{j=1..A} l_j · f_j   (8)

For example, this equation computes 1.52 bits for the Huffman tree in FIG. 4. This is a saving of 62% relative to binary encoding, which would require four bits to represent each of the nine run IDs uniquely.

It is well-known in information theory that an upper bound on a Huffman encoding's ACL is the entropy plus one. The intuition for adding one is that each code length is rounded up to an integer. We express this as ACL ≤ H + 1, where H is the entropy from Equation (7). We therefore expect the ACL in our case to converge and become independent of the data size, the same as Equation (7). We verify this in FIG. 5 by increasing the number of levels for the example in FIG. 4 and illustrating the Huffman ACL, which indeed converges. The intuition is that while runs at smaller levels get assigned longer codes, these codes are exponentially less frequent. In contrast, a binary encoding requires more bits to represent all run IDs uniquely. Thus, Huffman encoding allows memory footprint to scale better.

Among compression methods that encode one symbol at a time, Huffman coding is known to be optimal in that it minimizes the ACL. However, the precise ACL is difficult to analyze, because the Huffman tree structure is difficult to predict from the onset. Instead, an even tighter upper bound on Equation (8) than before can be derived by assuming a less generic coding method and observing that the Huffman ACL will be at least as short.
Among compression methods that encode one symbol at a time, Huffman coding is known to be optimal in that it minimizes the ACL. However, the precise ACL is difficult to analyze because the Huffman tree structure is difficult to predict from the outset. Instead, we can derive an even tighter upper bound on Equation (8) than before by assuming a less generic coding method and observing that the Huffman ACL will be at least as short. For example, we can represent each run ID using (1) a unary encoded prefix of length L-i+1 bits to represent Level i, followed by (2) a truncated binary encoding suffix of length about log2(A_i) to represent each of the A_i runs at Level i uniquely. This is effectively a Golomb encoding, which is also applicable to our problem and easier to analyze. However, we focus on Huffman encoding as it allows encoding multiple symbols at a time. We harness this capability momentarily. We derive this encoding's average length in Equation (9) as ACL_UB and illustrate it in FIG. 5 as a reasonably tight upper bound of the Huffman ACL.

ACL_UB = Σ_{i=1}^{L} p_i · (L-i+1+log2(A_i)) = T/(T-1) + log2( Z^((T-1)/T) · K^(1/T) )   (9)
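As a non-limiting illustration of this upper-bound code, consider the following Python sketch. The run-to-level layout matches the earlier sketches, and the truncated binary suffix is simplified here to a fixed width of ceil(log2(A_i)) bits; the function name is our own.

import math

def unary_binary_code(run_id, L, K, Z):
    """Golomb-style upper-bound code behind Equation (9): a unary prefix
    of L - i + 1 bits identifies Level i, then a fixed-width binary
    suffix distinguishes the A_i runs within that level. Assumes K runs
    at each of Levels 1..L-1 and Z runs at Level L."""
    level = L if run_id > (L - 1) * K else (run_id - 1) // K + 1
    runs_at_level = Z if level == L else K
    index = run_id - (L - 1) * K - 1 if level == L else (run_id - 1) % K
    prefix = "0" * (L - level) + "1"                  # unary: L - i + 1 bits
    width = math.ceil(math.log2(runs_at_level)) if runs_at_level > 1 else 0
    return prefix + format(index, "b").zfill(width)   # ~log2(A_i) suffix bits

For the FIG. 4 configuration (L=3, K=4, Z=1), run ID 9 is encoded as the single bit 1, while run ID 4 becomes the prefix 001 followed by the two-bit suffix 11, i.e., five bits in total, consistent with L-i+1+log2(A_i).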
FIG. 5 further plots the entropy of the run IDs' frequency distribution from Equation (7). As shown, there is a gap between the Huffman ACL and the entropy. In fact, in FIG. 6 we show that as we increase the LSM-tree's size ratio T, the gap between the ACL and the entropy grows. (The figure is drawn for a Leveled LSM-tree, i.e., K=1 and Z=1.) The reason is that so far we have been encoding one run ID at a time, meaning that each run ID requires at least one bit to represent with a code. Hence, the ACL cannot drop below one bit per run ID. On the other hand, the entropy continues to drop towards zero as the probability distribution becomes more skewed, since the information content (i.e., the amount of surprise) in the distribution decreases. A general approach in information theory to overcome this limitation is to encode multiple symbols at a time, as we now continue to explore.

A common technique for an FF to achieve a high load factor at a modest FPR sacrifice is to store multiple fingerprints per bucket. We now show how to leverage this FF design decision to collectively encode all run IDs within a bucket to further push compression. FIG. 7 gives an example of how to encode permutations of two run IDs at a time for a Leveled LSM-tree (with two levels and size ratio T of 10). The probability of a permutation is the product of its constituent run IDs' probabilities from Equation (6). For example, the probabilities of permutations 21 and 22 are (10/11)·(1/11) and (10/11)^2, respectively. By feeding all possible run ID permutations of size two along with their probabilities into a Huffman encoder, we obtain the Huffman tree labeled Perms with an ACL of 0.63 in FIG. 7. This is an improvement over encoding one run ID at a time. The intuition for the improvement is that we can represent the most common permutations with fewer bits than the number of symbols in the permutation. FIG. 6 shows that as we increase the permutation size, the ACL of the resulting Huffman tree approaches the entropy.

In the example in FIG. 7, there are two permutations of the same run IDs: 21 and 12. For a query that encounters either permutation, the same lookup process ensues: we check Run 1 for the key (i.e., first the fingerprint and, in case of a positive, also in storage) and if we did not find it we proceed to check Run 2. The fact that both permutations trigger the same process implies that permutations encode redundant information about order. Instead, we can encode combinations of run IDs, as shown in FIG. 7, where the combination 12 replaces the two prior permutations. As there are fewer combinations than permutations (C(S+A-1, S) as opposed to A^S), we need fewer bits to represent them, and so the ACL can drop even lower than before.

To lower bound the ACL with encoded combinations, we derive a new entropy expression H_comb in Equation (10) by subtracting all information about order from our original entropy expression H (from Equation (7)). This order information amounts to log2(S!) bits to permute S run IDs, while binomially discounting log2(j!) bits for any run ID that repeats j times. Since combinations are multinomially distributed, an alternative approach for deriving the same expression is through the entropy function of the multinomial distribution. We divide by S to normalize the expression to be per entry rather than per bucket.

H_comb = H - (1/S) · ( log2(S!) - Σ_{i=1}^{A} Σ_{j=0}^{S} C(S, j) · f_i^j · (1-f_i)^(S-j) · log2(j!) )   (10)

FIG. 8 compares H_comb to H as we increase the number of collectively encoded run IDs. (This example uses a Leveled LSM-tree with T=10, K=1, Z=1 and L=6.) We observe that the more run IDs are collectively encoded, the more H_comb drops, as it eliminates more redundant information about order relative to H.

To use encoded combinations in practice, we must sort the fingerprints within each bucket by their run IDs to be able to identify which fingerprint corresponds to which run ID. To do the actual encoding, we feed all possible combinations along with their probabilities into a Huffman encoder. We express the probability c_prob of a combination c in Equation (11) using the multinomial distribution, where c(j) denotes the number of occurrences of the j'th run ID within the combination. For example, for the combination 12 in FIG. 7, we have S=2, c(1)=1 and c(2)=1. Hence, the probability is 2!·(1/11)·(10/11) = 20/121.

c_prob = S! · Π_{j=1}^{A} ( f_j^c(j) / c(j)! )   (11)

With combinations, the ACL is Σ_{c∈C} l_c · c_prob / S, where C is the set of all combinations and l_c is the code length for combination c (we divide by S to express the ACL per run ID rather than per bucket). We observe that the combinations ACL dominates the permutations ACL in FIG. 8, and that it converges with the combinations entropy as we increase the number of collectively encoded run IDs. In the rest of the paper, we continue with encoded combinations as they achieve the best compression.
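By way of a non-limiting illustration, Equation (11) may be evaluated as in the following Python sketch, which enumerates the C(S+A-1, S) combinations of run IDs along with their multinomial probabilities; the function names are our own.

from itertools import combinations_with_replacement
from math import factorial, prod

def combination_prob(combo, freqs):
    """Equation (11): probability of an unordered combination of run IDs.
    combo is a tuple of run IDs, e.g. (1, 2); freqs maps run ID -> f_j."""
    counts = {rid: combo.count(rid) for rid in set(combo)}
    return factorial(len(combo)) * prod(
        freqs[rid] ** c / factorial(c) for rid, c in counts.items())

def all_combination_probs(freqs, S):
    """All C(S + A - 1, S) size-S combinations with their probabilities."""
    return {combo: combination_prob(combo, freqs)
            for combo in combinations_with_replacement(sorted(freqs), S)}

For the two-level example of FIG. 7 (f_1 = 1/11, f_2 = 10/11, S = 2), this yields probabilities 1/121, 20/121 and 100/121 for the combinations 11, 12 and 22, respectively, which sum to one as expected.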
Aligning Codes with Fingerprints

With run ID codes being variable-length due to compression, aligning them along with fingerprints within FF buckets becomes a challenge. We illustrate this in FIG. 9A by aligning one run ID combination code for two entries along with two five-bit fingerprints (FPs) within sixteen-bit FF buckets. This example is based on the LSM-tree instance in FIG. 4, except we now encode run ID combinations instead of encoding every run ID individually. The term l_{x,y} in the figure is the code length assigned to a bucket with coinciding run IDs x and y. We observe that while some codes and fingerprints perfectly align within a bucket (Row I), others exhibit underflows (Row II) and overflows (Rows III and IV). Underflows occur within buckets with frequent run IDs as a result of having shorter codes. They are undesirable as they waste bits that could have otherwise been used for increasing fingerprint sizes. On the other hand, overflows occur in buckets with less frequent run IDs as a result of having longer codes. They are undesirable as they require storing the rest of the bucket content elsewhere, thereby increasing memory overheads. We illustrate the contention between overflows and underflows in FIG. 10 with the curve labeled uniform fingerprints. The figure is drawn for a Lazy-Leveled LSM-tree with configuration T=5, K=4, Z=1, L=6 and an FF with 32-bit buckets containing 4 entries. The figure varies the maximum allowed fraction of overflowing FF buckets and measures the maximum possible corresponding fingerprint size. As shown, with uniformly sized fingerprints, the fingerprint size has to rapidly decrease to guarantee fewer overflows.

To address this, our insight is that the run ID combination distribution (in Equation (11)) is heavy-tailed, since the underlying run ID distribution is approximately geometric. Our approach is therefore to guarantee that codes and fingerprints perfectly align within the most probable combinations by adjusting their sizes, while allowing all the other combinations along the distribution's heavy tail to overflow. We achieve this in two steps using two complementary techniques: Malleable Fingerprinting (MF) and Fluid Alignment Coding (FAC).

Malleable Fingerprinting (MF)

To facilitate alignment, MF allows entries from different LSM-tree levels to have different fingerprint sizes. However, an individual entry's fingerprint length stays the same even if it gets swapped across buckets by the FF's collision resolution method. This means that no fingerprint bits ever need to be dynamically chopped or added. Once an entry is moved into a new level, MF assigns it a new fingerprint size if needed while it is brought to memory to be sort-merged.

The question that emerges with MF is how to choose a fingerprint length for each level to strike the best possible balance between fingerprint sizes and overflows. We frame this as an integer programming problem, whereby FP_i denotes the (positive integer) length of fingerprints of entries at Level i. The objective is to maximize the average fingerprint size as expressed in Equation (12):

Maximize Σ_{i=1}^{L} FP_i · p_i   (12)

We constrain the problem using an additional parameter NOV for the fraction of non-overflowing buckets we want to guarantee (ideally at least 0.9999). We use this parameter to define C_freq as a subset of C that contains only the most probable run ID combinations in C whose cumulative probabilities fall just above NOV. We add it to the problem in Equation (13) as a constraint requiring that for all c in C_freq, the code length (denoted as l_c) plus the cumulative fingerprint length (denoted as c_FP) does not exceed the number of bits B in the bucket:

∀c ∈ C_freq: c_FP + l_c ≤ B   (13)

While integer programs are NP-complete and thus difficult to globally optimize, we exploit the particular structure of our problem with an effective hill-climbing approach shown in Algorithm 1 (a sketch follows below). The algorithm initializes all fingerprint sizes to zero. It then increases larger levels' fingerprint sizes as much as possible, moving to the next smaller level when the overflow constraint in Equation (13) is violated. The rationale for lengthening larger levels' fingerprints first is that their entries are more frequent. In this way, the algorithm follows the steepest ascent. FIG. 9B shows how MF reduces the severity of underflows (Row II) while at the same time eliminating some overflows (Row III). As a result, it enables better balances between overflows and average fingerprint size, as shown in FIG. 10.
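A non-limiting Python sketch of Algorithm 1 follows. Here freq_combos stands for the level combinations underlying C_freq and code_len for their assigned code lengths; both are stand-ins of ours for inputs defined elsewhere in the text.

def malleable_fingerprint_lengths(L, B, freq_combos, code_len):
    """Hill climbing for Equation (12) under the constraint of
    Equation (13): grow fingerprint lengths greedily, largest (most
    frequent) level first. Returns FP: level -> fingerprint bits."""
    FP = {i: 0 for i in range(1, L + 1)}

    def constraint_holds():
        # Equation (13): fingerprints plus code must fit in the B-bit bucket
        return all(sum(FP[lvl] for lvl in combo) + code_len(combo) <= B
                   for combo in freq_combos)

    for level in range(L, 0, -1):        # steepest ascent: largest level first
        while FP[level] < B:
            FP[level] += 1
            if not constraint_holds():
                FP[level] -= 1           # undo the violating increment
                break
    return FP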
Fluid Alignment Coding (FAC)

FIG. 9B illustrates that even with MF, underflows and overflows can still occur (Rows II and IV, respectively). To further mitigate them, we introduce FAC. FAC exploits a well-known trade-off in information theory: the smaller some codes are set within a prefix code, the longer other codes must be for all codes to remain uniquely decodable. This trade-off is embodied in the Kraft-McMillan inequality, which states that for a given set of code lengths L, all codes can be uniquely decodable if 1 ≥ Σ_{l∈L} 2^-l. The intuition is that code lengths are set from a budget amounting to 1, and that smaller codes consume a higher proportion of this budget. To exploit this trade-off, FAC assigns longer codes that occupy the underflowing bits for very frequent bucket combinations. As a result, the codes for all other bucket combinations can be made shorter. This creates more space in less frequent bucket combinations, which can be exploited to reduce overflows and to increase fingerprint sizes for smaller levels. We illustrate this idea in FIG. 9C. The combination in Row II, which is the most frequent in the system, is now assigned a longer code than before. This allows reducing the code lengths for all other combinations, which in turn allows setting longer fingerprints for entries at Levels 1 and 2 as well as eliminating the bucket overflow in Row IV.

We implement FAC on top of MF as follows. First, we replace the previous overflow constraint (Equation (13)) with a new constraint, shown in Equation (14). Expressed in terms of the Kraft-McMillan inequality, it ensures that the fingerprint sizes stay short enough such that it is still possible to construct non-overflowing buckets with uniquely decodable codes for all combinations in C_freq. Furthermore, it ensures that all other bucket combinations not in C_freq can be uniquely identified using unique codes that are at most the size of a bucket B.

1 ≥ Σ_{c∈C} t(c), where t(c) = 2^-(B-c_FP) for c ∈ C_freq, and t(c) = 2^-B otherwise   (14)

Second, we find the fingerprint lengths using Algorithm 1 under this new constraint; note that Equation (14) does not rely on knowing the Huffman codes in advance (as Equation (13) does), and so we can run the Huffman encoder after rather than before finding the fingerprint lengths with Algorithm 1. Third, we run the Huffman encoder only on combinations in C_freq, while setting the frequency input for a combination c as 2^-(B-c_FP) as opposed to using its multinomial probability (in Equation (11)) as before. This causes the Huffman encoder to generate codes that exactly fill up the leftover bits B-c_FP. Fourth, for all combinations not in C_freq, we set uniformly sized binary codes of size B bits, which consist of a common prefix in the Huffman tree and a unique suffix. In this way, we can identify and decode all codes across both sets uniquely.

FIG. 10 shows that MF and FAC eliminate the contention between overflows and fingerprint size when applied together. In fact, they keep the average fingerprint size close (within half a bit in the figure) to the theoretical maximum, obtained by subtracting the combinations entropy (in Equation (10)) from the number of bits per entry M. We use MF and FAC by default for the rest of the paper.

Algorithm 1's run-time is O(L·M·|C|), where L·M is the number of iterations and |C| is the cost of evaluating the constraint in Equation (14). In addition, the time complexity of the Huffman encoder is O(|C|·log2(|C|)). This workflow is seldom invoked (i.e., only when the number of LSM-tree levels changes), and it can be performed offline. Its run-time is therefore practical (each of the points in FIG. 10 takes a fraction of a second to generate).
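By way of a non-limiting illustration, the feasibility side of Equation (14) can be checked with the following Python sketch, which evaluates the Kraft-McMillan budget given candidate fingerprint lengths; the names are our own.

def fac_constraint_holds(B, freq_combos, cum_fp, num_combos):
    """Equation (14): frequent combinations consume 2^-(B - c_FP) of the
    Kraft-McMillan budget (their codes exactly fill the leftover bits),
    while every other combination consumes 2^-B (a unique B-bit code).

    freq_combos: the combinations in C_freq
    cum_fp(c):   cumulative fingerprint bits of combination c
    num_combos:  |C|, the total number of combinations
    """
    budget = sum(2.0 ** -(B - cum_fp(c)) for c in freq_combos)
    budget += (num_combos - len(freq_combos)) * 2.0 ** -B
    return budget <= 1.0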
Chucky's FPR is tricky to precisely analyze because the fingerprints have variable sizes that are not known from the outset. Instead, we give a conservative approximation to still allow reasoning about system behavior. First, we observe that with FAC, the average code length is always at least one bit per entry, and so we use our upper bound ACL_UB from Equation (9) to slightly overestimate it. Hence, we approximate the average fingerprint size as M-ACL_UB and thus the FPR over a single fingerprint as 2^-(M-ACL_UB). We multiply this expression by a factor of Q, which denotes the average number of fingerprints searched by the underlying FF per probe (e.g., for a Cuckoo filter with four entries per bucket, Q is about 8). Thus, we obtain Equation (15), whose interpretation is the expected number of false positives for a query to a non-existing key. In practice, the actual FPR tends to be off from this expression by a factor of at most two.

FPR_approx = Q · 2^-(M-ACL_UB)   (15)

We now discuss the data structures needed to decode run IDs on application reads and to recode them on writes. Specifically, we show how to prevent these structures from becoming bottlenecks.

Since Huffman codes are variable-length, we cannot generally decode them in constant time (e.g., using a lookup table) as we do not know from the outset how long a given code in question is. Hence, decoding a Huffman code is typically done one bit at a time by traversing the Huffman tree from the root to a given leaf based on the code in question. A possible problem is that if the Huffman tree is large, traversing it can require up to one memory I/O per node visited. To restrict this cost, we again use the insight that the bucket combination distribution in Equation (11) is heavy-tailed. Hence, it is feasible to store a small Huffman tree partition in the CPU caches to allow quick decoding of only the most common combination codes. To control the cached Huffman tree's size, we set the parameter NOV from the last subsection to 0.9999 so that the set of combinations C_freq for which we construct the Huffman tree includes 99.99% of all combinations we expect to encounter. FIG. 11 measures the corresponding tree's size. We continue here with the LSM-tree configuration from FIG. 4. Each Huffman tree node is eight bytes. Since it occupies a few tens of kilobytes, the tree is small enough to fit in the CPU caches. In fact, the figure highlights an important property: as we increase the data size, the cached Huffman tree's size converges. The reason is that the probability of a given bucket combination (in Equation (11)) is convergent with respect to the number of levels, and so any set whose size is defined in terms of its constituent combinations' cumulative probabilities is also convergent in size with respect to the number of levels. This property ensures that the Huffman tree does not exceed the CPU cache size as the data grows.

In addition to the Huffman tree, we use a Decoding Table (DT) in main memory for all other combination codes not in C_freq. To ensure fast decoding speed for the DT, we exploit the property given in the last subsection that all bucket combinations not in C_freq are assigned uniformly sized codes of size B bits. As these codes all have the same size, we know from the outset how many bits to consider, and so we can map these codes to labels in a lookup array as opposed to a tree. This guarantees decoding in at most one memory I/O.
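By way of a non-limiting illustration, the two-tier decoding path may look as follows in Python; the node and table layouts are simplifications of ours.

def decode_bucket_code(bits, huffman_root, decoding_table, B):
    """Walk the cached Huffman tree bit by bit for frequent combination
    codes; if the walk falls off the tree (a combination outside C_freq),
    fall back to the Decoding Table, which maps the bucket's fixed-width
    B-bit code to its combination in a single lookup."""
    node = huffman_root                  # inner nodes: {'0': child, '1': child}
    for consumed, bit in enumerate(bits, start=1):
        node = node.get(bit)
        if node is None:                 # prefix not in the cached tree
            break
        if 'combo' in node:              # leaf: a frequent combination
            return node['combo'], consumed
    return decoding_table[bits[:B]], B   # infrequent: all B bits form the key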
FIG. 11 measures the DT size as we increase the number of levels on the x-axis (each DT entry is eight bytes). As the DT contains about |C| = C(S+A-1, S) entries, its size grows slowly as we increase the number of levels (and thus the number of runs A). We observe that it stays smaller than a megabyte even for a very large LSM-tree instance with ten levels.

To handle bucket overflows, we use a small hash table to map from an overflowing bucket's ID to the corresponding fingerprints. Its size is (1-NOV) = 0.0001 of the FF size. It is accessed seldom, i.e., only for infrequent bucket combinations, and it supports access in O(1) memory I/Os.

To find the correct code for a given combination of run IDs while handling application writes, we employ a Recoding Table (RT). We use a fixed-width format to represent a run ID combination, and so the RT can also be structured as a lookup array. It costs at most one memory I/O to access, and its size scales the same as the Decoding Table in FIG. 11. Note that the most frequent RT entries are in the CPU caches during run-time and thus cost no memory I/Os to access.

FIG. 11 also illustrates the FF size as we increase the number of LSM-tree levels. We observe that all auxiliary data structures are comparatively small, and we have seen that they entail few memory accesses. Thus, Chucky prevents de/recoding from becoming a performance or space bottleneck.

Integration with Cuckoo Filter

We now show how to integrate Chucky with the Cuckoo Filter (CF), which we employ due to its design simplicity and ease of implementation. A CF consists of an array of buckets, each with four fingerprint slots. During insertion, an entry with key x is hashed to two buckets b_1 and b_2 using Equations (16) and (17). A fingerprint of key x is then inserted into whichever bucket has space.

b_1 = hash(x)   (16)
b_2 = b_1 ⊕ hash(x's fingerprint)   (17)

If both buckets are full, however, some fingerprint y from one of these buckets is evicted to clear space. The fingerprint y is swapped into its alternative bucket using Equation (18), which does not rely on the original key (by virtue of using the XOR operator) but only on the fingerprint and the bucket i that currently contains y.

j = i ⊕ hash(y)   (18)

The swapping process continues recursively either until a free bucket slot is found for all fingerprints or until a swapping threshold is reached, at which point the original insertion fails. Querying requires at most two memory I/Os as each entry is mapped to two possible buckets. Henceforth in the paper, we employ a Cuckoo filter with four slots per bucket. Such a tuning is known to be able to reach 95% capacity with high probability without incurring insertion failures and with only 1-2 amortized swaps per insertion.
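Equations (16)-(18) may be illustrated with the following non-limiting Python sketch; the hash function is a stand-in of ours, and num_buckets is assumed to be a power of two so that the XOR keeps indices in range.

import hashlib

def _h(data: bytes) -> int:
    """Stand-in hash; any well-mixed hash function works here."""
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), 'big')

def cuckoo_buckets(key: bytes, fingerprint: int, num_buckets: int):
    """Equations (16) and (17): the entry's two candidate buckets."""
    b1 = _h(key) % num_buckets
    b2 = (b1 ^ _h(fingerprint.to_bytes(8, 'big'))) % num_buckets
    return b1, b2

def alternate_bucket(i: int, fingerprint: int, num_buckets: int):
    """Equation (18): recover the other bucket from the current bucket i
    and the fingerprint alone, with no need for the original key."""
    return (i ^ _h(fingerprint.to_bytes(8, 'big'))) % num_buckets

Because XOR is self-inverse, applying alternate_bucket to either bucket of the pair yields the other, which is what makes key-free swapping of evicted fingerprints possible.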
To implement Chucky on top of the CF, we place a combination code at the start of each CF bucket followed by variable-sized fingerprints. We represent empty fingerprint slots using a reserved all-zero fingerprint coupled with the most frequent run ID to minimize the corresponding combination code length. In addition, we make the following adjustments.

Since the Cuckoo filter relies on the XOR operator to locate an entry's alternative bucket, the number of buckets must be a power of two. This can waste up to 50% of the allotted memory, specifically whenever the LSM-tree's capacity just crosses a power of two. To fix this, we borrow from the Vacuum filter the idea of partitioning a CF into multiple independent CFs, each of which is a power of two in size, but where the overall number of CFs is flexible. In this way, capacity becomes adjustable by varying the number of CFs, and we map each key to one of the constituent CFs using a hash modulo operation. We set each CF to be 8 MB.

When Chucky reaches capacity, it needs to be resized to accommodate new data. However, a CF cannot be resized efficiently. The simplest approach is to rebuild Chucky from scratch when it reaches capacity. However, this approach forces an expensive scan over the dataset to reinsert all entries into the new instance of Chucky. Instead, we exploit the fact that merge operations into the largest level of the LSM-tree pass over the entire dataset. We use this opportunity to also build a new instance of Chucky and thereby obviate the need for an additional scan. We set the size of the new instance of Chucky to be larger by a factor of T/(T-1)·1.05 than the current data size to accommodate data growth until the next full merge and to always maintain 5% spare capacity across all the CFs to prevent insertion failures.

Since Chucky assigns variable fingerprint sizes to entries at different levels, a problem arises whereby the CF can map different versions of an entry from across different levels to more than two CF buckets. We resolve this by ensuring that all fingerprints comprise at least X bits, and we adapt the CF to determine an entry's alternative bucket based on its first X bits. This forces all versions of the same entry to reside in the same pair of CF buckets. In accordance with the Cuckoo filter paper, we set the minimum fingerprint size to 5 bits to ensure that an entry's two buckets are independent enough to achieve a 95% load factor.

Since a CF maps multiple versions of the same entry from different LSM-tree runs into the same pair of CF buckets, a bucket overflow can take place if there are more than eight versions of a given entry. Some FFs can address this problem out-of-the-box using embedded fingerprint counters (e.g., the Counting Quotient Filter). For our CF design, however, we address this issue using an additional hash table (AHT), which maps from bucket IDs to the overflowing entries. With insertion-heavy workloads, the AHT stays empty. Even with update-heavy workloads, the AHT stays small, since the LSM-tree by design limits space-amplification and thus the average number of versions per entry (e.g., at most T/(T-1) ≤ 2 with Leveling or Lazy-Leveling). We check the AHT for every full FF bucket that is encountered during a query or update, thus adding at most O(1) additional memory accesses.

For each run, we persist its entries' fingerprints in storage. During recovery, we read only the fingerprints from storage and thus avoid a full scan over the data. We insert each fingerprint along with its run ID into a brand new CF series at a practically constant amortized memory I/O cost per entry. In this way, recovery is efficient in terms of both storage and memory I/Os.
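Two of the adjustments above lend themselves to short non-limiting Python sketches: deriving the alternative bucket from a fixed number of fingerprint bits, and sizing the rebuilt filter instance. Taking the low-order bits as the "first" X bits is a representation choice of ours.

def bucket_pairing_bits(fingerprint: int, min_bits: int = 5) -> int:
    """Derive the alternative-bucket hash input from only min_bits bits of
    the fingerprint, so that all versions of an entry, whose fingerprint
    lengths vary by level, share one pair of CF buckets."""
    return fingerprint & ((1 << min_bits) - 1)

def new_filter_capacity(current_data_size: int, T: int) -> int:
    """Size of the rebuilt Chucky instance: T/(T-1) * 1.05 times the
    current data size, covering growth until the next full merge plus
    5% spare capacity across the constituent CFs."""
    return int(current_data_size * T / (T - 1) * 1.05)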
Evaluation

We use a machine with 32 GB DDR memory and four 2.7 GHz cores with 8 MB L3 caches running Ubuntu 18.04 LTS and connected to a 512 GB SSD through PCIe. We use our own LSM-tree implementation, designed based on Dostoevsky, which we are gearing towards commercial use. We added as baselines blocked and non-blocked BFs with uniform false positive rates (FPRs) to represent design decisions in RocksDB and Cassandra, respectively. We also support optimal FPRs. The default setup consists of a Lazy-Leveled LSM-tree with a 1 MB buffer, a size ratio of five, and six levels amounting to about 16 GB of data. Each entry is 64 B. There is a 1 GB block cache, and the data structure block size is 4 KB. Chucky uses ten bits per entry and 5% over-provisioned space. Hence, all BF baselines are assigned a factor of 1/0.95 more memory to equalize memory across the baselines. Every point in the figures is an average of three experimental trials. We use a uniform workload distribution to represent worst-case performance and a Zipfian distribution to create skew and illuminate performance properties when the most frequently accessed data is in the block cache.

FIG. 12A compares read/write latency with Chucky against blocked and non-blocked BFs (both with optimal FPRs) with a uniform workload as the data grows. Write latency is measured by dividing the overall time spent on filter maintenance by the number of writes issued by the application. Read latency is measured just before a full merge operation (when there are the most runs in the system) to highlight worst-case performance. Non-blocked BFs exhibit the fastest growing latency as they require multiple memory I/Os per filter across a growing number of filters. We drop non-blocked BFs henceforth in the evaluation as they are noncompetitive. With blocked BFs, read/write latency grows more slowly as they require at most one memory I/O per read or write. Chucky's write latency also grows slowly with data as there are more levels across which run IDs need to be updated. Crucially, we observe that Chucky is the only baseline that is able to keep read latency stable with data size, as each read requires a constant number of memory I/Os.

FIG. 12B stacks read and write latency with Chucky against blocked BFs with different LSM-tree variants. Chucky offers better cost balances across the board, mostly owing to its lower read latency. Nevertheless, Chucky also improves write cost for Leveled LSM-tree designs. The reason is that with Leveling, merging is greedy, and so BFs are rapidly reconstructed, leading to multiple BF insertions per entry per level. In contrast, Chucky always requires just one update per entry per level. Overall, Chucky not only improves the filter read/write cost balances but also makes them independent of the underlying LSM-tree variant. This makes the system easier to reason about and tune.

FIG. 12C compares the FPR for Chucky with both compressed and uncompressed run IDs to blocked BFs with both uniform and optimal space allocation. As we increase the data size, the FPR of Chucky with uncompressed run IDs increases since the run IDs grow and steal bits from the fingerprints. With uniform BFs, the FPR also grows with data size as there are more filters across which false positives can take place. In contrast, with optimal BFs, smaller levels are assigned exponentially lower FPRs, and so the sum of FPRs converges to a constant that is independent of the number of levels. Similarly, Chucky's FPR stays constant as the data grows since the average run ID code length converges, thus allowing most fingerprints to stay large. The figure also includes the FPR model of Chucky from Equation (15) to show that it gives a reasonable approximation of the FPR in practice.

FIG. 12D shows that Chucky requires at least eight bits per entry to work (i.e., for codes and minimum fingerprint sizes). However, with eleven bits per entry and above, Chucky offers better memory/FPR trade-offs than all BF variants. The reason is that BFs are known to exhibit suboptimal space use, which effectively reduces the memory budget by a factor of ln(2). Thus, Chucky scales the FPR better with respect to memory.
To allow Chucky to operate with fewer than eight bits per entry while also keeping the FPR low, it is possible to use a BF at the largest level of the LSM-tree and an FF for all smaller levels. We keep such a design out of scope for now due to space constraints.

FIG. 12F and FIG. 12G measure end-to-end read latency with uniform and Zipfian (with parameter s=1) workloads, respectively. Read latency is broken into three components: (1) storage I/Os, (2) in-memory search across the fence pointers, buffer, and block cache, and (3) filter search. In Part (F), relevant data is most often in storage, and so storage I/Os dominate read cost. Since our SSD is fast, however, the BF probes still impose a significant latency overhead that Chucky is able to eliminate. In Part (G), on the other hand, the workload is skewed, meaning that target data is most often in the block cache. In this case, the BFs become a bottleneck as they must be searched before the relevant block in the cache can be identified. Chucky alleviates this bottleneck, thus significantly improving read latency.

FIG. 12H shows how throughput scales as we increase the data size for a workload consisting of 95% Zipfian reads and 5% Zipfian writes (modeled after Workload B). The BF baselines do not scale well as they issue memory I/Os across a growing number of BFs. Chucky with uncompressed run IDs also exhibits deteriorating performance as its FPR grows and leads to more storage I/Os. Chucky with compressed run IDs also exhibits deteriorating performance, mostly because of the growing cost of the binary search across the fence pointers. However, Chucky provides better throughput with data size than all baselines because it scales the filter's FPR and memory I/Os at the same time.

FIG. 13 illustrates an example of a method 300. Method 300 is for managing a log structured merge (LSM) tree of key value (KV) pairs, where the LSM tree is stored in a non-volatile memory. Method 300 may start by step 310.

Step 310 may include generating or receiving current fingerprints that are indicative of current KV pairs. The current KV pairs are included in a current run. Step 310 may be followed by step 320 of writing the current run from a buffer to a current run location within the LSM tree; the current run may include the current KV pairs. The current run may be sorted.

Steps 310 and 320 may be followed by step 330 of performing a run writing update of a management data structure (MDS) by adding to the MDS mappings between the current KV pairs, the current fingerprints and a current run identifier. The run writing update of the MDS reflects the execution of step 310. Step 330 may be executed without checking for the existence of a previous version of a current KV pair within the LSM tree. Step 330 may be executed regardless of an existence or a lack of existence of a previous version of a current KV pair within the LSM tree. Step 330 may be followed by step 310 and/or 320.

Method 300 may include step 340 of updating the LSM tree by merging at least some runs of the LSM tree. Step 340 may include merging a first run of the LSM tree that may include first KV pairs with a second run of the LSM tree that may include second KV pairs. Step 340 may include adding the second KV pairs to the first run, and wherein the performing of the merge update may include updating run identifiers associated with the second KV pairs while maintaining run identifiers associated with the first KV pairs.
Step 340 may include writing the first KV pairs and the second KV pairs to a third run of the LSM tree, wherein the performing of the merge update may include updating run identifiers associated with the first KV pairs and with the second KV pairs. Step 340 may include deleting a previous version of a KV pair when a newer version of the KV pair includes a value that represents a delete command. Step 340 may include merging at least two runs that belong to different levels of the LSM tree. Step 340 may include merging at least two runs that belong to a same level of the LSM tree. Step 340 may be followed by step 350 of performing a merge update of the MDS to represent the merging. Step 350 may be followed by step 340.

Method 300 may include triggering the merging of runs of one or more layers of the LSM tree whenever a run is written to the non-volatile memory. Method 300 may include triggering the merging of runs of one or more layers of the LSM tree whenever the one or more layers reach a fullness level. The merging may be executed according to any method, such as leveling, lazy-leveling and tiering. The MDS may include multiple buckets, and each bucket may be configured to store metadata related to two or more KV pairs.

Method 300 may include step 360 of receiving a request to access a requested KV pair stored in the non-volatile memory. The access request may be a request to read the requested KV pair. The KV pair is referred to as a requested KV pair because it is included in the request. Step 360 may be followed by step 370 of accessing the MDS, using a key of the requested KV pair, to obtain a location of a relevant run. Step 370 may be followed by step 380 of retrieving the relevant run when a relevant run exists. It should be noted that a dedicated value (tombstone) may be allocated for indicating that a previous KV pair is to be deleted. When the relevant run includes the key with such a dedicated value, the response is that the requested KV pair does not exist in the LSM tree. Step 380 may be followed by waiting to receive a new request and jumping to step 360 when a request is received.

FIG. 14 illustrates an example of a method 400. Method 400 is for managing a log structured merge (LSM) tree of key value (KV) pairs, where the LSM tree is stored in a non-volatile memory. Method 400 may include step 410 of merging runs of the LSM tree to provide merged runs. Method 400 may include step 420 of adding new runs to the LSM tree, wherein the adding may include writing runs to the non-volatile memory.

Step 410 and/or step 420 may be followed by step 430 of updating at least one management data structure (MDS) to reflect the merging and the adding. One MDS of the at least one MDS stores a mapping between keys of the KV pairs of the LSM tree, fingerprints associated with the KV pairs of the LSM tree, and compressed run identifiers that identify runs of the LSM tree. The compressed run identifiers may be compressed using a variable length code such as, but not limited to, a Huffman code. Step 430 may include step 440 of compressing run identifiers, by applying a variable length encoding, to provide the compressed run identifiers.

The LSM tree may include a first layer and a last layer. The first layer is smaller than the last layer. There may be a factor T that defines the ratio between a layer and a previous layer. Step 440 may include allocating compressed run identifiers of runs of the last layer that are shorter than compressed run identifiers of runs of the first layer.
Step 430 may include step 450 of calculating combination run identifier codes that represent combinations of run identifiers. Each combination run identifier code is associated with the fingerprints of each of the run identifiers that form the combination represented by the combination run identifier code. Method 400 may include step 452 of determining, per layer of the LSM tree, a length of each one of the fingerprints. Step 454 may include maximizing a sum, over all layers of the LSM tree, of a product of a length of a fingerprint of the layer and a fraction, out of the LSM tree, of the layer.

Step 430 may include step 456 of storing, within buckets of the MDS, multiple sets, wherein each set may include a combination run identifier code and the fingerprints of each of the run identifiers that form the combination represented by the combination run identifier code. These may provide aligned sets. Step 430 may include step 458 of storing overflow metadata not included in the buckets in an overflow MDS.

Step 450 may include calculating compressed combination run identifier codes that represent combinations of run identifiers. Step 450 may include step 451 of imposing constraints on a minimal length of the compressed combination run identifier code. Step 450 may include step 453 of imposing constraints on a minimal length of a compressed combination run identifier code and determining, per layer of the LSM tree, a length of each one of the fingerprints.

Method 400 may include step 460 of receiving a request to access a requested KV pair stored in the non-volatile memory. The access request may be a request to read the requested KV pair. The KV pair is referred to as a requested KV pair because it is included in the request. Step 460 may be followed by step 470 of accessing the MDS, using a key of the requested KV pair, to obtain a location of a relevant run. This may include obtaining a compressed run ID and decompressing it (decoding it) to provide a non-compressed run ID. Step 470 may be followed by step 480 of retrieving the relevant run when a relevant run exists. It should be noted that a dedicated value (tombstone) may be allocated for indicating that a previous KV pair is to be deleted. When the relevant run includes the key with such a dedicated value, the response is that the requested KV pair does not exist in the LSM tree. Step 480 may be followed by waiting to receive a new request and jumping to step 460 when a request is received.

FIG. 15 illustrates a buffer 10, an SSD 30, a first MDS 50, and a management unit 100 for controlling the writing of runs, maintaining the first MDS, and the like. The management unit may be a controller or a processor, may be hosted by the controller and/or the processor, and the like. It is assumed that many runs are generated and sent to the SSD 30. FIG. 15 illustrates the generation and storage of an n'th run, n being a positive integer that may represent an n'th point of time.

Buffer 10 stores buffered content 12. When the buffer 10 is full (or any other triggering event occurs), a current run 20(n) is sent to the SSD 30. The current run 20(n) includes sorted buffered content that includes current KV pairs with current keys. The SSD stores an SSD content 32. It includes an LSM tree 40 that includes I layers 42(1)-42(I). At the n'th point of time, the LSM tree includes R runs, runs 20(1)-20(R). R is a positive integer. The value of R may change over time. First MDS 50 stores a mapping between keys, fingerprints and run IDs 52. Once the current run is written to the SSD, the first MDS is updated by adding current entries 54. The first MDS 50 already stores (at the n'th point of time) previous entries, one entry per previous key of each run (reflecting the current state of the LSM tree).
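By way of a non-limiting illustration, the run-writing flow of FIG. 15 (and of steps 310-330 of method 300) may be sketched in Python as follows; the ssd.append_run and mds.insert interfaces and the toy fingerprint are hypothetical stand-ins of ours.

def flush_buffer(buffer, ssd, mds, fingerprint_bits=10):
    """Write the buffered KV pairs as the current run 20(n) and update the
    first MDS 50: one (fingerprint, run ID) entry per current key, added
    without checking whether older versions of the keys exist."""
    run = sorted(buffer.items())                  # the current run, sorted by key
    run_id = ssd.append_run(run)                  # hypothetical SSD/LSM interface
    for key, _value in run:
        fp = hash(key) % (1 << fingerprint_bits)  # toy fingerprint of the key
        mds.insert(key, fp, run_id)               # no existence check needed
    buffer.clear()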
FIG. 16 illustrates a merge operation. Of the SSD content 32, a selected level (or a part of the selected level) of the LSM tree is sent to a volatile memory, and a merge operation occurs in which runs of the selected level 42(i) are merged to provide a modified level 42′(i). The modified level may replace the selected level. The merging may be executed between runs of multiple levels. The modification may be executed one part of a run (or one part of a level) after the other. The modification is followed by updating (52) the first MDS 50.

FIG. 17 illustrates the first MDS 50 as including multiple buckets 52(1)-52(S), S being a positive integer. Each bucket may include one or more sets of a fingerprint and a run ID (RUNID); see, for example, fingerprint FP 53(1,1), run ID 54(1,1), fingerprint FP 53(1,2) and run ID 54(1,2) of the first bucket. For yet another example, see fingerprint FP 53(S,1), run ID 54(S,1), fingerprint FP 53(S,2) and run ID 54(S,2). The number of sets per bucket may differ from two.

FIG. 18 illustrates the first MDS 50 as including multiple buckets 52(1)-52(S), S being a positive integer. Each bucket may include one or more sets of a fingerprint and a compressed run ID (C_RUNID); see, for example, fingerprint FP 53(1,1), compressed run ID 55(1,1), fingerprint FP 53(1,2) and compressed run ID 55(1,2) of the first bucket. For yet another example, see fingerprint FP 53(S,1), compressed run ID 55(S,1), fingerprint FP 53(S,2) and compressed run ID 55(S,2).

FIG. 19 illustrates the first MDS 50 as including multiple buckets 52(1)-52(S), S being a positive integer. Each bucket may include one or more sets of fingerprints and a compressed combination run ID (CC_RUNID). A compressed combination run identifier represents a combination of run identifiers. Each compressed combination run identifier is associated with the fingerprints of each of the run identifiers that form the combination represented by the combination run identifier. The compressed combination run identifier and these fingerprints form a set. Each bucket may store multiple sets. See, for example, first bucket 52(1) that stores (a) a first set that includes fingerprints FP 53(1,1) and 53′(1,1) and compressed combination run ID 56(1,1), and (b) a second set that includes fingerprints FP 53(1,2) and 53′(1,2) and compressed combination run ID 56(1,2).

FIG. 20 illustrates underflows and overflows of sets. A set may include fingerprints FP 53(1,1) and 53′(1,1) and compressed combination run ID 56(1,1). FIG. 20 also illustrates a fixed size allocated per set for alignment purposes. FIG. 20 also illustrates an example of using malleable fingerprinting (steps 61, 62 and 63), and also shows a combination of malleable fingerprinting and fluid alignment coding (steps 61, 64 and 65).

FIGS. 21 and 22 illustrate various examples of management data structures and their content: a first MDS 50 that stores a mapping 52′ between keys, fingerprints and compressed run identifiers; a first MDS 50 that stores a mapping 52″ between keys, fingerprints and compressed combination run identifiers; and a combination of the first MDS 50 and a second MDS 70 (for example a decoding table). The first MDS 50 may store a mapping 52″ between keys, fingerprints and compressed combination run identifiers, but only for compressed combination run identifiers that do not exceed a predefined size.
The second MDS stores a mapping between keys, fingerprints and combination run identifiers, but only for combination run identifiers that (in a compressed form) exceed the predefined size. Another illustrated example is a combination of the first MDS 50 and an overflow data structure 72. The first MDS 50 may store a mapping 52″ between keys, fingerprints and compressed combination run identifiers, but any content that may cause a bucket overflow may be stored in the overflow data structure 72.

FIG. 22 also illustrates a recoding table 80 that maps individual run IDs (fields 82(x)) that should be represented by a single compressed combination run ID to their compressed combination run ID (field 84(x)). Index x ranges between 1 and X, X being the number of entries in recoding table 80. X may change over time. The recoding table receives a request to determine the compressed combination run ID and outputs the selected CC_RUNID. The recoding table 80 is provided in addition to the first MDS that stores mapping 52″.

While the foregoing written description of the invention enables one of ordinary skill to make and use what may be considered presently to be the best mode thereof, those of ordinary skill will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The invention should therefore not be limited by the above described embodiment, method, and examples, but by all embodiments and methods within the scope and spirit of the invention as claimed.

In the foregoing specification, the invention has been described with reference to specific examples of embodiments of the invention. It will, however, be evident that various modifications and changes may be made therein without departing from the broader spirit and scope of the invention as set forth in the appended claims. Those skilled in the art will recognize that the boundaries between logic blocks are merely illustrative and that alternative embodiments may merge logic blocks or circuit elements or impose an alternate decomposition of functionality upon various logic blocks or circuit elements. Thus, it is to be understood that the architectures depicted herein are merely exemplary, and that in fact many other architectures may be implemented which achieve the same functionality.

Any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality may be seen as "associated with" each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being "operably connected," or "operably coupled," to each other to achieve the desired functionality.

Furthermore, those skilled in the art will recognize that boundaries between the above described operations are merely illustrative. The multiple operations may be combined into a single operation, a single operation may be distributed in additional operations, and operations may be executed at least partially overlapping in time. Moreover, alternative embodiments may include multiple instances of a particular operation, and the order of operations may be altered in various other embodiments. Also for example, in one embodiment, the illustrated examples may be implemented as circuitry located on a single integrated circuit or within a same device.
Alternatively, the examples may be implemented as any number of separate integrated circuits or separate devices interconnected with each other in a suitable manner. However, other modifications, variations and alternatives are also possible. The specifications and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense.

In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word 'comprising' does not exclude the presence of other elements or steps than those listed in a claim. Furthermore, the terms "a" or "an," as used herein, are defined as one or more than one. Also, the use of introductory phrases such as "at least one" and "one or more" in the claims should not be construed to imply that the introduction of another claim element by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an." The same holds true for the use of definite articles. Unless stated otherwise, terms such as "first" and "second" are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements. The mere fact that certain measures are recited in mutually different claims does not indicate that a combination of these measures cannot be used to advantage.

While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those of ordinary skill in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.

It is appreciated that various features of the embodiments of the disclosure which are, for clarity, described in the contexts of separate embodiments may also be provided in combination in a single embodiment. Conversely, various features of the embodiments of the disclosure which are, for brevity, described in the context of a single embodiment may also be provided separately or in any suitable sub-combination. It will be appreciated by persons skilled in the art that the embodiments of the disclosure are not limited by what has been particularly shown and described hereinabove. Rather the scope of the embodiments of the disclosure is defined by the appended claims and equivalents thereof.
79,445
11860845
DETAILED DESCRIPTION OF THE INVENTION

Various embodiments now will be described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific exemplary embodiments by which the invention may be practiced. The embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the embodiments to those skilled in the art. Among other things, the various embodiments may be methods, systems, media or devices. Accordingly, the various embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. The following detailed description is, therefore, not to be taken in a limiting sense.

Throughout the specification and claims, the following terms take the meanings explicitly associated herein, unless the context clearly dictates otherwise. The phrase "in one embodiment" as used herein does not necessarily refer to the same embodiment, though it may. Furthermore, the phrase "in another embodiment" as used herein does not necessarily refer to a different embodiment, although it may. Thus, as described below, various embodiments may be readily combined, without departing from the scope or spirit of the invention.

In addition, as used herein, the term "or" is an inclusive "or" operator, and is equivalent to the term "and/or," unless the context clearly dictates otherwise. The term "based on" is not exclusive and allows for being based on additional factors not described, unless the context clearly dictates otherwise. In addition, throughout the specification, the meaning of "a," "an," and "the" include plural references. The meaning of "in" includes "in" and "on."

For example embodiments, the following terms are also used herein according to the corresponding meaning, unless the context clearly dictates otherwise.

As used herein the term "engine" refers to logic embodied in hardware or software instructions, which can be written in a programming language, such as C, C++, Objective-C, COBOL, Java™, PHP, Perl, JavaScript, Ruby, VB Script, Microsoft .NET™ languages such as C#, or the like. An engine may be compiled into executable programs or written in interpreted programming languages. Software engines may be callable from other engines or from themselves. Engines described herein refer to one or more logical modules that can be merged with other engines or applications, or can be divided into sub-engines. The engines can be stored in a non-transitory computer-readable medium or computer storage device and be stored on and executed by one or more general purpose computers, thus creating a special purpose computer configured to provide the engine.

As used herein, the term "data source" refers to databases, applications, services, file systems, or the like, that store or provide information for an organization. Examples of data sources may include RDBMS databases, graph databases, spreadsheets, file systems, document management systems, local or remote data streams, or the like. In some cases, data sources are organized around one or more tables or table-like structures. In other cases, data sources may be organized as a graph or graph-like structure.
As used herein the term "data object" refers to one or more data structures that comprise data models. In some cases, data objects may be considered portions of the data model. Data objects may represent individual instances of items or classes or kinds of items.

As used herein the term "configuration information" refers to information that may include rule based policies, pattern matching, scripts (e.g., computer readable instructions), or the like, that may be provided from various sources, including configuration files, databases, user input, built-in defaults, or the like, or combination thereof.

As used herein the term "histogram" refers to a data structure used to track the distribution of a plurality of values for a variable. A variety of implementations are available for a histogram data structure and can include program code or instructions to control access of the histogram data structure. A histogram provides a representation of the distribution of numerical data by providing an estimate of the probability distribution of a continuous variable. To construct a histogram, the first step is to "bin" (or "bucket") the range of values, that is, divide the entire range of values into a series of intervals, and then count how many values fall into each interval. The bins are usually specified as consecutive, non-overlapping intervals of a variable. The bins (intervals) are adjacent, and are often (but not required to be) of equal size. The number of bins may be static or dynamic based upon the variable being tracked. One or more embodiments of histograms may use HDR histograms or a sparse version thereof.

A histogram may be employed to compute metrics for data values included in the bins. If the bins are of equal size, a rectangle is erected over the bin with height proportional to the frequency, i.e., the number of cases in each bin. A histogram may also be normalized to display "relative" frequencies. It then shows the proportion of cases that fall into each of several categories, with the sum of the heights equaling one. Additionally, a histogram may be embodied in a graphical display that represents the distribution of sampled data. A histogram is commonly made from a table such as an array with a plurality of categories, which can inform a count of the sample data in each category.

The following briefly describes embodiments of the invention in order to provide a basic understanding of some aspects of the invention. This brief description is not intended as an extensive overview. It is not intended to identify key or critical elements, or to delineate or otherwise narrow the scope. Its purpose is merely to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.

Briefly stated, various embodiments are directed to data processing using one or more processors that execute one or more instructions to perform actions as described herein. In one or more of the various embodiments, sampled data objects are stored in a tree data structure that is employed to compute histogram information. Each node in the tree includes sufficient statistics and a particular value corresponding to one or more sampled data objects. The nodes of the tree can be of two or more types, such as exponential nodes and regular nodes.
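By way of a non-limiting illustration, one possible shape for such a node is sketched below in Python. The exact sufficient statistics are left open by this description; a count and a running sum are assumed here for concreteness, and the component-wise accumulation mirrors the insert and merge behavior described next.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class HistNode:
    """One histogram-tree node: a value plus (assumed) sufficient
    statistics; 'kind' marks the node type."""
    value: float = 0.0
    count: int = 0
    total: float = 0.0
    kind: str = "regular"                     # "exponential" or "regular"
    children: Dict[int, "HistNode"] = field(default_factory=dict)

    def accumulate(self, other: "HistNode") -> None:
        """Component-wise accumulation used both when inserting a data
        object that maps to an already-populated node and when merging
        two trees: overlapping branches are summed, unique branches kept."""
        assert self.value == other.value
        self.count += other.count
        self.total += other.total
        for key, child in other.children.items():
            if key in self.children:
                self.children[key].accumulate(child)   # overlap: sum
            else:
                self.children[key] = child             # unique branch: kept as-is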
Also, each histogram tree may be precalculated to start with an empty exponential node, and a defined number of regular nested nodes that correlate to a precision value for the histogram, i.e., a number of significant figures for sampled data values that can be stored in the tree. Additionally, when the histogram tree is not empty and a new data object is inserted into the tree that corresponds to one or more nodes that are already populated, the sufficient statistics and values at each such node are added together, component-wise, and, for any unique data values not present in populated nodes, the new nodes necessary to represent the newly inserted data object are created. Furthermore, to merge two populated histogram trees together, if a branch is unique between the two trees, it appears in the result; if there is an overlap, each overlapping node is represented in the output by a node with the same value, whose sufficient statistics are the sum of the relevant counts.

Using a tree data structure, all of the sufficient statistics for a histogram can be computed roughly logarithmically (about ten times) faster than a traditional computation of histogram statistics based on data objects stored in an array. It is noteworthy that the size of a tree data structure storing sampled data objects for computation of a histogram is typically a small fraction of the size of an array typically used to store raw data objects to compute a histogram.

Illustrated Operating Environment

FIG. 1 shows components of one embodiment of an environment in which embodiments of the invention may be practiced. Not all of the components may be required to practice the invention, and variations in the arrangement and type of the components may be made without departing from the spirit or scope of the invention. As shown, system 100 of FIG. 1 includes local area networks (LANs)/wide area networks (WANs)—(network) 110, wireless network 108, client computers 102-105, data source server computer 116, or the like.

At least one embodiment of client computers 102-105 is described in more detail below in conjunction with FIG. 2. In one embodiment, at least some of client computers 102-105 may operate over one or more wired or wireless networks, such as networks 108 or 110. Generally, client computers 102-105 may include virtually any computer capable of communicating over a network to send and receive information, perform various online activities, offline actions, or the like. In one embodiment, one or more of client computers 102-105 may be configured to operate within a business or other entity to perform a variety of services for the business or other entity. For example, client computers 102-105 may be configured to operate as a web server, firewall, client application, media player, mobile telephone, game console, desktop computer, or the like. However, client computers 102-105 are not constrained to these services and may also be employed, for example, for end-user computing in other embodiments. It should be recognized that more or fewer client computers (than shown in FIG. 1) may be included within a system such as described herein, and embodiments are therefore not constrained by the number or type of client computers employed.

Computers that may operate as client computer 102 may include computers that typically connect using a wired or wireless communications medium, such as personal computers, multiprocessor systems, microprocessor-based or programmable electronic devices, network PCs, or the like.
In some embodiments, client computers102-105may include virtually any portable computer capable of connecting to another computer and receiving information, such as laptop computer103, mobile computer104, tablet computers105, or the like. However, portable computers are not so limited and may also include other portable computers such as cellular telephones, display pagers, radio frequency (RF) devices, infrared (IR) devices, Personal Digital Assistants (PDAs), handheld computers, wearable computers, integrated devices combining one or more of the preceding computers, or the like. As such, client computers102-105typically range widely in terms of capabilities and features. Moreover, client computers102-105may access various computing applications, including a browser or other web-based application. A web-enabled client computer may include a browser application that is configured to send requests and receive responses over the web. The browser application may be configured to receive and display graphics, text, multimedia, and the like, employing virtually any web-based language. In one embodiment, the browser application is enabled to employ JavaScript, HyperText Markup Language (HTML), eXtensible Markup Language (XML), JavaScript Object Notation (JSON), Cascading Style Sheets (CSS), or the like, or combination thereof, to display and send a message. In one embodiment, a user of the client computer may employ the browser application to perform various activities over a network (online). However, another application may also be used to perform various online activities. Client computers102-105also may include at least one other client application that is configured to receive or send content to or from another computer. The client application may include a capability to send or receive content, or the like. The client application may further provide information that identifies itself, including a type, capability, name, and the like. In one embodiment, client computers102-105may uniquely identify themselves through any of a variety of mechanisms, including an Internet Protocol (IP) address, a phone number, Mobile Identification Number (MIN), an electronic serial number (ESN), a client certificate, or other device identifier. Such information may be provided in one or more network packets, or the like, sent between other client computers, data source server computer116, or other computers. Client computers102-105may further be configured to include a client application that enables an end-user to log into an end-user account that may be managed by another computer, such as data source server computer116, or the like. Such an end-user account, in one non-limiting example, may be configured to enable the end-user to manage one or more online activities, including in one non-limiting example, project management, software development, system administration, configuration management, search activities, social networking activities, browsing various websites, communicating with other users, or the like. Also, client computers may be arranged to enable users to display reports, interactive user-interfaces, or results provided by data source server computer116. Wireless network108is configured to couple client computers103-105and their components with network110. Wireless network108may include any of a variety of wireless sub-networks that may further overlay stand-alone ad-hoc networks, and the like, to provide an infrastructure-oriented connection for client computers103-105. 
Such sub-networks may include mesh networks, Wireless LAN (WLAN) networks, cellular networks, and the like. In one embodiment, the system may include more than one wireless network. Wireless network108may further include an autonomous system of terminals, gateways, routers, and the like connected by wireless radio links, and the like. These connectors may be configured to move freely and randomly and organize themselves arbitrarily, such that the topology of wireless network108may change rapidly. Wireless network108may further employ a plurality of access technologies including 2nd (2G), 3rd (3G), 4th (4G), and 5th (5G) generation radio access for cellular systems, WLAN, Wireless Router (WR) mesh, and the like. Access technologies such as 2G, 3G, 4G, 5G, and future access networks may enable wide area coverage for mobile computers, such as client computers103-105with various degrees of mobility. In one non-limiting example, wireless network108may enable a radio connection through a radio network access such as Global System for Mobile communication (GSM), General Packet Radio Services (GPRS), Enhanced Data GSM Environment (EDGE), code division multiple access (CDMA), time division multiple access (TDMA), Wideband Code Division Multiple Access (WCDMA), High Speed Downlink Packet Access (HSDPA), Long Term Evolution (LTE), and the like. In essence, wireless network108may include virtually any wireless communication mechanism by which information may travel between client computers103-105and another computer, network, a cloud-based network, a cloud instance, or the like. Network110is configured to couple network computers with other computers, including, data source server computer116, client computers102, and client computers103-105through wireless network108, or the like. Network110is enabled to employ any form of computer readable media for communicating information from one electronic device to another. Also, network110can include the Internet in addition to local area networks (LANs), wide area networks (WANs), direct connections, such as through a universal serial bus (USB) port, Ethernet port, other forms of computer-readable media, or any combination thereof. On an interconnected set of LANs, including those based on differing architectures and protocols, a router acts as a link between LANs, enabling messages to be sent from one to another. In addition, communication links within LANs typically include twisted wire pair or coaxial cable, while communication links between networks may utilize analog telephone lines, full or fractional dedicated digital lines including T1, T2, T3, and T4, or other carrier mechanisms including, for example, E-carriers, Integrated Services Digital Networks (ISDNs), Digital Subscriber Lines (DSLs), wireless links including satellite links, or other communications links known to those skilled in the art. Moreover, communication links may further employ any of a variety of digital signaling technologies, including without limit, for example, DS-0, DS-1, DS-2, DS-3, DS-4, OC-3, OC-12, OC-48, or the like. Furthermore, remote computers and other related electronic devices could be remotely connected to either LANs or WANs via a modem and temporary telephone link. In one embodiment, network110may be configured to transport information of an Internet Protocol (IP). 
Additionally, communication media typically embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal or other transport mechanism, and includes any information delivery media, whether non-transitory or transitory. By way of example, communication media includes wired media such as twisted pair, coaxial cable, fiber optics, wave guides, and other wired media and wireless media such as acoustic, RF, infrared, and other wireless media. Also, one embodiment of data source server computer116is described in more detail below in conjunction withFIG.3. AlthoughFIG.1illustrates data source server computer116, or the like, as a single computer, the innovations or embodiments are not so limited. For example, one or more functions of data source server computer116, or the like, may be distributed across one or more distinct network computers. Moreover, in one or more embodiments, data source server computer116may be implemented using a plurality of network computers. Further, in one or more of the various embodiments, data source server computer116, or the like, may be implemented using one or more cloud instances in one or more cloud networks. Accordingly, these innovations and embodiments are not to be construed as being limited to a single environment, and other configurations, and other architectures are also envisaged. Illustrative Client Computer FIG.2shows one embodiment of client computer200that may include many more or fewer components than those shown. Client computer200may represent, for example, one or more embodiments of mobile computers or client computers shown inFIG.1. Client computer200may include processor202in communication with memory204via bus228. Client computer200may also include power supply230, network interface232, audio interface256, display250, keypad252, illuminator254, video interface242, input/output interface238, haptic interface264, global positioning systems (GPS) receiver258, open air gesture interface260, temperature interface262, camera(s)240, projector246, pointing device interface266, processor-readable stationary storage device234, and processor-readable removable storage device236. Client computer200may optionally communicate with a base station (not shown), or directly with another computer. And in one embodiment, although not shown, a gyroscope may be employed within client computer200to measure or maintain an orientation of client computer200. Power supply230may provide power to client computer200. A rechargeable or non-rechargeable battery may be used to provide power. The power may also be provided by an external power source, such as an AC adapter or a powered docking cradle that supplements or recharges the battery. Network interface232includes circuitry for coupling client computer200to one or more networks, and is constructed for use with one or more communication protocols and technologies including, but not limited to, protocols and technologies that implement any portion of the OSI model, global system for mobile communication (GSM), CDMA, time division multiple access (TDMA), UDP, TCP/IP, SMS, MMS, GPRS, WAP, UWB, WiMax, SIP/RTP, EDGE, WCDMA, LTE, UMTS, OFDM, CDMA2000, EV-DO, HSDPA, or any of a variety of other wireless communication protocols. Network interface232is sometimes known as a transceiver, transceiving device, or network interface card (NIC). Audio interface256may be arranged to produce and receive audio signals such as the sound of a human voice. 
For example, audio interface256may be coupled to a speaker and microphone (not shown) to enable telecommunication with others or generate an audio acknowledgment for some action. A microphone in audio interface256can also be used for input to or control of client computer200, e.g., using voice recognition, detecting touch based on sound, and the like. Display250may be a liquid crystal display (LCD), gas plasma, electronic ink, light emitting diode (LED), Organic LED (OLED) or any other type of light reflective or light transmissive display that can be used with a computer. Display250may also include a touch interface244arranged to receive input from an object such as a stylus or a digit from a human hand, and may use resistive, capacitive, surface acoustic wave (SAW), infrared, radar, or other technologies to sense touch or gestures. Projector246may be a remote handheld projector or an integrated projector that is capable of projecting an image on a remote wall or any other reflective object such as a remote screen. Video interface242may be arranged to capture video images, such as a still photo, a video segment, an infrared video, or the like. For example, video interface242may be coupled to a digital video camera, a web-camera, or the like. Video interface242may comprise a lens, an image sensor, and other electronics. Image sensors may include a complementary metal-oxide-semiconductor (CMOS) integrated circuit, charge-coupled device (CCD), or any other integrated circuit for sensing light. Keypad252may comprise any input device arranged to receive input from a user. For example, keypad252may include a push button numeric dial, or a keyboard. Keypad252may also include command buttons that are associated with selecting and sending images. Illuminator254may provide a status indication or provide light. Illuminator254may remain active for specific periods of time or in response to event messages. For example, when illuminator254is active, it may back-light the buttons on keypad252and stay on while the client computer is powered. Also, illuminator254may back-light these buttons in various patterns when particular actions are performed, such as dialing another client computer. Illuminator254may also cause light sources positioned within a transparent or translucent case of the client computer to illuminate in response to actions. Further, client computer200may also comprise hardware security module (HSM)268for providing additional tamper resistant safeguards for generating, storing or using security/cryptographic information such as, keys, digital certificates, passwords, passphrases, two-factor authentication information, or the like. In some embodiments, hardware security module may be employed to support one or more standard public key infrastructures (PKI), and may be employed to generate, manage, or store key pairs, or the like. In some embodiments, HSM268may be a stand-alone computer; in other cases, HSM268may be arranged as a hardware card that may be added to a client computer. Client computer200may also comprise input/output interface238for communicating with external peripheral devices or other computers such as other client computers and network computers. The peripheral devices may include an audio headset, virtual reality headsets, display screen glasses, remote speaker system, remote speaker and microphone system, and the like. Input/output interface238can utilize one or more technologies, such as Universal Serial Bus (USB), Infrared, WiFi, WiMax, Bluetooth™, and the like. 
Input/output interface238may also include one or more sensors for determining geolocation information (e.g., GPS), monitoring electrical power conditions (e.g., voltage sensors, current sensors, frequency sensors, and so on), monitoring weather (e.g., thermostats, barometers, anemometers, humidity detectors, precipitation scales, or the like), or the like. Sensors may be one or more hardware sensors that collect or measure data that is external to client computer200. Haptic interface264may be arranged to provide tactile feedback to a user of the client computer. For example, the haptic interface264may be employed to vibrate client computer200in a particular way when another user of a computer is calling. Temperature interface262may be used to provide a temperature measurement input or a temperature changing output to a user of client computer200. Open air gesture interface260may sense physical gestures of a user of client computer200, for example, by using single or stereo video cameras, radar, a gyroscopic sensor inside a computer held or worn by the user, or the like. Camera240may be used to track physical eye movements of a user of client computer200. GPS transceiver258can determine the physical coordinates of client computer200on the surface of the Earth, which typically outputs a location as latitude and longitude values. GPS transceiver258can also employ other geo-positioning mechanisms, including, but not limited to, triangulation, assisted GPS (AGPS), Enhanced Observed Time Difference (E-OTD), Cell Identifier (CI), Service Area Identifier (SAI), Enhanced Timing Advance (ETA), Base Station Subsystem (BSS), or the like, to further determine the physical location of client computer200on the surface of the Earth. It is understood that under different conditions, GPS transceiver258can determine a physical location for client computer200. In one or more embodiments, however, client computer200may, through other components, provide other information that may be employed to determine a physical location of the client computer, including for example, a Media Access Control (MAC) address, IP address, and the like. In at least one of the various embodiments, applications, such as, operating system206, client query engine222, other client apps224, web browser226, or the like, may be arranged to employ geo-location information to select one or more localization features, such as, time zones, languages, currencies, calendar formatting, or the like. Localization features may be used in display objects, data models, data objects, user-interfaces, reports, as well as internal processes or databases. In at least one of the various embodiments, geo-location information used for selecting localization information may be provided by GPS258. Also, in some embodiments, geolocation information may include information provided using one or more geolocation protocols over the networks, such as, wireless network108or network110. Human interface components can be peripheral devices that are physically separate from client computer200, allowing for remote input or output to client computer200. For example, information routed as described here through human interface components such as display250or keypad252can instead be routed through network interface232to appropriate human interface components located remotely. Examples of human interface peripheral components that may be remote include, but are not limited to, audio devices, pointing devices, keypads, displays, cameras, projectors, and the like. 
These peripheral components may communicate over a Pico Network such as Bluetooth™, Zigbee™ and the like. One non-limiting example of a client computer with such peripheral human interface components is a wearable computer, which might include a remote pico projector along with one or more cameras that remotely communicate with a separately located client computer to sense a user's gestures toward portions of an image projected by the pico projector onto a reflective surface such as a wall or the user's hand. A client computer may include web browser application226that is configured to receive and to send web pages, web-based messages, graphics, text, multimedia, and the like. The client computer's browser application may employ virtually any programming language, including wireless application protocol (WAP) messages, and the like. In one or more embodiments, the browser application is enabled to employ Handheld Device Markup Language (HDML), Wireless Markup Language (WML), WMLScript, JavaScript, Standard Generalized Markup Language (SGML), HyperText Markup Language (HTML), eXtensible Markup Language (XML), HTML5, and the like. Memory204may include RAM, ROM, or other types of memory. Memory204illustrates an example of computer-readable storage media (devices) for storage of information such as computer-readable instructions, data structures, program modules or other data. Memory204may store BIOS208for controlling low-level operation of client computer200. The memory may also store operating system206for controlling the operation of client computer200. It will be appreciated that this component may include a general-purpose operating system such as a version of UNIX, or LINUX™, or a specialized client computer communication operating system such as Windows Phone™, or the Symbian® operating system. The operating system may include, or interface with a Java virtual machine module that enables control of hardware components or operating system operations via Java application programs. Memory204may further include one or more data storage210, which can be utilized by client computer200to store, among other things, applications220or other data. For example, data storage210may also be employed to store information that describes various capabilities of client computer200. The information may then be provided to another device or computer based on any of a variety of methods, including being sent as part of a header during a communication, sent upon request, or the like. Data storage210may also be employed to store social networking information including address books, buddy lists, aliases, user profile information, or the like. Data storage210may further include program code, data, algorithms, and the like, for use by a processor, such as processor202to execute and perform actions. In one embodiment, at least some of data storage210might also be stored on another component of client computer200, including, but not limited to, non-transitory processor-readable removable storage device236, processor-readable stationary storage device234, or even external to the client computer. Applications220may include computer executable instructions which, when executed by client computer200, transmit, receive, or otherwise process instructions and data. Applications220may include, for example, client query engine222, other client applications224, web browser226, or the like. Client computers may be arranged to exchange communications with one or more servers. 
Other examples of application programs include calendars, search programs, email client applications, IM applications, SMS applications, Voice Over Internet Protocol (VOIP) applications, contact managers, task managers, transcoders, database programs, word processing programs, security applications, spreadsheet programs, games, search programs, visualization applications, and so forth. Additionally, in one or more embodiments (not shown in the figures), client computer200may include an embedded logic hardware device instead of a CPU, such as, an Application Specific Integrated Circuit (ASIC), Field Programmable Gate Array (FPGA), Programmable Array Logic (PAL), or the like, or combination thereof. The embedded logic hardware device may directly execute its embedded logic to perform actions. Also, in one or more embodiments (not shown in the figures), client computer200may include one or more hardware micro-controllers instead of CPUs. In one or more embodiments, the one or more micro-controllers may directly execute their own embedded logic to perform actions and access their own internal memory and their own external Input and Output Interfaces (e.g., hardware pins or wireless transceivers) to perform actions, such as System On a Chip (SOC), or the like. Illustrative Network Computer FIG.3shows one embodiment of network computer300that may be included in a system implementing one or more of the various embodiments. Network computer300may include many more or fewer components than those shown inFIG.3. However, the components shown are sufficient to disclose an illustrative embodiment for practicing these innovations. Network computer300may represent, for example, one embodiment of data source server computer116, or the like, ofFIG.1. Network computers, such as, network computer300may include a processor302that may be in communication with a memory304via a bus328. In some embodiments, processor302may be comprised of one or more hardware processors, or one or more processor cores. In some cases, one or more of the one or more processors may be specialized processors designed to perform one or more specialized actions, such as, those described herein. Network computer300also includes a power supply330, network interface332, audio interface356, display350, keyboard352, input/output interface338, processor-readable stationary storage device334, and processor-readable removable storage device336. Power supply330provides power to network computer300. Network interface332includes circuitry for coupling network computer300to one or more networks, and is constructed for use with one or more communication protocols and technologies including, but not limited to, protocols and technologies that implement any portion of the Open Systems Interconnection model (OSI model), global system for mobile communication (GSM), code division multiple access (CDMA), time division multiple access (TDMA), user datagram protocol (UDP), transmission control protocol/Internet protocol (TCP/IP), Short Message Service (SMS), Multimedia Messaging Service (MMS), general packet radio service (GPRS), WAP, ultra-wide band (UWB), IEEE 802.16 Worldwide Interoperability for Microwave Access (WiMax), Session Initiation Protocol/Real-time Transport Protocol (SIP/RTP), or any of a variety of other wired and wireless communication protocols. Network interface332is sometimes known as a transceiver, transceiving device, or network interface card (NIC). 
Network computer300may optionally communicate with a base station (not shown), or directly with another computer. Audio interface356is arranged to produce and receive audio signals such as the sound of a human voice. For example, audio interface356may be coupled to a speaker and microphone (not shown) to enable telecommunication with others or generate an audio acknowledgment for some action. A microphone in audio interface356can also be used for input to or control of network computer300, for example, using voice recognition. Display350may be a liquid crystal display (LCD), gas plasma, electronic ink, light emitting diode (LED), Organic LED (OLED) or any other type of light reflective or light transmissive display that can be used with a computer. In some embodiments, display350may be a handheld projector or pico projector capable of projecting an image on a wall or other object. Network computer300may also comprise input/output interface338for communicating with external devices or computers not shown inFIG.3. Input/output interface338can utilize one or more wired or wireless communication technologies, such as USB™, Firewire™, WiFi, WiMax, Thunderbolt™, Infrared, Bluetooth™, Zigbee™, serial port, parallel port, and the like. Input/output interface338may also include one or more sensors for determining geolocation information (e.g., GPS), monitoring electrical power conditions (e.g., voltage sensors, current sensors, frequency sensors, and so on), monitoring weather (e.g., thermostats, barometers, anemometers, humidity detectors, precipitation scales, or the like), or the like. Sensors may be one or more hardware sensors that collect or measure data that is external to network computer300. Human interface components can be physically separate from network computer300, allowing for remote input or output to network computer300. For example, information routed as described here through human interface components such as display350or keyboard352can instead be routed through the network interface332to appropriate human interface components located elsewhere on the network. Human interface components include any component that allows the computer to take input from, or send output to, a human user of a computer. Accordingly, pointing devices such as mice, styluses, track balls, or the like, may communicate through pointing device interface358to receive user input. GPS transceiver340can determine the physical coordinates of network computer300on the surface of the Earth, which typically outputs a location as latitude and longitude values. GPS transceiver340can also employ other geo-positioning mechanisms, including, but not limited to, triangulation, assisted GPS (AGPS), Enhanced Observed Time Difference (E-OTD), Cell Identifier (CI), Service Area Identifier (SAI), Enhanced Timing Advance (ETA), Base Station Subsystem (BSS), or the like, to further determine the physical location of network computer300on the surface of the Earth. It is understood that under different conditions, GPS transceiver340can determine a physical location for network computer300. In one or more embodiments, however, network computer300may, through other components, provide other information that may be employed to determine a physical location of the network computer, including for example, a Media Access Control (MAC) address, IP address, and the like. 
In at least one of the various embodiments, applications, such as, operating system306, data engine322, other applications329, or the like, may be arranged to employ geo-location information to select one or more localization features, such as, time zones, languages, currencies, currency formatting, calendar formatting, or the like. Localization features may be used in user interfaces, dashboards, visualizations, reports, as well as internal processes or databases. In at least one of the various embodiments, geo-location information used for selecting localization information may be provided by GPS340. Also, in some embodiments, geolocation information may include information provided using one or more geolocation protocols over the networks, such as, wireless network108or network110. Memory304may include Random Access Memory (RAM), Read-Only Memory (ROM), or other types of memory. Memory304illustrates an example of computer-readable storage media (devices) for storage of information such as computer-readable instructions, data structures, program modules or other data. Memory304stores a basic input/output system (BIOS)308for controlling low-level operation of network computer300. The memory also stores an operating system306for controlling the operation of network computer300. It will be appreciated that this component may include a general-purpose operating system such as a version of UNIX, or LINUX, or a specialized operating system such as Microsoft Corporation's Windows® operating system, or the Apple Corporation's OSX® operating system. The operating system may include, or interface with one or more virtual machine modules, such as, a Java virtual machine module that enables control of hardware components or operating system operations via Java application programs. Likewise, other runtime environments may be included. Memory304may further include one or more data storage310, which can be utilized by network computer300to store, among other things, applications320or other data. For example, data storage310may also be employed to store information that describes various capabilities of network computer300. The information may then be provided to another device or computer based on any of a variety of methods, including being sent as part of a header during a communication, sent upon request, or the like. Data storage310may also be employed to store social networking information including address books, buddy lists, aliases, user profile information, or the like. Data storage310may further include program code, data, algorithms, and the like, for use by a processor, such as processor302to execute and perform actions such as those actions described below. In one embodiment, at least some of data storage310might also be stored on another component of network computer300, including, but not limited to, non-transitory media inside processor-readable removable storage device336, processor-readable stationary storage device334, or any other computer-readable storage device within network computer300, or even external to network computer300. Data storage310may include, for example, data models314, data sources316, data catalogs318, or the like. 
Applications320may include computer executable instructions which, when executed by network computer300, transmit, receive, or otherwise process messages (e.g., SMS, Multimedia Messaging Service (MMS), Instant Message (IM), email, or other messages), audio, video, and enable telecommunication with another user of another mobile computer. Other examples of application programs include calendars, search programs, email client applications, IM applications, SMS applications, Voice Over Internet Protocol (VOIP) applications, contact managers, task managers, transcoders, database programs, word processing programs, security applications, spreadsheet programs, games, search programs, and so forth. Applications320may include data engine322, other applications329, or the like, that may be arranged to perform actions for embodiments described below. In one or more of the various embodiments, one or more of the applications may be implemented as modules or components of another application. Further, in one or more of the various embodiments, applications may be implemented as operating system extensions, modules, plugins, or the like. Furthermore, in one or more of the various embodiments, data engine322, other applications329, or the like, may be operative in a cloud-based computing environment. In one or more of the various embodiments, these applications, and others, that comprise the management platform may be executing within virtual machines or virtual servers that may be managed in a cloud-based computing environment. In one or more of the various embodiments, in this context the applications may flow from one physical network computer within the cloud-based environment to another depending on performance and scaling considerations automatically managed by the cloud computing environment. Likewise, in one or more of the various embodiments, virtual machines or virtual servers dedicated to data engine322, other applications329, or the like, may be provisioned and de-commissioned automatically. Also, in one or more of the various embodiments, data engine322, other applications329, or the like, may be located in virtual servers running in a cloud-based computing environment rather than being tied to one or more specific physical network computers. Further, network computer300may also comprise hardware security module (HSM)360for providing additional tamper resistant safeguards for generating, storing or using security/cryptographic information such as, keys, digital certificates, passwords, passphrases, two-factor authentication information, or the like. In some embodiments, hardware security module may be employed to support one or more standard public key infrastructures (PKI), and may be employed to generate, manage, or store key pairs, or the like. In some embodiments, HSM360may be a stand-alone network computer; in other cases, HSM360may be arranged as a hardware card that may be installed in a network computer. Additionally, in one or more embodiments (not shown in the figures), network computer300may include an embedded logic hardware device instead of a CPU, such as, an Application Specific Integrated Circuit (ASIC), Field Programmable Gate Array (FPGA), Programmable Array Logic (PAL), or the like, or combination thereof. The embedded logic hardware device may directly execute its embedded logic to perform actions. Also, in one or more embodiments (not shown in the figures), the network computer may include one or more hardware microcontrollers instead of a CPU. 
In one or more embodiments, the one or more microcontrollers may directly execute their own embedded logic to perform actions and access their own internal memory and their own external Input and Output Interfaces (e.g., hardware pins or wireless transceivers) to perform actions, such as System On a Chip (SOC), or the like. Illustrative Logical System Architecture FIG.4illustrates a logical architecture of system400for applications that employ histograms to generate statistics, metrics and visualizations for sampled data from data streams and/or massive datasets. In one or more of the various embodiments, system400may be arranged to include one or more data sources, such as, data source402, one or more data engines, such as, data engine404, one or more histogram tree data structures, such as, histogram tree data structure406, one or more query engines, such as query engine408, or the like. In one or more of the various embodiments, data source402may be arranged to store one or more data objects. In one or more of the various embodiments, data source402may be a database, file system, repository, document management system, or the like. In one or more of the various embodiments, data engine404may be arranged to generate one or more histogram tree data structures406that store sampled data objects from data source402. Accordingly, in one or more of the various embodiments, data engine404may be arranged to sample data objects that are provided as results to queries by query engine408. The sampled data objects may be employed to generate one or more entries and sufficient statistics at nodes of histogram tree data structures406. In one or more of the various embodiments, data engine404may be arranged to selectively generate one or more histogram tree data structures406that include the sampled data objects. In some embodiments, data engine404may be arranged to initially generate one or more unpopulated histogram tree data structures406off-line or otherwise in preparation for subsequent query activity. Also, in one or more of the various embodiments, data engine404may be arranged to generate one or more histogram tree data structures406on-the-fly as they may be needed for responding to queries of sampled data objects. In one or more of the various embodiments, nodes of histogram tree data structures406may be arranged by a precision, i.e., the number of significant figures stored for a value of a data object, along with sufficient statistics at corresponding nodes of the histogram tree data structures406. In one or more of the various embodiments, data engine404may be arranged to generate sufficient statistics at each node, such as a number of data values inserted at a node, a count of data values inserted at the node, and a sum of data values inserted at the node. In one or more of the various embodiments, query engine408may be arranged to answer data source queries, or the like. In some embodiments, query engine408may be considered to be part of a larger database engine or query planner designed for processing database table joins, another service or applications, or the like. Also, in one or more of the various embodiments, query engine408may be arranged to provide query information that includes identity information for one or more sampled data objects. 
Further, in one or more of the various embodiments, query engine408may be enabled to employ data engine404and histogram tree data structures406to determine whether to include one or more sampled data objects in a result set (or query plan) rather than having to scan the data source directly. Note, while database operations and network firewalls are presented herein as use cases, one of ordinary skill in the art will appreciate that set membership testing may be advantageous to many applications or problem domains. Accordingly, for brevity and clarity, the disclosure of these innovations will focus on histogram tree data structures rather than the larger systems that may benefit from improved performance due to the histogram tree data structures described herein. FIG.5Aillustrates an exemplary histogram tree data structure that is initially created with empty nodes but does provide for a precision, i.e., number of significant figures, for sampled data objects that may be stored in the nodes of the tree. FIG.5Bshows a histogram tree data structure that is populated with values from a data object at a precision of three significant figures even though the raw data object includes six significant figures, i.e., 342,523, which is truncated by the precision of the tree to 342,000. Also, the three significant figures of precision for the data object are mathematically represented by 3.42*10^5. As shown, the exponent of the truncated data value is 5, which is inserted into a root level exponential node of the histogram tree. As for the three significant figures of 3.42, at a first regular node, the value of 3 is inserted. Next, at a second regular node, the value of 4 is inserted, which is beneath the first regular node. Further, at a third regular node, the value of 2 is inserted, which is beneath the second regular node. Also, sufficient statistics are computed at each populated node of the histogram tree data structure corresponding to one or more of a number of data values inserted at a populated node, a count of data values inserted at the node, and a sum of data values inserted at the node. In this way, the exponential node includes the statistic of (1,1,342000), the first regular node includes the statistic of (1,1,3.42), the second regular node includes the statistic of (1,1,4.2), and the third regular node includes the statistic of (1,1,2). The histogram algorithm is employed to compute the statistics and other metrics for data values that can be included at the nodes in the histogram tree data structure for improved performance in providing histogram generated statistics and/or metrics. FIG.5Cillustrates the histogram tree data structure ofFIG.5B, but another sampled data value of 647,999 is added to the tree. 647,999 is truncated by the three-significant-figure precision of the tree to 647,000. Also, the three significant figures of precision for the data object are mathematically represented by 6.47*10^5. As shown, the exponent of the truncated data value is 5, which is added to the root level exponential node of the histogram tree. The number and count at the exponential node are each increased by one, and the sum is increased by the truncated value. As for the three significant figures of 6.47, a new first regular node is added where the value of 6 is inserted, which is a branch below the exponential node that includes 5. Next, a new second regular node is added where the value of 4 is inserted, which is a branch below the new first regular node that includes 6. 
Also, a new third regular node is added where the value of 7 is inserted, which is positioned as a branch beneath the new second regular node that includes 4. Further, sufficient statistics are computed for the histogram at the new first, second and third nodes. In this way, sampled data objects may be quickly inserted into the tree, and histogram information and sufficient statistics may be efficiently precomputed for each populated node in the histogram tree data structure. Although sufficient statistics are mentioned herein, the invention is not limited to just computing this information. In one or more embodiments, additional histogram information may be computed at each node that is created or added to within the histogram tree data structure. Furthermore, histogram tree data structures may be merged in substantially the same way as adding one new data object to the tree data structure as discussed above. Generalized Operations FIG.6illustrates an exemplary flow diagram that generates a histogram tree data structure populated with sampled data objects. Moving from a start block, the process advances to block602where data objects are sampled at intervals from one or more data sources such as data streams or data stores. In one or more embodiments, the intervals may be static, dynamic, and/or variable, e.g., vary based on a count of sampled data objects. At block604, the nodes of the histogram tree data structure are populated with inserted values from sampled data objects. At each populated node, sufficient statistics and/or other information is computed based on at least a selected precision for the tree, i.e., the number of significant figures to be stored for the sampled data objects. Stepping to block606, new sampled data objects may be added to the histogram tree data structure. New nodes are created and populated with their corresponding values and at least computed sufficient statistics, and updates are provided to related nodes above the new nodes in the histogram tree data structure. Moving to block608, the information stored at the populated nodes of the histogram tree data structure is employed to respond to queries regarding sufficient statistics, metrics and other histogram information. Next, the process returns to performing other actions. It will be understood that each block in each flowchart illustration, and combinations of blocks in each flowchart illustration, can be implemented by computer program instructions. These program instructions may be provided to a processor to produce a machine, such that the instructions, which execute on the processor, create means for implementing the actions specified in each flowchart block or blocks. The computer program instructions may be executed by a processor to cause a series of operational steps to be performed by the processor to produce a computer-implemented process such that the instructions, which execute on the processor, provide steps for implementing the actions specified in each flowchart block or blocks. The computer program instructions may also cause at least some of the operational steps shown in the blocks of each flowchart to be performed in parallel. Moreover, some of the steps may also be performed across more than one processor, such as might arise in a multi-processor computer system. 
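As an illustration of the insertion and merge behavior described above forFIGS.5A-5C, the following sketch models the histogram tree in Python. The class and function names, the truncation helper, and the restriction to positive, nonzero samples are illustrative assumptions rather than the claimed structure; the sufficient-statistics triple mirrors the (number, count, sum) convention of the figures:

import math

class Node:
    # One node of the histogram tree: a digit (or, at the root, an
    # exponent) plus the (number, count, sum) sufficient statistics.
    def __init__(self, value):
        self.value = value
        self.stats = [0, 0, 0.0]              # (number, count, sum)
        self.children = {}                    # next digit -> Node

    def bump(self, amount):
        self.stats[0] += 1                    # number of values inserted
        self.stats[1] += 1                    # count of values inserted
        self.stats[2] += amount               # running sum at this node

class HistogramTree:
    # Precision is the number of significant figures retained, so the
    # tree holds one exponential root per exponent plus one regular
    # node per retained digit.
    def __init__(self, precision=3):
        self.precision = precision
        self.roots = {}                       # exponent -> exponential Node

    def insert(self, raw):
        exponent = math.floor(math.log10(raw))
        scale = 10 ** (self.precision - 1)
        mantissa = math.floor(raw / 10 ** exponent * scale) / scale
        truncated = round(mantissa * 10 ** exponent, 6)
        root = self.roots.setdefault(exponent, Node(exponent))
        root.bump(truncated)                  # e.g. (1, 1, 342000.0)
        node, remaining = root, mantissa
        for _ in range(self.precision):       # one regular node per digit
            digit = int(remaining)
            node = node.children.setdefault(digit, Node(digit))
            node.bump(remaining)              # e.g. 3.42, then 4.2, then 2.0
            remaining = round((remaining - digit) * 10, 10)

def merge(dst, src):
    # Unique branches carry over; overlapping nodes keep their value and
    # sum their sufficient statistics component-wise.
    for exponent, src_root in src.roots.items():
        _merge(dst.roots.setdefault(exponent, Node(exponent)), src_root)

def _merge(dst_node, src_node):
    dst_node.stats = [a + b for a, b in zip(dst_node.stats, src_node.stats)]
    for digit, child in src_node.children.items():
        _merge(dst_node.children.setdefault(digit, Node(digit)), child)

tree = HistogramTree(precision=3)
tree.insert(342523)                           # FIG. 5B example value
tree.insert(647999)                           # FIG. 5C example value

Under these assumptions, inserting 342,523 and then 647,999 at a precision of three significant figures reproduces the node statistics shown inFIGS.5B and5C, and merging two populated trees reduces to the component-wise summation at overlapping nodes described earlier.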
In addition, one or more blocks or combinations of blocks in each flowchart illustration may also be performed concurrently with other blocks or combinations of blocks, or even in a different sequence than illustrated without departing from the scope or spirit of the invention. Accordingly, each block in each flowchart illustration supports combinations of means for performing the specified actions, combinations of steps for performing the specified actions and program instruction means for performing the specified actions. It will also be understood that each block in each flowchart illustration, and combinations of blocks in each flowchart illustration, can be implemented by special purpose hardware-based systems, which perform the specified actions or steps, or combinations of special purpose hardware and computer instructions. The foregoing example should not be construed as limiting or exhaustive, but rather, an illustrative use case to show an implementation of at least one of the various embodiments of the invention. Further, in one or more embodiments (not shown in the figures), the logic in the illustrative flowcharts may be executed using an embedded logic hardware device instead of a CPU, such as, an Application Specific Integrated Circuit (ASIC), Field Programmable Gate Array (FPGA), Programmable Array Logic (PAL), or the like, or combination thereof. The embedded logic hardware device may directly execute its embedded logic to perform actions. In one or more embodiments, a microcontroller may be arranged to directly execute its own embedded logic to perform actions and access its own internal memory and its own external Input and Output Interfaces (e.g., hardware pins or wireless transceivers) to perform actions, such as System On a Chip (SOC), or the like.
DETAILED DESCRIPTION Efforts to achieve cooperation with collaborative-interconnected devices typically require one or more distributed heterogeneous systems to retrieve, manage, store and/or distribute data having spatial and temporal parameters. The collaborative-interconnected devices, sometimes referred to herein as “agents” or “clients,” may independently perform data collection activities and/or environmental actions (e.g., drone flight path actions, robot movement actions, etc.). For example, one or more cameras and/or scanners placed within an area of interest (e.g., a warehouse) may analyze the area for objects, debris and/or safe-path zones for the benefit of one or more agents (e.g., robots) operating in the area of interest (e.g., warehouse merchandising robots, sorting robots, material handling robots, etc.). The agents (e.g., robots) may utilize such collected camera/scanner information to determine whether a desired path of propagation is clear of obstacles at a particular time prior to traveling from a current location to a desired destination location. Traditional techniques to manage spatiotemporal data distribution with disparate agents typically require relatively large amounts of memory reserved to represent each spatial dimension within the area of interest. As such, for portions of the area of interest that are static, valuable memory resources are consumed nonetheless. Moreover, because some collaborative objectives are temporally dynamic, additional layers of temporal data must be accounted for in the memory resources so that accurate dynamic representations of movement (e.g., new obstacles entering a previously cleared pathway) can be identified. Thus, the traditional distributed storage and management systems may become inundated with memory management tasks for redundant spatial data points for different time-periods of interest, even when no spatial variation in those spatial data points occurs. Traditional techniques to manage data distribution with disparate agents also lack an ability to efficiently manage different resolutions that may be provided by one or more agents in an area of interest. For example, camera agents may collect spatiotemporal data (e.g., four-dimensional (4D) data made up of three dimensional (3D) axis spatial data and corresponding temporal data (3D spatial+1D temporal)) from a warehouse wall to identify path obstructions or changes that might impede the ability of warehouse robot propagation down a hallway. The collected image data may be stored with a resolution in meters. However, a separate agent (e.g., a camera agent on the propagating robot itself) may include a sensor with a more granular resolution to facilitate movement, analysis and/or control of the robot at a relatively more granular scale, such as centimeters (e.g., a robot arm instructed to place an item on a shelf). Unlike traditional distributed storage systems, examples disclosed herein facilitate a multi-resolution data management system to accommodate the different granularity requirements of different agents working together for a collaborative objective. Examples disclosed herein improve spatial-temporal data management activities to semantically, geometrically and temporally stitch together interconnected network agents with a unified space-time container (data structure). Examples disclosed herein improve storage efficiency, adapt to dynamic data changes, and facilitate solutions for future changes to the data structure as the collaborative objective changes. 
Examples disclosed herein facilitate a common coordinate system in space and time to facilitate participation by one or more agents that lack a common data granularity. FIG.1illustrates an example hyper-octree data structure100to store spatiotemporal data. In the illustrated example ofFIG.1, the hyper-octree data structure100includes sixteen (16) hypervoxels shown numbered from zero (0) to fifteen (15). In some examples, the hyper-octree data structure100is referred to herein as a “hexatree” and employed in a cloud environment and, accordingly, referred to herein as a “hexacloud.” Generally speaking, cloud computing refers to a coordinated network of computing devices. As used herein, “octree” refers to the resulting subspace after removing (fixing constant) one dimension and combining the hypervoxels that have the same indices on the remaining dimensions. As used herein, a “quadtree” refers to the resulting subspace after removing (fixing constant) one dimension of an octree and combining the voxels that have the same indices on the remaining dimensions. In particular, a quadtree is a two-dimensional representation, while an octree is a three-dimensional representation, and a hexatree is a four-dimensional representation. The example hyper-octree data structure100includes a negative time octree102(e.g., a first octree after fixing the time dimension to t−1) and a positive time octree104(e.g., a second octree after fixing the time dimension to t) based on a relative temporal relationship to a hyperspace center point106. The example hyperspace center point106reflects a four-dimensional reference point in terms of space (e.g., three dimensions) and time (e.g., one dimension). The example positive time octree104represents temporal detail that occurs after the relative time associated with the example hyperspace center point106(e.g., “future” time), and the example negative time octree102represents temporal detail that occurs prior to the relative time associated with the example hyperspace center point106(e.g., “past” time). In some examples, the hyperspace center point106is a spatiotemporal reference point, such as an example application of the center of a supermarket having an x-axis, y-axis and z-axis spatial reference value (e.g., 0,0,0 distance units (e.g., kilometers, meters, centimeters, millimeters, etc.)) and a temporal reference value (e.g., 0 hours, minutes, seconds, milliseconds, etc.). In the illustrated example ofFIG.1, each hypervoxel (e.g., see numerical identifiers zero (0) through fifteen (15)) represents one of sixteen indexes that define a spatiotemporal representation in view of (relative to) the example hyperspace center point106. An example first hypervoxel108(see index zero (0) surrounded by a square shape) represents spatiotemporal information occurring after the hyperspace center point106due to its proximity in the positive time octree104, and represents an x-axis subspace, y-axis subspace and z-axis subspace having values greater than the hyperspace center point106. In another example, a second hypervoxel110(see index one (1) surrounded by a triangle shape) represents spatiotemporal information occurring in future time, and represents a y-axis and z-axis positive subspace, but an x-axis negative subspace (e.g., an x-axis spatial value lower than that of the hyperspace center point106). 
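To make the indexing concrete, the following sketch maps a 4D point to one of the sixteen hypervoxel indices relative to the hyperspace center point. The quadrant numbering is consistent with the subspace assignments described later in connection with the subspace assigner220(nodes 0 and 3 positive x, nodes 0 and 1 positive y, the first quadtree positive z); placing the positive time octree at indices 0-7 is an assumption where the description does not pin the ordering down:

def hypervoxel_index(point, center):
    # Map a 4D (x, y, z, t) point to one of sixteen hypervoxel indices
    # relative to the hyperspace center point. The quadrant numbering and
    # the bit layout for z and time are illustrative assumptions.
    dx, dy, dz, dt = (p - c for p, c in zip(point, center))
    quadrant = {(True, True): 0, (False, True): 1,
                (False, False): 2, (True, False): 3}[(dx >= 0, dy >= 0)]
    index = quadrant
    if dz < 0:
        index += 4    # second quadtree: negative z-axis subspace
    if dt < 0:
        index += 8    # negative time octree: "past" of the center point
    return index

# A point with all positive offsets lands in hypervoxel 0; flipping
# only the x offset moves it to hypervoxel 1, matching FIG. 1.
assert hypervoxel_index((1, 1, 1, 1), (0, 0, 0, 0)) == 0
assert hypervoxel_index((-1, 1, 1, 1), (0, 0, 0, 0)) == 1
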
Each hypervoxel (sometimes referred to as a “hypernode”) describes a spatiotemporal partition subspace, which may include a leaf node (e.g., a lowest level of detail/resolution) and one or more intermediary resolution nodes (internal nodes). As such, each hypervoxel stores content of a bounded cube in space within a delimited time interval, in which each hypervoxel exposes one or more properties to determine if it is a leaf node, an internal node with content, an internal node without content, or the root. As such, the example hyper-octree data structure100facilitates a non-uniform resolution to improve scalability and flexibility beyond traditional distributed data techniques. Additionally, because the example hyper-octree data structure100is a data structure representation of index values that map real-world spatial areas, such representations of the spatial areas do not consume platform physical memory/storage for empty space within such spatial areas of interest. In this manner, an amount of memory and/or computational power used to manage spatiotemporal data can grow and adapt to different applications and nuances they may contain. The content of the example hypernodes is flexible, and can include spatiotemporal values and metadata, unlike the inherent structural attributes statically associated with conventional distributed data systems. As described in further detail below, extensible metadata may be combined with the example hypervoxels to accommodate improved scalability and flexibility with coordinated agents. In other words, example hypervoxels contain metadata representations (e.g., pointers, uniform resource locators (URLs), uniform resource identifiers (URIs), etc.). Returning to the illustrated example ofFIG.1, the hyper-octree data structure100establishes symmetry to improve computational efficiency for relatively fast bit-wise operation. As described above, temporal symmetry is established with the example positive time octree104and the example negative time octree102. Additionally, each octree can be sub-divided into two quadtrees that divide the resulting space in two parts along one of the remaining dimensions to represent spatial symmetry. Generally speaking, symmetry occurs on any dimension, and the space can be partitioned using any of the available four (4) dimensions. The hyper-octree data structure100can be divided into two octrees by fixing the value of any of the dimensions and dividing again using one of the remaining ones to obtain a quadtree, and this may be repeated to obtain a binary tree, and yet again to obtain a hypervoxel indexed by the four values used to reach/reference it. In particular, the example hyper-octree data structure100ofFIG.1includes a first quadtree112, which includes the hypervoxel element zero (0)108, the example hypervoxel element one (1)110, an example hypervoxel element two (2)114, and an example hypervoxel element three (3)116. The example first quadtree112reflects symmetry of the x-axis and y-axis subspaces (both negative and positive). The z-axis subspace associated with the example first quadtree112reflects the z-axis positive subspace, and its symmetric counterpart is represented in an example second quadtree118to reflect the z-axis negative subspace. Examples disclosed herein build one or more hyper-octree data structures, such as the example hyper-octree data structure100ofFIG.1, and manage input requests from any number of agents. 
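The successive fixing of dimensions described above can be sketched with the index layout assumed in the previous example (time on bit 3, z on bit 2; the x/y quadrant numbering ofFIG.1is not a plain bit field, so only the time and z dimensions are shown here). This is a sketch under those assumptions, not a mandated layout:

def fix_dimension(hypervoxels, bit, positive=True):
    # Fix one dimension constant and keep the hypervoxels on the chosen
    # side: fixing time yields an octree, fixing z on that octree yields
    # a quadtree, and so on down to a single hypervoxel.
    keep = 0 if positive else 1
    return [v for v in hypervoxels if (v >> bit) & 1 == keep]

positive_time_octree = fix_dimension(range(16), bit=3)        # indices 0-7
first_quadtree = fix_dimension(positive_time_octree, bit=2)   # indices 0-3
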
As described in further detail below, new spatiotemporal data points to be added are analyzed to determine operational resolution and boundary conditions to allow non-uniform resolution management. Additionally, examples disclosed herein analyze input requests to determine whether a data point to be inserted is outside a current hyper-octree octree so that one or more additional roots are to be generated/expanded (recursively), thereby representing real world spatial and temporal targets that might reside beyond a currently established root, as described in further detail below. FIG.2is an example hexacloud system200to improve spatial-temporal data management. In the illustrated example ofFIG.2, the hexacloud system200includes a hexacloud engine202communicatively connected to a hypervoxel data store204, a platform memory206, and one or more networks208. The example hexacloud engine202is also communicatively connected to an example metadata store210and any number of agents212either directly and/or via the example networks208. The example hexacloud engine202ofFIG.2also includes an example hypervoxel data structure generator214, which includes an example octree manager216, an example quadtree manager218, and an example subspace assigner220. The example hexacloud engine202ofFIG.2also includes an example spatial-temporal point analyzer222, and an example insertion engine224, which includes an example node backup engine226, an example index calculator228, an example child node manager230, and an example metadata manager232. The example hexacloud engine202ofFIG.2also includes an example expansion engine234, which includes an example expansion direction analyzer236, an example offset engine238, and an example re-root engine240. In operation, the example hypervoxel data structure generator214generates a root hypervoxel data structure having sixteen (16) hypernodes. The example octree manager216improves a spatiotemporal data access efficiency by generating a first degree of symmetry that separates the hypernodes into two separate groups of eight (8) hypernodes to create a positive octree and a negative octree. As described above, the example positive octree (e.g., the example positive time octree104ofFIG.1) and the example negative octree (e.g., the example negative time octree102ofFIG.1) represent relative positive and negative time offsets from a hyperspace center point, such as the example hyperspace center point106ofFIG.1. The example octree manager216selects one of the octrees, and the example quadtree manager218further improves the spatiotemporal data access efficiency by generating a second degree of symmetry. In particular, the example quadtree manager218selects a quadtree from one of the octrees previously established. As described above, and in connection with the illustrated example ofFIG.1, each quadtree of a given octree represents one of either negative z-axis subspace or positive z-axis subspace. The example quadtree manager218designates and/or otherwise assigns one of the quadtrees as each type of positive or negative z-axis subspace, and the example subspace assigner220assigns hypernodes within each quadtree of interest with subspace types. In particular, the example subspace assigner220improves computational efficiency when accessing example hyper-octree data structures to enable bit-wise computation by assigning symmetric subspace types. 
Returning to the illustrated example ofFIG.1, the example subspace assigner220assigns node zero (0) and node three (3) with positive x-axis subspace, and symmetrically assigns node one (1) and node two (2) with negative x-axis subspace. Additionally, the example subspace assigner220assigns node zero (0) and node one (1) with positive y-axis subspace, and symmetrically assigns node two (2) and node three (3) with negative y-axis subspace. After the example subspace assigner220assigns nodes with symmetric subspace representations for one quadtree of interest, the example hypervoxel data structure generator214selects another available quadtree of interest (e.g., a previously unassigned quadtree), for which subspace assignments are generated. Additionally, upon completion of all quadtrees within an octree of interest, the example data structure generator214selects another available octree of interest, for which the subspace assignment repeats for all previously unassigned hypervoxels. During access operations of the example hyper-octree data structure100generated by the example hypervoxel data structure generator214, the example spatial-temporal point analyzer (STPA)222determines whether one or more agents attempt to add spatiotemporal data to the example hyper-octree data structure100. If so, it extracts and/or otherwise evaluates spatial value information related to x-axis, y-axis and z-axis spatial information. Additionally, the example STPA222extracts and/or otherwise evaluates time-based information, spatial resolution information (δ) and temporal resolution information (τ). In some examples, the STPA222evaluates the spatial resolution information and/or the temporal resolution information received from an agent (e.g., a robot, a sensor, a camera, a quadcopter, etc.) to determine a corresponding resolution type. Resolution types may include particular ranges of values in terms of, for example, millimeters (as compared to a relatively coarser centimeter resolution), centimeters (as compared to a relatively coarser meter resolution), etc. Although at least one hyper-octree data structure100has been created, as described above, it has a finite spatial and temporal boundary. For example, the hyperspace center point106may initially identify x, y and z coordinates of zero meters and a time of 12:00 on a specified date of interest. The spatial reach of the example hyper-octree data structure100may represent a maximum distance of spatial dimensions+/−1 meter from the hyperspace center point106, thereby representing a spatial cube of 2-meters in each axis. Similarly, the temporal boundary may have a finite limit of +/−one hour. An initial or current hyper-octree data structure is sometimes referred to herein as a root node and/or otherwise the current node. As such, in the event an input data point is within the spatial-temporal boundary of the root node, then the example hexacloud engine202enters data associated with the input data into the appropriate index (e.g., hypervoxel) of the example hyper-octree data structure at a resolution of interest. On the other hand, in the event the example spatial-temporal point analyzer222determines that an input point (P) is not within the spatial-temporal boundary of the current/root node, then the expansion engine234creates and/or otherwise generates one or more hyper-octree data structure(s) to accommodate the new input point data. 
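The boundary test that gates insertion versus expansion can be sketched as a simple per-dimension comparison. This is a hedged illustration (the helper name and tuple layout are assumptions), using the 2-meter cube and one-hour window described above.

    def within_bounds(center, half_extents, point):
        """True when a 4-D point lies inside the node's spatial cube and time interval."""
        return all(abs(p - c) <= h for p, c, h in zip(point, center, half_extents))

    center = (0.0, 0.0, 0.0, 0.0)     # x, y, z in meters; t in seconds
    half = (1.0, 1.0, 1.0, 3600.0)    # +/- 1 meter per axis, +/- one hour
    print(within_bounds(center, half, (0.2, -0.7, 0.9, 120.0)))  # True: insert into this root
    print(within_bounds(center, half, (3.0, 0.0, 0.0, 0.0)))     # False: expansion is required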
For instance, and continuing with the example root structure100having a 2×2×2 meter dimension, if the new input data point includes an x-axis spatial value of +3 meters, then the example expansion engine234generates an expanded hyper-octree data structure having spatial and temporal boundaries to accommodate a relatively larger space. Stated differently, the example expansion engine234generates another hyper-octree data structure that grows symmetrically, such that a new root is created with double the size in each dimension, and the corresponding node of the newly created 16-nodes is assigned to the original hyper-octree. In response to the example spatial-temporal point analyzer222determining and/or otherwise calculating that a new input point (P) has spatiotemporal dimensions that are consistent with, within, and/or otherwise aligned with the current/root node (e.g., a first hyper-octree data structure, such as the example hyper-octree data structure100ofFIG.1), the example hexacloud engine202determines whether the contents of P and/or its associated metadata are different from values that may already be stored in the example hyper-octree data structure100. If not, then the example hexacloud engine202conserves memory and/or computational resources by preventing and/or otherwise refraining from storing new input point (P). Conventional storage techniques, on the other hand, typically engage in write operations regardless of whether new input data has the same values as data already stored therein. Prior to adding input data associated with input point (P), the example node backup engine226assigns the current tree root node to an auxiliary node with a working variable (e.g., auxiliary node N). In particular, the example node backup engine226generates the auxiliary node as a precaution to prevent data loss during one or more operations, measurements and/or recursive calculations that might otherwise corrupt the current node. The example STPA222determines if the resolution of point P is compatible with the root node (now backup/auxiliary node N), or whether a leaf node has been reached. As used herein, a root node is a node that has no parent and contains the entirety of a spatiotemporal representation of nodes beneath it for a given region. As used herein, a leaf node is a node that has no children and contains data at the lowest level (e.g., lowest resolution capability of the hyper-octree data structure). As used herein, internal nodes are intermediate nodes between the extremes of leaf nodes and root nodes. In some examples, the example STPA222performs a resolution test in a manner consistent with example Equation 1. N·δ < P·δ or N·τ < P·τ. In the illustrated example of Equation 1, N refers to the backup copy of the current (root) node, δ refers to a spatial resolution value (e.g., in millimeters, in centimeters, in meters, in kilometers, etc.), in which the notation "N·δ" reflects the spatial resolution property of node "N," and the notation "P·δ" reflects the spatial resolution property of new point P. Additionally, τ refers to a temporal resolution value (e.g., in microseconds, in milliseconds, in seconds, in minutes, etc.), in which the notation "N·τ" reflects the temporal resolution property of node "N," and the notation "P·τ" reflects the temporal resolution property of new point P. Generally speaking, the illustrated example of Equation 1 tests and/or otherwise determines whether the resolution of the insertion point P matches that of the root node where data is to be inserted.
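Expressed as code, Equation 1 becomes a one-line predicate. Note that this sketch commits to one reading of the inequality, namely that Equation 1 holding true signals a resolution mismatch requiring further descent; that reading, and the function name, are assumptions made for illustration.

    def resolution_mismatch(n_delta, n_tau, p_delta, p_tau):
        """Equation 1: true when node N's resolution does not yet match point P's."""
        return n_delta < p_delta or n_tau < p_tau

    # Example: node N at millimeter resolution, point P at centimeter resolution.
    print(resolution_mismatch(0.001, 1.0, 0.01, 1.0))  # True: resolve a resolution for N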
In other words, does the portion of the root node being pointed to match the resolution of the point P to be added/inserted? For example, the current root node may be set to a spatial resolution of millimeters, while the new point P may have a spatial resolution of centimeters. In such a circumstance, the test of example Equation 1 fails and the example index calculator228resolves a resolution for node N. In particular, the example index calculator228generates a child index value "k" and determines which indexed hypervoxel the new point P is to be associated with (e.g., see hypervoxel index values zero (0) through fifteen (15) in the illustrated example ofFIG.1). The example index calculator228evaluates new point P to determine whether it is associated with past or future time (e.g., relative to the example hyperspace center point106), thereby identifying whether new point P is to be associated with a first or second octree of the example hyper-octree data structure100. The example index calculator228also determines whether a z-axis value, a y-axis value and an x-axis value of new point P are each either negative or positive with respect to the example hyperspace center point106, thereby identifying which quadtrees and, ultimately, which hypervoxel the new point P is to be associated with. Child index "k" is thereafter associated with the appropriate index value associated with the corresponding hypervoxel. The example child node manager230determines whether the identified hypervoxel has a child at index "k." In the event a child node is available, then the example child node manager230sets the node N (working value for the root node) to the child at index k. On the other hand, if the example child node manager230determines that no child node exists, then any data entry attempt would encounter empty space, and the child node manager230creates a child node. However, before beginning any write-attempt at the particular indexed hypervoxel at the current resolution, the example STPA222re-evaluates point P in connection with the adjusted resolution. In the event one or more additional resolution "steps" are required, then the index calculator228repeats the analysis to identify the correct/corresponding hypernode at the modified resolution (e.g., going from millimeters to kilometers may require intermediate resolution shifts (a) from millimeters to centimeters, (b) from centimeters to meters and (c) from meters to kilometers). In some examples, the STPA222determines that either (a) the resolution of point P is compatible with the root node N or (b) a leaf node is encountered. If so, then the example metadata manager232is invoked to perform hypervoxel metadata insertion. In particular, the example metadata manager232retrieves the index location previously calculated and determines whether the metadata (Q) associated with point P is to be stored locally or remotely. In the event it is to be stored locally, then the metadata manager232inserts the data into a local storage of the agent making the write request, but if the metadata Q can (optionally) be stored in one or more external locations (e.g., an external database, a networked cloud storage, etc.), then an identifier is resolved with the external data source via one or more queries. In some examples, when metadata can be stored externally, such external storage is not mandatory and internal storage may still occur, as desired. In some examples, the metadata has an associated identifier, or the metadata itself is the identifier, such as a URI/URL.
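The descent just described (compute child index k, create the child if the target would otherwise be empty space, and repeat until the resolutions align) can be sketched end-to-end as follows. This is a simplified, hedged illustration: the Node layout, the halving of extents per level, and the child-center arithmetic are assumptions for the sketch rather than the patented implementation.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        center: tuple                                  # hyperspace center (x, y, z, t)
        half: tuple                                    # half-extents (hx, hy, hz, ht)
        children: dict = field(default_factory=dict)   # sparse: index k -> Node
        metadata: object = None

    def child_index(node, p):
        # Sign tests: (x, y) quadrant, then z-axis quadtree, then time octree.
        quad = ((0 if p[0] >= node.center[0] else 1)
                if p[1] >= node.center[1]
                else (2 if p[0] < node.center[0] else 3))
        octant = quad if p[2] >= node.center[2] else quad + 4
        return octant if p[3] >= node.center[3] else octant + 8

    def child_center(node, k):
        # The child's center sits half-way into the parent's sub-hypercube.
        sx = 1 if k % 4 in (0, 3) else -1
        sy = 1 if k % 4 in (0, 1) else -1
        sz = 1 if (k % 8) < 4 else -1
        st = 1 if k < 8 else -1
        signs = (sx, sy, sz, st)
        return tuple(c + s * h / 2 for c, s, h in zip(node.center, signs, node.half))

    def insert(root, p, target_half, metadata):
        """Descend from the root, creating children as needed, until the node's
        extents reach the requested resolution; then attach the metadata."""
        node = root                        # working variable, akin to auxiliary node N
        while any(h > t for h, t in zip(node.half, target_half)):
            k = child_index(node, p)
            if k not in node.children:     # empty space: create a child at index k
                node.children[k] = Node(child_center(node, k),
                                        tuple(h / 2 for h in node.half))
            node = node.children[k]
        node.metadata = metadata           # local insert; a URI/URL proxy could go here

    root = Node((0.0, 0.0, 0.0, 0.0), (1.0, 1.0, 1.0, 3600.0))
    insert(root, (0.4, 0.2, -0.3, 60.0), (0.25, 0.25, 0.25, 450.0), {"reading": 21.5})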
The example metadata manager232uses the identifier to obtain a proxy that can be used for metadata retrieval in the future, and the retrieved proxy may be used to apply an insertion/update of the metadata during the write operation of point P. Because, in some examples, the intermediate nodes may store a degree of data, the example insertion engine224verifies whether the current node is a leaf node and, if not, the index calculator228continues one or more recursive operations to reach the target node for data insertion. As discussed above, in some examples the new point P to be added to the example hyper-octree data structure100is outside the spatiotemporal boundaries of that data structure. In such circumstances, the example expansion engine234creates one or more additional hyper-octree data structures, as needed. In particular, the example expansion direction analyzer236initially determines an appropriate expansion direction based on the new input point (P). For example, if the current root node is a bounded hypervoxel having an origin of x=0, y=0 and z=0 and can represent spatial distances that are plus or minus one meter in length (e.g., total cube spatial range in each axis is 2 meters), and new point P includes an x-axis dimension of 3 meters, then the example expansion engine234identifies that a new hypervoxel expansion should occur. Additionally, the example offset engine238calculates space and time offset values for the new point (P) in the new (expanded) node after it is created. However, in addition to identifying the correct index location within the new (expanded) node in which the new point (P) is to reside, the example offset engine238also calculates a new node origin point for the expanded node and corresponding dimensions of the expanded node. In some examples, the offset engine238creates the new expanded root node in a manner consistent with example Equations 2, 3, 4 and 5. R′·δ·x = R·δ·x*2 (Equation 2). R′·δ·y = R·δ·y*2 (Equation 3). R′·δ·z = R·δ·z*2 (Equation 4). R′·τ = R·τ*2 (Equation 5). In the illustrated examples of Equations 2, 3, 4 and 5, R′ (R-prime) reflects the new (expanded) node in the desired spatial resolution of interest (δ) for a subspace of interest (e.g., x-axis, y-axis or z-axis). The desired temporal resolution of interest is represented by (τ). Additionally, example Equations 2, 3, 4 and 5 illustrate that the spatiotemporal parameters for the expanded node include the previous node spatiotemporal parameters multiplied by two (2). The new expanded node is to become the new current node and is defined as a "parent" to the former "current" node, and the new expanded node includes a corresponding hyperspace center point (similar to the example hyperspace center point106ofFIG.1). In some examples, the offset engine238invokes example code300to calculate space-time delta values in a manner consistent withFIG.3. In the illustrated example ofFIG.3, space-time delta code300takes as inputs a node index value302previously calculated (e.g., resulting in one of sixteen index values (0 through 15) that identifies a corresponding hypervoxel), a spatial resolution value304, and a temporal resolution value306. The example space-time delta code300also includes spatial and temporal real values308associated with the new point P that triggered the expansion requirement. Based on the index value input302, the example space-time delta code300applies a switch function310to identify one of sixteen case values312with which to calculate delta/difference values.
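The sixteen-case switch of code300can equivalently be written as per-dimension sign logic; the sketch below makes that assumption (it is a compact stand-in for code300, not a copy of it) and combines the doubling of Equations 2 through 5 with the center-point shift described next, so that the former root lands exactly on one hypervoxel of the expanded root.

    def expansion_deltas(old_center, old_half, p):
        """Shift toward point P by the old root's half-extent on each dimension,
        mirroring the sign symmetry of the sixteen cases of code300 (an assumption)."""
        return tuple(h if pv >= c else -h
                     for pv, c, h in zip(p, old_center, old_half))

    def expand_root(old_center, old_half, p):
        new_half = tuple(2 * h for h in old_half)     # Equations 2-5: double each extent
        deltas = expansion_deltas(old_center, old_half, p)
        new_center = tuple(c + d for c, d in zip(old_center, deltas))  # Equations 6-9
        return new_center, new_half

    # A 2x2x2 meter root centered at the origin, and a point at x = +3 meters:
    print(expand_root((0, 0, 0, 0), (1, 1, 1, 3600), (3, 0, 0, 0)))
    # -> ((1, 1, 1, 3600), (2, 2, 2, 7200)): the old root occupies one corner hypervoxel

A quick geometric check: the old root spans [-1, +1] meters per spatial axis, while the expanded root spans [-1, +3] meters along x; the old root's center therefore sits exactly one child half-extent from the new center, as required for re-rooting.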
As can be seen from the illustrated example ofFIG.3, the space-time delta code300reflects a degree of symmetry, in which the first eight case values314represent positive time associated with a first octree, and the second eight case values316represent negative time associated with a second octree. The example offset engine238establishes and/or otherwise sets a space-time center of the new expanded node in a manner consistent with example Equations 6, 7, 8 and 9. R′·xc = R·xc+dx (Equation 6). R′·yc = R·yc+dy (Equation 7). R′·zc = R·zc+dz (Equation 8). R′·tc = R·tc+dt (Equation 9). In the illustrated examples of Equations 6, 7, 8 and 9, R′ (R-prime) reflects the new (expanded) node, R reflects the current node, xc, yc, zc and tc reflect the spatial and temporal center point coordinates, and values for dx, dy, dz and dt are derived in a manner consistent withFIG.3. Now that a new expansion hyper-octree data structure has been generated having a corresponding hyperspace center point and an identified target hypervoxel with which to associate the new input data P, the example re-root engine240re-roots the current root node (e.g., the original root node "R"100) by assigning it as a child to the new expanded hyper-octree data structure "R′" (R-prime). Accordingly, further reference to a hyper-octree data structure will correspond to the newly expanded tree root R′ as the parent, while the one or more previously (e.g., originally) generated hyper-octree data structures (e.g., the example hyper-octree data structure100ofFIG.1) are deemed child nodes. While an example manner of implementing the hexacloud system200ofFIG.2is illustrated inFIGS.1through3, one or more of the elements, processes and/or devices illustrated inFIGS.1-3may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, the example hypervoxel data structure generator214, the example octree manager216, the example quadtree manager218, the example subspace assigner220, the example spatial-temporal point analyzer222, the example insertion engine224, the example node backup engine226, the example index calculator228, the example child node manager230, the example metadata manager232, the example expansion engine234, the example expansion direction analyzer236, the example offset engine238, the example re-root engine240, the example metadata store210, the example hypervoxel data store204, the example platform memory206and/or, more generally, the example hexacloud engine202ofFIG.2may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware.
Thus, for example, any of the example hypervoxel data structure generator214, the example octree manager216, the example quadtree manager218, the example subspace assigner220, the example spatial-temporal point analyzer222, the example insertion engine224, the example node backup engine226, the example index calculator228, the example child node manager230, the example metadata manager232, the example expansion engine234, the example expansion direction analyzer236, the example offset engine238, the example re-root engine240, the example metadata store210, the example hypervoxel data store204, the example platform memory206and/or, more generally, the example hexacloud engine202ofFIG.2could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example hypervoxel data structure generator214, the example octree manager216, the example quadtree manager218, the example subspace assigner220, the example spatial-temporal point analyzer222, the example insertion engine224, the example node backup engine226, the example index calculator228, the example child node manager230, the example metadata manager232, the example expansion engine234, the example expansion direction analyzer236, the example offset engine238, the example re-root engine240, the example metadata store210, the example hypervoxel data store204, the example platform memory206and/or, more generally, the example hexacloud engine202ofFIG.2is/are hereby expressly defined to include a tangible computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. storing the software and/or firmware. Further still, the example hexacloud system200ofFIG.2may include one or more elements, processes and/or devices in addition to, or instead of, those illustrated inFIGS.1-3, and/or may include more than one of any or all of the illustrated elements, processes and devices. Flowcharts representative of example machine readable instructions for implementing the hexacloud system200ofFIGS.1-3are shown inFIGS.4-9. In these examples, the machine readable instructions comprise a program for execution by a processor such as the processor1012shown in the example processor platform1000discussed below in connection withFIG.10. The program may be embodied in software stored on a tangible computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a digital versatile disk (DVD), a Blu-ray disk, or a memory associated with the processor1012, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor1012and/or embodied in firmware or dedicated hardware. Further, although the example program is described with reference to the flowcharts illustrated inFIGS.4-9, many other methods of implementing the example hexacloud system200may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined. 
As mentioned above, the example processes ofFIGS.4-9may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a tangible computer readable storage medium such as a hard disk drive, a flash memory, a read-only memory (ROM), a compact disk (CD), a digital versatile disk (DVD), a cache, a random-access memory (RAM) and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term tangible computer readable storage medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. As used herein, “tangible computer readable storage medium” and “tangible machine readable storage medium” are used interchangeably. Additionally or alternatively, the example processes ofFIGS.4-9may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. As used herein, when the phrase “at least” is used as the transition term in a preamble of a claim, it is open-ended in the same manner as the term “comprising” is open ended. The program400ofFIG.4begins at block402, where the example hypervoxel data structure generator214generates a root hypervoxel data structure having sixteen hypernodes. As described above, these sixteen hypernodes are generated having a symmetric orientation to improve computational efficiency when performing bit-wise operations on the example hypervoxel data structure (e.g., such as the example hypervoxel data structure100ofFIG.1) during access operation(s). In particular, the example hypervoxel data structure generator214generates and/or otherwise assigns a positive octree with a first group of eight hypernodes (block404) to represent temporal data that is positive (e.g., “future time”) with respect to a hyperspace center point. Additionally, the example hypervoxel data structure generator214generates and/or otherwise assigns a negative octree with a second (remaining) group of eight hypernodes (block406) to represent temporal data that is negative (e.g., “past time”) with respect to the hyperspace center point. The example hypervoxel data structure generator214establishes the hyperspace center point (e.g., the example hyperspace center point106ofFIG.1) at the intersection of the positive time octree (e.g., the example positive time octree104ofFIG.1) and the negative time octree (e.g., the example negative time octree102ofFIG.1) (block408). The example program400ofFIG.4further configures the example hyper-octree data structure on an octree-by-octree, quadtree-by-quadtree and hypervoxel-by-hypervoxel hierarchical manner. 
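As a preview of the octree-by-octree, quadtree-by-quadtree, hypervoxel-by-hypervoxel configuration detailed in the following paragraph, the nested loops below sketch program400's subspace assignments in Python. The index arithmetic is an assumption consistent with the earlier sketches, not an excerpt of the program itself.

    def generate_root_hexatree():
        """Build the sixteen-hypernode sign map: (x, y, z, t) subspace per index."""
        quad_signs = {0: (+1, +1), 1: (-1, +1), 2: (-1, -1), 3: (+1, -1)}
        nodes = {}
        for t_sign, octree_base in ((+1, 0), (-1, 8)):     # positive, then negative octree
            for z_sign, quad_base in ((+1, 0), (-1, 4)):   # +z quadtree, then -z quadtree
                for q, (x_sign, y_sign) in quad_signs.items():
                    nodes[octree_base + quad_base + q] = (x_sign, y_sign, z_sign, t_sign)
        return nodes

    assert generate_root_hexatree()[0] == (+1, +1, +1, +1)  # hypervoxel element zero (0)
    assert generate_root_hexatree()[1] == (-1, +1, +1, +1)  # hypervoxel element one (1)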
In particular, the example octree manager216selects a previously unexamined/unconfigured octree subset of hypernodes/hypervoxels from the 4D cuboid volume (block410) and the example quadtree manager218selects a previously unexamined/unconfigured quadtree subset of hypernodes/hypervoxels from the selected octree to be assigned one of positive z-axis subspace or negative z-axis subspace (block412). As described above in connection withFIG.1, the example first quadtree112is assigned to represent the positive z-axis subspace during a first iteration of the example program400. In view of the selected quadtree of interest, the example subspace assigner220assigns a hypernode/hypervoxel of the quadtree to represent positive x-axis and positive y-axis subspace (block414). In the illustrated example ofFIG.1, this is shown as hypervoxel element zero (0)108. The example subspace assigner220assigns a separate hypernode/hypervoxel of the selected quadtree to represent negative x-axis and positive y-axis subspace (block416), which is shown in the illustrated example ofFIG.1as hypervoxel element one (1)110. The example subspace assigner220assigns a remaining separate hypernode/hypervoxel of the selected quadtree to represent negative x-axis and negative y-axis subspace (block418), which is shown in the illustrated example ofFIG.1as hypervoxel element two (2)114. Finally, the example subspace assigner220assigns the remaining hypernode/hypervoxel of the selected quadtree to represent positive x-axis and negative y-axis subspace (block420), which is shown in the illustrated example ofFIG.1as hypervoxel element three (3)116. The example hypervoxel data structure generator214determines whether an additional quadtree of the selected octree has yet to be configured (block422) and, if so, control returns to block412. If not, the example hypervoxel data structure generator214determines whether an additional octree has yet to be configured (block424) and, if so, control returns to block410. If not, then the hyper-octree data structure generation is complete and the example hypervoxel data structure generator214stores the root 4D cuboid volume in the example hypervoxel data store204(block426). As described above, while the example generated hexatree (e.g., the example hexatree100ofFIG.1) is a data structure to represent spatiotemporal information of a real-world area of interest, the data structure itself does not pre-allocate explicit portions of the example platform memory206, thereby conserving valuable memory resources that would otherwise be consumed by empty space by conventional distributed database systems. FIG.5is an example program500to manage hypervoxel access (reads/writes) of the example hexatree. In the illustrated example ofFIG.5, the spatial-temporal point analyzer (STPA)222determines whether a new input point (P) is to be added to the hexatree (block502). If not, the example hexacloud engine202services one or more queries of data contained therein (block504). Otherwise, in response to detecting an input access request (block502), the example STPA222retrieves and extracts information associated with the new or modified input point (P) (block506). Extracted information includes, but is not limited to, spatial values (e.g., x-axis, y-axis, z-axis spatial information), temporal values (e.g., capture time of the spatial values), spatial resolution information and/or temporal resolution information. 
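The fields just enumerated map naturally onto a small record; the following dataclass is a hypothetical container for the extracted values (the names are assumptions), shown only to fix notation for the handling that follows.

    from dataclasses import dataclass

    @dataclass
    class InputPoint:
        """Values extracted from an agent's access request (block506)."""
        x: float              # x-axis spatial value
        y: float              # y-axis spatial value
        z: float              # z-axis spatial value
        t: float              # capture time of the spatial values
        delta: float          # spatial resolution information
        tau: float            # temporal resolution information
        metadata: object = None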
Because a current hyper-octree data structure (root) has a finite logical spatial and temporal boundary, the example STPA222determines whether the input point P information is within such boundaries of the current root node (block508). If not, the example expansion engine234is invoked to expand and/or otherwise build-out a new hexatree to accommodate the new input point P (block510), as described above and in further detail below. On the other hand, in the event the new input point P is within the spatiotemporal boundary of the current (root) node (block508), then the example hexacloud engine202determines whether the contents of input point P are different from the information already stored in the current node (block512). If not, then further storage attempts are deemed wasteful, and processing and/or storage conservation is improved by ignoring the input request and control returns to block502. On the other hand, in the event the input point P information is different (block512), then the example insertion engine224manages the insertion process (block514), as described above and in further detail below. FIG.6illustrates additional detail in connection with insertion management (block514) ofFIG.5. In the illustrated example ofFIG.6, the example node backup engine226assigns the current tree root node (R) to an auxiliary node variable N as a precaution to prevent potential corruption of the root node until after the appropriate insertion index point(s) can be determined (block602). The example STPA222determines whether the input point P resolution information (e.g., spatial resolution δ, temporal resolution τ) is consistent with the current root node (N), or whether a leaf node has been reached (block604). As described above, examples disclosed herein facilitate a multi-resolution or non-uniform tree management system in which one or more agents (e.g., quad-copters, robots, sensors, etc.) do not have to comply with rigid resolution constraints typically associated with traditional systems. Examples disclosed herein resolve resolution disparity between an input point P and a current node focus so that multi-resolution spatiotemporal management can be processed in an efficient bit-wise manner. In the event the example STPA222determines that the resolution is not compatible and that a leaf node has not been reached (block604), the example index calculator228resolves a target resolution for node N (block606). In other words, the example index calculator228determines the appropriate index with which to resolve a desired resolution.FIG.7includes additional detail in connection with resolving target resolution for node N (block606) ofFIG.6. In the illustrated example ofFIG.7, the example index calculator228generates a child index value k (block702), and evaluates new point P to determine whether it is associated with past temporal subspace or future temporal subspace relative to a hyperspace center point of node N (e.g., the example hyperspace center point106ofFIG.1) (block704). The example index calculator228selects the example negative time octree102for past temporal subspace information (block706) and selects the example positive time octree104for future temporal subspace information (block708). In other words, while initially there were sixteen (16) nodes/hypervoxels of interest to consider in view of the new input point P, half of those are eliminated from consideration in an efficient bit-wise test by the example index calculator228.
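The successive halving performed by blocks702through726 can be sketched as set intersections over the sixteen candidate indices; the z-axis, y-axis and x-axis tests detailed in the next paragraph complete the narrowing, and the concrete index sets below assume the same sign convention as the earlier sketches.

    def narrow_candidates(p, center):
        """Each sign test halves the sixteen candidate hypervoxels: time (block704),
        then z-axis (block710), y-axis (block716) and x-axis (block722)."""
        c = set(range(16))
        c &= set(range(0, 8)) if p[3] >= center[3] else set(range(8, 16))
        c &= {0, 1, 2, 3, 8, 9, 10, 11} if p[2] >= center[2] else {4, 5, 6, 7, 12, 13, 14, 15}
        c &= {0, 1, 4, 5, 8, 9, 12, 13} if p[1] >= center[1] else {2, 3, 6, 7, 10, 11, 14, 15}
        c &= {0, 3, 4, 7, 8, 11, 12, 15} if p[0] >= center[0] else {1, 2, 5, 6, 9, 10, 13, 14}
        return c.pop()                     # exactly one candidate remains

    # Future time with x, y and z all positive narrows 16 -> 8 -> 4 -> 2 -> index 0.
    assert narrow_candidates((0.5, 0.5, 0.5, 1.0), (0, 0, 0, 0)) == 0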
The example index calculator228evaluates the new point P to determine whether it is associated with positive z-axis subspace or negative z-axis subspace (block710). For positive z-axis subspace, the example index calculator228selects candidate positive z-axis hypervoxels (block712), and for negative z-axis subspace, the example index calculator228selects candidate negative z-axis hypervoxels (block714). Again, the bit-wise test by the example index calculator further reduces a number of candidate hypervoxels under consideration for an appropriate data entry. The example index calculator228evaluates the new point P to determine whether it is associated with positive y-axis subspace or negative y-axis subspace (block716). For positive y-axis subspace, the example index calculator228selects candidate positive y-axis hypervoxels (block718), and for negative y-axis subspace, the example index calculator228selects candidate negative y-axis hypervoxels (block720). Again, the bit-wise test by the example index calculator further reduces a number of candidate hypervoxels under consideration for an appropriate data entry. The example index calculator228evaluates the new point P to determine whether it is associated with positive x-axis subspace or negative x-axis subspace (block722). For positive x-axis subspace, the example index calculator228selects candidate positive x-axis hypervoxels (block724), and for negative x-axis subspace, the example index calculator228selects candidate negative x-axis hypervoxels (block726). Again, the bit-wise test by the example index calculator further reduces a number of candidate hypervoxels under consideration for an appropriate data entry. Returning to the illustrated example ofFIG.6, the example child node manager230determines whether the node calculated above as index k has a corresponding child node (block608). If not, then any attempted data entry would be unsuccessful as the target would represent entry in empty space and the example child node manager230creates and/or otherwise generates a child node at index k for node N (block610). On the other hand, in the event the child node manager230determines that a child node exists for index k (block608), then node N is set to the corresponding child index k (block612). Control returns to block604to verify that the recursive steps to align the proper resolution are achieved and, if not, the example process may repeat at block606. However, when the resolution is aligned or when a leaf node is identified (block604), the example metadata manager232inserts metadata into the target hypervoxel (block614). FIG.8includes additional detail of inserting metadata into the target hypervoxel (block614) ofFIG.6. In the illustrated example ofFIG.8, the example metadata manager232retrieves and/or otherwise identifies (a) the metadata (Q) (having an identifier) and (b) the corresponding insertion index location for node N (block802). If the metadata is to be stored locally (block804), then the example metadata manager232inserts the data locally to the application/agent of interest (block806). On the other hand, in view of the limited storage capabilities of one or more agents that may utilize the example hexacloud system200, metadata and/or corresponding data repositories for data may be stored externally. In such circumstances, the example metadata manager232resolves the identifier with an external data source (block808), such as the example metadata store210. 
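The local-versus-external branch of blocks804through812 can be sketched as follows. Here `resolve_proxy` stands in for the identifier resolution and query exchange of block808(and the network interfaces discussed next), so both the callable and the minimal node layout are assumptions made for illustration.

    def insert_metadata(node, q, store_locally, resolve_proxy=None):
        """Store metadata Q in place (block806), or resolve its identifier against
        an external source and keep the returned proxy (blocks808and812)."""
        if store_locally:
            node.metadata = q
        else:
            node.metadata = resolve_proxy(q)   # e.g., resolve a URI/URL to a proxy

    class _Node:                               # minimal stand-in for a hypervoxel node
        metadata = None

    n = _Node()
    insert_metadata(n, {"temp_c": 21.5}, store_locally=True)
    insert_metadata(n, "https://example.com/meta/42", store_locally=False,
                    resolve_proxy=lambda ident: {"proxy_for": ident})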
In some examples, efforts to resolve may occur via one or more network interfaces810with a query containing the identifier, for which the external data source provides a proxy. The example metadata manager232uses the retrieved and/or otherwise received proxy information to apply an informational insertion or update to an external data source (block812), such as the example metadata store210. Control then returns to block616of the illustrated example ofFIG.6to determine whether a leaf node exists and, if not control returns to block606. Otherwise, control returns to block502ofFIG.5. Returning to the illustrated example ofFIG.5, in the event the example STPA222determines that input point P includes spatiotemporal information that is outside the logical boundaries of a current root node (block508), then the example expansion engine234facilitates hexatree expansion (block510).FIG.9includes additional detail of hexatree expansion (block510) of the illustrated example ofFIG.5. In the illustrated example ofFIG.9, it has been previously determined that input point P is not within the boundaries of the current hexanode, and a representation greater than that of the current/original hexanode is needed. Examples disclosed herein generate a new root node that doubles the spatial and temporal boundaries of the original root node using internal nodes and/or the leaf node associated with the input point P. However, although examples disclosed herein facilitate a doubling (or more) of the previous spatiotemporal boundaries, such expansion does not burden the physical memory resources of the application/agent. Instead, the new hexanode is another data structure that indexes the expanded spatiotemporal boundaries so that the application/agent memory management system (e.g., operating system) can populate physical memory only as needed for actual data, thereby avoiding memory consumption for empty space. The example expansion direction analyzer236calculates an expansion direction based on the one or more spatiotemporal parameters of point P (block902). For example, if the x-axis subspace is identified as exceeding the boundaries of the current node, then the new hexatree origin will be expanded with potential for sixteen (16) children. The example offset engine238calculates space/time offset values for a new location of point P (block904). As described above in connection withFIG.7, the example offset engine238may invoke the example index calculator228to identify an appropriate index for the expanded hexatree. Based on the identified index, the example offset engine238calculates deltas to determine a center point (e.g., a hyperspace center point) of the expanded root node (block906) in a manner consistent withFIG.3, described above. The new expanded root node may be designated as R′ (R-prime) to distinguish it from the previous root node. The example re-root engine240re-roots the current root node (R) by assigning it as a child to the newly expanded root node R′ (block908). In the event the current iteration of the example program510ofFIG.9does not expand far enough to accommodate the new input point P, as determined by the example STPA222(block910), then control returns to block902. Otherwise control returns to the illustrated example ofFIG.5. FIG.10is a block diagram of an example processor platform1000capable of executing the instructions ofFIGS.4-9to implement the hexacloud system200ofFIGS.1-3. 
The processor platform1000can be, for example, a server, a personal computer, a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a personal digital assistant (PDA), an Internet appliance, a gaming console, a set top box, or any other type of computing device. The processor platform1000of the illustrated example includes a processor1012. The processor1012of the illustrated example is hardware. For example, the processor1012can be implemented by one or more integrated circuits, logic circuits, microprocessors or controllers from any desired family or manufacturer. The processor1012of the illustrated example includes a local memory1013(e.g., a cache). The processor1012of the illustrated example is in communication with a main memory including a volatile memory1014and a non-volatile memory1016via a bus1018. The volatile memory1014may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM) and/or any other type of random access memory device. The non-volatile memory1016may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory1014,1016is controlled by a memory controller. The processor platform1000of the illustrated example also includes an interface circuit1020. The interface circuit1020may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), and/or a PCI express interface. In the illustrated example, one or more input devices1022are connected to the interface circuit1020. The input device(s)1022permit(s) a user to enter data and commands into the processor1012. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, a laser scanner, an environmental sensor (e.g., temperature, humidity, light, etc.) and/or a voice recognition system. One or more output devices1024are also connected to the interface circuit1020of the illustrated example. The output devices1024can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display, a cathode ray tube display (CRT), a touchscreen, a tactile output device, a printer and/or speakers). The interface circuit1020of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip or a graphics driver processor. The interface circuit1020of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem and/or network interface card to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network1026(e.g., an Ethernet connection, a digital subscriber line (DSL), a telephone line, coaxial cable, a cellular telephone system, etc.). The processor platform1000of the illustrated example also includes one or more mass storage devices1028for storing software and/or data. Examples of such mass storage devices1028include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, RAID systems, and digital versatile disk (DVD) drives. The coded instructions1032ofFIGS.4-9may be stored in the mass storage device1028, in the volatile memory1014, in the non-volatile memory1016, and/or on a removable tangible computer readable storage medium such as a CD or DVD.
From the foregoing, it will be appreciated that the above disclosed methods, apparatus and articles of manufacture improve a data access efficiency for spatiotemporal data types. In particular, examples disclosed herein apply different degrees of symmetry to a hexatree data structure that, in part, facilitates faster bit-wise operations. Additionally, examples disclosed herein reduce platform memory utilization by preventing the storage and/or pre-allocation of physical memory for empty spaces of an area of interest. Example methods, apparatus, systems and articles of manufacture to improve spatial-temporal data management are disclosed herein. Further examples and combinations thereof include the following. Example 1 is an apparatus to improve access efficiency for spatiotemporal data, including a hypervoxel data structure generator to generate a root hexatree data structure having sixteen hypernodes, an octree manager to improve a spatiotemporal data access efficiency by generating a first degree of symmetry in the root hexatree, the octree manager to assign a first portion of the hypernodes to a positive temporal subspace and to assign a second portion of the hypernodes to a negative temporal subspace, and a quadtree manager to improve the spatiotemporal data access efficiency by generating a second degree of symmetry in the root hexatree, the quadtree manager to assign respective hypernodes of the positive temporal subspace and the negative temporal subspace to respective positive and negative spatial subspaces. Example 2 includes the apparatus as defined in example 1, wherein the quadtree manager is to assign the respective positive and negative spatial subspaces to x-axis spatial dimensions, y-axis spatial dimensions and z-axis spatial dimensions. Example 3 includes the apparatus as defined in example 1, wherein the hypervoxel data structure generator is to establish a hyperspace center of the root hexatree data structure. Example 4 includes the apparatus as defined in example 3, wherein the first degree of symmetry is relative to the hyperspace center of the root hexatree data structure. Example 4b includes the apparatus as defined in examples 2 or 3, further including a spatial-temporal point analyzer to extract at least one of spatial coordinate data or temporal data having a corresponding spatial resolution and a temporal resolution, respectively. Example 5 includes the apparatus as defined in example 4, wherein the octree manager is to associate the first portion of the hypernodes with temporal data greater than a temporal value associated with the hyperspace center. Example 6 includes the apparatus as defined in example 4, wherein the octree manager is to associate the second portion of the hypernodes with temporal data less than a temporal value associated with the hyperspace center. Example 7 includes the apparatus as defined in example 3, wherein the quadtree manager is to associate the respective positive spatial subspaces with spatial data greater than a spatial value associated with the hyperspace center. Example 8 includes the apparatus as defined in example 3, wherein the quadtree manager is to associate the respective negative spatial subspaces with spatial data less than a spatial value associated with the hyperspace center. Example 9 includes the apparatus as defined in example 1, further including a spatial-temporal point analyzer to extract spatiotemporal data from an agent in response to a spatiotemporal data input request.
Example 10 includes the apparatus as defined in example 9, wherein the spatiotemporal data includes at least one of spatial coordinate data or temporal data. Example 10b includes the apparatus as defined in examples 2, 3 or 9 further including an expansion direction analyzer to calculate an expansion direction for an expanded hexatree data structure in response to determining the spatiotemporal data input request is outside a spatial-temporal boundary of the root hexanode. Example 11 includes the apparatus as defined in example 9, wherein the spatial-temporal point analyzer is to determine at least one of a spatial resolution of the spatiotemporal data or a temporal resolution of the spatiotemporal data. Example 12 includes the apparatus as defined in example 9, wherein the spatial-temporal point analyzer is to calculate whether respective values of the spatiotemporal data input request are within spatial-temporal boundaries of the root hexanode. Example 13 includes the apparatus as defined in example 12, further including an expansion direction analyzer to calculate an expansion direction for an expanded hexatree data structure in response to the spatial-temporal point analyzer determining that the spatiotemporal data input request is outside the spatial-temporal boundaries of the root hexatree. Example 14 includes the apparatus as defined in example 13, further including an offset engine to calculate spatiotemporal offsets for a data insertion point for the spatiotemporal data input request to be placed in the expanded hexatree data structure. Example 15 includes the apparatus as defined in example 13, further including an offset engine to calculate dimensions of the expanded hexatree data structure based on the spatiotemporal data input request. Example 16 includes the apparatus as defined in example 13, further including a re-root engine to establish the expanded hexatree as a new root node, and to assign the root hexatree as a child of the expanded hexatree. Example 17 is a computer implemented method to improve access efficiency for spatiotemporal data, including generating, by executing an instruction with a processor, a root hexatree data structure having sixteen hypernodes, improving a spatiotemporal data access efficiency by generating, by executing an instruction with the processor, a first degree of symmetry in the root hexatree, assigning, by executing an instruction with the processor, a first portion of the hypernodes to a positive temporal subspace, assigning, by executing an instruction with the processor, a second portion of the hypernodes to a negative temporal subspace; and improving the spatiotemporal data access efficiency by generating, by executing an instruction with the processor, a second degree of symmetry in the root hexatree, and assigning respective hypernodes of the positive temporal subspace and the negative temporal subspace to respective positive and negative spatial subspaces. Example 18 includes the method as defined in example 17, further including assigning the respective positive and negative spatial subspaces to x-axis spatial dimensions, y-axis spatial dimensions and z-axis spatial dimensions. Example 19 includes the method as defined in example 17, further including establishing a hyperspace center of the root hexatree data structure. Example 20 includes the method as defined in example 19, wherein the first degree of symmetry is relative to the hyperspace center of the root hexatree data structure. 
Example 21 includes the method as defined in example 20, further including associating the first portion of the hypernodes with temporal data greater than a temporal value associated with the hyperspace center. Example 22 includes the method as defined in example 20, further including associating the second portion of the hypernodes with temporal data less than a temporal value associated with the hyperspace center. Example 23 includes the method as defined in example 19, further including associating the respective positive spatial subspaces with spatial data greater than a spatial value associated with the hyperspace center. Example 24 includes the method as defined in example 19, further including associating the respective negative spatial subspaces with spatial data less than a spatial value associated with the hyperspace center. Example 25 includes the method as defined in example 17, further including extracting spatiotemporal data from an agent in response to a spatiotemporal data input request. Example 26 includes the method as defined in example 25, wherein the spatiotemporal data includes at least one of spatial coordinate data or temporal data. Example 27 includes the method as defined in example 25, further including determining at least one of a spatial resolution of the spatiotemporal data or a temporal resolution of the spatiotemporal data. Example 28 includes the method as defined in example 25, further including calculating whether respective values of the spatiotemporal data input request are within spatial-temporal boundaries of the root hexanode. Example 29 includes the method as defined in example 28, further including calculating an expansion direction for an expanded hexatree data structure in response to determining that the spatiotemporal data input request is outside the spatial-temporal boundaries of the root hexatree. Example 30 includes the method as defined in example 29, further including calculating spatiotemporal offsets for a data insertion point for the spatiotemporal data input request to be placed in the expanded hexatree data structure. Example 31 includes the method as defined in example 29, further including calculating dimensions of the expanded hexatree data structure based on the spatiotemporal data input request. Example 32 includes the method as defined in example 29, further including establishing the expanded hexatree as a new root node, and assigning the root hexatree as a child of the expanded hexatree. Example 33 is a tangible computer-readable medium comprising instructions which, when executed, cause a processor to at least generate a root hexatree data structure having sixteen hypernodes, improve a spatiotemporal data access efficiency by generating a first degree of symmetry in the root hexatree, assign a first portion of the hypernodes to a positive temporal subspace, assign a second portion of the hypernodes to a negative temporal subspace, and improve the spatiotemporal data access efficiency by generating a second degree of symmetry in the root hexatree, and assigning respective hypernodes of the positive temporal subspace and the negative temporal subspace to respective positive and negative spatial subspaces. Example 34 includes the example tangible computer-readable medium as defined in example 33, wherein the instructions, when executed, further cause the processor to assign the respective positive and negative spatial subspaces to x-axis spatial dimensions, y-axis spatial dimensions and z-axis spatial dimensions.
Example 35 includes the example tangible computer-readable medium as defined in example 33, wherein the instructions, when executed, further cause the processor to establish a hyperspace center of the root hexatree data structure. Example 36 includes the example tangible computer-readable medium as defined in example 35, wherein the instructions, when executed, further cause the processor to identify that the first degree of symmetry is relative to the hyperspace center of the root hexatree data structure. Example 37 includes the example tangible computer-readable medium as defined in example 36, wherein the instructions, when executed, further cause the processor to associate the first portion of the hypernodes with temporal data greater than a temporal value associated with the hyperspace center. Example 38 includes the example tangible computer-readable medium as defined in example 36, wherein the instructions, when executed, further cause the processor to associate the second portion of the hypernodes with temporal data less than a temporal value associated with the hyperspace center. Example 39 includes the example tangible computer-readable medium as defined in example 35, wherein the instructions, when executed, further cause the processor to associate the respective positive spatial subspaces with spatial data greater than a spatial value associated with the hyperspace center. Example 40 includes the example tangible computer-readable medium as defined in example 35, wherein the instructions, when executed, further cause the processor to associate the respective negative spatial subspaces with spatial data less than a spatial value associated with the hyperspace center. Example 41 includes the example tangible computer-readable medium as defined in example 33, wherein the instructions, when executed, further cause the processor to extract spatiotemporal data from an agent in response to a spatiotemporal data input request. Example 42 includes the example tangible computer-readable medium as defined in example 41, wherein the instructions, when executed, further cause the processor to include at least one of spatial coordinate data or temporal data as the spatiotemporal data. Example 43 includes the example tangible computer-readable medium as defined in example 41, wherein the instructions, when executed, further cause the processor to determine at least one of a spatial resolution of the spatiotemporal data or a temporal resolution of the spatiotemporal data. Example 44 includes the example tangible computer-readable medium as defined in example 41, wherein the instructions, when executed, further cause the processor to calculate whether respective values of the spatiotemporal data input request are within spatial-temporal boundaries of the root hexanode. Example 45 includes the example tangible computer-readable medium as defined in example 44, wherein the instructions, when executed, further cause the processor to calculate an expansion direction for an expanded hexatree data structure in response to determining that the spatiotemporal data input request is outside the spatial-temporal boundaries of the root hexatree. Example 46 includes the example tangible computer-readable medium as defined in example 45, wherein the instructions, when executed, further cause the processor to calculate spatiotemporal offsets for a data insertion point for the spatiotemporal data input request to be placed in the expanded hexatree data structure.
Example 47 includes the example tangible computer-readable medium as defined in example 45, wherein the instructions, when executed, further cause the processor to calculate dimensions of the expanded hexatree data structure based on the spatiotemporal data input request.
Example 48 includes the example tangible computer-readable medium as defined in example 45, wherein the instructions, when executed, further cause the processor to establish the expanded hexatree as a new root node, and to assign the root hexatree as a child of the expanded hexatree.
Example 49 is a system to improve access efficiency for spatiotemporal data, including means for generating a root hexatree data structure having sixteen hypernodes, means for improving a spatiotemporal data access efficiency by generating a first degree of symmetry in the root hexatree, means for assigning a first portion of the hypernodes to a positive temporal subspace, means for assigning a second portion of the hypernodes to a negative temporal subspace, and means for improving the spatiotemporal data access efficiency by generating a second degree of symmetry in the root hexatree, and assigning respective hypernodes of the positive temporal subspace and the negative temporal subspace to respective positive and negative spatial subspaces.
Example 50 includes the system as defined in example 49, further including means for assigning the respective positive and negative spatial subspaces to x-axis spatial dimensions, y-axis spatial dimensions and z-axis spatial dimensions.
Example 51 includes the system as defined in example 49, further including means for establishing a hyperspace center of the root hexatree data structure.
Example 52 includes the system as defined in example 51, wherein the first degree of symmetry is relative to the hyperspace center of the root hexatree data structure.
Example 53 includes the system as defined in example 52, further including means for associating the first portion of the hypernodes with temporal data greater than a temporal value associated with the hyperspace center.
Example 54 includes the system as defined in example 52, further including means for associating the second portion of the hypernodes with temporal data less than a temporal value associated with the hyperspace center.
Example 55 includes the system as defined in example 51, further including means for associating the respective positive spatial subspaces with spatial data greater than a spatial value associated with the hyperspace center.
Example 56 includes the system as defined in example 51, further including means for associating the respective negative spatial subspaces with spatial data less than a spatial value associated with the hyperspace center.
Example 57 includes the system as defined in example 49, further including means for extracting spatiotemporal data from an agent in response to a spatiotemporal data input request.
Example 58 includes the system as defined in example 57, wherein the spatiotemporal data includes at least one of spatial coordinate data or temporal data.
Example 59 includes the system as defined in example 57, further including means for determining at least one of a spatial resolution of the spatiotemporal data or a temporal resolution of the spatiotemporal data.
Example 60 includes the system as defined in example 57, further including means for calculating whether respective values of the spatiotemporal data input request are within spatial-temporal boundaries of the root hexatree.
Example 61 includes the system as defined in example 60, further including means for calculating an expansion direction for an expanded hexatree data structure in response to determining that the spatiotemporal data input request is outside the spatial-temporal boundaries of the root hexatree.
Example 62 includes the system as defined in example 61, further including means for calculating spatiotemporal offsets for a data insertion point for the spatiotemporal data input request to be placed in the expanded hexatree data structure.
Example 63 includes the system as defined in example 61, further including means for calculating dimensions of the expanded hexatree data structure based on the spatiotemporal data input request.
Example 64 includes the system as defined in example 61, further including means for establishing the expanded hexatree as a new root node, and means for assigning the root hexatree as a child of the expanded hexatree.
Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent.
DESCRIPTION OF EXAMPLE EMBODIMENTS
In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are not described in exhaustive detail, in order to avoid unnecessarily occluding, obscuring, or obfuscating the present invention.
Example embodiments are described herein according to the following outline:
1.0. General Overview
2.0. Functional Overview
2.1 PRN Generation and Data Change Production/Consumption
2.2 Example System Configuration(s)
2.3 Indexes for Data Change Production and Consumption
2.4 Time Sequences of Invocation Numbers and PRNs
3.0. Example Embodiments
4.0 Implementation Mechanism—Hardware Overview
5.0. Extensions and Alternatives
1.0 General Overview
This overview presents a basic description of some aspects of an embodiment of the present invention. It should be noted that this overview is not an extensive or exhaustive summary of aspects of the embodiment. Moreover, it should be noted that this overview is not intended to be understood as identifying any particularly significant aspects or elements of the embodiment, nor as delineating any scope of the embodiment in particular, nor the invention in general. This overview merely presents some concepts that relate to the example embodiment in a condensed and simplified format, and should be understood as merely a conceptual prelude to a more detailed description of example embodiments that follows below.
It is quite common for applications (e.g., client applications, desktop computer applications, mobile applications, etc.) to be interested in detecting data changes, made during a given time, to data values maintained in a database (or database tables therein), and accessing these changed values and attendant information, possibly as some function of the order in which these values changed. To do so, an index may be created based on system timestamps of the data changes. As a data change consumer of a database table (e.g., a Person table, an Accounts table, etc.), an application can find out all the data changes that have happened in a given time interval (e.g., t1→t3, where t1 and t3 are respectively the beginning time and end time of the given time interval, etc.) by querying against this index for this interval. Similarly, an index may be created based on monotonically increasing (or decreasing) sequence numbers assigned to the changes. An application can find out all the changes that have happened in a given sequence number range (e.g., seq1→seq3, where seq1 and seq3 are respectively the beginning and ending sequence numbers of the given sequence number range, etc.) by querying against this index for this range (which may be translated to or from a time interval corresponding to the sequence number range in some implementation examples). While these approaches seem simple and logically sound, they suffer from serious scalability issues as well as performance issues due to significant contentions and overload conditions in connection with the system-timestamp-based indexes and/or sequence-number-based indexes. Databases (e.g., Oracle, Postgres/Sayonara, etc.) often implement indexes using B-trees or a variation thereof. A B-tree index can easily run into hot block issues.
Most if not all write accesses, as well as many read accesses, to a system-timestamp-based index or a sequence-number-based index go to the last block of the index, where the latest keys (and last processed index entries) such as the latest system timestamps or the latest sequence numbers are concentrated. Data change consumers cause read accesses—and even write accesses if the data change consumers dequeue or remove the latest data changes after data change retrievals/consumption—to the last block of the index, as these applications tend to retrieve or consume the latest data changes to the database table in order to update other application code with these latest data changes. Similarly, data change producers cause write accesses to the last block of the index, as system timestamps or sequence numbers are monotonically increasing or decreasing values over time and the latest data changes tend to concentrate in the last block. In operational scenarios in which frequent write and read accesses are made to the database table by these various data change consumers and producers, a significant contention and overload condition can develop quickly on the rightmost block of the B-tree system-timestamp-based or sequence-number-based index of the database table. The contention and overload condition can exert a non-linear (e.g., exponential, etc.) impact on database access capabilities, as frequent repeated timeouts, retries and errors/failures occur in the overload condition. Consequently, the service rate of the entire platform (or instances thereof) deteriorates in a non-linear (e.g., exponential, etc.) manner. Time and cost per transaction are greatly increased. Customer experience seriously deteriorates. Many other problems may also occur with these approaches. For example, system timestamps may have a precision or granularity of a time unit such as one (1) millisecond. While the use of a system-timestamp-based index does not necessarily cause data loss, such an index may cause loss of information on the exact temporal order of data changes that occur relatively close in time, for example within a time (e.g., 1 ms, etc.) distinguishable by the highest time resolution of system timestamps. More specifically, a temporal order of two or more timewise adjacent data changes may not be established/distinguished with certainty if these data changes occur inside a time unit of a maximum precision or granularity such as one (1) millisecond, as these data changes are likely labeled with the same system timestamp. In addition, a clock on a system or device can get reset, drift, and/or be impacted by daylight saving time changes. Different clocks of different systems or devices may be affected by clock resets, clock drifts, time changes, etc., differently. While a database system can use a coordinated universal time (UTC) clock, other systems and devices interacting with the database system may or may not use the same clock when they interact with the database system. Using a sequence-number-based index to access data changes of a database table may avoid some of the problems associated with a system-timestamp-based index. However, at least the last block (or "hot block") problem remains. In contrast, techniques as described herein can be implemented to avoid scalability and/or contention pitfalls associated with other approaches under which system timestamps or sequence numbers are used as the means to identify data changes in a temporal order.
Data change production and consumption as described herein is not required to rely on actual system timestamps of data change events to identify, index or access data changes, and is thus free of problems arising out of clock synchronization, lags, shifts, resets, different clock sources, different time zones, daylight-saving-related clock changes, etc., among different component systems, processors, devices, different geographic locations, etc., for the purpose of identifying and accessing the data changes. Instead of using system timestamps and/or sequence numbers to index data changes, pseudorandom numbers (PRNs) (or, in general, any numerically unordered numbers generated in a time sequential order or a temporal order) that do not observe a numeric order over time can be used to identify or index data changes made to a database table. These PRNs (or the unordered numbers) are deterministically reproducible through a PRN generator (or any unordered number generation mapping/function) using a preconfigured seed value and a time sequence of numerically ordered invocation numbers as input. In other words, the PRN generator—whether or not it is accessed or implemented by a data change consumer, a data change producer, etc.—can generate from a time sequence of numerically ordered numbers (e.g., numerically ordered invocation numbers, etc.) the same time sequence of pseudorandom numbers given the same seed value; the generated sequence of values may be completely numerically unordered. It is this lack of numeric order that prevents the last block issue, or hot index block issue, discussed above in connection with other approaches that do not implement techniques as described herein. The PRN generator may, but is not necessarily limited to only, be a seeded, mostly uniform, pseudorandom number generator. Using numerically unordered pseudorandom numbers to identify data changes does not prevent a data change production and consumption system as described herein from retaining the ability to access the data changes made to the database table in the temporal order or some function thereof. A subsequence of consecutive data changes made to the database table in the temporal order can be identified or indexed (e.g., by data capture code, by a data change producer operating with a PRN generator as described herein, etc.) with a subsequence of pseudorandom numbers generated from a subsequence of numerically ordered invocation numbers (from N1 to N2) in the temporal order. To detect the same subsequence of consecutive data changes made to the database table in the temporal order, a data change consumer (e.g., operating with a PRN generator as described herein, etc.) can generate the same subsequence of pseudorandom numbers with the same subsequence of numerically ordered invocation numbers (from N1 to N2) in the temporal order, and use the same subsequence of pseudorandom numbers to access or retrieve the subsequence of consecutive data changes made to the database table in the temporal order. Example database tables may include, but are not necessarily limited to only, any of: persisted database tables, system tables, user tables, change tables, materialized query tables, materialized views, temp tables, etc. As used herein, the term "temporal order" refers to a time sequential order in which data changes are made to a database table as described herein.
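By way of a non-limiting illustration, one possible deterministic PRN mapping is sketched below in Java (one of the implementation languages mentioned later in this description). The class name PrnGenerator and the splitmix64-style bit mixer are assumptions made for the sketch, not elements prescribed by this description; any seeded, deterministic function with numerically unordered output could be substituted.

import java.util.ArrayList;
import java.util.List;

/*
 * Minimal sketch of a deterministic PRN generator keyed by a per-table
 * seed value and a numerically ordered invocation number. The mixer is
 * a splitmix64-style construction chosen for illustration only: the
 * same (seed, invocation number) pair always yields the same PRN, and
 * consecutive invocation numbers yield numerically unordered PRNs.
 */
public class PrnGenerator {
    private final long seed; // seed value preconfigured for one database table

    public PrnGenerator(long seed) {
        this.seed = seed;
    }

    /* Deterministically map one invocation number to one PRN. */
    public long prnFor(long invocationNumber) {
        long z = seed + invocationNumber * 0x9E3779B97F4A7C15L;
        z = (z ^ (z >>> 30)) * 0xBF58476D1CE4E5B9L;
        z = (z ^ (z >>> 27)) * 0x94D049BB133111EBL;
        return z ^ (z >>> 31);
    }

    /* PRNs for a closed range of invocation numbers, in temporal order. */
    public List<Long> prnsFor(long firstInvocation, long lastInvocation) {
        List<Long> prns = new ArrayList<>();
        for (long n = firstInvocation; n <= lastInvocation; n++) {
            prns.add(prnFor(n));
        }
        return prns;
    }
}

Because the mapping is a pure function of the seed value and the invocation number, any data change producer or consumer holding the same seed value regenerates the identical sequence without coordination.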
One or more sync tables (e.g., consumer and producer sync tables, etc.) can be used by one or more actors in the data change production and consumption as described herein, such as any, some or all of data change consumers, data change producers, and PRN generators/servers. The latest or last run state can be saved into the one or more sync tables periodically or after a number of data changes have been consumed or produced. For example, after every N invocations are made (e.g., after every N consecutive invocation numbers are used to generate corresponding N pseudorandom numbers, etc.) by a data change producer to label or index corresponding data changes made to the data table, a PRN generator operating with the data change producer can save these invocation numbers (N→M) and the corresponding pseudorandom numbers to a producer sync table. Similarly, after every N invocations are made (e.g., after every N consecutive invocation numbers are used to generate corresponding N pseudorandom numbers, etc.) by a data change consumer to retrieve or consume corresponding data changes made to the data table, a PRN generator operating with the data change consumer can save these invocation numbers (N→M) and the corresponding pseudorandom numbers to a consumer sync table. The latest or last run state as saved in the sync tables, such as the saved invocation numbers and/or the saved pseudorandom numbers, can be used by data change production and consumption operations to seamlessly recover from system or process restarts, failures, (scheduled or unscheduled) shutdowns, etc. A PRN generator operating with a data change consumer or a data change producer can reset or recover based on the latest or last run state saved in the sync table, when recovering from a system/application restart (e.g., due to any reason, etc.). For example, the PRN generator can query a database to determine or retrieve a seed value for a database table upon restarting. Furthermore, the PRN generator can retrieve, from the sync tables, the last saved (or recorded) sequence or subsequence of invocation numbers (e.g., a set, a sequence, a subsequence, etc., of last recorded invocation numbers, etc.) and the corresponding sequence or subsequence of pseudorandom numbers that have been used to index or to retrieve/consume corresponding data changes. The pseudorandom numbers can be retrieved from the sync tables in the temporal order if the pseudorandom numbers are so stored there. Additionally, optionally or alternatively, the PRN generator can be invoked a number of times (e.g., N times, etc.) with the last saved invocation numbers (e.g., retrieved from the sync tables, etc.) to obtain the corresponding pseudorandom numbers in the temporal order. A check can be made to determine whether the pseudorandom numbers match pseudorandom numbers of the last processed data changes in the database table. If the sequences (or subsequences thereof) derived from the sync tables and from the last processed data changes in the database table match each other, it may be inferred that the pseudorandom numbers derived from the sync tables and the invocation numbers used to derive these pseudorandom numbers represent the last known recorded sync state of the PRN generator. Thereafter, invocation numbers (immediately) following the invocation numbers in the sync tables can be used to generate new pseudorandom numbers for producing or consuming new data changes to the database table. On the other hand, if the sequences derived from the sync tables and from the last processed data changes in the database table do not match, it may be inferred that the pseudorandom numbers derived from the sync tables and the invocation numbers used to derive these pseudorandom numbers no longer represent the last known recorded sync state of the PRN generator. The PRN generator may use an additional number (e.g., z additional entries, where z is a positive integer, etc.) of temporally/numerically ordered invocation numbers (immediately) following the invocation numbers retrieved from the sync tables to generate an additional number (z entries) of pseudorandom numbers, and use these pseudorandom numbers to query for the last processed rows in the database table to see if the additional number of pseudorandom numbers matches with pseudorandom numbers of the last processed rows in the database table. This process can be continuously repeated until a match is found between the pseudorandom numbers starting from those generated based on the last saved invocation numbers in the sync tables and the pseudorandom numbers in the last processed rows (or row groups) in the database table. In some embodiments, the additionally generated pseudorandom numbers may go beyond the pseudorandom numbers in the last processed rows in the database table. In such cases, a binary chop algorithm, method, procedure, etc., with a logarithmic computational complexity can be used to determine or establish the last used/assigned invocation number, the last used/assigned pseudorandom number, etc., for the last processed row (or the last processed row group) in the database table. Once such an invocation number, pseudorandom number, etc., for the last processed row (or the last processed row group) in the database table is known, the data change consumer or the data change producer, which operates in conjunction with the PRN generator, can continue data change consumption and production as before.
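The sync-table bookkeeping and resume behavior described above might be sketched as follows; the SyncStore abstraction, the batch size, and the method names are assumptions for the sketch rather than interfaces defined by this description. Resolving the case where the last saved state lags the actual last processed row is handled by the probing and binary chop described later in this description.

import java.util.OptionalLong;

/*
 * Sketch of sync-table checkpointing: persist the last used invocation
 * number/PRN pair once every N invocations so that a restarted producer
 * or consumer can resume. SyncStore stands in for a producer or
 * consumer sync table; it is an assumed abstraction.
 */
public class SyncCheckpointer {
    public interface SyncStore {
        void save(long invocationNumber, long prn); // record last run state
        OptionalLong lastSavedInvocation();         // empty if nothing saved yet
    }

    private final SyncStore store;
    private final long batchSize; // save once every N invocations
    private long sinceLastSave = 0;

    public SyncCheckpointer(SyncStore store, long batchSize) {
        this.store = store;
        this.batchSize = batchSize;
    }

    /* Called after each invocation number/PRN pair is used. */
    public void onInvocationUsed(long invocationNumber, long prn) {
        if (++sinceLastSave >= batchSize) {
            store.save(invocationNumber, prn);
            sinceLastSave = 0;
        }
    }

    /* On restart, a first guess is the invocation after the last saved one;
     * up to batchSize - 1 later invocations may still need to be probed for. */
    public long resumeInvocation() {
        return store.lastSavedInvocation().orElse(0L) + 1;
    }
}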
Example PRN generators, or functions/mappings implemented therein, may include, but are not necessarily limited to only, a function or mapping (e.g., implemented with JAVA, APEX, a saved procedure, a callable statement, a database procedure, etc.). A PRN generator may be implemented as a part of, or as a separate entity operating in conjunction with, any, some or all of: data change consumers, data change producers, a PRN server, etc. A time sequence of pseudorandom numbers generated deterministically/reproducibly by PRN generator(s) under techniques as described herein can be used by various actors in data change production and consumption to avoid time synchronization, thereby eliminating many problems associated with other approaches based on system timestamps and sequence numbers, regardless of how many of these actors there are and regardless of whether these actors are implemented on different systems, different devices, different processes, different threads, different instances, etc. Such a time sequence of pseudorandom numbers can be used to automatically synchronize itself among all these actors, for example without any need to perform clock synchronizations and other error-prone operations. Various modifications to the preferred embodiments and the generic principles and features described herein will be readily apparent to those skilled in the art. Thus, the disclosure is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features described herein.
2.0 Functional Overview
FIG.1Aillustrates an example overall data change production and consumption framework100. Example computing systems that implement the data change production and consumption framework (100) may include, but are not necessarily limited to, any of: a large-scale cloud-based computing system, a system with multiple datacenters, multitenant data service systems, web-based systems, systems that support massive volumes of concurrent and/or sequential transactions and interactions, database systems, and so forth. Various system constituents may be implemented through software, hardware, or a combination of software and hardware. Any, some or all of these system constituents may be interconnected and may communicate directly, or through one or more networks120. In some embodiments, the computing system that hosts the organizations may comprise a plurality of datacenters such as112-1,112-2,112-3, etc., which may be located at the same or different geographic locations such as the same or different continents, the same or different countries, the same or different states, the same or different regions, and so forth. Each data center may implement a set of system instances to host respective organizations. These organizations may contract with the owner of the computing system such as a multi-tenant computing system to host their respective (e.g., organization-specific, organization-common, etc.) application data, and to provide their (e.g., organization-specific, organization-common, etc.) application services to their respective users and/or customers. Examples of application data may include, but are not necessarily limited to only, organization-specific application data, organization-common application data, application configuration data, application data, application metadata, application code, etc., specifically generated or configured for (e.g., organization-specific, organization-common, etc.) application services of an individual organization. As used herein, the term "organization" may refer to some or all of (e.g., complete, original, a non-backup version of, a non-cached version of, an online version of, original plus one or more backup or cached copies, an online version plus one or more offline versions of, etc.) application data of an organization hosted in the computer system and application services of the organization based at least in part on the application data. As illustrated inFIG.1A, each datacenter (e.g.,112-1,112-2,112-3, etc.) may comprise a set of one or more system instances. A first datacenter112-1comprises first system instances110-1-1,110-1-2, etc.; a second datacenter112-2comprises second system instances110-2-1,110-2-2, etc.; a third datacenter112-3comprises third system instances110-3-1,110-3-2, etc. Each system instance (e.g.,110-1-1,110-1-2,110-2-1,110-2-2,110-3-1,110-3-2, etc.) in the hosting computing system can host up to a maximum number of organizations such as 5,000 organizations, 10,000 organizations, 15,000+ organizations, etc. As illustrated inFIG.1A, the system instance (110-1-1) in the datacenter (112-1) may host a first organization114-1and a second organization114-2, among others; the system instance (110-1-2) in the datacenter (112-1) may host a third organization114-3, among others.
The multitenant computing system may comprise application servers and database servers in system instances for processing data access/service operations related to application services invoked by user applications running on user devices118, including but not limited to a first user device118-1, a second user device118-2, a third user device118-3, and so forth (and/or other data manipulation operations and/or queries originated elsewhere). These data access/service operations may be serviced by the application servers operating in conjunction with the database servers. The data access/service operations may cause data changes to be made (e.g., persistently, etc.) to standard and/or custom objects maintained by the system instances for one or more organizations in the plurality of organizations hosted in the multitenant computing system. Additionally, optionally or alternatively, the data access/service operations may cause the (e.g., persistent, etc.) data changes made to the standard and/or custom objects to be consumed by data change consumers, for example for the purpose of propagating the data changes to some or all of the user applications running on the user devices (118).
2.1 PRN Generation and Data Change Production/Consumption
FIG.1Billustrates an example system configuration for data change production and consumption in the multitenant computing system. In some embodiments, a system instance (e.g.,110-1, etc.) may comprise one or more data change consumers102, one or more data change producers104, a pseudorandom number (PRN) server106, etc. Various system constituents as illustrated inFIG.1Bmay be implemented through software, hardware, or a combination of software and hardware. Any, some or all of these system constituents may be interconnected and may communicate directly, or through one or more networks (e.g.,120, etc.). In some embodiments, a single server such as a database server, a database engine, and so forth, may implement some or all of the data change consumers (102), the data change producers (104), the pseudorandom number (PRN) server (106), etc. In some embodiments, two or more servers such as a database server and an application server operating with the database server, and so forth, may collectively (e.g., in combination, etc.) implement some or all of the data change consumers (102), the data change producers (104), the pseudorandom number (PRN) server (106), etc. By way of example but not limitation, one or more first servers may implement one or more instances of the data change consumers (102), whereas one or more second servers may implement one or more instances of combinations of the data change producers (104) and the PRN server (106). In some embodiments, some or all of the data change consumers (102) and the data change producers (104) may interact with user applications (e.g., desktop applications, mobile apps, etc.) running on the user devices (118), platform applications of the multitenant computing system, external (e.g., cloud-based, etc.) applications running on external (e.g., cloud-based, etc.) systems outside the multitenant computing system, etc. Some of these interactions cause the data change producers (104) to make time-ordered data changes to database tables maintained with a database108. Some or all of these data changes may be made according to one or more schedules, on demand, periodically, and so forth.
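A producer-side interaction of this kind can be sketched as follows, ahead of the detailed walk-through in the sections below; the PrnService and RowWriter abstractions and their method names are assumptions for the sketch, not interfaces defined by this description.

/*
 * Sketch of the producer side: obtain the next PRN for a table from the
 * PRN server, then write the data change with that PRN stored in the
 * indexed table column. PrnService and RowWriter are assumed
 * abstractions over the PRN server and the database, respectively.
 */
public class ChangeProducer {
    public interface PrnService {
        long nextPrnFor(String tableName); // server also advances its invocation number
    }

    public interface RowWriter {
        void writeChange(String tableName, long prn, String payload);
    }

    private final PrnService prnService;
    private final RowWriter writer;

    public ChangeProducer(PrnService prnService, RowWriter writer) {
        this.prnService = prnService;
        this.writer = writer;
    }

    /* Make one data change, labeled with the next PRN in temporal order. */
    public void produce(String tableName, String payload) {
        long prn = prnService.nextPrnFor(tableName);
        writer.writeChange(tableName, prn, payload); // PRN lands in the indexed column
    }
}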
Some of these interactions cause the data change consumers (102) to query or consume some or all of the data changes made by the data change producers (104) to the database tables maintained with the database (108). Some or all of these data changes may be queried or consumed according to one or more schedules, on demand, periodically, and so forth.
2.2 Example System Configuration(s)
FIG.1Cillustrates example data change production and consumption in the multitenant computing system. Various system constituents as illustrated inFIG.1Cmay be implemented through software, hardware, or a combination of software and hardware. Any, some or all of these system constituents may be interconnected and may communicate directly, or through one or more networks (e.g.,120, etc.). In some embodiments, a database (e.g.,108, etc.) in a system instance (e.g.,110-1, etc.) of the multitenant computing system may store database tables for an organization (e.g.,114-1, etc.). In some embodiments, one or more data change consumers (e.g.,102, etc.) may comprise (e.g., one or more respective instances of, etc.) a data change retriever130, a PRN generator132, a data change consumer recovery handler134, etc. The data change retriever (130) may be implemented through software, hardware or a combination thereof. The data change retriever (130) may perform data change queries to retrieve some or all of the data changes to some or all of the database tables of the organization (114-1) as maintained in the database (108), up to a certain logical or wall clock time as corresponding to, or represented by, a certain invocation number. The PRN generator (132) may be implemented through software, hardware or a combination thereof. The PRN generator (132) may implement the same PRN generation function implemented by a PRN server (e.g.,106, etc.) that provides pseudorandom numbers to the data change producers (104) to label data changes to a (e.g., each, any, etc.) database table of the organization (114-1). Thus, given the same seed value assigned to the database table and the same invocation number, both the PRN generator (132) used/implemented/accessed by the data change consumers (102) and the PRN server used/implemented/accessed by the data change producers (104) compute/provide/generate the same pseudorandom number. The data change consumer recovery handler (134) may be implemented through software, hardware or a combination thereof. The data change consumer recovery handler (134) may implement process recovery functionality to handle process restarts, process failures, etc., of some or all processes (e.g., the data change consumers (102), the data change producers (104), etc.) in connection with data change production and consumption. In some embodiments, one or more data change producers (e.g.,104, etc.) may comprise (e.g., one or more respective instances of, etc.) a data change capturer136, a PRN server (e.g.,106, etc.), a data change producer recovery handler138, etc. The data change capturer (136) may be implemented through software, hardware or a combination thereof. The data change capturer (136) may perform data change operations to make or commit some or all of the data changes to some or all of the database tables of the organization (114-1) as maintained in the database (108). The data change producer recovery handler (138) may be implemented through software, hardware or a combination thereof.
The data change producer recovery handler (138) may implement process recovery functionality to handle process restarts, process failures, etc., of some or all processes (e.g., the data change consumers (102), the data change producers (104), etc.) in connection with data change production and consumption. A data change producer in the data change producers (104) may receive a first request (e.g., from a user device, from an application operating with a user device, etc.) for performing a first specific data change relative to one or more table rows of a database table maintained with the database (108). The database table comprises a plurality of table columns. Under techniques as described herein, the plurality of table columns of the database table comprises (e.g., at least, etc.) an indexed table column used to store pseudorandom numbers to be assigned (e.g., by the PRN server (106), etc.) to, and to distinguish among, different data changes made to the database table in the database (108). In response to receiving the first request for performing the first specific data change (e.g., the latest data change to the database table, etc.), the data change producer may request the PRN server (106) to assign a first specific pseudorandom number to the first specific data change to be made to the database table in the database (108). The PRN server (106) can invoke a PRN generator (as a part of the PRN server (106) or operating in conjunction with the PRN server (106)) to generate the first specific pseudorandom number to be assigned to identify the first specific data change. The first specific pseudorandom number is generated by the PRN generator based on a seed value and a current invocation number maintained by the PRN server (106) and/or the PRN generator. The seed value may be among one or more seed values126(stored in the database (108)) for one or more database tables (in the database (108)) including but not necessarily limited to only the above-mentioned database table; each of the one or more seed values (126) is assigned to a respective database table in the one or more database tables. For example, the seed value for the above-mentioned database table may be previously (e.g., before the request for performing the first specific data change is received, before any data changes are made to the database table, etc.) assigned to, or preconfigured for, the database table prior to and independent of receiving any request (e.g., the first request, etc.) for performing a data change. In some embodiments, after using the current invocation number to generate the first specific pseudorandom number for the first specific data change, the PRN server (106) and/or the PRN generator increment the current invocation number (e.g., by one (1), etc.). The incremented current invocation number is to be used for the next data change, to the same database table, immediately following (in time or in a temporal order) the first specific data change among all data changes to the database table. Upon receiving the first specific pseudorandom number from the PRN server (106) as generated by the PRN generator, the data change producer (or the data change capturer (136) in the one or more data change producers (104)) causes or makes the first specific data change to be performed relative to (e.g., in, etc.) the one or more table rows of the database table. As a part of or in addition to the first specific data change, the first specific pseudorandom number is (e.g., caused to be, etc.)
stored in the indexed table column for the one or more table rows of the database table. As a result, a PRN-based index generated based on the indexed table column storing pseudorandom numbers corresponding to respective data changes is updated to include the first specific pseudorandom number as a part of index entries in the PRN-based index. The PRN-based index may be among one or more PRN-based indexes122(e.g., stored in the database (108), etc.) for the one or more database tables maintained in the database (108). A data change consumer can use the first specific pseudorandom number as a key to look up in the PRN-based index to access the first specific data change made to the database table. Subsequently, in response to receiving a second request for performing a second specific data change (e.g., the latest data change to the database table, etc.), the data change producer (or another data change producer) may request the PRN server (106) to assign a second specific pseudorandom number to the second specific data change to be made to the database table in the database (108). The second specific data change is to be made to the database table and immediately follows (in time or in a temporal order) the first specific data change among all data changes made to the database table. The PRN server (106) can invoke the PRN generator to generate the second specific pseudorandom number to be assigned to identify the second specific data change. The second specific pseudorandom number is generated by the PRN generator based on the seed value and the current invocation number as maintained by the PRN server (106) and/or the PRN generator. The seed value is a constant for the database table and hence is the same as that used for generating the first specific pseudorandom number. In contrast, the current invocation number is a monotonically increasing value (or in some other embodiments a monotonically decreasing value) over time for the database table. A unique value of the current invocation number is assigned to each data change among a plurality of data changes made to the database table. Upon receiving the second specific pseudorandom number from the PRN server (106) as generated by the PRN generator, the data change producer (or the data change capturer (136) in the one or more data change producers (104)) causes or makes the second specific data change to be performed relative to (e.g., in, etc.) the database table. As a part of or in addition to the second specific data change, the second specific pseudorandom number is caused or made to be stored in the indexed table column for the one or more table rows of the database table. As a result, the PRN-based index using the indexed table column storing pseudorandom numbers for respective data changes is updated to include the second specific pseudorandom number as a part of the index entries in the PRN-based index. A data change consumer can use the second specific pseudorandom number as a key to look up in the PRN-based index to access the second specific data change made to the database table. The above-described procedure in connection with making the first specific data change and/or the second specific data change can be repeated any number of (e.g., single, multiple, over time, periodically, on demand, etc.) times. As a result, even though invocation number values (e.g., a time sequence of values of the current invocation number, etc.)
used for generating PRNs for a time sequence of data changes made to the database table are in numeric order (either a numeric order of monotonically increasing or, in some other embodiments, a numeric order of monotonically decreasing), the PRNs for the time sequence of data changes are not in a numeric order but rather form a deterministic sequence of unordered numeric values as determined by the seed value in combination with the invocation number values in the numeric order. By way of illustration but not limitation, the PRN server (106), which may or may not be implemented as a part of the one or more data change producers (104), assigns a time sequence of numerically unordered pseudorandom numbers, generated from a time sequence of numerically ordered invocation numbers, to a time sequence of data changes (or data changes in a temporal order or in a time order) to a database table in the database (108). The PRN server (106) can determine or retrieve a seed value assigned to the database table in the database (108), which may store one or more seed values (126) that have already been assigned to one or more database tables including but not necessarily limited to only the database table. Over time, the PRN server (106) may be invoked or requested (e.g., sporadically, frequently, periodically, on demand, etc.) by the one or more data change producers (104) for assigning the time sequence of numerically unordered pseudorandom numbers to the time sequence of data changes to the database table in the database (108). From request (or invocation) information accompanying requests to (or invocations of) the PRN server (106) from the one or more data change producers (104), the PRN server (106) determines or identifies the time sequence of numerically ordered invocation numbers in the form of a plurality of numerically ordered invocation numbers. These invocation numbers are assigned to the time sequence of data changes in the form of a plurality of temporally ordered data changes to be made by the one or more data change producers (104) relative to (or in) the database table. Using the seed value assigned to the database table and the time sequence of numerically ordered invocation numbers in the form of the plurality of numerically ordered invocation numbers, the PRN server (106) generates the time sequence of numerically unordered pseudorandom numbers in the form of a plurality of sequentially generated but numerically unordered pseudorandom numbers. The PRN server (106) provides, to the one or more data change producers (104), the sequentially generated but numerically unordered pseudorandom numbers, which are used to index the temporally ordered data changes to the database table, respectively. One or more sync tables (124) may be maintained, for example in the database (108), to keep track of pseudorandom numbers that have been assigned to various database tables in the database (108). The PRN server (106) may internally (e.g., in main memory, in RAM, in cache memory, in one or more data structures, in one or more data constructs, etc.) maintain a sequence of yet-to-be-saved pseudorandom numbers for assigned pseudorandom numbers (assigned to data changes to the database table) that have yet to be saved or recorded in a corresponding sync table (among the one or more sync tables (124)) for the database table. The PRN server (106) (e.g., periodically, at each invocation by a data change producer for generating a pseudorandom number, etc.)
determines whether a total number of pseudorandom numbers in the sequence of yet-to-be-saved pseudorandom numbers including the plurality of sequentially generated but numerically unordered pseudorandom numbers assigned to the plurality of temporally ordered data changes reaches a maximum total number threshold. Here, each pseudorandom number in the sequence of yet-to-be-saved pseudorandom numbers is generated based on the seed value and a respective invocation number in the sequence of corresponding invocation numbers. The pseudorandom number was used to index a respective data change in a sequence of temporally ordered data changes to which the sequence of corresponding invocation numbers is assigned. The respective invocation number in the sequence of corresponding invocation numbers is assigned to the respective data change in the sequence of temporally ordered data changes. In response to determining that the total number of pseudorandom numbers in the sequence of yet-to-be-saved pseudorandom numbers reaches the maximum total number threshold, the PRN server (106) saves the sequence of yet-to-be-saved pseudorandom numbers and a sequence of corresponding invocation numbers in the sync table maintained in the database (108). Otherwise, the PRN server (106) continues to internally maintain the yet-to-be-saved pseudorandom numbers and/or corresponding invocation numbers. In some embodiments, a data change consumer in the one or more data change consumers (102) retrieves data changes to the database table. The data change consumer (or the data change retriever (130) in the one or more data change consumers (102)) determines a first invocation number and a second invocation number greater than the first invocation number. The data change retriever (130) generates a sequence of pseudorandom numbers for all invocation numbers ranging between the first invocation number and the second invocation number. The sequence of pseudorandom numbers begins with a first pseudorandom number generated based on the seed value assigned to the database table and the first invocation number, and ends with a second pseudorandom number generated based on the seed value and the second invocation number. The data change retriever (130) uses the sequence of pseudorandom numbers to retrieve a sequence of data changes to the database table. The sequence of data changes begins with a first data change indexed with the first pseudorandom number. The sequence of data changes ends with a second data change indexed with a third pseudorandom number. The third pseudorandom number is in the sequence of pseudorandom numbers. The data change retriever (130) determines whether the third pseudorandom number is the same as the second pseudorandom number. In response to determining that the third pseudorandom number is not the same as the second pseudorandom number, the data change retriever (130) saves a subsequence of pseudorandom numbers in the sequence of pseudorandom numbers to be used in a subsequent request for retrieving data changes to the database table. The subsequence of pseudorandom numbers begins with a fourth pseudorandom number, which is immediately after the third pseudorandom number in the sequence of pseudorandom numbers.
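One possible rendering of this consumer-side retrieval loop is sketched below; the ChangeStore abstraction and the way the resume point is reported are assumptions for the sketch. It reuses the illustrative PrnGenerator sketched in Section 1.0 above.

import java.util.List;

/*
 * Sketch of consumer-side retrieval: regenerate the PRNs for a range of
 * invocation numbers, consume the corresponding data changes in
 * temporal order, and report where to resume if the producer has not
 * yet reached the end of the range. ChangeStore is an assumed
 * abstraction over lookups against the PRN-based index.
 */
public class ChangeConsumer {
    public interface ChangeStore {
        boolean changeExists(long prn); // is a row indexed by this PRN present?
        void consumeChange(long prn);   // retrieve/process the indexed change
    }

    private final PrnGenerator generator;
    private final ChangeStore store;

    public ChangeConsumer(PrnGenerator generator, ChangeStore store) {
        this.generator = generator;
        this.store = store;
    }

    /*
     * Consume changes for invocation numbers first..second in temporal
     * order; returns the invocation number to resume from next time.
     */
    public long consumeRange(long first, long second) {
        List<Long> prns = generator.prnsFor(first, second);
        for (int i = 0; i < prns.size(); i++) {
            long prn = prns.get(i);
            if (!store.changeExists(prn)) {
                return first + i; // producer has not reached this invocation yet
            }
            store.consumeChange(prn);
        }
        return second + 1; // the whole range was consumed
    }
}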
In some embodiments, the data change consumer saves the third pseudorandom number in a sync table (among one or more sync tables124maintained with the database (108)) so that on process restart/recovery (e.g., restarting some or all of the one or more data change consumers (102), etc.), one or more new data change consumers can use the sync table to determine the last invocation number (in the previous life cycle) for which a corresponding data change has been retrieved. The third pseudorandom number and/or the corresponding invocation number, as saved in the sync table, can be used to facilitate recovery from process failures and/or process restarts. A new PRN server can use saved pseudorandom numbers and/or saved corresponding invocation numbers (including but not necessarily limited to the third pseudorandom number and/or the corresponding invocation number) in the sync table to determine the last saved pseudorandom number and/or the last saved invocation number in the previous life cycle of the PRN server. Based on the last saved pseudorandom number and/or the last saved invocation number, the new PRN server can start probing the database table to determine whether the last saved pseudorandom number and/or the last saved invocation number correspond to the actual last data change to the database table, or whether any additional data change(s) occurred subsequent to the data change to which the last saved pseudorandom number and/or the last saved invocation number are assigned. By way of illustration but not limitation, the PRN server (106) (or a new instance thereof, a new life cycle thereof) can recover from a process restart or a process failure as follows. On restarting from the process restart/failure, the PRN server (106) can determine/retrieve the last saved assigned pseudorandom number and/or the last saved assigned invocation number and a seed value assigned to a database table, some or all of which may be persisted locally or remotely (e.g., stored in the database (108), etc.). It should be noted that, in some embodiments, the last saved assigned pseudorandom number that indexes a data change made to the database table may be derived (e.g., instead of retrieved) based on the seed value and the last saved assigned invocation number. The PRN server (106) may generate a test (e.g., candidate, probing, etc.) pseudorandom number based on the seed value and a test invocation number generated by incrementing the last saved assigned invocation number. Using the test pseudorandom number, the PRN server can determine whether a specific data change indexed by the test pseudorandom number was made to the database table in the previous life cycle by examining the database table to determine whether the test pseudorandom number is stored with the data change in the database table. In response to determining the test pseudorandom number was used to index the specific data change made to the database table, the PRN server (106) can perform the following steps, repeatedly, recursively and/or iteratively. First, the PRN server (106) can generate a second test (or probing) pseudorandom number based on the seed value and a second test invocation number generated by incrementing (e.g., by one (1), by five (5), by one hundred (100), . . . by twenty thousand (20,000), by a step corresponding or scalable to how frequently the database table is updated/changed) the test (or probing) invocation number previously mentioned.
Second, the PRN server (106) can determine whether the second test pseudorandom number was used to index a second specific data change made to the database table, similar to the steps performed in connection with the previously mentioned test pseudorandom number. The foregoing can be performed repeatedly, recursively and/or iteratively, until the PRN server (106) encounters an N-th test (or probing) pseudorandom number or an N-th test (or probing) invocation number to which a corresponding data change is not found in the database table, where N is a positive integer. In operational scenarios in which a step value greater than one (1) is used to increment the foregoing test (or probing) invocation number(s) each time, the (N−1)-th test (or probing) pseudorandom number or the (N−1)-th test (or probing) invocation number may not actually indicate the last assigned pseudorandom number or the last assigned invocation number; correspondingly, the N-th test (or probing) pseudorandom number or the N-th test (or probing) invocation number may not actually indicate the first unassigned pseudorandom number or the first unassigned invocation number. In some embodiments, a binary chop algorithm (e.g., with a logarithmic computational complexity, etc.), method, procedure, etc., may be implemented and/or performed by the PRN server (106) to identify the last actually assigned pseudorandom number or the last actually assigned invocation number to the database table. For example, in response to determining that the N-th test pseudorandom number and/or the N-th invocation number were not used to index a specific data change made to the database table, the PRN server (106) can perform the following operations. In the first operation (or step), the PRN server (106) generates an (N+1)-th test pseudorandom number based on the seed value and an (N+1)-th test (probing) invocation number. The (N+1)-th test invocation number is an intermediate value (or whole number) between the (N−1)-th test invocation number and the N-th test invocation number. In the second operation (or step), the PRN server (106) can determine whether the (N+1)-th test pseudorandom number was used to index a specific data change made to the database table. In response to determining that the (N+1)-th test pseudorandom number was used to index a specific data change made to the database table, the PRN server (106) generates an (N+2)-th test pseudorandom number based on the seed value and an (N+2)-th test (probing) invocation number. The (N+2)-th test invocation number is an intermediate value (or whole number) between the (N+1)-th test invocation number and the N-th test invocation number. The first and second operations (or steps) as previously described are then repeated. Otherwise, in response to determining that the (N+1)-th test pseudorandom number was not used to index a specific data change made to the database table, the PRN server (106) generates an (N+2)-th test pseudorandom number based on the seed value and an (N+2)-th test (probing) invocation number. The (N+2)-th test invocation number is an intermediate value (or whole number) between the (N−1)-th test invocation number and the (N+1)-th test invocation number. The first and second operations (or steps) as previously described are then repeated.
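The forward probing and binary chop just described might be rendered as in the sketch below; the RowProbe abstraction and the step parameter are assumptions for the sketch.

/*
 * Sketch of restart recovery: step forward from the last saved
 * invocation number until a probing PRN has no matching row, then
 * binary-chop between the last hit and the first miss to find the last
 * actually assigned invocation number. RowProbe is an assumed
 * abstraction over "was a data change indexed by this PRN made to the
 * database table?".
 */
public class RecoveryProbe {
    public interface RowProbe {
        boolean rowExistsFor(long prn);
    }

    private final PrnGenerator generator;
    private final RowProbe probe;

    public RecoveryProbe(PrnGenerator generator, RowProbe probe) {
        this.generator = generator;
        this.probe = probe;
    }

    /* Returns the last invocation number whose PRN indexes an existing row. */
    public long lastAssignedInvocation(long lastSavedInvocation, long step) {
        long lastHit = lastSavedInvocation;          // known to index a row
        long firstMiss = lastSavedInvocation + step; // candidate first gap
        while (probe.rowExistsFor(generator.prnFor(firstMiss))) {
            lastHit = firstMiss;   // still finding rows; keep stepping forward
            firstMiss += step;
        }
        while (firstMiss - lastHit > 1) {            // logarithmic binary chop
            long mid = lastHit + (firstMiss - lastHit) / 2;
            if (probe.rowExistsFor(generator.prnFor(mid))) {
                lastHit = mid;
            } else {
                firstMiss = mid;
            }
        }
        return lastHit;
    }
}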
In some embodiments, a data change consumer as described herein also can save pseudorandom numbers and/or corresponding invocation numbers for which corresponding data changes to the database table have been retrieved in a (e.g., consumer, etc.) sync table, which may be persisted locally or remotely, for example in the database (108). These saved pseudorandom numbers and/or the corresponding invocation numbers in the (consumer) sync table can be used to facilitate recovery from process failures and/or process restarts. A new data change consumer can use saved pseudorandom numbers and/or saved corresponding invocation numbers in the (consumer) sync table to determine the last pseudorandom number and/or the last invocation number in the previous life cycle of the data change consumer and further determine the last retrieved data change to which the last pseudorandom number and/or the last invocation number are assigned.
2.3 Indexes for Data Change Production and Consumption
FIG.2Aillustrates an example database table (denoted as "PersonTable1") comprising a plurality of table columns such as "name", "description", "system timestamp," etc. A first data change may be made to the "PersonTable1" table at a first time as indicated by a first "system timestamp" value of "t1". The first data change comprises a first "name" value of "A1" and a first "description" value of "D1". A second data change may be made to the "PersonTable1" table at a second time, subsequent to the first time, as indicated by a second "system timestamp" value of "t2". The second data change comprises a second "name" value of "A2" and a second "description" value of "D2". A third data change may be made to the "PersonTable1" table at a third time, subsequent to the second time, as indicated by a third "system timestamp" value of "t3". The third data change comprises a third "name" value of "A3" and a third "description" value of "D3". The "system timestamp" values such as "t1", "t2", "t3", etc., may be generated automatically to indicate specific time points (e.g., logical time points, wall clock times, beginning times of database transactions that make the data changes, commit times of database transactions that make the data changes, etc.) at which the data changes are made to the database table. The "system timestamp" values are monotonically increasing over time. Thus, the "system timestamp" values over time exhibit a numeric order that is the same as a temporal order in which the data changes are made to the database table "PersonTable1". As used herein, the term "data change" may refer to a change to data in a database table since the last processed change to the database table. A system timestamp or a system timestamp value for a data change refers to a specific time point such as a logical time point, a wall clock time, the beginning time of a database transaction that makes the data change, the commit time of the database transaction that makes the data change, etc. In some embodiments, a table column in a database table, such as the "system timestamp" table column of the database table "PersonTable1" as illustrated inFIG.2A, may be used as an index column. To access data changes to a database table in a temporal order in which the data changes are made, a data change consumer of the database table (e.g., a Person table, an Accounts table, etc.) can use a system-timestamp-based index to find out all the data changes that have happened in a given time interval.
For example, a data change consumer may query all data changes that fall within a time range such as t1→t3 as illustrated inFIG.2Ausing the system-timestamp-based index of the database table. FIG.2Billustrates an example database table (denoted as "PersonTable2") comprising a plurality of table columns such as "name", "description", "sequence number," etc. A first data change may be made to the "PersonTable2" table at a first logical time as indicated by a first "sequence number" value of "seq1". InFIG.2B, as inFIG.2A, the first data change comprises a first "name" value of "A1" and a first "description" value of "D1". A second data change may be made to the "PersonTable2" table at a second logical time, subsequent to the first logical time, as indicated by a second "sequence number" value of "seq2". The second data change comprises a second "name" value of "A2" and a second "description" value of "D2". A third data change may be made to the "PersonTable2" table at a third logical time, subsequent to the second logical time, as indicated by a third "sequence number" value of "seq3". The third data change comprises a third "name" value of "A3" and a third "description" value of "D3". The "sequence number" values such as "seq1", "seq2", "seq3", etc., may be generated automatically to indicate specific logical times at which the data changes are made to the database table. In some embodiments, the "sequence number" values are monotonically increasing (may or may not increment with a constant step) over time, such as 1, 2, 7, 9, 13, 20, etc., along a time direction. In these embodiments, the "sequence number" values over time exhibit a numeric order that is the same as a temporal order in which the data changes are made to the database table "PersonTable2". In some other embodiments, the "sequence number" values are monotonically decreasing (may or may not decrement with a constant step) over time, such as . . . 200, . . . 13, 12, 11, 1, etc., along a time direction. In these embodiments, the "sequence number" values over time exhibit a numeric order that is the opposite or inverse to a temporal order in which the data changes are made to the database table "PersonTable2". In some embodiments, a "sequence number" table column in a database table (e.g., the database table "PersonTable2" as illustrated inFIG.2B, etc.) may be used as an index column. To access data changes to a database table in a temporal order in which the data changes are made, a data change consumer of a database table can use a sequence-number-based index to find out all the data changes that have happened in a given sequence number range. For example, a data change consumer may query all data changes that fall within a sequence number range such as seq1→seq3 as illustrated inFIG.2Busing the sequence-number-based index of the database table. While these approaches such as illustrated withFIG.2AandFIG.2Bseem logically sound and simple, they suffer from a serious scalability and performance issue due to significant contention on system-timestamp-based indexes and/or sequence-number-based indexes for database tables in the database, among other problems. FIG.2Cillustrates an example database table (denoted as "PersonTable3") comprising a plurality of table columns such as "name", "description", "PseudoRandomNumber", etc. In some embodiments, a seed value is (e.g., before any data change is made to the "PersonTable3" table, upon the very first data change made to the "PersonTable3" table, etc.)
configured for, or assigned to, the “PersonTable3” table. A first data change may be made to the “PersonTable3” table at a first logical time as indicated by a first invocation number in a time sequence of numerically ordered invocation numbers (e.g., in a temporal order, in a time order, etc.). The first invocation number may be used (e.g., by a PRN generator, a PRN server106, etc.) to generate a first pseudorandom number (denoted as “prn1”) in a time sequence of numerically unordered pseudorandom numbers (e.g., in a temporal order, in a time order, etc.). InFIG.2C, as inFIG.2AandFIG.2B, the first data change comprises a first “name” value of “A1” and a first “description” value of “D1”. A second data change may be made to the “PersonTable3” table at a second logical time, subsequent to the first logical time, as indicated by a second invocation number in the time sequence of numerically ordered invocation numbers. The second invocation number may be used (e.g., by a PRN generator, a PRN server106, etc.) to generate a second pseudorandom number (denoted as “prn2”) in the time sequence of numerically unordered pseudorandom numbers. The second data change comprises a second “name” value of “A2” and a second “description” value of “D2”. A third data change may be made to the “PersonTable3” table at a third logical time, subsequent to the second logical time, as indicated by a third invocation number in the time sequence of numerically ordered invocation numbers. The third invocation number may be used (e.g., by a PRN generator, a PRN server106, etc.) to generate a third pseudorandom number (denoted as “prn3”) in the time sequence of numerically unordered pseudorandom numbers. The third data change comprises a third “name” value of “A3” and a third “description” value of “D3”.

The pseudorandom number values such as “prn1”, “prn2”, “prn3”, etc., may be generated (e.g., by a PRN generator, a PRN server106, etc.) automatically. The invocation number values (corresponding to the pseudorandom number values) and/or the pseudorandom number values may be used to indicate logical times at which respective data changes are made to the database table. Under techniques as described herein, the invocation number values are monotonically increasing (or decreasing in some other embodiments) over time, such as 1, 2, 3, 4, 5, 6, etc., along a time direction. In these embodiments, the invocation number values over time exhibit a numeric order that is the same as a temporal order in which the data changes are made to the database table “PersonTable3”. However, the pseudorandom number values are numerically unordered over time, such as 10, 2, 4, 3, 0, 20, etc., along a time direction. In these embodiments, the pseudorandom number values over time do not exhibit a numeric order that is the same as a temporal order in which the data changes are made to the database table “PersonTable3”.

In some embodiments, a “pseudorandom number” table column in a database table (e.g., the database table “PersonTable3” as illustrated inFIG.2C, etc.) may be used as an index column. As the pseudorandom number values are numerically unordered over time, the last processed data changes to the “PersonTable3” table have different numeric values that are not ordered. As a result, index entries for these last processed data changes are not concentrated in the last block of a B-tree index implementing the pseudorandom-number-based index.
Thus, when data change consumers access these last processed data changes, and/or when data change producers generate further processed data changes, these read and write data accesses are distributed over many blocks of the B-tree index rather than concentrated on the last block of the B-tree index.

Techniques as described herein do not preclude other types of indexes from being generated for a database table. For example, the pseudorandom-number-based index may be implemented in place of or in addition to another index such as a system-timestamp-based index, a sequence-number-based index, etc. In addition, a pseudorandom-number-based index may incorporate zero or more other index columns. For example, a pseudorandom number table column may be used as a primary index column. Zero or more other non-primary index columns such as an invocation number column, a system timestamp column, a sequence number column, etc., may be used in a pseudorandom-number-based index as described herein.

FIG.2Dillustrates an example database table (denoted as “PersonTable4”) comprising a plurality of table columns such as “name”, “description”, “PseudoRandomNumber”, “System Timestamp”, etc. The “System Timestamp” table column may be used in a separate index other than a pseudorandom-number-based index of the “PersonTable4” table, or in a pseudorandom-number-based index of the “PersonTable4” table as one of the index columns of the pseudorandom-number-based index.

2.4 Time Sequences of Invocation Numbers and PRNs

FIG.3illustrates an example time sequence306of pseudorandom numbers such as “prn1”, “prn2”, . . . “prn8”, etc., generated by a PRN generator302using a seed value300and a time sequence304of invocation numbers such as “inv1”, “inv2”, . . . “inv8”, etc. The PRN generator (302) may be any function or mapping capable of generating a sequence (e.g., the time sequence (306) of pseudorandom numbers, etc.) of numerically unordered numbers such as random numbers from a sequence (e.g., the time sequence (304) of invocation numbers, etc.) of numerically ordered numbers in a deterministic or reproducible manner given a seed value.

As used herein, “deterministic” or “reproducible” means that the same sequence of numerically unordered numbers (e.g., the same time sequence of numerically unordered pseudorandom numbers, etc.) is determined/reproduced with respect to the same database table in all data change producers and consumers, so long as the same seed value and the same sequence of ordered numbers (e.g., the same time sequence of invocation numbers, etc.) are used as input and the same function/mapping is used for generating the pseudorandom numbers from the seed value and the ordered numbers (e.g., the invocation numbers, etc.).

For example, one or more PRN generators (e.g.,302, etc.) using a specific function or mapping to generate pseudorandom numbers may be implemented or accessed by a data change consumer, a data change producer, and/or a PRN server. Given the seed value (300) and the time sequence of invocation numbers such as 1, 2, 3, 4, 5, 6, etc., respectively for “inv1”, “inv2”, “inv3”, “inv4”, “inv5”, “inv6”, etc., all the PRN generators (e.g.,302, etc.)
generate the same time sequence of pseudorandom numbers such as 10, 2, 4, 3, 0, 20, etc., respectively for “prn1”, “prn2”, “prn3”, “prn4”, “prn5”, “prn6”, etc., so long as the same specific function or mapping is used to generate the pseudorandom numbers, regardless of whether or not these PRN generators are accessed or implemented by the data change consumer, the data change producer, and/or the PRN server.

Time sequences of invocation numbers and PRNs as described herein can be used in a wide variety of operational scenarios associated with data change production and consumption.

In an example, in some operational scenarios, a data change consumer can access a PRN generator to generate a corresponding subsequence of numerically unordered pseudorandom numbers (e.g., in the time sequence (306) of PRNs, etc.) using the seed value (300) and a subsequence of numerically ordered invocation numbers (e.g., in the time sequence (304) of invocation numbers, etc.) as input. Subsequently, the data change consumer can use the subsequence of (e.g., five, 100, twenty thousand, etc.) numerically ordered invocation numbers to query for or retrieve a corresponding subsequence of data changes to the database table. Data retrieval operations can use the corresponding subsequence of numerically unordered pseudorandom numbers to access corresponding index entries of a pseudorandom-number-based index of the database table to access and retrieve corresponding data changes made to the database table.

By way of illustration but not limitation, assume that the data retrieval operations return three corresponding data changes for some but not all of the pseudorandom numbers, such as the first three pseudorandom numbers. However, data changes for the fourth and fifth pseudorandom numbers are not found, for example as these data changes may not have occurred or may not have been committed to the database table. In some embodiments, unused pseudorandom numbers in the data retrieval operations, namely the fourth and fifth pseudorandom numbers in the present example, are saved for the next round of data retrieval operations, for example internally (e.g., in a data table, a data structure, a list, an array, etc.) by the data change consumer. When the next round of data retrieval operations is to be made (e.g., on a data change retrieval schedule, on demand, etc.), the fourth and fifth pseudorandom numbers, as well as any newly generated subsequent pseudorandom numbers based on new subsequent invocation numbers (e.g., immediately, etc.) following the fourth and fifth invocation numbers, may be used to access the pseudorandom-number-based index for retrieving corresponding new data changes from the database table corresponding to the fourth and fifth pseudorandom numbers as well as the newly generated subsequent pseudorandom numbers if applicable.

In another example, in some operational scenarios, data changes returned from the database table by way of the pseudorandom-number-based index may or may not be ordered in a temporal order. If the return results (e.g., data changes, etc.) from the database table are ordered, then the order set forth in the return results may be directly used. On the other hand, if the returned results from the database table are unordered, the subsequence of invocation numbers and/or the corresponding subsequence of PRNs that are already in the temporal order can be used to line up the data changes in the temporal order.
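The deterministic mapping from a seed value and an ordered invocation number to a numerically unordered pseudorandom number can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the patent's actual generator: it assumes a hash-based construction (SHA-256 truncated to 64 bits), and all names are hypothetical.

```python
import hashlib

def prn(seed: int, invocation_number: int) -> int:
    """Deterministically map a numerically ordered invocation number to a
    numerically unordered pseudorandom number for a given seed value."""
    payload = f"{seed}:{invocation_number}".encode()
    # Truncate the SHA-256 digest to a 64-bit integer index key.
    return int.from_bytes(hashlib.sha256(payload).digest()[:8], "big")

seed = 42  # seed value assigned to the database table (hypothetical value)
# A producer and a consumer, running independently, derive identical PRN
# sequences from the same seed and the same invocation numbers.
producer_side = [prn(seed, inv) for inv in range(1, 9)]
consumer_side = [prn(seed, inv) for inv in range(1, 9)]
assert producer_side == consumer_side  # deterministic/reproducible
# The outputs are numerically unordered over time, so index entries for the
# latest data changes spread across many B-tree blocks instead of
# contending on the last block.
```

Any function with this deterministic, unordered-output property would serve equally well; the hash construction is chosen here only because it is compact and self-contained.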
In a further example, in some operational scenarios, some or all of data change consumers, data change producers, and/or a PRN server, can experience process restarts or process failures. A data change producer as described herein may save a (e.g., fixed, preconfigured, every hundred, every 20 thousand, when a timer is fired, up to a certain time interval, etc.) number of assigned invocation numbers and/or a (e.g., fixed, preconfigured, every hundred, every 20 thousand, when a timer is fired, up to a certain time interval, etc.) number of assigned pseudorandom numbers into a (e.g., producer, PRN server, etc.) sync table, which may be persisted locally and/or remotely, for example in the database (108). When the data change producer restarts (or a new data change producer instance starts), the data change producer can determine the last saved invocation numbers and/or the last saved pseudorandom numbers in the sync table. Using these last saved invocation numbers and/or pseudorandom numbers, the data change producer can further probe the actual last processed data change made in the database table based on the binary chop algorithm, method, procedure, etc., as previously described.

Additionally, optionally or alternatively, a data change consumer as described herein may also save invocation numbers and/or pseudorandom numbers, for which data changes have been retrieved or consumed, into a (e.g., consumer, etc.) sync table, which may be persisted locally and/or remotely, for example in the database (108). On process restart/recovery from a process failure, the saved invocation numbers and/or pseudorandom numbers in the sync table can be used as a starting point by a new data change consumer (instance) to retrieve all data changes to the database table that have yet to be retrieved/consumed, up to the very last processed data change in the database table.

While invocation numbers are sequentially ordered over time and distinct from one another, pseudorandom numbers generated from the invocation numbers in a temporal order may or may not contain duplicates. In some embodiments, in which pseudorandom numbers may contain duplicates, the invocation numbers may be used to disambiguate a specific temporal order from the pseudorandom numbers, so long as the pseudorandom numbers contain sufficiently long unrepetitive subsequences (or not repetitive beyond a certain subsequence length such as 5, 10, 20, etc.) within the entire time sequence of the pseudorandom numbers, even if the entire sequence of pseudorandom numbers may contain occasional duplicates.

In some embodiments, an invocation number (or a corresponding pseudorandom number generated therefrom) may be used to label a data change, whether or not the data change represents an addition, an insertion, an update, a deletion, etc. A deletion-type of data change may be logical or physical. In some embodiments, if a row is logically deleted, the deleted row may be indicated by a table column value of zero (0) rather than being physically removed from a data block of the database table; in comparison, a not-yet-deleted row may be indicated by a table column value of one (1). Data changes to the database table may be assigned their respective invocation numbers and/or respective pseudorandom numbers. The respective pseudorandom numbers may be used to generate a pseudorandom-number-based index directly for the database table.
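One way to realize the binary chop probe described above is a gallop-then-bisect search over invocation numbers. The following is a minimal sketch, not the patent's exact procedure: prn() is the hash-based helper assumed in the earlier sketch, and index_contains() is a hypothetical callback that reports whether a given PRN appears as a key in the pseudorandom-number-based index. Because every invocation up to the last processed change is assumed to have been indexed, the predicate is monotone and the bisection is sound.

```python
import hashlib

def prn(seed, inv):  # same deterministic helper as in the earlier sketch
    return int.from_bytes(hashlib.sha256(f"{seed}:{inv}".encode()).digest()[:8], "big")

def last_processed_invocation(seed, last_saved_inv, index_contains):
    """After a restart, find the invocation number of the actual last
    processed data change, starting from the last saved invocation number."""
    # Gallop forward in doubling steps while the probed PRNs are still indexed.
    lo, step = last_saved_inv, 1
    while index_contains(prn(seed, lo + step)):
        lo += step
        step *= 2
    hi = lo + step  # invariant: prn(lo) is indexed, prn(hi) is not
    # Binary chop between the last known-indexed and first known-missing probes.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if index_contains(prn(seed, mid)):
            lo = mid
        else:
            hi = mid
    return lo
```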
In some embodiments, if a deletion in the database table is physical (e.g., a row is physically removed from a data block previously containing the row, etc.), a data change table attendant to the database table may be used to track data changes made to the database table. Data changes as tracked in the data change table of the database table may be assigned their respective invocation numbers and/or respective pseudorandom numbers. The respective pseudorandom numbers may be used to generate a pseudorandom-number-based index (indirectly for the database table) for the data change table attendant to the database table.

In some embodiments, a data change consumer may remove data changes, which have been consumed/retrieved by the data change consumer, from the data change table. Thus, in many operational scenarios, the data change table may contain zero or very few data changes. Upon process restart/failure, (e.g., a new instance of, etc.) the data change consumer can determine whether the data change table is empty. If so, the data change consumer determines or infers that all last processed data changes have been consumed in the previous life cycle, and that all later added data changes to the data change table are yet to be retrieved/consumed data changes made to the database table.

An invocation number and/or a pseudorandom number as described herein may be assigned to a data change at a single row level, at a multiple row level, at an individual row group level, etc., depending on specific implementations for data change production and consumption. Additionally, optionally or alternatively, multiple pseudorandom-number-based indexes may be created for a database table or a data change table thereof. For example, one of the multiple pseudorandom-number-based indexes may identify a data change at a single row level, whereas another of the multiple pseudorandom-number-based indexes may identify a data change at a single row group level, at a multiple row level, etc.

Additionally, optionally or alternatively, a pseudorandom-number-based index as described herein may comprise a single index table column or multiple index table columns. For example, a pseudorandom-number-based index may comprise a PRN table column as well as one or more non-PRN table columns. All of the PRN table column and the one or more non-PRN table columns in the index can be used collectively (e.g., in data retrieval operations, etc.) to identify a data change at the single row level. Some but not all of the PRN table column and the one or more non-PRN table columns can be used (e.g., in data retrieval operations, etc.) to identify a data change at the row group level, at the multiple row level, etc.

For the purpose of illustration only, it has been described that a time sequence of pseudorandom numbers may be used to index data changes made to a database table maintained in a database. It should be noted that, in various embodiments, data changes to a data construct other than a database table maintained in a database may also be indexed through a pseudorandom-number-based index. Example data constructs to be indexed by such an index may include, but are not necessarily limited to only, any of data structures or data tables whether they are maintained in memory, in database, locally or remotely. In some embodiments, a pseudorandom-number-based index as described herein may be used to index data changes in connection with a materialized view, a materialized query table, etc.
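As a concrete, purely illustrative shape for such a change table and its multi-column index, the following sketch uses SQLite; the table and column names are hypothetical, and a production system would differ in many respects.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Hypothetical data change table attendant to a base "Person" table.
    -- Each tracked change carries its ordered invocation number and the
    -- numerically unordered PRN generated from it.
    CREATE TABLE person_changes (
        name        TEXT,
        description TEXT,
        invocation  INTEGER,  -- monotonically increasing over time
        prn         INTEGER   -- numerically unordered over time
    );
    -- PRN as the primary index column; the invocation number rides along
    -- as a non-primary index column so temporal order can be recovered.
    CREATE INDEX person_changes_prn_idx ON person_changes (prn, invocation);
""")
# A consumer-side lookup by PRN touches scattered index blocks rather than
# contending on the index's last block.
rows = conn.execute(
    "SELECT name, description FROM person_changes WHERE prn = ?", (12345,)
).fetchall()
```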
3.0 Example Embodiments

FIG.4Aillustrates an example process flow that may be implemented by one or more computing devices such as a PRN server (or generator) in one or more of: a database server, an application server, a combination of database and application servers, etc., as described herein.

In block402, the PRN server determines a seed value assigned to a database table in a database. The seed value is to be used to generate a time sequence of numerically unordered pseudorandom numbers from a sequence of numerically ordered invocation numbers.

In block404, the PRN server determines a plurality of numerically ordered invocation numbers to be assigned to a plurality of temporally ordered data changes to be made by one or more data change producers relative to the database table.

In block406, the PRN server generates a plurality of sequentially generated but numerically unordered pseudorandom numbers based on the seed value and the plurality of numerically ordered invocation numbers. The plurality of sequentially generated but numerically unordered pseudorandom numbers is used to index the plurality of temporally ordered data changes to the database table.

In block408, the PRN server provides, to the one or more data change producers, the plurality of sequentially generated but numerically unordered pseudorandom numbers.

In block410, the PRN server determines whether a total number of pseudorandom numbers in a sequence of yet-to-be-saved pseudorandom numbers including the plurality of sequentially generated but numerically unordered pseudorandom numbers assigned to the plurality of temporally ordered data changes reaches a maximum total number threshold.

In block412, in response to determining that the total number of pseudorandom numbers in the sequence of yet-to-be-saved pseudorandom numbers including the plurality of sequentially generated but numerically unordered pseudorandom numbers assigned to the plurality of temporally ordered data changes reaches the maximum total number threshold, the PRN server saves the sequence of yet-to-be-saved pseudorandom numbers and a sequence of corresponding invocation numbers.

In an embodiment, (e.g., the last, etc.) assigned pseudorandom numbers such as the third pseudorandom number as mentioned above are saved in a sync table so that on recovery from process restart/failure, one or more of a data change consumer, a data change producer and/or a PRN server/generator (or one or more new instances thereof) are enabled to use saved information in the sync table to determine the last invocation number, the last pseudorandom number and/or the last data change in the previous life cycle.

In an embodiment, each pseudorandom number in the sequence of yet-to-be-saved pseudorandom numbers as previously mentioned is generated based on the seed value and a respective invocation number in the sequence of corresponding invocation numbers; each such pseudorandom number was used to index a respective data change in a sequence of temporally ordered data changes to which the sequence of corresponding invocation numbers is assigned; the respective invocation number in the sequence of corresponding invocation numbers is assigned to the respective data change in the sequence of temporally ordered data changes.
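The FIG.4A flow can be condensed into a short sketch. It is a minimal illustration under stated assumptions: prn() is the hash-based helper from the earlier sketch, save_to_sync_table stands in for a database write, and the threshold value is arbitrary.

```python
import hashlib

def prn(seed, inv):  # same deterministic helper as in the earlier sketch
    return int.from_bytes(hashlib.sha256(f"{seed}:{inv}".encode()).digest()[:8], "big")

class PRNServer:
    def __init__(self, seed, max_unsaved=100, save_to_sync_table=print):
        self.seed = seed                      # block 402: seed assigned to the table
        self.next_invocation = 1
        self.unsaved = []                     # yet-to-be-saved (invocation, prn) pairs
        self.max_unsaved = max_unsaved        # maximum total number threshold
        self.save_to_sync_table = save_to_sync_table

    def next_prns(self, count):
        """Blocks 404-408: assign the next `count` invocation numbers and
        return the PRNs that will index the corresponding data changes."""
        batch = []
        for _ in range(count):
            inv = self.next_invocation
            self.next_invocation += 1
            batch.append((inv, prn(self.seed, inv)))
        self.unsaved.extend(batch)
        # Blocks 410-412: once the yet-to-be-saved sequence reaches the
        # threshold, persist the PRNs and corresponding invocation numbers.
        if len(self.unsaved) >= self.max_unsaved:
            self.save_to_sync_table(list(self.unsaved))
            self.unsaved.clear()
        return [p for _, p in batch]
```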
FIG.4Billustrates an example process flow that may be implemented by one or more computing devices such as a data change producer in one or more of: a database server, an application server, a combination of database and application servers, etc., as described herein.

In block422, the data change producer receives a request for performing a specific data change relative to one or more table rows of a database table that comprises a plurality of table columns. The plurality of table columns of the database table comprises an indexed table column used to store pseudorandom numbers.

In block424, the data change producer invokes a pseudorandom number generator to generate a specific pseudorandom number to be assigned to identify the specific data change. The specific pseudorandom number is generated based on a seed value and a current invocation number maintained by the pseudorandom number generator. The seed value is previously assigned to the database table prior to and independent of receiving the request for performing the specific data change.

In block426, the data change producer causes the specific data change to be performed relative to the one or more table rows of the database table. The specific pseudorandom number is caused to be stored in the indexed table column for the one or more table rows of the database table.

FIG.4Cillustrates an example process flow that may be implemented by one or more computing devices such as a data change consumer in one or more of: a database server, an application server, a combination of database and application servers, etc., as described herein.

In block442, the data change consumer determines a first invocation number and a second invocation number greater than the first invocation number.

In block444, the data change consumer generates a sequence of pseudorandom numbers for all invocation numbers ranging between the first invocation number and the second invocation number. The sequence of pseudorandom numbers begins with a first pseudorandom number generated based on a seed value and the first invocation number. The sequence of pseudorandom numbers ends with a second pseudorandom number generated based on the seed value and the second invocation number.

In block446, the data change consumer retrieves a sequence of data changes to the database table with the sequence of pseudorandom numbers. The sequence of data changes begins with a first data change indexed with the first pseudorandom number. The sequence of data changes ends with a second data change indexed with a third pseudorandom number. The third pseudorandom number is in the sequence of pseudorandom numbers.

In block448, the data change consumer determines whether the third pseudorandom number is the same as the second pseudorandom number.

In block450, in response to determining that the third pseudorandom number is not the same as the second pseudorandom number, the data change consumer saves a subsequence of pseudorandom numbers in the sequence of pseudorandom numbers to be used in a subsequent request for retrieving data changes to the database table. The subsequence of pseudorandom numbers begins with a fourth pseudorandom number, which is immediately after the third pseudorandom number in the sequence of pseudorandom numbers.
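A compact sketch of the FIG.4C consumer flow follows. As before, this is illustrative only: prn() is the hash-based helper assumed earlier, and fetch_change() is a hypothetical index lookup that returns the data change indexed by a PRN, or None when no such change exists yet.

```python
import hashlib

def prn(seed, inv):  # same deterministic helper as in the earlier sketch
    return int.from_bytes(hashlib.sha256(f"{seed}:{inv}".encode()).digest()[:8], "big")

def consume_changes(seed, first_inv, second_inv, fetch_change):
    """Blocks 442-450: retrieve changes for an invocation number range and
    save the unused PRN suffix for the next round of retrievals."""
    prns = [prn(seed, inv) for inv in range(first_inv, second_inv + 1)]  # block 444
    changes, unused_suffix = [], []
    for i, p in enumerate(prns):                                         # block 446
        change = fetch_change(p)
        if change is None:
            # Blocks 448-450: retrieval ended before the second (last)
            # pseudorandom number; save the suffix starting immediately
            # after the last PRN that returned a data change.
            unused_suffix = prns[i:]
            break
        changes.append(change)
    return changes, unused_suffix
```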
FIG.4Dillustrates an example process flow that may be implemented by one or more computing devices such as a PRN server (or generator) in one or more of: a database server, an application server, a combination of database and application servers, etc., as described herein.

In block462, the PRN server (e.g., on a process restart/failure, etc.) determines the last saved assigned invocation number used, along with a seed value assigned to a database table, to generate a pseudorandom number (e.g., the last saved pseudorandom number, etc.) that indexes a data change made to the database table.

In block464, the PRN server generates a test pseudorandom number based on the seed value and a test invocation number generated by incrementing the last saved assigned invocation number.

In block466, the PRN server determines whether the test pseudorandom number was used to index a specific data change made to the database table.

In block468, in response to determining the test pseudorandom number was used to index the specific data change made to the database table, the PRN server generates a second test pseudorandom number based on the seed value and a second test invocation number generated by incrementing the test invocation number.

In block470, the PRN server determines whether the second test pseudorandom number was used to index a second specific data change made to the database table.

In an embodiment, the test invocation number is incremented from the last saved assigned invocation number by more than one (1).

In an embodiment, the PRN server is further configured to, in response to determining the test pseudorandom number was not used to index the specific data change made to the database table, perform: generating a third test pseudorandom number based on the seed value and a third test invocation number, the third test invocation number being an intermediate value between the last saved assigned invocation number and the test invocation number; determining whether the third test pseudorandom number was used to index a third specific data change made to the database table.

In some embodiments, process flows involving operations, methods, etc., as described herein can be performed through one or more computing devices or units. In an embodiment, an apparatus comprises a processor and is configured to perform any of these operations, methods, process flows, etc. In an embodiment, a non-transitory computer readable storage medium stores software instructions which, when executed by one or more processors, cause performance of any of these operations, methods, process flows, etc. In an embodiment, a computing device comprises one or more processors and one or more storage media storing a set of instructions which, when executed by the one or more processors, cause performance of any of these operations, methods, process flows, etc.

Note that, although separate embodiments are discussed herein, any combination of embodiments and/or partial embodiments discussed herein may be combined to form further embodiments.

4.0 Implementation Mechanisms—Hardware Overview

According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices.
The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques. The special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques.

For example,FIG.5is a block diagram that illustrates a computer system500upon which an embodiment of the invention may be implemented. Computer system500includes a bus502or other communication mechanism for communicating information, and a hardware processor504coupled with bus502for processing information. Hardware processor504may be, for example, a general purpose microprocessor.

Computer system500also includes a main memory506, such as a random access memory (RAM) or other dynamic storage device, coupled to bus502for storing information and instructions to be executed by processor504. Main memory506also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor504. Such instructions, when stored in non-transitory storage media accessible to processor504, render computer system500into a special-purpose machine that is device-specific to perform the operations specified in the instructions.

Computer system500further includes a read only memory (ROM)508or other static storage device coupled to bus502for storing static information and instructions for processor504. A storage device510, such as a magnetic disk or optical disk, is provided and coupled to bus502for storing information and instructions.

Computer system500may be coupled via bus502to a display512, such as a liquid crystal display (LCD), for displaying information to a computer user. An input device514, including alphanumeric and other keys, is coupled to bus502for communicating information and command selections to processor504. Another type of user input device is cursor control516, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor504and for controlling cursor movement on display512. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.

Computer system500may implement the techniques described herein using device-specific hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system500to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system500in response to processor504executing one or more sequences of one or more instructions contained in main memory506. Such instructions may be read into main memory506from another storage medium, such as storage device510.
Execution of the sequences of instructions contained in main memory506causes processor504to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.

The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device510. Volatile media includes dynamic memory, such as main memory506. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge.

Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus502. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.

Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor504for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system500can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus502. Bus502carries the data to main memory506, from which processor504retrieves and executes the instructions. The instructions received by main memory506may optionally be stored on storage device510either before or after execution by processor504.

Computer system500also includes a communication interface518coupled to bus502. Communication interface518provides a two-way data communication coupling to a network link520that is connected to a local network522. For example, communication interface518may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface518may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface518sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.

Network link520typically provides data communication through one or more networks to other data devices. For example, network link520may provide a connection through local network522to a host computer524or to data equipment operated by an Internet Service Provider (ISP)526.
ISP526in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet”528. Local network522and Internet528both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link520and through communication interface518, which carry the digital data to and from computer system500, are example forms of transmission media.

Computer system500can send messages and receive data, including program code, through the network(s), network link520and communication interface518. In the Internet example, a server530might transmit a requested code for an application program through Internet528, ISP526, local network522and communication interface518. The received code may be executed by processor504as it is received, and/or stored in storage device510, or other non-volatile storage for later execution.

5.0 Equivalents, Extensions, Alternatives and Miscellaneous

In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. Thus, the sole and exclusive indicator of what is the invention, and is intended by the applicants to be the invention, is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. Any definitions expressly set forth herein for terms contained in such claims shall govern the meaning of such terms as used in the claims. Hence, no limitation, element, property, feature, advantage or attribute that is not expressly recited in a claim should limit the scope of such claim in any way. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
DETAILED DESCRIPTION

Reference will now be made in detail to specific example embodiments for carrying out the inventive subject matter. Examples of these specific embodiments are illustrated in the accompanying drawings, and specific details are set forth in the following description in order to provide a thorough understanding of the subject matter. It will be understood that these examples are not intended to limit the scope of the claims to the illustrated embodiments. On the contrary, they are intended to cover such alternatives, modifications, and equivalents as may be included within the scope of the disclosure.

Data platforms are widely used for data storage and data access in computing and communication contexts. Concerning architecture, a data platform could be an on-premises data platform, a network-based data platform (e.g., a cloud-based data platform), a combination of the two, and/or include another type of architecture. With respect to type of data processing, a data platform could implement online transactional processing (OLTP), online analytical processing (OLAP), a combination of the two, and/or another type of data processing. Moreover, a data platform could be or include a relational database management system (RDBMS) and/or one or more other types of database management systems.

In a typical implementation, a data platform includes one or more databases that are maintained on behalf of a customer account. The data platform may include one or more databases that are respectively maintained in association with any number of customer accounts, as well as one or more databases associated with a system account (e.g., an administrative account) of the data platform, one or more other databases used for administrative purposes, and/or one or more other databases that are maintained in association with one or more other organizations and/or for any other purposes. A data platform may also store metadata in association with the data platform in general and in association with, as examples, particular databases and/or particular customer accounts as well. The database can include one or more objects, such as tables, functions, and so forth. Users and/or executing processes that are associated with a given customer account may, via one or more types of clients, be able to cause data to be ingested into the database, and may also be able to manipulate the data, add additional data, remove data, run queries against the data, generate views of the data, and so forth.

In an example implementation of a data platform, a given database is represented as an account-level object within a customer account, and the customer account may also include one or more other account-level objects such as users, roles, and/or the like. Furthermore, a given account-level database object may itself contain one or more objects such as tables, schemas, views, streams, tasks, and/or the like. A given table may be organized as records (e.g., rows or a collection of rows) that each include one or more attributes (e.g., columns).

A data platform may physically store database data in multiple storage units, which may be referred to as blocks, micro-partitions, and/or by one or more other names. In an example, a column of a database can be stored in a block and multiple blocks can be grouped into a single file. That is, a database can be organized into a set of files where each file includes a set of blocks.
Consistent with this example, for a given column, all blocks are stored contiguously and blocks for different columns are row aligned. Data stored in each block can be compressed to reduce its size. A block storing compressed data may also be referred to as a “compression block” herein.

As referred to herein, a “record” is defined as a collection of data (e.g., textual data) in a file that is organized by one or more fields, where each field can include one or more respective data portions (e.g., textual data, such as strings). Each field in the record can correspond to a row or column of data in a table that represents the records in the file. It should be understood that the terms “row” and “column” are used for illustration purposes and these terms are interchangeable. Data arranged in a column of a table can similarly be arranged in a row of the table.

To simplify and expedite database generation for a given entity, certain systems perform automated data processing operations on a set of input documents. The data processing operations can recognize text in the documents, infer the type of text, and generate columns of data and entries in a database table using the recognized text. Usually, these systems generate the tables in a top-down, left-to-right manner. In these cases, data entries are generated sequentially one after another in the same fixed sequence. If an entry cannot be discerned (e.g., the value is unknown) as the table is generated, the typical systems can propagate errors or stop generating the table altogether. This can introduce extreme inefficiencies, as data needs to be manually reviewed and corrected intermittently. As such, these systems cannot be applied on a large scale to generate tables for large datasets automatically. The process of manually guiding the systems for database object generation is time consuming, inefficient, and prone to human error, which can result in waste of time, network, and processing device resources.

Aspects of the present disclosure include systems, methods, and devices to address, among other problems, the aforementioned shortcomings of conventional data platforms by intelligently and automatically processing a corpus of documents in a non-sequential manner to populate a table. Particularly, the disclosed model exploits regularities and relationships within the output data and employs a grammar-constrained decoding process to generate a table from a document of text. The disclosed techniques focus on text-to-table inference with applications to problems such as extraction of line items, key information extraction of multiple properties, joint entity and relation extraction, or knowledge base population.

The disclosed techniques provide a model that is end-to-end trainable, thus simplifying the pipeline and reducing the accumulation of errors along the way. At the same time, since the extracted data is already in the form the end user requires, it can be used directly in the downstream application without further processing steps, which improves the overall efficiency of the system.

In particular, the disclosed model provides a decoder capable of modeling spatial relationships between cells in the table. The decoder performs a sequential, grammar-constrained decoding mechanism which generates table content cell-by-cell, in a dynamic, data-dependent order rather than in a top-down and left-to-right sequence. Prior approaches can represent the same output structure in various valid forms.
Consequently, valid responses may be penalized during the training phase, which expects only one reference representation. Due to the grammar-constrained decoding process of the disclosed techniques, this problem is mitigated without increasing the computational requirements that otherwise might result from use of permutation-invariant loss functions. Additionally, conventional approaches are limited to copying words from the input document and thus cannot perform normalization or return values that are not present in the text explicitly but can be inferred from it. Namely, by knowing relationships between cells and entries of a table, certain entry values can be inferred and populated without actually retrieving the values from the input text itself, for example by process of elimination or inference.

The disclosed model performs a cell-decoding step by sampling all cells at once and then choosing the best-scored cell or cells to be inserted at their respective locations while disregarding the others. Then, the disclosed model resets and repopulates the previously discarded cells with new values and again scores the repopulated cells. The model again chooses the best-scored cell or cells to be inserted at their respective locations while disregarding the others, until all cells of the table are generated and processed in this manner. In this way, the model delays generating the most challenging and complex answers (e.g., values of certain cells or entries) to later stages or iterations and conditions those complex answers on the already generated answers of other cells or entries. Instead of generating the cell values in a top-down, left-to-right manner as done by conventional systems, the disclosed model is pretrained by maximizing the expected log-likelihood of the sequence of cell values over all possible prediction orders.

In some examples, the disclosed techniques access a text document including a plurality of strings. The disclosed techniques process the text document by a machine learning model to generate a table that includes a plurality of entries that organizes the plurality of strings into rows and columns over a plurality of iterations. The disclosed techniques, at each of the plurality of iterations, estimate, by the machine learning model, a first value of a first entry of the plurality of entries based on a second value of a second entry of the plurality of entries that has been determined in a prior iteration, as sketched in the example further below.

By performing these operations, the data platform increases utilization of execution node processing capability and avoids waste of resources and inefficient use of resources. Specifically, rather than having a human manually create and manage table generation from a corpus of text, which wastes a great deal of time and effort, the disclosed system can automate this process to improve the overall efficiency of the system, which improves the overall functioning of the device.

FIG.1illustrates an example computing environment100that includes a data platform in the example form of a network-based data platform102, in accordance with some embodiments of the present disclosure. To avoid obscuring the inventive subject matter with unnecessary detail, various functional components that are not germane to conveying an understanding of the inventive subject matter have been omitted fromFIG.1.
However, a skilled artisan will readily recognize that various additional functional components may be included as part of the computing environment100to facilitate additional functionality that is not specifically described herein. In other embodiments, the computing environment may comprise another type of network-based database system or a cloud data platform. For example, in some aspects, the computing environment100may include a cloud computing platform101with the network-based data platform102and a storage platform104(also referred to as a cloud storage platform). The cloud computing platform101provides computing resources and storage resources that may be acquired (purchased) or leased and configured to execute applications and store data.

The cloud computing platform101may host a cloud computing service103that facilitates storage of data on the cloud computing platform101(e.g., data management and access) and analysis functions (e.g., structured query language (SQL) queries, analysis), as well as other processing capabilities (e.g., parallel execution of sub-plans, as described herein). The cloud computing platform101may include a three-tier architecture: data storage (e.g., storage platforms104and122), an execution platform110(e.g., providing query processing), and a compute service manager108providing cloud services.

It is often the case that organizations that are customers of a given data platform also maintain data storage (e.g., a data lake) that is external to the data platform (e.g., one or more external storage locations). For example, a company could be a customer of a particular data platform and also separately maintain storage of any number of files—be they unstructured files, semi-structured files, structured files, and/or files of one or more other types—on, as examples, one or more of their servers and/or on one or more cloud-storage platforms such as AMAZON WEB SERVICES™ (AWS™), MICROSOFT® AZURE®, GOOGLE CLOUD PLATFORM™, and/or the like. The customer's servers and cloud-storage platforms are both examples of what a given customer could use as what is referred to herein as an external storage location. The cloud computing platform101could also use a cloud-storage platform as what is referred to herein as an internal storage location concerning the data platform. The techniques described in this disclosure pertain to non-volatile storage devices that are used for the internal storage location and/or the external storage location.

From the perspective of the network-based data platform102of the cloud computing platform101, one or more files that are stored at one or more storage locations are referred to herein as being organized into one or more of what is referred to herein as either “internal stages” or “external stages.” Internal stages are stages that correspond to data storage at one or more internal storage locations, and external stages are stages that correspond to data storage at one or more external storage locations. In this regard, external files can be stored in external stages at one or more external storage locations, and internal files can be stored in internal stages at one or more internal storage locations, which can include servers managed and controlled by the same organization (e.g., company) that manages and controls the data platform, and which can instead or in addition include data-storage resources operated by a storage provider (e.g., a cloud-storage platform) that is used by the data platform for its “internal” storage.
The internal storage of a data platform is also referred to herein as the “storage platform” of the data platform. It is further noted that a given external file that a given customer stores at a given external storage location may or may not be stored in an external stage in the external storage location. For example, in some data-platform implementations, it is a customer's choice whether to create one or more external stages (e.g., one or more external-stage objects) in the customer's data-platform account as an organizational and functional construct for conveniently interacting via the data platform with one or more external files.

As shown, the network-based data platform102of the cloud computing platform101is in communication with the cloud storage platforms104and122(e.g., Amazon Web Services (AWS)®, Microsoft Azure Blob Storage®, or Google Cloud Storage). The network-based data platform102is a network-based system used for reporting and analysis of integrated data from one or more disparate sources including one or more storage locations within the cloud storage platform104. The cloud storage platform104comprises a plurality of computing machines and provides on-demand computer system resources such as data storage and computing power to the network-based data platform102.

The network-based data platform102comprises a compute service manager108, an execution platform110, and one or more metadata databases112. The network-based data platform102hosts and provides data reporting and analysis services to multiple client accounts.

The compute service manager108coordinates and manages operations of the network-based data platform102. The compute service manager108also performs query optimization and compilation as well as managing clusters of computing services that provide compute resources (also referred to as “virtual warehouses”). The compute service manager108can support any number of client accounts such as end-users providing data storage and retrieval requests, system administrators managing the systems and methods described herein, and other components/devices that interact with compute service manager108.

The compute service manager108is also in communication with a client device114. The client device114corresponds to a user of one of the multiple client accounts supported by the network-based data platform102. A user may utilize the client device114to submit data storage, retrieval, and analysis requests to the compute service manager108. Client device114(also referred to as user device114) may include one or more of a laptop computer, a desktop computer, a mobile phone (e.g., a smartphone), a tablet computer, a cloud-hosted computer, cloud-hosted serverless processes, or other computing processes or devices that may be used to access services provided by the cloud computing platform101(e.g., cloud computing service103) by way of a network106, such as the Internet or a private network.

In the description below, actions are ascribed to users, particularly consumers and providers. Such actions shall be understood to be performed concerning client device (or devices)114operated by such users. For example, notification to a user may be understood to be a notification transmitted to client device114, input or instruction from a user may be understood to be received by way of the client device114, and interaction with an interface by a user shall be understood to be interaction with the interface on the client device114by a data consumer115.
In addition, database operations (joining, aggregating, analysis, etc.) ascribed to a user (consumer or provider) shall be understood to include performing such actions by the cloud computing service103in response to an instruction from that user.

Some database operations performed by the compute service manager108can include an operation to generate a table or database from one or more input text documents. The compute service manager108accesses the text document that includes a plurality of strings. The compute service manager108processes the text document by a machine learning model (e.g., a trained artificial neural network (ANN)) to generate a table comprising a plurality of entries that organizes the plurality of strings into rows and columns over a plurality of iterations. At each of the plurality of iterations, the compute service manager108estimates by the machine learning model a first value of a first entry of the plurality of entries based on a second value of a second entry of the plurality of entries that has been determined in a prior iteration.

For example, the compute service manager108generates a first table instance that includes a first set of entries based on the plurality of strings and, at a first iteration, generates, by the machine learning model, a first plurality of confidence scores for values in each entry of the first set of entries of the first table instance. The compute service manager108selects a first subset of the first set of entries of the first table instance based on the first plurality of confidence scores. Then, at a second iteration, the compute service manager108generates a second table instance that includes a second set of entries by resetting the values associated with a remaining set of entries that are excluded from the first subset of the first set of entries and retaining, maintaining or keeping the values associated with the first subset of the first set of entries in the second table instance. The compute service manager108processes the second table instance by the machine learning model to generate a second plurality of confidence scores for each entry of the second set of entries of the second table instance and selects a second subset of the second set of entries based on the second plurality of confidence scores.

The compute service manager108is also coupled to one or more metadata databases112that store metadata about various functions and aspects associated with the network-based data platform102and its users. The metadata database112can store the table that provides the mapping between sessions, references to objects, identity of objects, and/or access privileges of the objects. For example, a metadata database112may include a summary of data stored in remote data storage systems as well as data available from a local cache. Additionally, a metadata database112may include information regarding how data is organized in remote data storage systems (e.g., the cloud storage platform104) and the local caches. Information stored by a metadata database112allows systems and services to determine whether a piece of data needs to be accessed without loading or accessing the actual data from a storage device. In some embodiments, metadata database112is configured to store account object metadata.

The compute service manager108is further coupled to the execution platform110, which provides multiple computing resources that execute various data storage and data retrieval tasks.
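The iterative, confidence-driven decoding loop described above can be summarized in a short sketch. This is a minimal illustration, not the disclosed model itself: model_score stands in for the trained machine learning model and is assumed to propose a (value, confidence) pair for every still-empty entry of the partially filled table, so each iteration is conditioned on the entries kept so far.

```python
def decode_table(model_score, num_rows, num_cols, keep_per_iteration=1):
    """Fill a table entry-by-entry in a dynamic, data-dependent order:
    keep only the best-scored proposal(s) each iteration and re-propose
    the rest, conditioned on the entries already kept."""
    table = {(r, c): None for r in range(num_rows) for c in range(num_cols)}
    while any(value is None for value in table.values()):
        # Propose values for all empty entries at once.
        proposals = model_score(table)  # assumed shape: {(row, col): (value, confidence)}
        # Keep the highest-confidence entries; discard (reset) the others so
        # they are repopulated and re-scored in the next iteration.
        ranked = sorted(proposals, key=lambda rc: proposals[rc][1], reverse=True)
        for rc in ranked[:keep_per_iteration]:
            table[rc] = proposals[rc][0]
    return table
```

Deferring low-confidence entries this way lets the hardest values be generated last, conditioned on everything already decided, which is the effect the disclosure attributes to its non-sequential decoding order.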
As illustrated inFIG.3, the execution platform110comprises a plurality of compute nodes. The execution platform110is coupled to storage platform104and cloud storage platforms122. The storage platform104comprises multiple data storage devices120-1to120-N. In some embodiments, the data storage devices120-1to120-N are cloud-based storage devices located in one or more geographic locations. For example, the data storage devices120-1to120-N may be part of a public cloud infrastructure or a private cloud infrastructure. The data storage devices120-1to120-N may be hard disk drives (HDDs), solid-state drives (SSDs), storage clusters, Amazon S3™ storage systems, or any other data-storage technology. Additionally, the cloud storage platform104may include distributed file systems (such as Hadoop Distributed File Systems (HDFS)), object storage systems, and the like.

In some embodiments, at least one storage device cache126(e.g., an internal cache) may reside on one or more of the data storage devices120-1to120-N, and at least one external stage124may reside on one or more of the cloud storage platforms122. In some examples, a single storage device cache126can be associated with all of the data storage devices120-1to120-N so that the single storage device cache126is shared by and can store data associated with any one of the data storage devices120-1to120-N. In some examples, each data storage device of storage devices120-1to120-N can include or implement a separate storage device cache126. A cache manager128handles the transfer of data from the data storage devices120-1to120-N to the storage device cache126. The cache manager128handles the eviction of data from the storage device cache126to the respective associated data storage devices120-1to120-N. The storage platform104can include one or more hard drives and/or can represent a plurality of hard drives distributed on a plurality of servers in a cloud computing environment.

In some embodiments, communication links between elements of the computing environment100are implemented via one or more data communication networks. These data communication networks may utilize any communication protocol and any type of communication medium. In some embodiments, the data communication networks are a combination of two or more data communication networks (or sub-networks) coupled to one another. In alternate embodiments, these communication links are implemented using any type of communication medium and any communication protocol.

The compute service manager108, metadata database(s)112, execution platform110, and storage platform104are shown inFIG.1as individual discrete components. However, each of the compute service manager108, metadata database(s)112, execution platform110, and storage platform104may be implemented as a distributed system (e.g., distributed across multiple systems/platforms at multiple geographic locations). Additionally, each of the compute service manager108, metadata database(s)112, execution platform110, and storage platform104can be scaled up or down (independently of one another) depending on changes to the requests received and the changing needs of the network-based data platform102. Thus, in the described embodiments, the network-based data platform102is dynamic and supports regular changes to meet the current data processing needs.

During a typical operation, the network-based data platform102processes multiple jobs (e.g., operators of sub-plans) determined by the compute service manager108.
These jobs (e.g., caller processes) are scheduled and managed by the compute service manager108to determine when and how to execute the job. For example, the compute service manager108may divide the job into multiple discrete tasks (e.g., caller processes) and may determine what data is needed to execute each of the multiple discrete tasks. The compute service manager108may assign each of the multiple discrete tasks to one or more nodes of the execution platform110to process the task. The compute service manager108may determine what data is needed to process a task and further determine which nodes within the execution platform110are best suited to process the task. Some nodes may have already cached the data needed to process the task (e.g., in a storage device cache126, such as an HDD cache or random access memory (RAM)) and, therefore, be good candidates for processing the task. Metadata stored in a metadata database112assists the compute service manager108in determining which nodes in the execution platform110have already cached at least a portion of the data needed to process the task. One or more nodes in the execution platform110process the task using data cached by the nodes and, if necessary, data retrieved from the cloud storage platform104. It is desirable to retrieve as much data as possible from caches within the execution platform110because the retrieval speed is typically much faster than retrieving data from the cloud storage platform104. According to various embodiments, the execution platform110executes a query according to a query plan determined by the compute service manager108. As part of executing the query, the execution platform110performs a table scan in which one or more portions of a database table are scanned to identify data that matches the query. More specifically, the database table can be organized into a set of files where each file comprises a set of blocks (or records) and each block (or record) stores at least a portion of a column (or row) of the database. Each execution node provides multiple threads of execution, and in performing a table scan, multiple threads perform a parallel scan of the set of blocks (or records) of a file, which may be selected from a scan set corresponding to a subset of the set of files into which the database is organized. The query plan, in some cases, can include a request to organize data from a structured or unstructured text file into one or more tables. The cloud computing platform101of the computing environment100separates the execution platform110from the storage platform104. In this arrangement, the processing resources and cache resources in the execution platform110operate independently of the data storage devices120-1to120-N in the cloud storage platform104. Thus, the computing resources and cache resources are not restricted to specific data storage devices120-1to120-N. Instead, all computing resources and all cache resources may retrieve data from, and store data to, any of the data storage resources in the cloud storage platform104. FIG.2is a block diagram illustrating components of the compute service manager108, in accordance with some embodiments of the present disclosure. As shown inFIG.2, the compute service manager108includes an access manager202and a credential management system204coupled to an access metadata database206, which is an example of the metadata database(s)112. Access manager202handles authentication and authorization tasks for the systems described herein.
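The threaded table scan described above can be sketched in Python as follows. The block layout, record shape, and predicate are illustrative assumptions, not the platform's actual scan implementation; the point is only the one-task-per-block parallel pattern.

from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Dict, List

def scan_file(blocks: List[List[Dict]], predicate: Callable[[Dict], bool],
              num_threads: int = 8) -> List[Dict]:
    # Each block of the file is scanned by its own task; the pool runs the
    # tasks on multiple threads, mirroring the parallel scan described above.
    def scan_block(block: List[Dict]) -> List[Dict]:
        return [record for record in block if predicate(record)]

    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        per_block_matches = list(pool.map(scan_block, blocks))
    return [record for matches in per_block_matches for record in matches]

# Example: scan a two-block file for records with amount > 100.
print(scan_file([[{"id": 1, "amount": 50}], [{"id": 2, "amount": 150}]],
                predicate=lambda r: r["amount"] > 100))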
The credential management system204facilitates the use of remotely stored credentials to access external resources such as data resources in a remote storage device. As used herein, the remote storage devices may also be referred to as "persistent storage devices," "non-volatile storage devices," "cloud storage devices," or "shared storage devices." For example, the credential management system204may create and maintain remote credential store definitions and credential objects (e.g., in the access metadata database206). A remote credential store definition identifies a remote credential store and includes access information to access security credentials from the remote credential store. A credential object identifies one or more security credentials using non-sensitive information (e.g., text strings) that are to be retrieved from a remote credential store for use in accessing an external resource. When a request invoking an external resource is received at run time, the credential management system204and access manager202use information stored in the access metadata database206(e.g., a credential object and a credential store definition) to retrieve security credentials used to access the external resource from a remote credential store. A request processing service208manages received data storage requests and data retrieval requests (e.g., jobs to be performed on database data). For example, the request processing service208may determine the data necessary to process a received query (e.g., a data storage request or data retrieval request). The data may be stored in a cache within the execution platform110, in a storage device cache126, or in a data storage device in storage platform104. A management console service210supports access to various systems and processes by administrators and other system managers. Additionally, the management console service210may receive a request to execute a job and monitor the workload on the system. The compute service manager108also includes a job compiler212, a job optimizer214, and a job executor216. The job compiler212parses a job into multiple discrete tasks and generates the execution code for each of the multiple discrete tasks. The job optimizer214determines the best method to execute the multiple discrete tasks based on the data that needs to be processed. Job optimizer214also handles various data pruning operations and other data optimization techniques to improve the speed and efficiency of executing the job. The job executor216executes the execution code for jobs received from a queue or determined by the compute service manager108. A job scheduler and coordinator218sends received jobs to the appropriate services or systems for compilation, optimization, and dispatch to the execution platform110. For example, jobs may be prioritized and then processed in that prioritized order. In an embodiment, the job scheduler and coordinator218determines a priority for internal jobs that are scheduled by the compute service manager108with other "outside" jobs such as user queries that may be scheduled by other systems in the database but may utilize the same processing resources in the execution platform110. In some embodiments, the job scheduler and coordinator218identifies or assigns particular nodes in the execution platform110to process particular tasks. A virtual warehouse manager220manages the operation of multiple virtual warehouses implemented in the execution platform110.
For example, the virtual warehouse manager220may generate query plans for executing received queries by one or more execution nodes of the execution platform110. In some cases, the compute service manager108includes a table generation module400, discussed in more detail below, to handle jobs of the job executor216. Additionally, the compute service manager108includes a configuration and metadata manager222, which manages the information related to the data stored in the remote data storage devices and the local buffers (e.g., the buffers in execution platform110). The configuration and metadata manager222uses metadata to determine which data files need to be accessed to retrieve data for processing a particular task or job. A monitor and workload analyzer224oversees processes performed by the compute service manager108and manages the distribution of tasks (e.g., workload) across the virtual warehouses and execution nodes in the execution platform110. The monitor and workload analyzer224also redistributes tasks, as needed, based on changing workloads throughout the network-based data platform102and may further redistribute tasks based on a user (e.g., “external”) query workload that may also be processed by the execution platform110. The configuration and metadata manager222and the monitor and workload analyzer224are coupled to a data storage device226. The data storage device226inFIG.2represents any data storage device within the network-based data platform102. For example, data storage device226may represent buffers in execution platform110, storage devices in storage platform104, or any other storage device. FIG.3is a block diagram illustrating components of the execution platform110, which can be implemented by any of the virtual warehouses of the execution platform110, in accordance with some embodiments of the present disclosure. As shown inFIG.3, the execution platform110includes multiple virtual warehouses, including virtual warehouse1(or301-1), virtual warehouse2(or301-2), and virtual warehouse N (or 301-N). Each virtual warehouse includes multiple execution nodes that each include a data cache and a processor. The virtual warehouses can execute multiple tasks in parallel by using multiple execution nodes. As discussed herein, the execution platform110can add new virtual warehouses and drop existing virtual warehouses in real-time based on the current processing needs of the systems and users. This flexibility allows the execution platform110to quickly deploy large amounts of computing resources when needed without being forced to continue paying for those computing resources when they are no longer needed. All virtual warehouses can access data from any data storage device (e.g., any storage device in the cloud storage platform104). Although each virtual warehouse shown inFIG.3includes three execution nodes, a particular virtual warehouse may include any number of execution nodes. Further, the number of execution nodes in a virtual warehouse is dynamic, such that new execution nodes are created when additional demand is present, and existing execution nodes are deleted when they are no longer necessary. Each virtual warehouse is capable of accessing data from any of the data storage devices120-1to120-N and their associated storage device cache126(e.g., via a respective lock file) shown inFIG.1. 
Thus, the virtual warehouses are not necessarily assigned to a specific data storage device120-1to120-N and, instead, can access data from any of the data storage devices120-1to120-N within the cloud storage platform104. Similarly, each of the execution nodes shown inFIG.3can access data from any of the data storage devices120-1to120-N. In some embodiments, a particular virtual warehouse or a particular execution node may be temporarily assigned to a specific data storage device, but the virtual warehouse or execution node may later access data from any other data storage device. In the example ofFIG.3, virtual warehouse1includes three execution nodes302-1,302-2, and302-N. Execution node302-1includes a cache304-1and a processor306-1. Execution node302-2includes a cache304-2and a processor306-2. Execution node302-N includes a cache304-N and a processor306-N. Each execution node302-1,302-2, and302-N is associated with processing one or more data storage and/or data retrieval tasks. For example, a virtual warehouse may handle data storage and data retrieval tasks associated with an internal service, such as a clustering service, a materialized view refresh service, a file compaction service, a storage procedure service, or a file upgrade service. In other implementations, a particular virtual warehouse may handle data storage and data retrieval tasks associated with a particular data storage system or a particular category of data. Similar to virtual warehouse1discussed above, virtual warehouse2includes three execution nodes312-1,312-2, and312-N. Execution node312-1includes a cache314-1and a processor316-1. Execution node312-2includes a cache314-2and a processor316-2. Execution node312-N includes a cache314-N and a processor316-N. Additionally, virtual warehouse N includes three execution nodes322-1,322-2, and322-N. Execution node322-1includes a cache324-1and a processor326-1. Execution node322-2includes a cache324-2and a processor326-2. Execution node322-N includes a cache324-N and a processor326-N. In some embodiments, the execution nodes shown inFIG.3are stateless with respect to the data being cached by the execution nodes. For example, these execution nodes do not store or otherwise maintain state information about the execution node or the data being cached by a particular execution node. Thus, in the event of an execution node failure, the failed node can be transparently replaced by another node. Since there is no state information associated with the failed execution node, the new (replacement) execution node can easily replace the failed node without concern for recreating a particular state. Although the execution nodes shown inFIG.3each include one data cache and one processor, alternative embodiments may include execution nodes containing any number of processors and any number of caches. Additionally, the caches may vary in size among the different execution nodes. The caches shown inFIG.3store, in the local execution node, data that was retrieved from one or more data storage devices in the cloud storage platform104. Thus, the caches reduce or eliminate the bottleneck problems occurring in platforms that consistently retrieve data from remote storage systems. Instead of repeatedly accessing data from the remote storage devices, the systems and methods described herein access data from the caches in the execution nodes, which is significantly faster and avoids the bottleneck problem discussed above.
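The read-through and eviction behavior described here, and the transfer/eviction duties ascribed to the cache manager128earlier, can be sketched as a small cache in Python. The least-recently-used policy, the capacity, and the write-back of evicted data are illustrative assumptions; the disclosure does not prescribe a particular eviction policy.

from collections import OrderedDict

class ReadThroughCache:
    def __init__(self, capacity: int, backing_store: dict):
        self.capacity = capacity
        self.backing_store = backing_store       # stands in for the remote storage devices
        self.entries: OrderedDict = OrderedDict()

    def read(self, key: str):
        if key in self.entries:                  # cache hit: no remote access needed
            self.entries.move_to_end(key)        # refresh recency for the LRU policy
            return self.entries[key]
        value = self.backing_store[key]          # cache miss: transfer from remote storage
        self.entries[key] = value
        if len(self.entries) > self.capacity:    # evict the least recently used entry...
            old_key, old_value = self.entries.popitem(last=False)
            self.backing_store[old_key] = old_value  # ...writing it back to its device
        return value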
In some embodiments, the caches are implemented using high-speed memory devices that provide fast access to the cached data. Each cache can store data from any of the storage devices in the cloud storage platform104. The techniques described with respect to the cache manager128of the storage platform104(e.g., an HDD) can be similarly applied to the caches304-N,314-N, and324-N of the execution nodes302-N,312-N, and322-N. Further, the cache resources and computing resources may vary between different execution nodes. For example, one execution node may contain significant computing resources and minimal cache resources, making the execution node useful for tasks that require significant computing resources. Another execution node may contain significant cache resources and minimal computing resources, making this execution node useful for tasks that require caching of large amounts of data. Yet another execution node may contain cache resources providing faster input-output operations, useful for tasks that require fast scanning of large amounts of data. In some embodiments, the cache resources and computing resources associated with a particular execution node are determined when the execution node is created, based on the expected tasks to be performed by the execution node. Additionally, the cache resources and computing resources associated with a particular execution node may change over time based on changing tasks performed by the execution node. For example, an execution node may be assigned more processing resources if the tasks performed by the execution node become more processor-intensive. Similarly, an execution node may be assigned more cache resources if the tasks performed by the execution node require a larger cache capacity. Although virtual warehouses1,2, and N are associated with the same execution platform110, the virtual warehouses may be implemented using multiple computing systems at multiple geographic locations. For example, virtual warehouse1can be implemented by a computing system at a first geographic location, while virtual warehouses2and N are implemented by another computing system at a second geographic location. In some embodiments, these different computing systems are cloud-based computing systems maintained by one or more different entities. Additionally, each virtual warehouse is shown inFIG.3as having multiple execution nodes. The multiple execution nodes associated with each virtual warehouse may be implemented using multiple computing systems at multiple geographic locations. For example, an instance of virtual warehouse1implements execution nodes302-1and302-2on one computing platform at a geographic location, and execution node302-N at a different computing platform at another geographic location. Selecting particular computing systems to implement an execution node may depend on various factors, such as the level of resources needed for a particular execution node (e.g., processing resource requirements and cache requirements), the resources available at particular computing systems, communication capabilities of networks within a geographic location or between geographic locations, and which computing systems are already implementing other execution nodes in the virtual warehouse. Execution platform110is also fault-tolerant. For example, if one virtual warehouse fails, that virtual warehouse is quickly replaced with a different virtual warehouse at a different geographic location.
A particular execution platform110may include any number of virtual warehouses. Additionally, the number of virtual warehouses in a particular execution platform is dynamic, such that new virtual warehouses are created when additional processing and/or caching resources are needed. Similarly, existing virtual warehouses may be deleted when the resources associated with the virtual warehouse are no longer necessary. In some embodiments, the virtual warehouses may operate on the same data in the cloud storage platform104, but each virtual warehouse has its execution nodes with independent processing and caching resources. This configuration allows requests on different virtual warehouses to be processed independently and with no interference between the requests. This independent processing, combined with the ability to dynamically add and remove virtual warehouses, supports the addition of new processing capacity for new users without impacting the performance observed by the existing users. FIG.4is a block diagram illustrating an example of the table generation module400, which can be implemented by any of the virtual warehouses of the execution platform110, such as the execution node302-1, compute service manager108, and/or the request processing service208, in accordance with some embodiments of the present disclosure. The table generation module400can include a text access module410, a machine learning module420, and a table output module430. The table generation module400is configured to access one or more text documents that include a plurality of strings. The table generation module400processes the text document by a machine learning model (e.g., the machine learning module420) to generate a table comprising a plurality of entries that organizes the plurality of strings into rows and columns over a plurality of iterations. At each of the plurality of iterations, the table generation module400estimates by the machine learning model a first value of a first entry of the plurality of entries based on a second value of a second entry of the plurality of entries that has been determined in a prior iteration. Machine learning is a field of study that gives computers the ability to learn without being explicitly programmed. Machine learning explores the study and construction of algorithms, also referred to herein as tools, that may learn from existing data and make predictions about new data. Such machine-learning tools operate by building a model (e.g., the machine learning module420) from example training data in order to make data-driven predictions or decisions expressed as outputs or assessments. Although examples are presented with respect to a few machine-learning tools, the principles presented herein may be applied to other machine-learning tools. In some examples, different machine-learning tools may be used. For example, Logistic Regression (LR), Naive-Bayes, Random Forest (RF), neural networks (NN), deep NN (DNN), matrix factorization, and Support Vector Machines (SVM) tools may be used for classifying or scoring videos. Two common types of problems in machine learning are classification problems and regression problems. Classification problems, also referred to as categorization problems, aim at classifying items into one of several category values (for example, is this object an apple or an orange?). Regression algorithms aim at quantifying some items (for example, by providing a value that is a real number). 
The machine-learning algorithms use features for analyzing the data to generate an assessment. Each of the features is an individual measurable property of a phenomenon being observed. The concept of a feature is related to that of an explanatory variable used in statistical techniques such as linear regression. Choosing informative, discriminating, and independent features is important for the effective operation of the machine-learning program in pattern recognition, classification, and regression. Features may be of different types, such as numeric features, strings, and graphs. In one example, the features may be of different types and may include one or more of content, concepts, attributes, historical data, and/or user data, merely for example. The machine-learning algorithms use the training data to find correlations among the identified features that affect the outcome or assessment. In some examples, the training data includes labeled data, which is known data for one or more identified features and one or more outcomes, such as detecting communication patterns, detecting the meaning of the message, generating a summary of a message, detecting action items in messages, detecting urgency in the message, detecting a relationship of the user to the sender, calculating score attributes, calculating message scores, determining type of text, categorizing text, computing confidence scores that certain text corresponds to labeled categories or column identifiers, and so forth. With the training data and the identified features, the machine-learning tool is trained by machine-learning program training. The machine-learning tool appraises the value of the features as they correlate to the training data. The result of the training is the trained machine-learning program. When the trained machine-learning program is used to perform an assessment, new data is provided as an input to the trained machine-learning program, and the trained machine-learning program generates the assessment as output. The machine-learning program supports two types of phases, namely a training phase and a prediction phase. In training phases, supervised, unsupervised or reinforcement learning may be used. For example, the machine-learning program (1) receives features (e.g., as structured or labeled data in supervised learning) and/or (2) identifies features (e.g., unstructured or unlabeled data for unsupervised learning) in training data. In prediction phases, the machine-learning program uses the features for analyzing input text documents to generate outcomes or predictions, or a table that represents the strings of the text in rows and columns, as examples of an assessment. In the training phase, feature engineering is used to identify features and may include identifying informative, discriminating, and independent features for the effective operation of the machine-learning program in pattern recognition, classification, and regression. In some examples, the training data includes labeled data, which is known data for pre-identified features and one or more outcomes. Each of the features may be a variable or attribute, such as an individual measurable property of a process, article, system, or phenomenon represented by a data set (e.g., the training data). In training phases, the machine-learning program uses the training data to find correlations among the features that affect a predicted outcome or assessment.
With the training data and the identified features, the machine-learning program is trained during the training phase at machine-learning program training. The machine-learning program appraises values of the features as they correlate to the training data. The result of the training is the trained machine-learning program (e.g., a trained or learned model). Further, the training phases may involve machine learning, in which the training data is structured (e.g., labeled during preprocessing operations), and the trained machine-learning program implements a relatively simple neural network capable of performing, for example, classification and clustering operations. In other examples, the training phase may involve deep learning, in which the training data is unstructured, and the trained machine-learning program implements a DNN that is able to perform both feature extraction and classification/clustering operations. A neural network generated during the training phase, and implemented within the trained machine-learning program, may include a hierarchical (e.g., layered) organization of neurons. For example, neurons (or nodes) may be arranged hierarchically into a number of layers, including an input layer, an output layer, and multiple hidden layers. Each of the layers within the neural network can have one or many neurons and each of these neurons operationally computes a small function (e.g., an activation function). For example, if an activation function generates a result that transgresses a particular threshold, an output may be communicated from that neuron (e.g., transmitting neuron) to a connected neuron (e.g., receiving neuron) in successive layers. Connections between neurons also have associated weights, which define the influence of the input from a transmitting neuron to a receiving neuron. In some cases, these neurons implement one or more encoder or decoder networks. In some examples, the neural network may also be one of a number of different types of neural networks, including a single-layer feed-forward network, an Artificial Neural Network (ANN), a Generative Adversarial Network (GAN), a Recurrent Neural Network (RNN), a symmetrically connected neural network, an unsupervised pre-trained network, a Convolutional Neural Network (CNN), or a Recursive Neural Network (RNN), merely for example. During prediction phases, the trained machine-learning program is used to perform an assessment. In some examples, the text access module410receives a text document or a query that identifies a text document. The text document can include one or more strings that are structured or unstructured. The query or text document can specify columns of a table that is to be generated. In some cases, the text access module410implements a machine learning model (e.g., ANN) that processes the text document to derive columns of the table. For example, the text access module410can process words of the text document to determine or estimate the number or quantity of columns of the table that are needed to represent the content of the text document. The text document can be stored by the execution platform110and/or can be retrieved from the Internet or an online source. Once the text access module410receives the text document and initially processes the text document to determine the column headers of the table and the relationship among the words of the text document, the text access module410provides the text document and the column headers to the machine learning module420.
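As a toy illustration of the layered computation described above, where each neuron applies an activation function to a weighted sum of its inputs and passes the result to neurons in the next layer, consider the following sketch. The sigmoid activation and the example weights are arbitrary choices for illustration, not learned parameters.

import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, layers):
    # layers[k][j] holds the incoming weights of neuron j in layer k; each
    # neuron applies the activation function to its weighted input sum.
    activations = inputs
    for weights in layers:
        activations = [
            sigmoid(sum(w * a for w, a in zip(neuron_weights, activations)))
            for neuron_weights in weights
        ]
    return activations

# Two inputs, one hidden layer of two neurons, one output neuron.
print(forward([0.5, -1.0], [[[0.8, -0.2], [0.1, 0.4]], [[1.0, -1.5]]]))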
The machine learning module420can be trained to process the text document to generate the table including a plurality of entries (or cells) that organizes the plurality of strings into rows and columns over a plurality of iterations. At each of the plurality of iterations, the machine learning module420estimates a first value of a first entry of the plurality of entries based on a second value of a second entry of the plurality of entries that has been determined in a prior iteration. For example, the machine learning module420generates a first table instance including a first set of entries based on the plurality of strings and, at a first iteration, generates a first plurality of confidence scores for values in each entry of the first set of entries of the first table instance. The machine learning module420selects a first subset of the first set of entries of the first table instance based on the first plurality of confidence scores. The machine learning module420retrieves a confidence threshold or selection criterion and compares the first plurality of confidence scores for each entry of the first set of entries of the first table instance to the confidence threshold or selection criterion/criteria. The machine learning module420identifies the first subset of the first set of entries that are associated with respective confidence scores that transgress the confidence threshold or satisfy the selection criterion/criteria. In some examples, the machine learning module420generates a second table instance including a second set of entries from the same text document and based on the processed first table instance. The machine learning module420does so by, after selecting the first subset of the first set of entries, resetting the values associated with a remaining set of entries that are excluded from the first subset of the first set of entries and retaining the values associated with the first subset of the first set of entries in the second table instance. Then, at a second iteration, the machine learning module420processes the second table instance to generate a second plurality of confidence scores for each entry of the second set of entries of the second table instance and selects a second subset of the second set of entries based on the second plurality of confidence scores. In some examples, the machine learning module420compares the second plurality of confidence scores to the confidence threshold or selection criterion. The machine learning module420identifies the second subset of the second set of entries that are associated with respective confidence scores that transgress the confidence threshold or satisfy the selection criterion/criteria. The machine learning module420repeats generation of additional table instances until all of the entries of a given table instance are associated with confidence scores that transgress the confidence threshold or satisfy the selection criterion/criteria. In some examples, the machine learning model implemented by the machine learning module420is trained based on a corpus of training documents to maximize an expected log likelihood for a training table across all random permutations of a factorization order. In some examples, the machine learning model is trained to populate the plurality of entries of the table in any order in a way that maximizes confidence in each individual entry and reduces error accumulation.
In some cases, the machine learning module420is trained based on a corpus of training documents by populating a plurality of training tables representing different permutations of strings in the training documents to maximize an expected log likelihood. In some cases, the machine learning model is trained in a supervised manner and/or in an unsupervised manner. In some examples, the machine learning module420infers one or more values for individual entries of the plurality of entries with words excluded from the plurality of strings based on values of other entries of the plurality of entries. Namely, the machine learning module420can predict words that are not present in the text document received from the text access module410. These predicted words are included and predicted in the final table that is output by the table output module430based on other words that are mapped in the table and included in the text document.

For example, instead of generating the cell values for the table in a top-down, left-to-right manner, the machine learning module420performs the pretraining by maximizing the expected log-likelihood of the sequence of cell values over all possible prediction orders. Specifically, suppose that the text access module410provides a document containing a table with row labels $r=(r_1,\ldots,r_N)$ and column labels $c=(c_1,\ldots,c_M)$, which are collectively denoted as $h=(r,c)$. A linear ordering of the table cells can be represented with a bijection $\sigma\colon \{1,2,\ldots,C\}\to\{1,\ldots,N\}\times\{1,\ldots,M\}$. In this case, $C=NM$ is the number of cells, so that $\sigma(n)=(i,j)$ are the row and column coordinates of the $n$-th cell in the ordering. Given such cell values $v=(v_{ij})$, $i\le N$, $j\le M$, the machine learning module420factorizes the likelihood of $v$ given $h$ as

$$p_\theta(v\mid h)=\prod_{n=1}^{C} p_\theta\bigl(v_{\sigma(n)}\mid (v_{\sigma(k)})_{k<n},\, h\bigr),$$

and, using this factorization, the machine learning module420maximizes the expected log-likelihood

$$\frac{1}{C!}\sum_{\sigma}\sum_{n=1}^{C}\log p_\theta\bigl(v_{\sigma(n)}\mid (v_{\sigma(k)})_{k<n},\, h\bigr)$$

over $\theta$. The likelihoods $p_\theta$ themselves can be factorized according to the standard autoregressive approach as

$$p_\theta\bigl(v_{\sigma(n)}\mid (v_{\sigma(k)})_{k<n},\, h\bigr)=\prod_{\ell=1}^{L} p_\theta\bigl(v^{\ell}_{\sigma(n)}\mid (v^{i}_{\sigma(n)})_{i<\ell},\, (v_{\sigma(k)})_{k<n},\, h\bigr),$$

where $L$ is the length of $v_{\sigma(n)}$ represented as a sequence of tokens $(v^{i}_{\sigma(n)})$, $i\le L$. In practice, the expected log-likelihood is estimated by sampling bijections $\sigma$ at random.

In some examples, raw attention scores $\alpha_{ij}$ for tokens $i$ and $j$ are modified by introducing a bias term: $\alpha'_{ij}=\alpha_{ij}+\beta_{ij}$, where $\beta_{ij}=W(i-j)$ is a trainable weight depending on the relative sequential position of these tokens. The machine learning module420modifies the decoder's self-attention by extending it with an additional tabular bias term

$$\tau_{ij}=\begin{cases} R(r_i-r_j)+C(c_i-c_j) & \text{if } r_j>0,\\ R_0+C(c_i-c_j) & \text{if } r_j=0. \end{cases}$$

In some examples, $(c_i,r_i)$ are cell coordinates as given by the cell's 1-based column and row indices (with 0 reserved for the header row/column), and $R(k)$, $C(k)$, and $R_0$ are trainable weights. The special case with $r_j=0$ corresponds to the situation when the key/value token lies in the column header, in which case the machine learning module420may use the same bias independent of the row of the query token, due to the different nature of the relation between two cells, and a cell and its column header. After these adjustments, the final attention score takes the form $\alpha'_{ij}=\alpha_{ij}+\beta_{ij}+\tau_{ij}$, where $\tau_{ij}$ is the tabular bias term defined earlier.
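The biased attention score reconstructed above can be sketched directly in Python. The trainable weights $W$, $R$, $C$, and $R_0$ are modeled here as zero-initialized lookup tables purely so the sketch runs; in a real model they would be learned parameters.

from collections import defaultdict

W = defaultdict(float)   # sequential-position bias: beta_ij = W[i - j]
R = defaultdict(float)   # row-offset bias R(k)
C = defaultdict(float)   # column-offset bias C(k)
R0 = 0.0                 # shared bias toward column-header tokens

def biased_attention_score(alpha, i, j, pos, cell):
    # pos[k] is token k's sequential position; cell[k] = (row, col), with
    # row 0 reserved for the column header.
    beta = W[pos[i] - pos[j]]
    r_i, c_i = cell[i]
    r_j, c_j = cell[j]
    if r_j > 0:
        tau = R[r_i - r_j] + C[c_i - c_j]   # key/value token lies in a body cell
    else:
        tau = R0 + C[c_i - c_j]             # key/value token lies in a column header
    return alpha + beta + tau               # alpha'_ij = alpha_ij + beta_ij + tau_ij

# Example: query token 0 in cell (1, 2); key token 1 in the header of column 2.
print(biased_attention_score(0.7, 0, 1, pos=[5, 1], cell=[(1, 2), (0, 2)]))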
In some examples, the machine learning module420decodes the input text received from the text access module410based on the following algorithm:

procedure OUTERLOOP(k)
    T ← 0_{n,m,l}    ▷ n × m table with l padding tokens per cell
    C ← 0_{n,m}    ▷ current cell status (decoded or not)
    while SUM(C) < nm do    ▷ while there is a cell to decode
        T′, L ← INNERLOOP(T, C)    ▷ create complete table candidate T′ and cell scores
        S ← OUTERCRITERION(L)    ▷ sequence of coordinates sorted according to scores
        for c ← 1, k do    ▷ for the k best cells from T′
            i, j ← S_c    ▷ get coordinates
            T_{i,j} ← T′_{i,j}    ▷ copy values to table T accordingly...
            C_{i,j} ← 1    ▷ ...and mark the appropriate cell as already decoded
        end for
    end while
    return T
end procedure

procedure INNERLOOP(T, C)
    L ← 0_{n,m}    ▷ scores for each cell in the n × m table
    T′ ← T    ▷ inner loop's table copy
    parfor i ← 1, n do    ▷ for each table row...
        parfor j ← 1, m do    ▷ ...and each table cell, processed in parallel
            if C_{i,j} = 0 then    ▷ if it was not decoded yet
                s, t ← DECODERMODEL(T, i, j)    ▷ produce cell tokens t and their scores s
                L_{i,j} ← INNERCRITERION(s)    ▷ aggregate per-token scores into a cell score
                T′_{i,j} ← t    ▷ update table copy
            end if
        end parfor
    end parfor
    return (T′, L)
end procedure

procedure INNERCRITERION(s)
    /* Any ℝ^n → ℝ function. STable assumes max, but we test others in the ablation studies. */
end procedure

procedure OUTERCRITERION(L)
    /* Some ℝ^{m×n} → (ℕ×ℕ)^{mn} function returning a permutation of indices of the input
       matrix L. STable assumes a sort of matrix coordinates according to descending values of
       its elements, but we test other functions in the ablation studies. */
end procedure

This algorithm represents an inner loop that determines each instance of the table and compares the confidence scores of each cell to the threshold or selection criterion/criteria. The outer loop of the algorithm receives the selected subset of cells for which the confidence scores transgress the threshold or satisfy the selection criterion/criteria and uses these selected cells to generate or update cell values of other cells. The inner loop generates each cell autoregressively and independently from other cells. This process can be treated as generating multiple concurrent threads of an answer and can be parallelizable. After the selection of the cell (e.g., based on the comparison of the confidence score to the threshold or selection criterion/criteria), the cell from the inner loop is inserted into the outer loop and is made visible to other cells. The cells that were not selected are reset and continuously generated in future iterations until they are chosen or selected (e.g., have a confidence score that transgresses the threshold or satisfies the selection criterion/criteria). A Python transcription of this procedure is sketched after the introduction ofFIGS.5and6below. The machine learning module420provides the table that is generated after multiple iterations to the table output module430. The table output module430presents the table to an end user and/or stores the table as part of the execution platform110, such as in the cloud storage platform104. FIGS.5and6are illustrative outputs500and600of the table generation module400, in accordance with some embodiments of the present disclosure. For example, the text access module410can receive and process an input text document to generate a decoder prompt510. The decoder prompt510can list various columns (e.g., a color column and a shape column) and the respective cell values (e.g., red and circle) for some of the columns. The decoder prompt510is provided to the machine learning module420to generate the final table540.
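As referenced above, the OUTERLOOP/INNERLOOP procedure can be transcribed into Python. This is a minimal sketch, assuming a decoder_model callable that returns per-token scores and tokens for one cell; the inner criterion uses the stated default (maximum per-token score) and the outer criterion sorts undecoded cells by descending score.

def inner_loop(table, status, decoder_model, n, m):
    # Decode every not-yet-locked cell independently; in the pseudocode the
    # two loops are parfor constructs, so each cell can be produced in parallel.
    scores = [[float("-inf")] * m for _ in range(n)]
    candidate = [row[:] for row in table]
    for i in range(n):
        for j in range(m):
            if status[i][j] == 0:
                token_scores, tokens = decoder_model(table, i, j)
                scores[i][j] = max(token_scores)   # INNERCRITERION: max per-token score
                candidate[i][j] = tokens
    return candidate, scores

def outer_loop(k, decoder_model, n, m):
    table = [[None] * m for _ in range(n)]
    status = [[0] * m for _ in range(n)]           # 1 marks a cell as decoded/locked
    while sum(map(sum, status)) < n * m:
        candidate, scores = inner_loop(table, status, decoder_model, n, m)
        # OUTERCRITERION: undecoded coordinates sorted by descending cell score.
        order = sorted(
            ((i, j) for i in range(n) for j in range(m) if status[i][j] == 0),
            key=lambda c: scores[c[0]][c[1]], reverse=True)
        for i, j in order[:k]:                     # lock the k best cells this round
            table[i][j] = candidate[i][j]
            status[i][j] = 1
    return table

# Example with a dummy decoder that scores every cell 1.0:
def dummy(table, i, j):
    return [1.0], "cell-{}-{}".format(i, j)

print(outer_loop(2, dummy, n=2, m=2))

Because each undecoded cell is produced independently, the double loop in inner_loop can be parallelized across cells, matching the parfor constructs in the pseudocode.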
In a first iteration, the machine learning module420inserts a first cell value522at a particular entry of a first table instance520corresponding to a first column (e.g., color column). The machine learning module420then predicts the value for a second entry (e.g., circle) corresponding to a second column (e.g., shape column). Then, rather than populating the next cell524under the first cell value522(shown as an empty cell), the machine learning module420populates the cell526with a value. This can be because the cell526can be estimated with a relatively higher confidence than the next cell524. In this case, the machine learning module420estimates the value530for the cell526based on the values of one or more previously determined cells (e.g., first cell value522and/or the value for the second entry). This process is repeated until all cell values are determined. Namely, after populating the cell526, the machine learning module420can then populate the next cell524and other cells to generate the final table540. As shown in the output600, the text access module410can receive a text document610that includes one or more strings. Initially, the machine learning module420receives the text document610(or some initial tabular representation of the text document610) and generates a first table instance620. The machine learning module420can generate the first table instance620in which each cell includes a value and a corresponding confidence score. For example, a first cell includes a first value624(e.g., red) and first confidence score622(e.g.,0.9), and a second cell626includes a second value (e.g., square) and a second confidence score (e.g.,0.4). Next, the machine learning module420compares the confidence scores of each cell or entry to the confidence threshold and/or selection criterion/criteria. In some cases, the confidence threshold can be 0.8 and any confidence score that exceeds 0.8 can be selected. For example, the machine learning module420can determine that the first two cells under the first column (e.g., the cells with values red and green) have corresponding confidence scores (e.g.,0.9) that transgress the confidence threshold. In such cases, the machine learning module420keeps these cells in the table generated in a second iteration and recalculates and/or clears or resets the values of the remaining cells in the first table instance620. At the second iteration, the machine learning module420generates a second table instance630. As shown, the values of the first two cells under the first column have been retained in the second table instance630while the other cells have been regenerated with new values and/or new confidence scores. These other cells can be generated based on the locked or retained values of the first two cells under the first column. In this second iteration, the second table instance630now includes a cell638with a value (e.g., blue) and corresponding confidence score (e.g.,1.0). This confidence score is greater than the confidence score previously computed for this cell638in the first table instance620. Particularly, cell638in the first table instance620may have had a confidence score of 0.8 because there were no selected or locked cell values. Now that the first two cells above the cell638have been locked and determined, the confidence score for the cell638has been raised to 1.0. The cell634previously had a value of, for example, square, and now in the second table instance630has been recalculated to have a value of, for example, hexagon. 
This again is because the first two cells632have been locked, which can change the estimation or prediction of the value in the cell634. The machine learning module420can process the second table instance630to determine that the cell638and the cell636have confidence scores that transgress the confidence threshold and/or selection criterion/criteria. Based on this, the machine learning module420can lock the values of the first two cells under the first column (e.g., from the first iteration) and the values of the cells638and636. In a third iteration, the machine learning module420clears out or resets the values of the non-locked or non-selected cells and keeps the values of the locked cells to generate a third table instance640. As shown, the third table instance640includes a new value for the cell634(e.g., circle) where previously the value was hexagon in the second iteration and square in the first iteration. As before, the machine learning module420processes or compares the confidence scores of the remaining cells in the third table instance640to the confidence threshold and/or selection criterion/criteria to select which cell values to keep for a subsequent iteration and which cell values to reset or discard. Once all of the cell values have confidence scores that transgress the confidence threshold and/or selection criterion/criteria, the machine learning module420outputs the final table650to the table output module430. In some cases, after processing all of the cells, the machine learning module420can determine that one or more cells are still empty. Namely, the machine learning module420may not find any text in the text document that fits into the empty cells. In such cases, the machine learning module420can generatively determine or predict text to include in the empty cells. It can do so by identifying the category or column header corresponding to the empty cells. The machine learning module420also processes or retrieves data from the row corresponding to the empty cells and their corresponding headers. Using the information from the other cells in the same row, the machine learning module420can estimate or predict the value for the empty cell. For example, the empty cell can correspond to or represent citizenship. The other cells in the row can represent a birthplace or location for a person identified in the row. The machine learning module420can use the birthplace or location to identify the corresponding country for the birthplace and can infer the content of the empty cell based on the identified country corresponding to the birthplace. In this case, the country of the birthplace was never mentioned in the text document that was received but has been inferred by the machine learning module420based on the content of other cell entries determined by the machine learning module420. FIG.7is a flow diagram illustrating operations700of the table generation module400, in accordance with some embodiments of the present disclosure. The operations700may be embodied in computer-readable instructions for execution by one or more hardware components (e.g., one or more processors) such that the operations of the operations700may be performed by components of data platform102such as the execution platform110. Accordingly, the operations700are described below by way of example with reference thereto. However, it shall be appreciated that the operations700may be deployed on various other hardware configurations and are not intended to be limited to deployment within the data platform102.
Depending on the embodiment, an operation of the operations700may be repeated in different ways or involve intervening operations not shown. Though the operations of the operations700may be depicted and described in a certain order, the order in which the operations are performed may vary among embodiments, including performing certain operations in parallel or performing sets of operations in separate processes. At operation701, the table generation module400accesses a text document comprising a plurality of strings, as discussed above. At operation702, the table generation module400processes the text document by a machine learning model to generate a table comprising a plurality of entries that organizes the plurality of strings into rows and columns over a plurality of iterations, as discussed above. At operation703, the table generation module400, at each of the plurality of iterations, estimates by the machine learning model a first value of a first entry of the plurality of entries based on a second value of a second entry of the plurality of entries that has been determined in a prior iteration, as discussed above. Described implementations of the subject matter can include one or more features, alone or in combination as illustrated below by way of example. Example 1. A system comprising: at least one hardware processor; and at least one memory storing instructions that cause the at least one hardware processor to execute operations comprising: accessing a text document comprising a plurality of strings; processing the text document by a machine learning model to generate a table comprising a plurality of entries that organizes the plurality of strings into rows and columns over a plurality of iterations; and at each of the plurality of iterations, estimating by the machine learning model a first value of a first entry of the plurality of entries based on a second value of a second entry of the plurality of entries that has been determined in a prior iteration. Example 2. The system of Example 1, the operations comprising: receiving a query that indicates values for the columns, wherein the table is generated based on the values indicated for the columns. Example 3. The system of any one of Examples 1-2, the operations comprising: generating a first table instance comprising a first set of entries based on the plurality of strings; at a first iteration, generating, by the machine learning model, a first plurality of confidence scores for values in each entry of the first set of entries of the first table instance; and selecting a first subset of the first set of entries of the first table instance based on the first plurality of confidence scores. Example 4. The system of Example 3, the operations comprising: retrieving a confidence threshold; comparing the first plurality of confidence scores for each entry of the first set of entries of the first table instance to the confidence threshold; and identifying the first subset of the first set of entries that are associated with respective confidence scores that transgress the confidence threshold. Example 5. 
The system of Example 4, the operations comprising generating a second table instance comprising a second set of entries by: after selecting the first subset of the first set of entries, resetting the values associated with a remaining set of entries that are excluded from the first subset of the first set of entries; and retaining the values associated with the first subset of the first set of entries in the second table instance. Example 6. The system of Example 5, the operations comprising: at a second iteration, processing the second table instance by the machine learning model, to generate a second plurality of confidence scores for each entry of the second set of entries of the second table instance; and selecting a second subset of the second set of entries based on the second plurality of confidence scores. Example 7. The system of Example 6, the operations comprising: comparing the second plurality of confidence scores to the confidence threshold; and identifying the second subset of the second set of entries that are associated with respective confidence scores that transgress the confidence threshold. Example 8. The system of Example 7, the operations comprising: repeating generation of additional table instances until all of the entries of a given table instance are associated with confidence scores that transgress the confidence threshold. Example 9. The system of any one of Examples 1-8, the operations comprising: training the machine learning model based on a corpus of training documents to maximize an expected log likelihood for a training table across all random permutations of a factorization order. Example 10. The system of any one of Examples 1-9, wherein the machine learning model is trained to populate the plurality of entries of the table in any order in a way that maximizes confidence in each individual entry and reduces error accumulation. Example 11. The system of any one of Examples 1-10, the operations comprising: training the machine learning model based on a corpus of training documents by populating a plurality of training tables representing different permutations of strings in the training documents to maximize an expected log likelihood. Example 12. The system of any one of Examples 1-11, the machine learning model being trained in a supervised or unsupervised manner. Example 13. The system of any one of Examples 1-12, wherein the machine learning model estimates column headers of the table. Example 14. The system of any one of Examples 1-13, the operations comprising: inferring one or more values for individual entries of the plurality of entries with words excluded from the plurality of strings based on values of other entries of the plurality of entries. FIG.8illustrates a diagrammatic representation of a machine800in the form of a computer system within which a set of instructions may be executed for causing the machine800to perform any one or more of the methodologies discussed herein, according to an example embodiment. Specifically,FIG.8shows a diagrammatic representation of the machine800in the example form of a computer system, within which instructions816(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine800to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions816may cause the machine800to execute any one or more operations of the above processes (e.g., operations700). 
In this way, the instructions816transform a general, non-programmed machine into a particular machine800(e.g., the compute service manager108or one or more execution nodes of the execution platform110) that is specially configured to carry out any one of the described and illustrated functions in the manner described herein. In alternative embodiments, the machine800operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine800may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine800may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a smart phone, a mobile device, a network router, a network switch, a network bridge, or any machine capable of executing the instructions816, sequentially or otherwise, that specify actions to be taken by the machine800. Further, while only a single machine800is illustrated, the term “machine” shall also be taken to include a collection of machines800that individually or jointly execute the instructions816to perform any one or more of the methodologies discussed herein. The machine800includes processors810, memory830, and input/output (I/O) components850configured to communicate with each other such as via a bus802. In an example embodiment, the processors810(e.g., a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a radio-frequency integrated circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor812and a processor814that may execute the instructions816. The term “processor” is intended to include multi-core processors810that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions816contemporaneously. AlthoughFIG.8shows multiple processors810, the machine800may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The memory830may include a main memory832, a static memory834, and a storage unit836, all accessible to the processors810such as via the bus802. The main memory832, the static memory834, and the storage unit836store the instructions816embodying any one or more of the methodologies or functions described herein. The instructions816may also reside, completely or partially, within the main memory832, within the static memory834, within the storage unit836, within at least one of the processors810(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine800. The I/O components850include components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components850that are included in a particular machine800will depend on the type of machine. 
For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components850may include many other components that are not shown inFIG.8. The I/O components850are grouped according to functionality merely for simplifying the following discussion and the grouping is in no way limiting. In various example embodiments, the I/O components850may include output components852and input components854. The output components852may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), other signal generators, and so forth. The input components854may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. Communication may be implemented using a wide variety of technologies. The I/O components850may include communication components864operable to couple the machine800to a network880or devices870via a coupling882and a coupling872, respectively. For example, the communication components864may include a network interface component or another suitable device to interface with the network880. In further examples, the communication components864may include wired communication components, wireless communication components, cellular communication components, and other communication components to provide communication via other modalities. The devices870may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a universal serial bus (USB)). For example, as noted above, the machine800may correspond to any one of the compute service manager108, the execution platform110, and the devices870may include any other computing device described herein as being in communication with the data platform102. The various memories (e.g.,830,832,834, and/or memory of the processor(s)810and/or the storage unit836) may store one or more sets of instructions816and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. These instructions816, when executed by the processor(s)810, cause various operations to implement the disclosed embodiments. As used herein, the terms “machine-storage medium,” “device-storage medium,” and “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms refer to a single or multiple transitory or non-transitory storage devices and/or media (e.g., a centralized or distributed database, and/or associated caches and servers) that store executable transitory or non-transitory instructions and/or data. The terms shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. 
Specific examples of machine-storage media, computer-storage media, and/or device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), field-programmable gate arrays (FPGAs), and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium” discussed below. In various example embodiments, one or more portions of the network880may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local-area network (LAN), a wireless LAN (WLAN), a wide-area network (WAN), a wireless WAN (WWAN), a metropolitan-area network (MAN), the Internet, a portion of the Internet, a portion of the public switched telephone network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, the network880or a portion of the network880may include a wireless or cellular network, and the coupling882may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling882may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High-Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology. The instructions816may be transmitted or received over the network880using a transmission medium via a network interface device (e.g., a network interface component included in the communication components864) and utilizing any one of a number of well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions816may be transmitted or received using a transmission medium via the coupling872(e.g., a peer-to-peer coupling) to the devices870. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure. The terms “transmission medium” and “signal medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions816for execution by the machine800, and include digital or analog communications signals or other intangible media to facilitate communication of such software. Hence, the terms “transmission medium” and “signal medium” shall be taken to include any form of modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. 
The terms “machine-readable medium,” “computer-readable medium,” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. The terms are defined to include both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Similarly, the methods described herein may be at least partially processor-implemented. For example, at least some of the operations of the process or operations700may be performed by one or more processors. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but also deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment, or a server farm), while in other embodiments the processors may be distributed across a number of locations. Although the embodiments of the present disclosure have been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader scope of the inventive subject matter. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof show, by way of illustration, and not of limitation, specific embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. Such embodiments of the inventive subject matter may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed. Thus, although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent, to those of skill in the art, upon reviewing the above description. In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. 
In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim.
11860849
For clarity, identical reference numbers have been used, where applicable, to designate identical elements that are common between figures. It is contemplated that features of one embodiment may be incorporated in other embodiments without further recitation. DETAILED DESCRIPTION In the following description, numerous specific details are set forth to provide a more thorough understanding of the various embodiments. However, it will be apparent to one skilled in the art that the inventive concepts may be practiced without one or more of these specific details. Many applications that rely on multi-datastore environments need to keep these multiple datastores synchronized. Some applications rely on traditional multi-datastore synchronization techniques such as dual-writes, distributed transactions, or the like. Such traditional multi-datastore synchronization techniques have limitations including datastores remaining out of sync until a repair routine is applied, complications arising when more than two datastores are involved, inconsistent features and behavior across different datastores, heterogeneous transaction models, or the like. Some applications rely on Change-Data-Capture (CDC) techniques, which address some of the limitations of traditional multi-datastore synchronization techniques by capturing committed changes from a source datastore in near real-time and enabling propagation of those changes to derived datastores, downstream consumers, or the like. However, many CDC techniques rely on transaction logs, which often have limited retention and are not guaranteed to contain the full history of changes of the source datastore. Prior-art techniques that have tried to address the limitations of CDC techniques suffer from drawbacks of their own, including propagation delays caused when processing of log events is stopped while a dump is in progress, limitations regarding when dump processing can be executed (e.g., only during a bootstrap phase or when data loss is detected on a transaction log), disruption in propagation of real-time changes due to reliance on techniques that block write traffic to tables during dump processing, and reliance on non-transferrable advanced database features that cannot be used in heterogeneous multi-datastore environments. In contrast, disclosed techniques allow for concurrent log and dump processing across multiple generic databases in a multi-datastore environment. In disclosed embodiments, watermark based log and dump processing begins when a CDCLog process instance pauses the log event processing of a change log that includes one or more log events associated with one or more changes in source datastore(s). When the log event processing is paused, the CDCLog process instance generates, in a watermark table, a low watermark entry. The CDCLog process instance selects, from source datastore(s), a chunk comprising one or more rows of data. The CDCLog process instance stores the chunk in memory, indexed by primary key. The CDCLog process instance generates, in the watermark table, a high watermark entry and then resumes log event processing of the change log. The CDCLog process instance determines whether a low watermark event associated with the low watermark entry has been received. When a low watermark event has been received, the CDCLog process instance compares the one or more rows in the chunk with one or more log events occurring after the low watermark event in the change log to determine one or more conflicting rows.
The CDCLog process instance removes one or more conflicting rows from the chunk. The CDCLog process instance determines whether a high watermark event associated with the high watermark entry has been received. When a high watermark event has been received, the CDCLog process instance generates an event for each of the non-conflicting rows in the chunk. The CDCLog process instance sends events associated with the non-conflicting rows in the chunk to the output (e.g., a sink datastore) prior to processing any further log events in the change log. Advantageously, disclosed techniques allow for concurrent log and dump processing, enabling high availability of real-time events to downstream consumers, and reducing propagation delays from the source datastore to the derived datastores by capturing dumps reflecting the full state of the source datastore via chunks interleaved with the real-time events. Disclosed techniques also allow downstream consumers to trigger, pause, or resume dumps after the last completed chunk at any time, thereby allowing for real-time and efficient customization of the log and dump processing without needing to restart the dump processing from the beginning. Additionally, disclosed techniques do not require locks on tables, thereby minimizing the impact on bandwidth and write traffic in databases. Further, disclosed techniques can be configured for a variety of relational databases, allowing for consistent behavior when deployed in heterogeneous multi-datastore environments that contain different kinds of database systems. FIG.1illustrates a computing system100configured to implement one or more aspects of the present disclosure. As shown, computing system100includes, without limitation, computing device101, source datastore(s)120, Change-Data-Capture Log (CDCLog) platform140, and sink datastore(s)160. Computing device101includes an interconnect (bus)112that connects one or more processor(s)102, an input/output (I/O) device interface104coupled to one or more input/output (I/O) devices108, memory115, a storage114, and a network interface106coupled to network110. Computing device101includes a desktop computer, a laptop computer, a smart phone, a personal digital assistant (PDA), tablet computer, or any other type of computing device configured to receive input, process data, and optionally display images, and is suitable for practicing one or more embodiments. Computing device101described herein is illustrative, and any other technically feasible configurations fall within the scope of the present disclosure. In some embodiments, computing device101includes any technically feasible internet-based computing system, such as a distributed computing system or a cloud-based storage system. In some embodiments, computing device101includes, without limitation, a plurality of networks, a plurality of servers, a plurality of operating systems, a plurality of storage devices, or the like. The server may be a standalone server, a cluster or “farm” of servers, one or more network appliances, or any other device suitable for implementing one or more aspects of the present disclosure. Interconnect (bus)112includes one or more reconfigurable interconnects that link one or more components of computing device101such as one or more processors, one or more input/output ports, storage, memory, or the like. In some embodiments, interconnect (bus)112combines the functions of a data bus, an address bus, a control bus, or the like.
In some embodiments, interconnect (bus)112includes an I/O bus, a single system bus, a shared system bus, a local bus, a peripheral bus, an external bus, a dual independent bus, or the like. Processor(s)102includes any suitable processor implemented as a central processing unit (CPU), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), an artificial intelligence (AI) accelerator, any other type of processor, or a combination of different processors, such as a CPU configured to operate in conjunction with a GPU. In general, processor(s)102may be any technically feasible hardware unit capable of processing data and/or executing software applications. Further, in the context of this disclosure, the computing elements shown in computing device101may correspond to a physical computing system (e.g., a system in a data center) or may be a virtual computing instance executing within a computing cloud. I/O device interface104enables communication of I/O devices108with processor(s)102. I/O device interface104generally includes the requisite logic for interpreting addresses corresponding to I/O devices108that are generated by processor(s)102. I/O device interface104may also be configured to implement handshaking between processor(s)102and I/O devices108, and/or generate interrupts associated with I/O devices108. I/O device interface104may be implemented as any technically feasible CPU, ASIC, FPGA, or any other type of processing unit or device. I/O devices108include devices capable of providing input, such as a keyboard, a mouse, a touch-sensitive screen, a microphone, a remote control, and so forth, as well as devices capable of providing output, such as a display device. Additionally, I/O devices108may include devices capable of both receiving input and providing output, such as a touchscreen, a universal serial bus (USB) port, and so forth. I/O devices108may be configured to receive various types of input from an end-user of computing system100, and to also provide various types of output to the end-user of computing system100, such as displayed digital images or digital videos or text. In some embodiments, one or more of I/O devices108are configured to couple computing system100to a network110. In some embodiments, I/O devices108can include, without limitation, a smart device such as a personal computer, personal digital assistant, tablet computer, mobile phone, smart phone, media player, mobile device, or any other device suitable for implementing one or more aspects of the present invention. I/O devices108can augment the functionality of computing system100by providing various services, including, without limitation, telephone services, navigation services, infotainment services, or the like. Further, I/O devices108can acquire data from sensors and transmit the data to computing system100. I/O devices108can acquire sound data via an audio input device and transmit the sound data to computing system100for processing. Likewise, I/O devices108can receive sound data from computing system100and transmit the sound data to an audio output device so that the user can hear audio originating from computing system100. In some embodiments, I/O devices108include sensors configured to acquire biometric data from the user (e.g., heartrate, skin conductance, or the like) and transmit signals associated with the biometric data to computing system100.
The biometric data acquired by the sensors can then be processed by a software application running on computing system100. In various embodiments, I/O devices108include any type of image sensor, electrical sensor, biometric sensor, or the like, that is capable of acquiring biometric data including, for example and without limitation, a camera, an electrode, a microphone, or the like. In some embodiments, I/O devices108include, without limitation, input devices, output devices, and devices capable of both receiving input data and generating output data. I/O devices108can include, without limitation, wired or wireless communication devices that send data to or receive data from smart devices, headphones, smart speakers, sensors, remote databases, other computing devices, or the like. Additionally, in some embodiments, I/O devices108may include a push-to-talk (PTT) button, such as a PTT button included in a vehicle, on a mobile device, on a smart speaker, or the like. Storage114includes non-volatile storage for applications and data, and may include fixed or removable disk drives, flash memory devices, and CD-ROM, DVD-ROM, Blu-Ray, HD-DVD, or other magnetic, optical, solid state storage devices, or the like. In some embodiments, any of the software programs on CDCLog Platform140are stored in storage114and loaded into memory115on computing device101when executed. Memory115includes a random access memory (RAM) module, a flash memory unit, or any other type of technically feasible memory unit or combination thereof on computing device101. Processor(s)102, I/O device interface104, and network interface106are configured to read data from and write data to memory115. In some embodiments, any of the software programs on CDCLog Platform140are stored in memory115on computing device101. Network110includes any technically feasible type of communications network that allows data to be exchanged between computing system100and external entities or devices, such as a web server or another networked computing device. For example, network110may include a wide area network (WAN), a local area network (LAN), a wireless (WiFi) network, and/or the Internet, among others. Network interface106is a computer hardware component that connects processor102to a communication network. Network interface106may be implemented in computing system100as a stand-alone card, processor, or other hardware device. In some embodiments, network interface106may be configured with cellular communication capability, satellite telephone communication capability, a wireless WAN communication capability, or other types of communication capabilities that allow for communication with a communication network and other computing devices external to computing system100. Source datastore(s)120include any technically feasible storage infrastructure for storing and managing collections of data. In some embodiments, source datastore(s)120include one or more event producers, one or more heterogeneous datastores, one or more relational databases, one or more file systems, one or more distributed datastores, one or more directory services, one or more active databases, one or more cloud databases, one or more data warehouses, one or more distributed databases, one or more embedded database systems, one or more document-oriented databases, one or more federated database systems, one or more array database management systems, one or more real-time databases, one or more temporal databases, one or more logic databases, or the like. 
In some embodiments, source datastore(s)120operate on a plurality of servers, a plurality of storage devices, or the like. The server may be a standalone server, a cluster or “farm” of servers, one or more network appliances, or the like. In some embodiments, source datastore(s)120include data managed by one or more teams, one or more business entities, or the like. Source datastore(s)120include, without limitation, source database(s)121, watermark table125, and change log127. Source database(s)121include any technically feasible collection of one or more sets of related data. In some embodiments, source database(s)121include one or more active databases, one or more cloud databases, one or more relational databases, one or more data warehouses, one or more distributed databases, one or more embedded database systems, one or more document-oriented databases, one or more federated database systems, one or more array database management systems, one or more real-time databases, one or more temporal databases, one or more logic databases, or the like. In some embodiments, source database(s)121include a database management system configured to provide an interface for interaction between source database(s)121and users, applications, or the like. Source database(s)121include, without limitation, table(s)122. Table(s)122include any technically feasible organization of related data in a table format or the like. In some embodiments, table(s)122include one or more sets of attributes, data elements, data values, or the like. In some embodiments, table(s)122are associated with metadata, constraints, or the like. Table(s)122include, without limitation, one or more column(s)124and one or more row(s)123. Column(s)124include one or more sets of data values of a particular type such as text values, numbers, points, or the like. In some embodiments, column(s)124are associated with one or more sets of attributes, one or more fields, or the like. In some embodiments, column(s)124include one or more columns designated as a primary key by which row(s)123can be uniquely identified, cross-referenced, or the like. In some embodiments, column(s)124include natural keys derived from application data, surrogate keys associated with an object or an entity, alternate keys, or the like. Row(s)123include one or more structured data items associated with table(s)122. In some embodiments, row(s)123include a succession of data values arranged in one or more column(s)124. In some embodiments, row(s)123include an ordered sequence of data elements or the like. In some embodiments, each row123includes a single structured data value or the like. Watermark table125includes any table in source datastore(s)120configured to store a row of data or the like. In some embodiments, watermark table125is stored in a dedicated namespace to prevent collision with application tables in source datastore(s)120. In some embodiments, watermark table125includes a single row of data which stores a Universally Unique Identifier (UUID) field. In some embodiments, a watermark (e.g., low watermark, high watermark) is generated by updating the row to reflect a specific UUID value. In some embodiments, the row update results in a change event which is captured and eventually received through change log127.
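By way of a non-limiting illustration, the watermark write described above may be sketched in Python; the table name, column name, question-mark placeholder style, and the database connection object conn are assumptions made for this sketch only and are not elements recited by the disclosure:

    import uuid

    def write_watermark(conn, value):
        # A single-row UPDATE on the dedicated watermark table; the row
        # change is captured by the source database and later surfaces in
        # the change log as a watermark event carrying this UUID value.
        cursor = conn.cursor()
        cursor.execute("UPDATE watermark SET value = ?", (value,))
        conn.commit()

    low_watermark = str(uuid.uuid4())    # written before chunk selection
    high_watermark = str(uuid.uuid4())   # written after chunk selection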
Change log127includes any technically feasible history of actions executed or performed on source datastore(s)120or the like. In some embodiments, change log127includes a log of one or more log event(s)128reflecting one or more changes in source datastore(s)120(e.g., changed rows, changed column schema, or the like). In some embodiments, the change log127includes a linear history of committed changes and non-stale reads associated with source datastore(s)120. In some embodiments, log event(s)128include any event emitted by source datastore(s)120based on changes to rows, columns, or the like. In some embodiments, the log event(s)128may be emitted in real-time. In some embodiments, log event(s)128are of one or more types including create, update, delete, or the like. In some embodiments, log event(s)128are associated with one or more attributes including a log sequence number (LSN), the column values at the time of the operation (e.g., column title at a given point in time or the like), the schema that applied at the time of the operation, the full pre- and post-image of the row from the time of the change, a link to the last log record, type of database log record, information associated with the change that triggered the log record to be written, or the like.
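For illustration only, the log event attributes enumerated above may be modeled as a simple data structure; the Python rendering and field names below are hypothetical and are not mandated by the disclosure:

    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class LogEvent:
        lsn: int                           # log sequence number
        kind: str                          # "create", "update", or "delete"
        key: Any                           # primary-key value of the changed row
        pre_image: Optional[dict] = None   # full row image before the change
        post_image: Optional[dict] = None  # full row image after the change
        schema_id: Optional[str] = None    # schema that applied at change time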
Sink datastore(s)160include any technically feasible storage infrastructure for storing and managing collections of data. In some embodiments, sink datastore(s)160include one or more heterogeneous datastores, one or more relational databases, one or more file systems, one or more distributed datastores, one or more directory services, one or more active databases, one or more cloud databases, one or more data warehouses, one or more distributed databases, one or more embedded database systems, one or more document-oriented databases, one or more federated database systems, one or more array database management systems, one or more real-time databases, one or more temporal databases, one or more logic databases, or the like. In some embodiments, sink datastore(s)160operate on a plurality of servers, a plurality of storage devices, or the like. The server may be a standalone server, a cluster or “farm” of servers, one or more network appliances, or the like. In some embodiments, sink datastore(s)160include data managed by one or more teams, one or more business entities, or the like. In some embodiments, sink datastore(s)160include any downstream application configured to propagate received events (e.g., stream processing application, data analytics platform), search index, cache, or the like. Sink datastore(s)160include, without limitation, derived datastore(s)161, stream(s)162, and application programming interfaces (API(s))163. Derived datastore(s)161include any technically feasible storage infrastructure storing data derived from source datastore(s)120. In some embodiments, derived datastore(s)161include one or more heterogeneous datastores, one or more relational databases, one or more file systems, one or more distributed datastores, one or more directory services, one or more active databases, one or more cloud databases, one or more data warehouses, one or more distributed databases, one or more embedded database systems, one or more document-oriented databases, one or more federated database systems, one or more array database management systems, one or more real-time databases, one or more temporal databases, one or more logic databases, or the like. Stream(s)162include any technically feasible software for handling real-time data feeds, including stream processing applications or the like. In some embodiments, stream(s)162includes functionality to allow users to receive output from CDCLog platform140and publish the data to any system or real-time application or the like. In some embodiments, stream(s)162stores key-value messages associated with one or more source datastore(s)120, partitions the data into one or more partitions, orders messages within each partition based on an offset associated with the position of a message within the partition, indexes and stores messages with a timestamp, or the like. In some embodiments, stream(s)162is configured to allow one or more processes to read messages from the one or more partitions or the like. API(s)163include any technically feasible programming interface that enables propagation of output produced by output writer144or the like. In some embodiments, API(s)163include functionality to enable programming of output writer144. In some embodiments, API(s)163include functionality to read configuration data stored in output writer144. CDCLog platform140includes any technically feasible Internet-based computing system, such as a distributed computing system or a cloud-based storage system. In some embodiments, CDCLog platform140includes, without limitation, a plurality of networks, a plurality of servers, a plurality of operating systems, a plurality of storage devices, or the like. The server may be a standalone server, a cluster or “farm” of servers, one or more network appliances, or any other device suitable for implementing one or more aspects of the present disclosure. CDCLog platform140includes, without limitation, Change-Data-Capture (CDC) connector141, centralized coordination service142, dump interface143, output writer144, CDCLog process instance(s)145, and CDCLog data150. CDC connector141includes any technically feasible connector configured to provide data transportation infrastructure for routing and publishing one or more events. The one or more events may be routed and/or published by the CDC connector141in real-time. In some embodiments, CDC connector141publishes events to the transport layer in a bus architecture or the like. In some embodiments, CDC connector141is configured to capture committed changes from source datastore(s)120in real-time from change log127, dump(s)151, or the like, and propagate the changes to sink datastore(s)160or the like. In some embodiments, CDC connector141includes an algorithm configured to optimize the flow of events to minimize propagation delays from source datastore(s)120to sink datastore(s)160or the like. In some embodiments, CDC connector141includes functionality associated with custom wire format, serialization (e.g., keeping track of different formats of payload to optimize on transportation and storage), batching, throttling, dynamic switching between one or more source database(s)121, data enrichment, data synchronization, or the like. In some embodiments, CDC connector141includes functionality associated with producing, collecting, processing, aggregating, and routing events. In some embodiments, CDC connector141includes functionality associated with deduplication, schematization, resilience, fault tolerance, or the like. In some embodiments, CDC connector141provides functionality that enables users to write custom processing logic (e.g., filter, transformation), build internal representation(s) of event processing steps, customize the interpretation of event processing steps, customize the initialization of one or more operators that execute event processing steps, or the like.
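As a minimal sketch of such user-supplied processing logic, assuming (for this sketch only) that captured events are exposed to user code as an iterable and that the filter predicate and transformation are provided as callables:

    def process_events(events, keep, transform):
        # Apply a user-supplied filter and transformation to each captured
        # event before it is routed and published downstream.
        for event in events:
            if keep(event):
                yield transform(event)

    # Example: forward only "update" events, stripping the pre-image.
    # updates = process_events(events,
    #                          keep=lambda e: e.kind == "update",
    #                          transform=lambda e: {"key": e.key,
    #                                               "row": e.post_image})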
Centralized coordination service142includes any technically feasible service configured to provide one or more distributed services on CDCLog platform140, including maintaining configuration or naming information, providing distributed synchronization, providing group management services and presence protocols, or the like. In some embodiments, centralized coordination service142includes functionality to automatically detect one or more failures (e.g., mismatch between view of container resources and container's runtime view, failure to elect a cluster leader, unclean leader election, unstable container resources with periodic restart/failure, network connectivity issues) and to perform one or more actions to correct the detected failure (e.g., proactive termination of affected containers, adjust job manager or task manager). In some embodiments, when CDCLog platform140uses an active-passive architecture (e.g., with one active instance of CDCLog process instance(s)145and one or more passive standby instances of CDCLog process instance(s)145), centralized coordination service142is configured to enable leader election to determine an active instance of CDCLog process instance(s)145. In some embodiments, the leadership of a given instance of CDCLog process instance(s)145is a lease and is lost if not refreshed on time, allowing another instance of CDCLog process instance(s)145to take over. In some embodiments, centralized coordination service142checks whether a leadership lease is valid before executing a protected block of code, and uses concurrency control or the like to guard against outdated leaders. Dump interface143includes any technically feasible programming interface that enables interaction with log and dump processing functionality on CDCLog platform140. In some embodiments, dump interface143includes functionality to enable triggering of dumps on demand, scheduling of dumps, or the like. In some embodiments, dump interface143includes functionality to enable execution of dump requests for all tables122, for a specific table in source datastore120, for a specific set of primary keys of a table in source database121, or the like. In some embodiments, dump interface143includes functionality to configure the size of one or more chunk(s)152of dump(s)151. In some embodiments, dump interface143includes functionality to delay processing of one or more chunk(s)152and to allow only log processing for a predetermined period of time. In some embodiments, dump interface143includes an algorithm that automatically determines the size of one or more chunk(s)152, the timing of one or more delays in dump processing, or the like in order to meet one or more factors (e.g., system throughput requirements, one or more processing factors associated with CDCLog process instance(s)145) or the like. In some embodiments, dump interface143includes one or more controls for controlling dump processing during runtime (e.g., throttling, pausing, resuming). In some embodiments, dump interface143includes functionality to check state information associated with log or dump processing, enable programming of configuration data associated with log or dump processing, or the like. In some embodiments, dump interface143provides functionality to enable users to build and operate custom log or dump processing applications or the like. In some embodiments, dump interface143includes functionality to enable customization of one or more statistical properties associated with a target rate of dump processing (e.g., mean values for dump events emitted per second, minimum or maximum values of dump events emitted per second, standard deviation, range of values, median values, and/or the like). In some embodiments, the customization of the target rate of dump processing is based on the variations in one or more statistical properties associated with real-time bandwidth requirement of log processing events (e.g., mean values, minimum or maximum values, standard deviation, range of values, median values, and/or the like).
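One possible shape for such a control surface is sketched below; the class, methods, and default values are illustrative assumptions rather than an interface recited by the disclosure:

    class DumpInterface:
        def __init__(self, chunk_size=3):
            self.chunk_size = chunk_size       # configurable chunk size (e.g., 3 rows)
            self.paused = False
            self.last_completed_chunk = None   # progress marker used on resume

        def trigger(self, tables=None, primary_keys=None):
            # Request a dump for all tables, a specific table, or a
            # specific set of primary keys of a table.
            ...

        def pause(self):
            self.paused = True                 # log processing continues unaffected

        def resume(self):
            self.paused = False                # restarts after the last completed chunk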
Output writer144includes any technically feasible writer configured to collect one or more events and write to an output such as sink datastore(s)160or the like. In some embodiments, log and dump events are sent to the output using a non-blocking operation or the like. In some embodiments, output writer144runs its own thread and collects one or more events in an output buffer prior to writing the events to an output in order. In some embodiments, output writer144includes retrieved schema153or an associated schema identifier in the output. In some embodiments, the output buffer is stored in memory. In some embodiments, output writer144includes an interface configured to plugin to any output such as sink datastore(s)160. In some embodiments, output writer144includes an event serializer configured to serialize events into a customized format prior to appending the events to an output buffer, writing events to an output, or the like. In some embodiments, output writer144includes an interface to allow plugin of a custom formatter for serializing events in a customized format. In some embodiments, output writer144appends events to an output buffer using one thread, and uses another thread to consume the events from the output buffer and send them to the output.
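A bare-bones sketch of the buffer-and-drain pattern described above, assuming Python's standard threading and queue modules and a hypothetical sink object exposing a write method:

    import queue
    import threading

    class OutputWriter:
        def __init__(self, sink):
            self.buffer = queue.Queue()   # in-memory output buffer
            self.sink = sink
            # One thread appends events; this second thread drains them
            # to the output in order.
            threading.Thread(target=self._drain, daemon=True).start()

        def append(self, event):
            self.buffer.put(event)        # effectively non-blocking for the caller

        def _drain(self):
            while True:
                self.sink.write(self.buffer.get())   # FIFO preserves ordering

Because the buffer is drained first-in, first-out, interleaved log and dump events reach the output in the order in which they were appended.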
CDCLog process instance(s)145include any technically feasible processor configured to perform concurrent log and dump processing or the like. In some embodiments, CDCLog process instance(s)145are configured in an active-passive architecture, with one active instance of CDCLog process instance(s)145and one or more passive standby instances of CDCLog process instance(s)145. In some embodiments, one or more CDCLog process instance(s)145are deployed for one or more availability zones, one or more regions, or the like. In some embodiments, CDCLog process instance(s)145include one or more algorithms configured to customize the execution of concurrent log and dump processing based on one or more processing factors including job state (e.g., stateless parallel processing, large local states), job complexity (e.g., one or more parallel jobs with operators chained together, one or more shuffling stages, complex sessionization logic), windows or sessions information (e.g., range of window size required to capture a transaction start or end event, user behavior session windows), traffic pattern (e.g., bursty traffic pattern with sudden spikes, consistent or fixed traffic patterns), failure recovery (e.g., low failure recovery latency), backfill or rewind of processing job (e.g., replay of data from a batch source, dynamically switching source, rewind from a specified checkpoint), resource contention (e.g., CPU, network bandwidth, memory, network reliability), tuneable tradeoffs (e.g., duplicates versus latency, consistency versus availability, strict ordering versus random ordering), ordering of events (e.g., strict ordering assumptions, random ordering), delivery or processing guarantees (e.g., exactly-once processing guarantee, high durability guarantee, service level guarantee), domain specific platform requirements (e.g., streaming job lifecycle details, operation overhead, agility of development and deployment of changes), failure characteristics (e.g., zone failure, instance failure, cluster failure, network blips, inter-service congestion/backpressure, regional disaster failures, transient failure), cluster configuration (e.g., enabling or disabling unclean leader election based on availability versus durability tradeoff, default replication factor, minimum insync replicas, acknowledgement requirement), or the like. CDCLog data150include any data associated with any component of CDCLog platform140including, without limitation, data associated with any of Change-Data-Capture (CDC) connector141, centralized coordination service142, dump interface143, output writer144, and CDCLog process instance(s)145. CDCLog data150includes, without limitation, dump(s)151, output buffer154, and state information155. Dump(s)151include any data that captures the full state of any component of source datastore(s)120, including source database(s)121, table(s)122, column(s)124, row(s)123, or the like. In some embodiments, dump(s)151include the full state of all tables122, a specific table in source datastore120, a specific set of primary keys of a table in source database121, or the like. Dump(s)151include one or more chunk(s)152. Chunk(s)152include one or more portions of dump(s)151. In some embodiments, chunk(s)152are of a configurable size (e.g., 3 rows). In some embodiments, one or more chunk(s)152are selected for one or more row(s)123that meet specified criteria (e.g., rows with primary key greater than a given integer, or the like). In some embodiments, a new chunk152is selected by sorting table(s)122using the primary key (e.g., ascending primary key order), and including one or more row(s)123where the primary key is greater than the last primary key of the previous chunk.
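Assuming, for this sketch only, a SQL source whose table has a single-column primary key named pk and a driver using question-mark placeholders, the ascending-primary-key chunk selection described above may be rendered as:

    def select_next_chunk(conn, table, last_pk, chunk_size):
        # Take the next rows strictly beyond the previous chunk's last
        # primary key; no table locks are required.
        cursor = conn.cursor()
        cursor.execute(
            f"SELECT * FROM {table} WHERE pk > ? ORDER BY pk ASC LIMIT ?",
            (last_pk, chunk_size))
        rows = cursor.fetchall()
        # Store the chunk in memory, indexed by primary key (assumed here
        # to be the first column of each returned row).
        return {row[0]: row for row in rows}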
Retrieved schema153includes metadata information associated with one or more column(s)124such as column name, column type, primary key information, column values, change deltas associated with updates to column attributes, or the like. In some embodiments, the full schema of table(s)122is selected and registered in centralized coordination service142prior to initiating processing of change log127. In some embodiments, retrieved schema153includes a full schema, a schema identifier associated with a schema, or the like. Output buffer154includes one or more events stored in memory by output writer144prior to writing the events to an output such as sink datastore(s)160. In some embodiments, the one or more events are serialized into a customized format prior to being appended to output buffer154. State information155includes any information associated with the state of any component, process, or the like in computing system100. In some embodiments, state information includes information associated with the state of log and dump processing (e.g., current state, goal state, checkpoint state), the state of active or passive CDCLog process instance(s), or the like. In operation, watermark based log and dump processing begins when CDCLog process instance145pauses the log event processing of change log127that includes one or more log events128associated with one or more changes in source datastore(s)120. When the log event processing is paused, CDCLog process instance145generates, in watermark table125, a low watermark entry. CDCLog process instance145selects, from source datastore(s)120, chunk152comprising one or more rows of data. CDCLog process instance145stores chunk152in memory, indexed by primary key. CDCLog process instance145generates, in watermark table125, a high watermark entry and then resumes log event processing of any further log events in change log127. CDCLog process instance145determines whether a low watermark event associated with the low watermark entry has been received. When a low watermark event has been received, CDCLog process instance145compares the one or more rows in chunk152with one or more log events128occurring after the low watermark event in the change log127to determine one or more conflicting rows. CDCLog process instance145removes one or more conflicting rows from chunk152. CDCLog process instance145determines whether a high watermark event associated with the high watermark entry has been received. When a high watermark event has been received, CDCLog process instance145generates an event for each of the non-conflicting rows in chunk152. CDCLog process instance145sends events associated with the non-conflicting rows in chunk152to the output (e.g., sink datastore(s)160) prior to processing any further log events in change log127. Algorithm 1 is an example algorithm describing the watermark based log and dump processing:

Algorithm 1: Watermark-based Chunk Selection
Input: table
(1) pause log event processing
    lw := uuid( ), hw := uuid( )
(2) update watermark table set value = lw
(3) chunk := select next chunk from table
(4) update watermark table set value = hw
(5) resume log event processing
    inwindow := false
    // event processing loop:
    while true do
        e := next event from changelog
        if not inwindow then
            if e is not watermark then
                append e to outputbuffer
            else if e is watermark with value lw then
                inwindow := true
        else
            if e is not watermark then
(6)             if chunk contains e.key then
                    remove e.key from chunk
                append e to outputbuffer
            else if e is watermark with value hw then
(7)             for each row in chunk do
                    append row to outputbuffer
        // other steps of event processing loop

As described in Algorithm 1, CDCLog process instance145pauses log event processing for a certain amount of time (step 1).
CDCLog process instance145generates high and low watermarks by updating watermark table125(steps 2 and 4). CDCLog process instance145selects chunk152between the two watermarks and stores the chunk152in memory (step 3). After the high watermark is written, CDCLog process instance145resumes log event processing, sending received log events to the output writer144, and watching for the low watermark event in change log127. Once the low watermark event is received, CDCLog process instance145removes conflicting rows from chunk152for all primary keys that changed between the watermarks (step 6). Once the high watermark event is received, CDCLog process instance145uses output writer144to append all remaining rows to the output buffer before processing log events again in a sequential manner (step 7). In some embodiments, CDCLog process instance145returns a state associated with the chunk selection of step 3, with the state representing committed changes up to a certain point in history. In some embodiments, CDCLog process instance145executes the chunk selection of step 3 on a specific position of change log127, considering changes up to that point. Algorithm 1 uses CDCLog process instance145to determine a window on change log127which will necessarily contain the chunk selection. The window is opened by writing a low watermark, then the chunks are selected, and the window is closed by writing a high watermark. As the execution position (e.g., the specific log position on change log127where the chunk selection executes) is unknown, all selected chunk rows that collide with log events128within that window are removed. This ensures that the chunk selection cannot override the history of log changes. In some embodiments, CDCLog process instance145reads the table state from the time of the low watermark write, or later. In some embodiments, CDCLog process instance145sees the changes that committed before execution of the chunk selection. In some embodiments, CDCLog process instance(s)145executes the chunk selection before writing the high watermark in watermark table125. In some embodiments, CDCLog process instance(s)145repeats Algorithm 1 as long as table(s)122have remaining chunks152.
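To make the control flow of Algorithm 1 concrete, the following schematic Python rendering may be considered; db, changelog, and out are hypothetical helpers standing in for the source datastore, the change log reader, and output writer144, and this sketch is one possible rendering rather than the required implementation:

    import uuid

    def process_one_chunk(db, changelog, out):
        changelog.pause()                        # step 1
        lw, hw = str(uuid.uuid4()), str(uuid.uuid4())
        db.write_watermark(lw)                   # step 2: low watermark
        chunk = db.select_next_chunk()           # step 3: dict keyed by primary key
        db.write_watermark(hw)                   # step 4: high watermark
        changelog.resume()                       # step 5

        in_window = False
        for e in changelog:                      # event processing loop
            if not in_window:
                if e.is_watermark and e.value == lw:
                    in_window = True             # window opens
                elif not e.is_watermark:
                    out.append(e)
            else:
                if e.is_watermark and e.value == hw:
                    for row in chunk.values():   # step 7: emit non-conflicting rows
                        out.append(row)
                    return                       # regular log processing resumes
                if not e.is_watermark:
                    chunk.pop(e.key, None)       # step 6: drop conflicting keys
                    out.append(e)

As in Algorithm 1, log events are never withheld from the output; only chunk rows whose primary keys changed inside the watermark window are discarded.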
FIG.2is a flowchart of method steps200for watermark based log and dump processing on computing system100, according to various embodiments of the present disclosure. Although the method steps are described in conjunction with the system ofFIG.1, persons skilled in the art will understand that any system configured to perform the method steps in any order falls within the scope of the present disclosure. In step201, CDCLog process instance145pauses the log event processing of change log127that includes one or more log events128associated with one or more changes in source datastore(s)120. In some embodiments, CDCLog process instance145pauses transmission of one or more log events128to sink datastore(s)160. In some embodiments, CDCLog process instance145pauses log event processing of change log127for a period of time determined based on the tradeoff between executing the watermark update and chunk selection and the need to avoid caching log event entries. In some embodiments, CDCLog process instance145pauses log event processing of change log127for a period of time based on a tradeoff between chunk size required to advance chunk processing and high availability requirements for the propagation of real-time changes. In some embodiments, CDCLog process instance145pauses log event processing for a period of time based on user-defined runtime configuration settings or the like. In step202, when the log event processing is paused, CDCLog process instance145generates, in watermark table125, a low watermark entry. In some embodiments, generating the low watermark entry includes updating the watermark table125using a single write operation. In some embodiments, CDCLog process instance145generates the low watermark entry by updating a row in watermark table125to reflect a specific UUID value. In step203, CDCLog process instance145selects, from source datastore(s)120, chunk152comprising one or more rows of data. In some embodiments, CDCLog process instance145selects a chunk152associated with committed changes in source datastore(s)120up to a specific point in history corresponding to the log entry associated with the generation of the low watermark entry or later, such as writes that committed after the log entry associated with the generation of the low watermark entry but before the chunk selection. In some embodiments, the chunk selection can be paused, throttled, or resumed at any time. In some embodiments, CDCLog process instance145selects chunk152based on dump processing status information (e.g., progress tracking information associated with the last completed chunk). For instance, if dump processing was paused prior to completion, CDCLog process instance145executes the chunk selection by resuming processing from the last completed chunk without needing to start from the beginning. In some embodiments, chunk152comprises data from a specific table122, across all tables122, for a specific set of primary keys of a table122, or the like. In some embodiments, the chunk selection runs without using locks on tables or blocking read or write traffic on one or more tables of source datastore(s)120. In step204, CDCLog process instance145stores chunk152in memory, indexed by primary key. In some embodiments, chunk152is stored automatically based on data acquired from sensors located on one or more I/O devices108. For instance, chunk152can be stored based on the sensor capturing the user voicing a save command, motion and/or a gesture by the user associated with the initiation of storage of chunk152, a user interaction with an input device, and/or the like. In step205, CDCLog process instance145generates, in watermark table125, a high watermark entry. In some embodiments, generating the high watermark entry includes updating the watermark table125using a single write operation. In some embodiments, CDCLog process instance145generates the high watermark entry by updating a row in watermark table125to reflect a specific UUID value. In step206, CDCLog process instance145resumes log event processing of change log127. In some embodiments, CDCLog process instance145resumes transmission of any further log events128in change log127to sink datastore(s)160. In some embodiments, log event(s)128are of one or more types including create, update, delete, or the like associated with one or more primary keys. In some embodiments, CDCLog process instance145resumes processing event-by-event without needing to cache log event entries in change log127. In step207, CDCLog process instance145determines whether a low watermark event associated with the low watermark entry has been received.
In some embodiments, CDCLog process instance145compares one or more log events128with the UUID associated with a low watermark entry in watermark table125. When CDCLog process instance145determines that a low watermark event has been received, the procedure advances to step208. When CDCLog process instance145determines that a low watermark event has not been received, the procedure reverts to step206. In step208, when a low watermark event has been received, CDCLog process instance145compares the one or more rows in chunk152with one or more log events128occurring after the low watermark event in the change log127to determine one or more conflicting rows. In some embodiments, CDCLog process instance145compares one or more primary keys associated with one or more rows in chunk152to one or more primary keys associated with one or more log events128occurring after the low watermark event. In some embodiments, CDCLog process instance145identifies one or more conflicting rows when one or more rows in chunk152have one or more primary keys that overlap with the primary keys of one or more log events128. In some embodiments, conflicting rows include one or more rows in chunk152that have identical primary keys (e.g., same primary key value or the like) as the one or more log events128. In some embodiments, non-conflicting rows include one or more rows in chunk152whose primary keys do not overlap or match any primary key in the one or more log events128. In step209, CDCLog process instance145removes one or more conflicting rows from chunk152. In some embodiments, the one or more conflicting rows in chunk152are removed automatically based on data acquired from sensors located on one or more I/O devices108. For instance, one or more conflicting rows can be removed based on the sensor capturing the user voicing a remove command, motion and/or a gesture by the user associated with the initiation of removal of the one or more conflicting rows, a user interaction with an input device, and/or the like. In step210, CDCLog process instance145determines whether a high watermark event associated with the high watermark entry has been received. In some embodiments, CDCLog process instance145compares one or more log events128with the UUID associated with a high watermark entry in watermark table125. When CDCLog process instance145determines that a high watermark event has been received, the procedure advances to step211. When CDCLog process instance145determines that a high watermark event has not been received, the procedure reverts to step208. In step211, when a high watermark event has been received, CDCLog process instance145uses output writer144to generate an event for each of the non-conflicting rows in chunk152. In some embodiments, the generated events are serialized in the same format as log events128in change log127, in a customized format, or the like. In step212, CDCLog process instance145sends events associated with the non-conflicting rows in chunk152to the output prior to processing any further log events in change log127. In some embodiments, CDCLog process instance145uses output writer144to append the events associated with the non-conflicting rows in chunk152to output buffer154.
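As a small worked illustration of steps 208 and 209, suppose the chunk holds rows keyed k1 through k6 and the log events arriving between the watermarks touch keys k1 and k3 (these concrete keys anticipate the example of FIGS.3A-3Bdiscussed below):

    # Chunk held in memory, indexed by primary key (row contents elided).
    chunk = {k: {"pk": k} for k in ("k1", "k2", "k3", "k4", "k5", "k6")}

    # Primary keys seen in log events between the low and high watermarks.
    for changed_key in ("k1", "k3"):
        chunk.pop(changed_key, None)   # step 209: remove conflicting rows

    assert sorted(chunk) == ["k2", "k4", "k5", "k6"]   # non-conflicting rows remain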
In some embodiments, output writer144appends events to output buffer154using one thread, and uses another thread to consume the events from the output buffer154and send them to the output (e.g., sink datastore(s)160), thereby allowing CDCLog process instance145to resume regular log event processing of events that occur after the high watermark event. In some embodiments, events occurring up to the high watermark event are appended to the output buffer154first. Then, events associated with the non-conflicting rows in chunk152are appended to the output buffer154in order prior to appending any log events occurring after the high watermark event. FIG.3Ais an example implementation300A of log and dump processing techniques on computing system100, according to various embodiments of the present disclosure. Although the method steps are described in conjunction with the system ofFIG.1, persons skilled in the art will understand that any system configured to perform the method steps in any order falls within the scope of the present disclosure. Steps 1-4 illustrate watermark generation and chunk selection. In step 1, CDCLog process instance145pauses log processing of change log127. In steps 2 and 4, CDCLog process instance145generates a low watermark and a high watermark respectively in a watermark table125(not illustrated) in source datastore120, creating two change events associated with watermark events301and302which are eventually received via change log127. In step 3, CDCLog process instance145selects a chunk152including a chunk result set303comprising rows associated with primary keys k1, k2, k3, k4, k5, k6. FIG.3Bis an example implementation300B of log and dump processing techniques on computing system100, according to various embodiments of the present disclosure. Although the method steps are described in conjunction with the system ofFIG.1, persons skilled in the art will understand that any system configured to perform the method steps in any order falls within the scope of the present disclosure. Steps 5-7 illustrate processing of chunk152between the watermark events301and302. In step 5, CDCLog process instance145resumes log processing of change log127. In step 6, when CDCLog process instance145determines that a low watermark event301has been reached, CDCLog process instance145compares the one or more rows in chunk result set303with one or more log events occurring after the low watermark event301in the change log127to determine one or more conflicting rows. In some embodiments, CDCLog process instance145removes rows in chunk result set303that have the same primary keys as log events occurring between watermark events301and302in change log127. For instance, CDCLog process instance145removes rows associated with primary keys k1 and k3 from the chunk result set303, with the remaining rows being included in chunk result set304. In step 7, when CDCLog process instance145determines that a high watermark event302has been reached, CDCLog process instance145concludes the chunk result set processing performed in step 6. FIG.4is an example implementation400of log and dump processing techniques on computing system100, according to various embodiments of the present disclosure. Although the method steps are described in conjunction with the system ofFIG.1, persons skilled in the art will understand that any system configured to perform the method steps in any order falls within the scope of the present disclosure.
After the high watermark event302has been reached, CDCLog process instance145uses output writer144to generate events associated with chunk result set304. CDCLog process instance145then uses output writer144to append the events associated with chunk result set304to output buffer154. In some embodiments, output writer144appends events to output buffer154using one thread, and uses another thread to consume the events from the output buffer154and send them to the output (e.g., sink datastore(s)160), thereby allowing CDCLog process instance145to resume regular log event processing of events that occur after the high watermark event302. In some embodiments, events occurring up to high watermark event302are appended to the output buffer first. Then, events associated with the chunk result set304are appended to the output buffer in order prior to appending any log events occurring after high watermark event302(e.g., k1, k2, k6). As a result, log and dump events are interleaved in the output buffer. Events in the output buffer are delivered to sink datastore(s)160in order. FIG.5illustrates a network infrastructure500used to distribute content to content servers510and endpoint devices515, according to various embodiments of the invention. As shown, the network infrastructure500includes content servers510, control server520, and endpoint devices515, each of which is connected via a network505. Each endpoint device515communicates with one or more content servers510(also referred to as “caches” or “nodes”) via the network505to download content, such as textual data, graphical data, audio data, video data, and other types of data. The downloadable content, also referred to herein as a “file,” is then presented to a user of one or more endpoint devices515. In various embodiments, the endpoint devices515may include computer systems, set top boxes, mobile computers, smartphones, tablets, console and handheld video game systems, digital video recorders (DVRs), DVD players, connected digital TVs, dedicated media streaming devices (e.g., the Roku® set-top box), and/or any other technically feasible computing platform that has network connectivity and is capable of presenting content, such as text, images, video, and/or audio content, to a user. Each content server510may include a web server, a database, and a server application617configured to communicate with the control server520to determine the location and availability of various files that are tracked and managed by the control server520. Each content server510may further communicate with a fill source530and one or more other content servers510in order to “fill” each content server510with copies of various files. In addition, content servers510may respond to requests for files received from endpoint devices515. The files may then be distributed from the content server510or via a broader content distribution network. In some embodiments, the content servers510enable users to authenticate (e.g., using a username and password) in order to access files stored on the content servers510. Although only a single control server520is shown inFIG.5, in various embodiments multiple control servers520may be implemented to track and manage files. In various embodiments, the fill source530may include an online storage service (e.g., Amazon® Simple Storage Service, Google® Cloud Storage, etc.) in which a catalog of files, including thousands or millions of files, is stored and accessed in order to fill the content servers510.
Although only a single fill source530is shown inFIG.5, in various embodiments multiple fill sources530may be implemented to service requests for files. Further, as is well-understood, any cloud-based services can be included in the architecture ofFIG.5beyond fill source530to the extent desired or necessary. FIG.6is a block diagram of a content server510that may be implemented in conjunction with the network infrastructure500ofFIG.5, according to various embodiments of the present invention. As shown, the content server510includes, without limitation, a central processing unit (CPU)604, a system disk606, an input/output (I/O) devices interface608, a network interface610, an interconnect612, and a system memory614. The CPU604is configured to retrieve and execute programming instructions, such as server application617, stored in the system memory614. Similarly, the CPU604is configured to store application data (e.g., software libraries) and retrieve application data from the system memory614. The interconnect612is configured to facilitate transmission of data, such as programming instructions and application data, between the CPU604, the system disk606, I/O devices interface608, the network interface610, and the system memory614. The I/O devices interface608is configured to receive input data from I/O devices616and transmit the input data to the CPU604via the interconnect612. For example, I/O devices616may include one or more buttons, a keyboard, a mouse, and/or other input devices. The I/O devices interface608is further configured to receive output data from the CPU604via the interconnect612and transmit the output data to the I/O devices616. The system disk606may include one or more hard disk drives, solid state storage devices, or similar storage devices. The system disk606is configured to store non-volatile data such as files618(e.g., audio files, video files, subtitles, application files, software libraries, etc.). The files618can then be retrieved by one or more endpoint devices515via the network505. In some embodiments, the network interface610is configured to operate in compliance with the Ethernet standard. The system memory614includes a server application617configured to service requests for files618received from endpoint device515and other content servers510. When the server application617receives a request for a file618, the server application617retrieves the corresponding file618from the system disk606and transmits the file618to an endpoint device515or a content server510via the network505. FIG.7is a block diagram of a control server520that may be implemented in conjunction with the network infrastructure500ofFIG.5, according to various embodiments of the present invention. As shown, the control server520includes, without limitation, a central processing unit (CPU)704, a system disk706, an input/output (I/O) devices interface708, a network interface710, an interconnect712, and a system memory714. The CPU704is configured to retrieve and execute programming instructions, such as control application717, stored in the system memory714. Similarly, the CPU704is configured to store application data (e.g., software libraries) and retrieve application data from the system memory714and a database718stored in the system disk706. The interconnect712is configured to facilitate transmission of data between the CPU704, the system disk706, I/O devices interface708, the network interface710, and the system memory714. 
The I/O devices interface708is configured to transmit input data and output data between the I/O devices716and the CPU704via the interconnect712. The system disk706may include one or more hard disk drives, solid state storage devices, and the like. The system disk706is configured to store a database718of information associated with the content servers510, the fill source(s)530, and the files618. The system memory714includes a control application717configured to access information stored in the database718and process the information to determine the manner in which specific files618will be replicated across content servers510included in the network infrastructure500. The control application717may further be configured to receive and analyze performance characteristics associated with one or more of the content servers510and/or endpoint devices515. FIG.8is a block diagram of an endpoint device515that may be implemented in conjunction with the network infrastructure500ofFIG.5, according to various embodiments of the present invention. As shown, the endpoint device515may include, without limitation, a CPU810, a graphics subsystem812, an I/O device interface814, a mass storage unit816, a network interface818, an interconnect822, and a memory subsystem830. In some embodiments, the CPU810is configured to retrieve and execute programming instructions stored in the memory subsystem830. Similarly, the CPU810is configured to store and retrieve application data (e.g., software libraries) residing in the memory subsystem830. The interconnect822is configured to facilitate transmission of data, such as programming instructions and application data, between the CPU810, graphics subsystem812, I/O devices interface814, mass storage unit816, network interface818, and memory subsystem830. In some embodiments, the graphics subsystem812is configured to generate frames of video data and transmit the frames of video data to display device850. In some embodiments, the graphics subsystem812may be integrated into an integrated circuit, along with the CPU810. The display device850may comprise any technically feasible means for generating an image for display. For example, the display device850may be fabricated using liquid crystal display (LCD) technology, cathode-ray technology, or light-emitting diode (LED) display technology. An input/output (I/O) device interface814is configured to receive input data from user I/O devices852and transmit the input data to the CPU810via the interconnect822. For example, user I/O devices852may comprise one or more buttons, a keyboard, and a mouse or other pointing device. The I/O device interface814also includes an audio output unit configured to generate an electrical audio output signal. User I/O devices852include a speaker configured to generate an acoustic output in response to the electrical audio output signal. In alternative embodiments, the display device850may include the speaker. A television is an example of a device known in the art that can display video frames and generate an acoustic output. A mass storage unit816, such as a hard disk drive or flash memory storage drive, is configured to store non-volatile data. A network interface818is configured to transmit and receive packets of data via the network505. In some embodiments, the network interface818is configured to communicate using the well-known Ethernet standard. The network interface818is coupled to the CPU810via the interconnect822.
In some embodiments, the memory subsystem830includes programming instructions and application data that comprise an operating system832, a user interface834, and a playback application836. The operating system832performs system management functions such as managing hardware devices including the network interface818, mass storage unit816, I/O device interface814, and graphics subsystem812. The operating system832also provides process and memory management models for the user interface834and the playback application836. The user interface834, such as a window and object metaphor, provides a mechanism for user interaction with endpoint device515. Persons skilled in the art will recognize the various operating systems and user interfaces that are well-known in the art and suitable for incorporation into the endpoint device515. In some embodiments, the playback application836is configured to request and receive content from the content server510via the network interface818. Further, the playback application836is configured to interpret the content and present the content via display device850and/or user I/O devices852. In sum, watermark-based log and dump processing begins when CDCLog process instance145pauses the log event processing of change log127that includes one or more log events128associated with one or more changes in source datastore(s)120. When the log event processing is paused, CDCLog process instance145generates, in watermark table125, a low watermark entry. CDCLog process instance145selects, from source datastore(s)120, chunk152comprising one or more rows of data. CDCLog process instance145stores chunk152in memory, indexed by primary key. CDCLog process instance145generates, in watermark table125, a high watermark entry and then resumes log event processing of any further log events in change log127. CDCLog process instance145determines whether a low watermark event associated with the low watermark entry has been received. When a low watermark event has been received, CDCLog process instance145compares the one or more rows in chunk152with one or more log events128occurring after the low watermark event in the change log127to determine one or more conflicting rows. CDCLog process instance145removes one or more conflicting rows from chunk152. CDCLog process instance145determines whether a high watermark event associated with the high watermark entry has been received. When a high watermark event has been received, CDCLog process instance145generates an event for each of the non-conflicting rows in chunk152. CDCLog process instance145sends events associated with the non-conflicting rows in chunk152to the output (e.g., sink datastore(s)160) prior to processing any further log events in change log127. Disclosed techniques allow for concurrent log and dump processing, enabling high availability of real-time events to downstream consumers, and reducing propagation delays from the source datastore to the derived datastores by capturing dumps reflecting the full state of the source datastore via chunks interleaved with the real-time events. Disclosed techniques also allow downstream customers to trigger, pause, or resume dumps after the last completed chunk at any time, thereby allowing for real-time and efficient customization of the log and dump processing without needing to restart the dump processing from the beginning. Additionally, disclosed techniques do not require locks on tables, thereby minimizing the impact on bandwidth and write traffic in databases.
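One way to realize the two-thread output path summarized above is a bounded FIFO queue: the processing thread appends events in the required order (log events up to the high watermark, then chunk events, then later log events), while a second thread drains the queue to the sink. The sketch below is a minimal illustration; the sink object and its send method are hypothetical.

import queue
import threading

def start_output_buffer(sink, max_events=10000):
    # A FIFO queue preserves append order, so interleaved log and dump
    # events reach the sink in exactly the order they were appended.
    buf = queue.Queue(maxsize=max_events)

    def drain():
        while True:
            event = buf.get()   # blocks until an event is available
            sink.send(event)    # e.g., deliver to a sink datastore
            buf.task_done()

    threading.Thread(target=drain, daemon=True).start()
    return buf  # the processing thread appends with buf.put(event)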
Further, disclosed techniques can be configured for a variety of relational databases, allowing for consistent behavior when deployed in heterogeneous multi-datastore environments that contain different kinds of database systems.1. In some embodiments, a computer-implemented method for concurrent log and dump processing comprises: selecting, from a datastore, a chunk comprising one or more rows of data; comparing the one or more rows of data in the chunk with a first set of log events in a change log associated with the datastore, wherein each log event included in the first set of log events occurs after a first log event in the change log and prior to a second log event in the change log; selecting, based on the comparison, one or more non-conflicting rows in the chunk; and transmitting, to an output, one or more log events associated with the one or more non-conflicting rows in the chunk prior to processing a second set of log events in the change log, wherein the second set of log events occur after the second log event.2. The computer-implemented method of clause 1, further comprising: appending, to an output buffer associated with the output, the one or more log events associated with the one or more non-conflicting rows in the chunk, wherein the one or more log events are appended to the output buffer after the first set of log events and before the second set of log events.3. The computer-implemented method of clauses 1 or 2, wherein the one or more log events associated with the one or more non-conflicting rows in the chunk are serialized in the same format as the first set of log events.4. The computer-implemented method of clauses 1-3, wherein the first log event is associated with generating a low watermark on a watermark table, and wherein the second log event is associated with generating a high watermark on the watermark table.5. The computer-implemented method of clauses 1-4, wherein generating a low watermark comprises updating a row in the watermark table to reflect a specific universally unique identifier (UUID) value.6. The computer-implemented method of clauses 1-5, wherein the chunk is selected after the first log event and prior to the second log event.7. The computer-implemented method of clauses 1-6, wherein the chunk is selected in response to pausing processing of the change log.8. The computer-implemented method of clauses 1-7, further comprising: removing one or more conflicting rows from the chunk based on the comparison.9. The computer-implemented method of clauses 1-8, wherein the comparison is performed in response to resuming processing of the change log.10. The computer-implemented method of clauses 1-9, wherein the comparison further comprises comparing one or more primary keys of the one or more rows to one or more primary keys of one or more log events in the first set of log events.11.
In some embodiments, one or more non-transitory computer readable media storing instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of: selecting, from a datastore, a chunk comprising one or more rows of data; comparing one or more rows of data in the chunk with a first set of log events in a change log associated with the datastore, wherein the first set of log events occur after a first log event in the change log and prior to a second log event in the change log; selecting, based on the comparison, one or more non-conflicting rows in the chunk; and transmitting, to an output, one or more log events associated with the one or more non-conflicting rows in the chunk prior to processing a second set of log events in the change log, wherein the second set of log events occur after the second log event.12. The one or more non-transitory computer readable media of clause 11, further comprising: appending, to an output buffer associated with the output, the one or more log events associated with the one or more non-conflicting rows in the chunk, wherein the one or more log events are appended after the first set of log events and before the second set of log events.13. The one or more non-transitory computer readable media of clauses 11 or 12, wherein the one or more log events associated with the one or more non-conflicting rows in the chunk are serialized in the same format as the first set of log events.14. The one or more non-transitory computer readable media of clauses 11-13, wherein the first log event is associated with generating a low watermark on a watermark table, and wherein the second log event is associated with generating a high watermark on the watermark table.15. The one or more non-transitory computer readable media of clauses 11-14, wherein generating a low watermark comprises updating a row in the watermark table to reflect a specific universally unique identifier (UUID) value.16. The one or more non-transitory computer readable media of clauses 11-15, wherein the chunk is selected after the first log event and prior to the second log event.17. The one or more non-transitory computer readable media of clauses 11-16, wherein the chunk is selected in response to pausing processing of the change log.18. The one or more non-transitory computer readable media of clauses 11-17, further comprising: removing one or more conflicting rows from the chunk based on the comparison.19. The one or more non-transitory computer readable media of clauses 11-18, wherein the comparison is performed in response to resuming processing of the change log.20. In some embodiments, a system comprises: a memory storing one or more software applications; and a processor that, when executing the one or more software applications, is configured to perform the steps of: selecting, from a datastore, a chunk comprising one or more rows of data; comparing one or more rows of data in the chunk with a first set of log events in a change log associated with the datastore, wherein the first set of log events occur after a first log event in the change log and prior to a second log event in the change log; selecting, based on the comparison, one or more non-conflicting rows in the chunk; and transmitting, to an output, one or more log events associated with the one or more non-conflicting rows in the chunk prior to processing a second set of log events in the change log, wherein the second set of log events occur after the second log event.
Any and all combinations of any of the claim elements recited in any of the claims and/or any elements described in this application, in any fashion, fall within the contemplated scope of the present invention and protection. The descriptions of the various embodiments have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. Aspects of the present embodiments may be embodied as a system, method, or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “module,” a “system,” or a “computer.” In addition, any hardware and/or software technique, process, function, component, engine, module, or system described in the present disclosure may be implemented as a circuit or set of circuits. Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine. The instructions, when executed via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. 
Such processors may be, without limitation, general purpose processors, special-purpose processors, application-specific processors, or field-programmable gate arrays. The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. While the preceding is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
DETAILED DESCRIPTION Reference will now be made in detail to specific example embodiments for carrying out the inventive subject matter. Examples of these specific embodiments are illustrated in the accompanying drawings, and specific details are outlined in the following description to provide a thorough understanding of the subject matter. It will be understood that these examples are not intended to limit the scope of the claims to the illustrated embodiments. On the contrary, they are intended to cover such alternatives, modifications, and equivalents as may be included within the scope of the disclosure. In the present disclosure, physical units of data that are stored in a data platform—and that make up the content of, e.g., database tables in customer accounts—are referred to as micro-partitions. In different implementations, a data platform may store metadata in micro-partitions as well. The term “micro-partitions” is distinguished in this disclosure from the term “files,” which, as used herein, refers to data units such as image files (e.g., Joint Photographic Experts Group (JPEG) files, Portable Network Graphics (PNG) files, etc.), video files (e.g., Moving Picture Experts Group (MPEG) files, MPEG-4 (MP4) files, Advanced Video Coding High Definition (AVCHD) files, etc.), Portable Document Format (PDF) files, documents that are formatted to be compatible with one or more word-processing applications, documents that are formatted to be compatible with one or more spreadsheet applications, and/or the like. If stored internal to the data platform, a given file is referred to herein as an “internal file” and may be stored in (or at, or on, etc.) what is referred to herein as an “internal storage location.” If stored external to the data platform, a given file is referred to herein as an “external file” and is referred to as being stored in (or at, or on, etc.) what is referred to herein as an “external storage location.” These terms are further discussed below. Computer-readable files come in several varieties, including unstructured files, semi-structured files, and structured files. These terms may mean different things to different people. As used herein, examples of unstructured files include image files, video files, PDFs, audio files, and the like; examples of semi-structured files include JavaScript Object Notation (JSON) files, eXtensible Markup Language (XML) files, and the like; and examples of structured files include Variant Call Format (VCF) files, Keithley Data File (KDF) files, Hierarchical Data Format version 5 (HDF5) files, and the like. As known to those of skill in the relevant arts, VCF files are often used in the bioinformatics field for storing, e.g., gene-sequence variations, KDF files are often used in the semiconductor industry for storing, e.g., semiconductor-testing data, and HDF5 files are often used in industries such as the aeronautics industry, in that case for storing data such as aircraft-emissions data. Numerous other example unstructured-file types, semi-structured-file types, and structured-file types, as well as example uses thereof, could certainly be listed here as well and will be familiar to those of skill in the relevant arts. Different people of skill in the relevant arts may classify types of files differently among these categories and may use one or more different categories instead of or in addition to one or more of these. 
Aspects of the present disclosure provide techniques for configuring database object types (e.g., a stream object) for querying changes in the results of queries and consuming them transactionally. For example, the disclosed techniques may be performed by a streams manager in a network-based database system. In some embodiments, the streams manager configures the processing of a stream object applied on a view. The disclosed techniques build on the concept of a stream by facilitating querying and consumption of changes to queries, rather than consumption of changes only on base tables. For example, the disclosed techniques may be used for maintaining a denormalized join for faster querying, storing the history of changes to a query for auditing, or combining with data sharing to enable simple Extract, Transform, Load (ETL) across organizations. In comparison to other query processing techniques (e.g., streams on tables and incremental ETL techniques like Slowly Changing Dimensions (SCD) joins), the disclosed techniques are associated with the following advantages: ease of use, incremental processing over a larger set of use cases than the other processing techniques, and the ability to query changes in combination with governance techniques (e.g., row-access policies, column access policies, and shared secure views). Additional advantages of the disclosed techniques include not requiring timestamp management to track the progress of query processing, using transactional stream consumption (which simplifies exactly-once processing), and automatically rewriting complex queries into a form that efficiently produces incremental changes (instead of the user having to transform the query). The various embodiments that are described herein are described with reference where appropriate to one or more of the various figures. An example computing environment using a streams manager for configuring database object types (e.g., a stream object) for querying changes in the results of queries and consuming them transactionally is discussed in connection withFIGS.1-3. Example configuration and functions associated with the streams manager are discussed in connection withFIGS.4-13. A more detailed discussion of example computing devices that may be used in connection with the disclosed techniques is provided in connection withFIG.14. FIG.1illustrates an example computing environment100that includes a database system in the example form of a network-based database system102, in accordance with some embodiments of the present disclosure. To avoid obscuring the inventive subject matter with unnecessary detail, various functional components that are not germane to conveying an understanding of the inventive subject matter have been omitted fromFIG.1. However, a skilled artisan will readily recognize that various additional functional components may be included as part of the computing environment100to facilitate additional functionality that is not specifically described herein. In other embodiments, the computing environment may comprise another type of network-based database system or a cloud data platform. For example, in some aspects, the computing environment100may include a cloud computing platform101with the network-based database system102, a storage platform104(also referred to as a cloud storage platform), and a credential store provider106.
The cloud computing platform101provides computing resources and storage resources that may be acquired (purchased) or leased and configured to execute applications and store data. The cloud computing platform101may host a cloud computing service103that facilitates storage of data on the cloud computing platform101(e.g., data management and access) and analysis functions (e.g., SQL queries, analysis), as well as other processing capabilities (e.g., performing reverse ETL functions described herein). The cloud computing platform101may include a three-tier architecture: data storage (e.g., storage platforms104and122), an execution platform110(e.g., providing query processing), and a compute service manager108providing cloud services. It is often the case that organizations that are customers of a given data platform also maintain data storage (e.g., a data lake) that is external to the data platform (i.e., one or more external storage locations). For example, a company could be a customer of a particular data platform and also separately maintain storage of any number of files—be they unstructured files, semi-structured files, structured files, and/or files of one or more other types—on, as examples, one or more of their servers and/or on one or more cloud-storage platforms such as AMAZON WEB SERVICES™ (AWS™), MICROSOFT® AZURE®, GOOGLE CLOUD PLATFORM™, and/or the like. The customer's servers and cloud-storage platforms are both examples of what a given customer could use as what is referred to herein as an external storage location. The cloud computing platform101could also use a cloud-storage platform as what is referred to herein as an internal storage location concerning the data platform. From the perspective of the network-based database system102of the cloud computing platform101, one or more files that are stored at one or more storage locations are referred to herein as being organized into one or more of what is referred to herein as either “internal stages” or “external stages.” Internal stages are stages that correspond to data storage at one or more internal storage locations, while external stages are stages that correspond to data storage at one or more external storage locations. In this regard, external files can be stored in external stages at one or more external storage locations, and internal files can be stored in internal stages at one or more internal storage locations, which can include servers managed and controlled by the same organization (e.g., company) that manages and controls the data platform, and which can instead or in addition include data-storage resources operated by a storage provider (e.g., a cloud-storage platform) that is used by the data platform for its “internal” storage. The internal storage of a data platform is also referred to herein as the “storage platform” of the data platform. It is further noted that a given external file that a given customer stores at a given external storage location may or may not be stored in an external stage in the external storage location—i.e., in some data-platform implementations, it is a customer's choice whether to create one or more external stages (e.g., one or more external-stage objects) in the customer's data-platform account as an organizational and functional construct for conveniently interacting via the data platform with one or more external files.
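By way of a hedged illustration of the external-stage construct, the snippet below issues a stage-creation statement through a generic database cursor. The stage name, bucket URL, and credential placeholders are hypothetical, and the exact statement syntax varies from one data platform to another.

def create_external_stage(cursor):
    # Registers an external storage location as a named stage object in
    # the customer's data-platform account (illustrative syntax only).
    cursor.execute("""
        CREATE STAGE my_external_stage
          URL = 's3://example-bucket/landing/'
          CREDENTIALS = (AWS_KEY_ID = '<key-id>' AWS_SECRET_KEY = '<secret>')
    """)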
As shown, the network-based database system102of the cloud computing platform101is in communication with the cloud storage platforms104and122(e.g., AWS®, Microsoft Azure Blob Storage®, or Google Cloud Storage), and a cloud credential store provider106. The network-based database system102is a network-based system used for reporting and analysis of integrated data from one or more disparate sources including one or more storage locations within the cloud storage platform104. The cloud storage platform104comprises a plurality of computing machines and provides on-demand computer system resources such as data storage and computing power to the network-based database system102. The network-based database system102comprises a compute service manager108, an execution platform110, and one or more metadata databases112. The network-based database system102hosts and provides data reporting and analysis services to multiple client accounts. The compute service manager108coordinates and manages operations of the network-based database system102. The compute service manager108also performs query optimization and compilation as well as managing clusters of computing services that provide compute resources (also referred to as “virtual warehouses”). The compute service manager108can support any number of client accounts such as end-users providing data storage and retrieval requests, system administrators managing the systems and methods described herein, and other components/devices that interact with compute service manager108. In some embodiments, the compute service manager108comprises a streams manager128for configuring database object types (e.g., a stream object) for querying changes in the results of queries and consuming them transactionally. A more detailed description of the streams manager128and the functions it may perform is provided in connection withFIGS.2and4-13. The compute service manager108is also in communication with a client device114. The client device114corresponds to a user of one of the multiple client accounts supported by the network-based database system102. A user may utilize the client device114to submit data storage, retrieval, and analysis requests to the compute service manager108. Client device114(also referred to as user device114) may include one or more of a laptop computer, a desktop computer, a mobile phone (e.g., a smartphone), a tablet computer, a cloud-hosted computer, cloud-hosted serverless processes, or other computing processes or devices that may be used to access services provided by the cloud computing platform101(e.g., cloud computing service103) by way of a network105, such as the Internet or a private network. In the description below, actions are ascribed to users, particularly consumers and providers. Such actions shall be understood to be performed with respect to client device (or devices)114operated by such users. For example, notification to a user may be understood to be a notification transmitted to client device114, input or instruction from a user may be understood to be received by way of the client device114, and interaction with an interface by a user shall be understood to be interaction with the interface on the client device114. In addition, database operations (joining, aggregating, analysis, etc.) ascribed to a user (consumer or provider) shall be understood to include performing such actions by the cloud computing service103in response to an instruction from that user.
The compute service manager108is also coupled to one or more metadata databases112that store metadata about various functions and aspects associated with the network-based database system102and its users. For example, a metadata database112may include a summary of data stored in remote data storage systems as well as data available from a local cache. Additionally, a metadata database112may include information regarding how data is organized in remote data storage systems (e.g., the cloud storage platform104) and the local caches. Information stored by a metadata database112allows systems and services to determine whether a piece of data needs to be accessed without loading or accessing the actual data from a storage device. As another example, a metadata database112can store one or more credential objects115. In general, a credential object115indicates one or more security credentials to be retrieved from a remote credential store. For example, the credential store provider106maintains multiple remote credential stores118-1to118-N. Each of the remote credential stores118-1to118-N may be associated with a user account and may be used to store security credentials associated with the user account. A credential object115can indicate one or more security credentials to be retrieved by the compute service manager108from one of the remote credential stores118-1to118-N(e.g., for use in accessing data stored by the storage platform104). The compute service manager108is further coupled to the execution platform110, which provides multiple computing resources (e.g., execution nodes) that execute, for example, various data storage, data retrieval, and data processing tasks. The execution platform110is coupled to storage platform104and cloud storage platforms122. The storage platform104comprises multiple data storage devices120-1to120-N. In some embodiments, the data storage devices120-1to120-N are cloud-based storage devices located in one or more geographic locations. For example, the data storage devices120-1to120-N may be part of a public cloud infrastructure or a private cloud infrastructure. The data storage devices120-1to120-N may be hard disk drives (HDDs), solid-state drives (SSDs), storage clusters, Amazon S3™ storage systems, or any other data-storage technology. Additionally, the cloud storage platform104may include distributed file systems (such as Hadoop Distributed File Systems (HDFS)), object storage systems, and the like. In some embodiments, at least one internal stage126may reside on one or more of the data storage devices120-1-120-N, and at least one external stage124may reside on one or more of the cloud storage platforms122. In some embodiments, communication links between elements of the computing environment100are implemented via one or more data communication networks. These data communication networks may utilize any communication protocol and any type of communication medium. In some embodiments, the data communication networks are a combination of two or more data communication networks (or sub-networks) coupled to one another. In alternate embodiments, these communication links are implemented using any type of communication medium and any communication protocol. The compute service manager108, metadata database(s)112, execution platform110, and storage platform104are shown inFIG.1as individual discrete components.
However, each of the compute service manager108, metadata database(s)112, execution platform110, and storage platform104may be implemented as a distributed system (e.g., distributed across multiple systems/platforms at multiple geographic locations). Additionally, each of the compute service manager108, metadata database(s)112, execution platform110, and storage platform104can be scaled up or down (independently of one another) depending on changes to the requests received and the changing needs of the network-based database system102. Thus, in the described embodiments, the network-based database system102is dynamic and supports regular changes to meet the current data processing needs. During typical operation, the network-based database system102processes multiple jobs determined by the compute service manager108. These jobs are scheduled and managed by the compute service manager108to determine when and how to execute the job. For example, the compute service manager108may divide the job into multiple discrete tasks and may determine what data is needed to execute each of the multiple discrete tasks. The compute service manager108may assign each of the multiple discrete tasks to one or more nodes of the execution platform110to process the task. The compute service manager108may determine what data is needed to process a task and further determine which nodes within the execution platform110are best suited to process the task. Some nodes may have already cached the data needed to process the task and, therefore, be good candidates for processing the task. Metadata stored in a metadata database112assists the compute service manager108in determining which nodes in the execution platform110have already cached at least a portion of the data needed to process the task. One or more nodes in the execution platform110process the task using data cached by the nodes and, if necessary, data retrieved from the cloud storage platform104. It is desirable to retrieve as much data as possible from caches within the execution platform110because the retrieval speed is typically much faster than retrieving data from the cloud storage platform104. As shown inFIG.1, the cloud computing platform101of the computing environment100separates the execution platform110from the storage platform104. In this arrangement, the processing resources and cache resources in the execution platform110operate independently of the data storage devices120-1to120-N in the cloud storage platform104. Thus, the computing resources and cache resources are not restricted to specific data storage devices120-1to120-N. Instead, all computing resources and all cache resources may retrieve data from, and store data to, any of the data storage resources in the cloud storage platform104. FIG.2is a block diagram illustrating components of the compute service manager108, in accordance with some embodiments of the present disclosure. As shown inFIG.2, the compute service manager108includes an access manager202and a credential management system204coupled to an access metadata database206, which is an example of the metadata database(s)112. Access manager202handles authentication and authorization tasks for the systems described herein. The credential management system204facilitates the use of remotely stored credentials (e.g., credentials stored in one of the remote credential stores118-1to118-N) to access external resources such as data resources in a remote storage device.
As used herein, the remote storage devices may also be referred to as “persistent storage devices” or “shared storage devices.” For example, the credential management system204may create and maintain remote credential store definitions and credential objects (e.g., in the access metadata database206). A remote credential store definition identifies a remote credential store (e.g., one or more of the remote credential stores118-1to118-N) and includes access information to access security credentials from the remote credential store. A credential object identifies one or more security credentials using non-sensitive information (e.g., text strings) that are to be retrieved from a remote credential store for use in accessing an external resource. When a request invoking an external resource is received at run time, the credential management system204and access manager202use information stored in the access metadata database206(e.g., a credential object and a credential store definition) to retrieve security credentials used to access the external resource from a remote credential store. A request processing service208manages received data storage requests and data retrieval requests (e.g., jobs to be performed on database data). For example, the request processing service208may determine the data needed to process a received query (e.g., a data storage request or data retrieval request). The data may be stored in a cache within the execution platform110or in a data storage device in storage platform104. A management console service210supports access to various systems and processes by administrators and other system managers. Additionally, the management console service210may receive a request to execute a job and monitor the workload on the system. The compute service manager108also includes a job compiler212, a job optimizer214, and a job executor216. The job compiler212parses a job into multiple discrete tasks and generates the execution code for each of the multiple discrete tasks. The job optimizer214determines the best method to execute the multiple discrete tasks based on the data that needs to be processed. Job optimizer214also handles various data pruning operations and other data optimization techniques to improve the speed and efficiency of executing the job. The job executor216executes the execution code for jobs received from a queue or determined by the compute service manager108. A job scheduler and coordinator218sends received jobs to the appropriate services or systems for compilation, optimization, and dispatch to the execution platform110. For example, jobs may be prioritized and then processed in that prioritized order. In an embodiment, the job scheduler and coordinator218determines a priority for internal jobs that are scheduled by the compute service manager108with other “outside” jobs such as user queries that may be scheduled by other systems in the database but may utilize the same processing resources in the execution platform110. In some embodiments, the job scheduler and coordinator218identifies or assigns particular nodes in the execution platform110to process particular tasks. A virtual warehouse manager220manages the operation of multiple virtual warehouses implemented in the execution platform110. For example, the virtual warehouse manager220may generate query plans for executing received queries.
Additionally, the compute service manager108includes a configuration and metadata manager222, which manages the information related to the data stored in the remote data storage devices and the local buffers (e.g., the buffers in execution platform110). The configuration and metadata manager222uses metadata to determine which data files need to be accessed to retrieve data for processing a particular task or job. A monitor and workload analyzer224oversees processes performed by the compute service manager108and manages the distribution of tasks (e.g., workload) across the virtual warehouses and execution nodes in the execution platform110. The monitor and workload analyzer224also redistributes tasks, as needed, based on changing workloads throughout the network-based database system102and may further redistribute tasks based on a user (e.g., “external”) query workload that may also be processed by the execution platform110. The configuration and metadata manager222and the monitor and workload analyzer224are coupled to a data storage device226. The data storage device226inFIG.2represents any data storage device within the network-based database system102. For example, data storage device226may represent buffers in execution platform110, storage devices in storage platform104, or any other storage device. As described in embodiments herein, the compute service manager108validates all communication from an execution platform (e.g., the execution platform110) to validate that the content and context of that communication are consistent with the task(s) known to be assigned to the execution platform. For example, an instance of the execution platform executing a query A should not be allowed to request access to data-source D (e.g., data storage device226) that is not relevant to query A. Similarly, a given execution node (e.g., execution node302-1) may need to communicate with another execution node (e.g., execution node302-2), and should be disallowed from communicating with a third execution node (e.g., execution node312-1), and any such illicit communication can be recorded (e.g., in a log or other location). Also, the information stored on a given execution node is restricted to data relevant to the current query and any other data is unusable, rendered so by destruction or encryption where the key is unavailable. In some embodiments, the compute service manager108further includes the streams manager128for configuring database object types (e.g., a stream object) for querying changes in the results of queries and consuming them transactionally. The streams manager128may comprise a row-wise views manager228configured to perform the functionalities discussed herein related to row-wise views configuration and processing. The streams manager128may also comprise a join views manager230configured to perform the functionalities discussed herein related to join views configuration and processing. In some embodiments, the row-wise views manager228is configured to support change queries on (secure) views containing row-wise operators (e.g., select, project, and union all). In some aspects, the limitation on operators may allow for the delivery of data processing features to users. In some embodiments, the join views manager230is configured to handle join operations (or joins) in change queries (e.g., over slowly-changing dimensions). The initial algebra associated with this functionality may be extended to cover updates, outer joins, and higher-arity joins.
The implementation of this functionality may also involve significant query rewriting functionality. FIG.3is a block diagram illustrating components of the execution platform110, in accordance with some embodiments of the present disclosure. As shown inFIG.3, the execution platform110includes multiple virtual warehouses, including virtual warehouse1(or301-1), virtual warehouse2(or301-2), and virtual warehouse N (or301-N). Each virtual warehouse includes multiple execution nodes that each include a data cache and a processor. The virtual warehouses can execute multiple tasks in parallel by using multiple execution nodes. As discussed herein, the execution platform110can add new virtual warehouses and drop existing virtual warehouses in real-time based on the current processing needs of the systems and users. This flexibility allows the execution platform110to quickly deploy large amounts of computing resources when needed without being forced to continue paying for those computing resources when they are no longer needed. All virtual warehouses can access data from any data storage device (e.g., any storage device in the cloud storage platform104). Although each virtual warehouse shown inFIG.3includes three execution nodes, a particular virtual warehouse may include any number of execution nodes. Further, the number of execution nodes in a virtual warehouse is dynamic, such that new execution nodes are created when additional demand is present, and existing execution nodes are deleted when they are no longer necessary. Each virtual warehouse is capable of accessing any of the data storage devices120-1to120-N shown inFIG.1. Thus, the virtual warehouses are not necessarily assigned to a specific data storage device120-1to120-N and, instead, can access data from any of the data storage devices120-1to120-N within the cloud storage platform104. Similarly, each of the execution nodes shown inFIG.3can access data from any of the data storage devices120-1to120-N. In some embodiments, a particular virtual warehouse or a particular execution node may be temporarily assigned to a specific data storage device, but the virtual warehouse or execution node may later access data from any other data storage device. In the example ofFIG.3, virtual warehouse1includes three execution nodes302-1,302-2, and302-N. Execution node302-1includes a cache304-1and a processor306-1. Execution node302-2includes a cache304-2and a processor306-2. Execution node302-N includes a cache304-N and a processor306-N. Each execution node302-1,302-2, and302-N is associated with processing one or more data storage and/or data retrieval tasks. For example, a virtual warehouse may handle data storage and data retrieval tasks associated with an internal service, such as a clustering service, a materialized view refresh service, a file compaction service, a storage procedure service, or a file upgrade service. In other implementations, a particular virtual warehouse may handle data storage and data retrieval tasks associated with a particular data storage system or a particular category of data. Similar to virtual warehouse1discussed above, virtual warehouse2includes three execution nodes312-1,312-2, and312-N. Execution node312-1includes a cache314-1and a processor316-1. Execution node312-2includes a cache314-2and a processor316-2. Execution node312-N includes a cache314-N and a processor316-N. Additionally, virtual warehouse3includes three execution nodes322-1,322-2, and322-N. Execution node322-1includes a cache324-1and a processor326-1. 
Execution node322-2includes a cache324-2and a processor326-2. Execution node322-N includes a cache324-N and a processor326-N. In some embodiments, the execution nodes shown inFIG.3are stateless with respect to the data being cached by the execution nodes. For example, these execution nodes do not store or otherwise maintain state information about the execution node or the data being cached by a particular execution node. Thus, in the event of an execution node failure, the failed node can be transparently replaced by another node. Since there is no state information associated with the failed execution node, the new (replacement) execution node can easily replace the failed node without concern for recreating a particular state. Although the execution nodes shown inFIG.3each include one data cache and one processor, alternative embodiments may include execution nodes containing any number of processors and any number of caches. Additionally, the caches may vary in size among the different execution nodes. The caches shown inFIG.3store, in the local execution node, data that was retrieved from one or more data storage devices in the cloud storage platform104. Thus, the caches reduce or eliminate the bottleneck problems occurring in platforms that consistently retrieve data from remote storage systems. Instead of repeatedly accessing data from the remote storage devices, the systems and methods described herein access data from the caches in the execution nodes, which is significantly faster and avoids the bottleneck problem discussed above. In some embodiments, the caches are implemented using high-speed memory devices that provide fast access to the cached data. Each cache can store data from any of the storage devices in the cloud storage platform104. Further, the cache resources and computing resources may vary between different execution nodes. For example, one execution node may contain significant computing resources and minimal cache resources, making the execution node useful for tasks that require significant computing resources. Another execution node may contain significant cache resources and minimal computing resources, making this execution node useful for tasks that require caching of large amounts of data. Yet another execution node may contain cache resources providing faster input-output operations, useful for tasks that require fast scanning of large amounts of data. In some embodiments, the cache resources and computing resources associated with a particular execution node are determined when the execution node is created, based on the expected tasks to be performed by the execution node. Additionally, the cache resources and computing resources associated with a particular execution node may change over time based on changing tasks performed by the execution node. For example, an execution node may be assigned more processing resources if the tasks performed by the execution node become more processor-intensive. Similarly, an execution node may be assigned more cache resources if the tasks performed by the execution node require a larger cache capacity. Although virtual warehouses1,2, and N are associated with the same execution platform110, virtual warehouses1,2, and N may be implemented using multiple computing systems at multiple geographic locations. For example, virtual warehouse1can be implemented by a computing system at a first geographic location, while virtual warehouses2and N are implemented by another computing system at a second geographic location.
In some embodiments, these different computing systems are cloud-based computing systems maintained by one or more different entities. Additionally, each virtual warehouse is shown inFIG.3as having multiple execution nodes. The multiple execution nodes associated with each virtual warehouse may be implemented using multiple computing systems at multiple geographic locations. For example, an instance of virtual warehouse1implements execution nodes302-1and302-2on one computing platform at a geographic location, and execution node302-N at a different computing platform at another geographic location. Selecting particular computing systems to implement an execution node may depend on various factors, such as the level of resources needed for a particular execution node (e.g., processing resource requirements and cache requirements), the resources available at particular computing systems, communication capabilities of networks within a geographic location or between geographic locations, and which computing systems are already implementing other execution nodes in the virtual warehouse. Execution platform110is also fault-tolerant. For example, if one virtual warehouse fails, that virtual warehouse is quickly replaced with a different virtual warehouse at a different geographic location. A particular execution platform110may include any number of virtual warehouses. Additionally, the number of virtual warehouses in a particular execution platform is dynamic, such that new virtual warehouses are created when additional processing and/or caching resources are needed. Similarly, existing virtual warehouses may be deleted when the resources associated with the virtual warehouse are no longer necessary. In some embodiments, the virtual warehouses may operate on the same data in the cloud storage platform104, but each virtual warehouse has its own execution nodes with independent processing and caching resources. This configuration allows requests on different virtual warehouses to be processed independently and with no interference between the requests. This independent processing, combined with the ability to dynamically add and remove virtual warehouses, supports the addition of new processing capacity for new users without impacting the performance observed by the existing users. As used herein, the term “table” indicates a mutable bag of rows, supporting time travel up to a retention period. As used herein, the term “view” indicates a named SELECT statement, conceptually similar to a table. In some aspects, a view can be secure, which prevents queries from getting information on the underlying data obliquely. As used herein, the term “materialized view” indicates a view that is eagerly computed rather than lazily (e.g., as a standard view). In some aspects, efficient implementation of materialized views has overlap with change tracking functionality. As used herein, the term “CHANGES clause” indicates a syntactic modifier on a FROM clause indicating that a SELECT statement should return the changes that occurred to the specified table between two given times. In some aspects, several different change types can be requested:(a) The default type (also referred to as delta) finds the smallest set of changes that could account for the difference between the tables at the given times;(b) The append-only type only finds rows that were appended to the table; and(c) The audit type (currently not public) computes all changes made between the given times, even if they cancel out.
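For example, the following is a minimal sketch of change queries using a CHANGES clause. The table t, its columns, the timestamps, and the METADATA$-prefixed spellings of the change-metadata columns are illustrative assumptions rather than normative syntax:

-- Delta: the smallest set of changes between the two given times.
SELECT c1, c2, METADATA$ACTION, METADATA$ISUPDATE
FROM t
CHANGES (INFORMATION => DEFAULT)
AT (TIMESTAMP => '2021-06-01 08:00:00'::TIMESTAMP)
END (TIMESTAMP => '2021-06-01 09:00:00'::TIMESTAMP);

-- Append-only: only rows appended to t during the interval.
SELECT c1, c2
FROM t
CHANGES (INFORMATION => APPEND_ONLY)
AT (TIMESTAMP => '2021-06-01 08:00:00'::TIMESTAMP)
END (TIMESTAMP => '2021-06-01 09:00:00'::TIMESTAMP);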
FIG.4is a diagram400of using a CHANGES clause in connection with query processing, in accordance with some embodiments of the present disclosure. Referring toFIG.4, queries or data processing commands Insert404, Delete406, and Update408are applied to source table402. As illustrated inFIG.4, the SELECT statement412may be used for returning the changes that occurred to the source table402during period410(e.g., one hour). As used herein, the term “stream” refers to a table and a timestamp. In some aspects, a stream may be used to iterate over changes to a table. When a stream is read inside a Data Manipulation Language (DML) statement, its timestamp may be transactionally advanced to the greater timestamp of its time interval. In some aspects associated with join operations (or joins), the term “stream” may refer to a mapping of tables to timestamps. FIG.5is a diagram500of a stream object configuration for a table, in accordance with some embodiments of the present disclosure. Referring toFIG.5, queries or data processing commands Insert504, Delete506, and Update508are applied to source table502. As illustrated inFIG.5, a stream514is generated on source table T1502at times X1, X2 (after a time interval510from X1), and X3 (after a time interval512from X2). Additionally, at operation516, stream S1 is created on table T1. At operation518, a stream entry from stream S1 at time X1 is inserted into table T2. At operation520, a stream entry from stream S1 at time X2 is inserted into table T2. As used herein, the term “access control” indicates that customers can control who can access database objects within their organization. As used herein, the term “data sharing” indicates that customers can grant access to database objects to other organizations. In some aspects, any query with a CHANGES clause or a stream may be referred to as a change query. A change query on a view may be defined similarly. In some embodiments, the streams manager128is configured to provide changes to views (e.g., a stream on views) so that the changes may be further processed and acted on. More specifically, the streams manager128may be configured to provide or process streams on views in connection with the following three use cases: shared views, complex views, and view evolution. In some aspects, more than one use case may apply at a given time. Shared (secure) views may be used to provide limited access to sensitive data (e.g., to a user or organization). The consumer of the data often wishes to observe changes to the data being shared with them. Some considerations implied by this use case include giving the consumer visibility into the shared view's retention period and how to enforce secure view limitations on change queries. FIG.6is a diagram600of shared views, in accordance with some embodiments of the present disclosure. Referring toFIG.6, a data provider602manages a source table604. The data provider602applies different filters to source table604to generate views606and608. View606is shared with consumer610, and view608is shared with consumer614. In some embodiments, the streams manager128is used for configuring streams612and616on corresponding views606and608for consumption by consumers610and614. The definition of a view can be quite complex, but observing the changes to such a view may be useful independently of its complexity. Manually constructing a query to compute those changes is possible, but can be toilsome, error-prone, and subject to performance issues.
In some aspects, a change query on a view may automatically rewrite the view query, relieving users of this burden. In some aspects, simple views containing only row-wise operators (e.g., select, project, union all) may be used. In some aspects, complex views that join fact tables with (potentially several) slowly-changing-dimension (DIM) tables may also be used. Other kinds of operators like aggregates, windowing functions, and recursion may also be used in connection with complex views. FIG.7is a diagram700of a stream object based on a complex view, in accordance with some embodiments of the present disclosure. Referring toFIG.7, a complex view708may be generated based on source tables702,704, and706. In some embodiments, the streams manager128configures a stream710based on the complex view708of source tables702,704, and706. In some aspects, views may be used to create an abstraction boundary, where the underlying tables can be modified without consumers being aware. For example, a view over a table undergoing a backward-incompatible schema change may be replaced by a new query that presents the same data in a different way, causing a view evolution. In some aspects, change queries may work across view redefinition, allowing changes to the view to be observed uninterrupted by modifications to its definition. Considerations for this use case may include schema compatibility and performance. Some view redefinitions may use full joins to resolve, and others, such as workflows involving table clones, could be resolved more efficiently. FIG.8is a diagram800of a view evolution, in accordance with some embodiments of the present disclosure. Referring toFIG.8, at operation804, view V1802is created based on a Select operation. Stream S1812of view V1802is generated at times X1, X2 (after a time interval808from X1), and X3 (after a time interval810from X2). Additionally, at operation814, a stream entry from stream S1 at time X2 is inserted into table T2. Before time X3, view V1802evolves at operation806, when a union all operation is used. At operation816, a stream entry from stream S1 (based on the evolved view V1 at time X3) is inserted into table T2. In some embodiments, to provide or process streams on views in connection with the above-listed use cases, the streams manager128may be configured with the following functionalities: intuitive semantics, unsurprising security, linear cost scaling, and easy operability. In some aspects associated with intuitive semantics, change queries on views may work intuitively and consistently. The essence of a change query is to take a time-varying object and a time interval, then return a set of changes that explain the differences in the object over the interval. This definition applies naturally to views, but there are some additional configurations addressed below. As not all operations may be supported by the streams manager128, a property on views may be configured which explicitly allows change queries: CHANGE_TRACKING=true. When a view is created with this property enabled, a validation is performed to ensure that the view only contains supported operators and that the base tables have change tracking enabled. When a change query is issued on a view, it may succeed if the view has change tracking enabled.
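For example, the following is a minimal sketch of this property, using the syntax of Table 3 and Table 4 described later in this disclosure; the table, view, and stream names are hypothetical, and the ALTER TABLE spelling for enabling change tracking on the base table is an assumption:

-- Enable change tracking on the base table (assumed syntax).
ALTER TABLE orders SET CHANGE_TRACKING = true;

-- The view opts into change queries; creation validates that only
-- supported operators are used and that base tables track changes.
CREATE VIEW recent_orders CHANGE_TRACKING=true AS
SELECT order_id, amount FROM orders WHERE amount > 0;

-- A change query (here, a stream) on the view can now succeed.
CREATE STREAM recent_orders_s ON VIEW recent_orders;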
In some aspects, a standing change query (e.g., a stream) may exhibit reference semantics. That is, when a user specifies a view in a change query, such specification may be interpreted as referring to the view itself, not what the view is currently defined as. Adopting value semantics would likely result in surprising behavior, especially around access management. Adopting reference semantics is associated with the ways a view can be modified. The following techniques may be used for view modifications:(a) “ALTER VIEW . . . RENAME TO . . . ” When a view is renamed, objects referencing it may be updated. Complying with this precedent means a stream should break if its view is renamed.(b) “ALTER VIEW . . . SET SECURE . . . ” If a view is made secure, subsequent change queries to it should enforce secure view constraints.(c) “CREATE OR REPLACE VIEW . . . ” If a view is replaced, there are processing choices. Per the View Evolution use case, some users may want the view to keep working as long as the replacement is schema compatible. However, this may add complexity to the implementation. In some aspects associated with unsurprising security, a consumer of a change query on a view may have the same access they have to the view itself. The following configurations may apply to all views: creating a stream on a view fails if the underlying tables do not have change tracking enabled and the creator does not have permission to enable it; consumers can see the minimum retention period of the tables referenced by a view (they cannot see which table the retention applies to); and if change tracking was enabled on a table in a view more recently than the beginning of the retention period, consumers can see when it was enabled. In some aspects, the following configurations may be applied to secure views: consumers cannot see the view's definition; consumers cannot issue a change query for a time before access was granted to the view; optimizations abide by secure view limitations (they do not reorder operators into the expanded view); and the retention period on a table in a secure view is not extended automatically to prevent a consuming stream from going stale. In some aspects associated with linear cost scaling, a key attribute of change queries on tables is that their cost (both in terms of latency and credits) may be proportional to the result size. Append-only change queries may be introduced to work around cases when this scaling does not hold for delta queries. In some aspects, change queries on views may scale similarly in cost. That is, delta change queries and append-only change queries may scale proportionally to the result size. In some aspects associated with easy operability, introducing change queries on views may increase the likely distance between the view provider and consumer (the shared views use case may revolve around this). The distance makes collaboration between provider and consumer more difficult. In turn, this means that a smooth operational experience for change queries on views is more important than for traditional change queries. In some aspects, the following operational challenges may be addressed by the streams manager128: handling view modifications and surfacing errors. In some aspects associated with the handling of view modifications, if the view provider renames or replaces their view, a stream on it will break. The consumer will then want to take action to repair it. The details of such repairs are use-case-specific, but they may involve recreating the stream with a new definition and resuming where the broken stream left off. To support this, the streams manager128may be configured to support statements of the following form: CREATE OR REPLACE STREAM s . . . AT (STREAM=>s). In this statement, the stream s is being both queried and replaced.
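For example, a hedged sketch of such a repair, reusing the hypothetical stream from above: after the provider replaces the view, the consumer recreates the stream at the broken stream's own offset so that no changes are skipped or read twice:

CREATE OR REPLACE STREAM recent_orders_s
ON VIEW recent_orders
AT (STREAM => recent_orders_s);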
In some aspects associated with surfacing errors, view consumers may try to issue change queries that are invalid for various reasons. The errors may be surfaced clearly to the consumer. Examples of such errors include: the underlying tables may not have change tracking enabled; the change query may be outside of the tables' retention period; the change query may contain unsupported operators; and the view may have been modified, breaking the change query. View providers may have control over what happens to a view and any objects derived from it. However, they would benefit from visibility into how the view is being used to avoid accidentally breaking consumers. Examples of such notices include: when the provider tries to make a breaking modification to a view, warn the provider that consumers will be disrupted; when consumers' change queries fail due to retention or change tracking, send the provider a notification; and support some introspection as well, such as a view provider looking up the number of streams consuming it and their offsets.

Streams on Views—General Configurations for the Streams Manager128

A stream object on tables (including external tables) may be configured to let the user retrieve a stream of changesets as the underlying data in the table changes. A stream object is configured to maintain a position in this list of changesets, and that position is only advanced if the stream is used in a DML statement. Reading from the stream may return the changeset from the current position up to the current transaction timestamp. As the underlying data changes, the size of the changeset will grow until the stream is advanced. In some aspects, the advance may be transactional. In some embodiments, the streams manager128is configured to create and process stream objects on views, in particular for data sharing scenarios. In some aspects, shared data consumers may be able to get the latest changes from the shared data provider. Given that exposing shared data is done through secure views, a stream may be created on the consumer side on the view from the provider. In some aspects, streams on materialized views may also be configured to allow retrieving changesets as the underlying materialized view (MV) changes. In some embodiments, providing changesets on a view (e.g., a query) is similar to the incremental materialized view maintenance problem. In the case of MVs, as the underlying data source(s) change, the materialized data set may be updated incrementally. In some aspects, this processing may be performed at the micro-partition level to create a query plan which uses the data from the added/deleted partitions and merges it with the MV data to produce the updated data. In the case of a stream object (or stream) on a view, the changeset returned may be the delta of the data the view would return at the current transactional time compared to the data the view would return at the transactional time of the stream's position. In some aspects, computing the delta efficiently may be a consideration since there may be no materialized data set that can be leveraged and incrementally updated. In some aspects, a materialized view may be created behind the scenes to mitigate this, within the limitations of the queries MVs support today; this can make sense especially for aggregate queries. In some aspects, the delta for certain classes of queries may be generated efficiently (e.g., if there is only one data source).
In that case, the data source of the view can be logically replaced with the delta provided by the stream on the data source. In some embodiments, the streams manager128may support projections and filters in the view as well. For example, data processing operators may be allowed where applying the operators on the delta provides the same result as computing the delta on the datasets at the two end points. In the initial solution, when the stream is created on a view, support for the view is validated, the data source table is located, and change tracking is set up for the table. When the data is requested from the stream, the underlying view in the query plan is expanded, and the data source table is replaced with the generated delta (similar to the processing applied if a stream on that table is configured in the first place). This processing may also be supported for secure views, since the data source inside is swapped and no outside filters would get pushed in. In addition to maintaining the position of the start point of the change set, the stream may also implicitly expand the retention period on the underlying table up to two weeks, depending on how far in the past the stream position points within the table version history. Such processing may also be performed for non-remote data sources. For shared data sources, the same mechanism may not be used because the table compaction status data on the remote side would need to be updated. In this regard, streams on shared data sources can go stale after a day, which is the default retention period for tables. To mitigate this effect, the provider of the shared data can increase the retention period on the table to allow more time for the stream on the consumer side to be consumed (and advanced). FIG.9is a diagram900of a stream expansion performed by the streams manager128, in accordance with some embodiments of the present disclosure. Referring toFIG.9, a query may include a project operation902and a filter operation904applied on a source table T. At operation906, a stream S is generated (e.g., by the streams manager128) on source (base) table T. As illustrated inFIG.9, stream expansion901is performed on stream S to replace the stream S with a join operation908, where the join operation908is applied between table scan operations910and912. Table scan operation910may be based on scanning the source table T for added files, and table scan operation912may be based on scanning the source table T for deleted files. In some embodiments associated with stream expansion, the stream delta changes are computed by considering the added and deleted files (micro-partitions) in the base table T of the stream S and joining the rows from these tables on the logical row ID. In some aspects, the logical row ID is maintained by the change tracking system during DML operations. As illustrated inFIG.9, the stream expansion901will generate the join operation908(and possibly other operations which are not illustrated inFIG.9for simplicity) and will produce inserted and deleted rows based on the delta information. If a row is updated in the base table T, there will be a deleted and an inserted row generated with a special row ID to allow matching them up. The following is an example of a high-level query plan for a query that may be used in connection withFIG.9: CREATE STREAM S ON TABLE T; and SELECT C1, C2 FROM S WHERE C3>2.
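To make the expansion concrete, the following is a conceptual sketch only, not the engine's actual plan. The names t_added and t_deleted are hypothetical stand-ins for scans of micro-partitions added to or removed from T between the stream offset and the current time, and row_id stands for the logical row ID maintained by change tracking:

-- Rows only in t_added surface as inserts; rows only in t_deleted
-- surface as deletes. This simplified sketch drops rows present on
-- both sides (unchanged rows that cancel out) and omits the
-- special-cased update pairs described above.
SELECT COALESCE(a.c1, d.c1) AS c1,
       COALESCE(a.c2, d.c2) AS c2,
       CASE WHEN d.row_id IS NULL THEN 'INSERT' ELSE 'DELETE' END AS action
FROM t_added a
FULL OUTER JOIN t_deleted d
  ON a.row_id = d.row_id
WHERE a.row_id IS NULL OR d.row_id IS NULL;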
FIG.10is a diagram1000of a view expansion and a stream expansion performed by the streams manager128in connection with a single source table, in accordance with some embodiments of the present disclosure. Referring toFIG.10, the streams manager128may process a query1001including a project operation1002and stream S1004on a view. The view can include an example view1005based on a project operation1006, a filter operation1008, and a table scan operation1010on a source (base) table T. The streams manager128may perform a view expansion operation1003and include a project operation1012, a filter operation1014, and a stream S1016on the base table T in place of the stream S1004on the view. The streams manager128may perform a stream expansion operation1007to replace stream S1016with a join operation1018. The join operation1018may be applied between table scan operations1020and1022. Table scan operation1020may be based on scanning the source table T for added files, and table scan operation1022may be based on scanning the source table T for deleted files. In some embodiments, to produce a stream changeset on a view with one table source, the streams manager128can replace the table source with the changeset according to the stream offset during the view expansion. The following is an example of a high-level query plan for a query that may be used in connection withFIG.10: CREATE VIEW V AS SELECT C1, C2 FROM T WHERE C3>2; CREATE STREAM S ON VIEW V; and SELECT C1 FROM S. In some aspects associated with data sharing scenarios and handling views with at least two table inner joins, the streams manager128may use a table to restrict what rows are visible in a shared table. This type of processing may be done using secure views so that a user can only retrieve a specific dataset out of the table. In this case, the table source may not be replaced with a stream since both tables might have changed since the last time the stream was consumed. In this regard, the streams manager128may generate the changeset for a join of two tables by composing a union of joining the previous table data with the deltas and also joining the deltas themselves (e.g., as illustrated inFIG.11). In some aspects, insert-only changes may be considered. In this case, the notation A[T1] denotes the table dataset at transactional time T1, ΔA[T1, T2] denotes the changeset between T1 and T2, and ⋈ denotes a join. The following processing illustrated in Table 1 may be performed by the streams manager128(with A1 and A2 abbreviating A[T1] and A[T2], and similarly for B):

TABLE 1
1. A2 = A1 ∪ ΔA
2. B2 = B1 ∪ ΔB
3. A2 ⋈ B2 = (A1 ∪ ΔA) ⋈ (B1 ∪ ΔB) = (A1 ⋈ B1) ∪ (A1 ⋈ ΔB) ∪ (B1 ⋈ ΔA) ∪ (ΔA ⋈ ΔB)
4. Δ(A2 ⋈ B2) = (A1 ⋈ ΔB) ∪ (B1 ⋈ ΔA) ∪ (ΔA ⋈ ΔB) = (A1 ⋈ ΔB) ∪ (B2 ⋈ ΔA) = (A2 ⋈ ΔB) ∪ (B1 ⋈ ΔA)

In some aspects, the following processing illustrated in Table 2 may be performed by the streams manager128for delete-only changes:

TABLE 2
1. A[T2] = A[T1] − ΔA[T1, T2]
2. B[T2] = B[T1] − ΔB[T1, T2]
3. A[T2] ⋈ B[T2] = (A[T1] − ΔA[T1, T2]) ⋈ (B[T1] − ΔB[T1, T2]) = (A[T1] ⋈ B[T1]) − (A[T1] ⋈ ΔB[T1, T2]) − (B[T1] ⋈ ΔA[T1, T2]) − (ΔA[T1, T2] ⋈ ΔB[T1, T2])
4. Δ(A[T2] ⋈ B[T2])[T1, T2] = (A[T1] ⋈ ΔB[T1, T2]) ∪ (B[T1] ⋈ ΔA[T1, T2]) ∪ (ΔA[T1, T2] ⋈ ΔB[T1, T2]) = (A[T1] ⋈ ΔB[T1, T2]) ∪ (B[T2] ⋈ ΔA[T1, T2]) = (A[T2] ⋈ ΔB[T1, T2]) ∪ (B[T1] ⋈ ΔA[T1, T2])

In some aspects, the view and stream expansion may be changed by replacing the table sources with the appropriate changeset where applicable, and the join may be replaced with a union of two joins.
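The two-term identity in line 4 of Table 1 translates directly into SQL. The following is a hedged sketch assuming hypothetical helpers: a_at_t1 and b_at_t2 stand for the two tables as of the interval endpoints T1 and T2, delta_a and delta_b stand for their insert-only changesets, and k is the join key:

-- Δ(A ⋈ B) = (A1 ⋈ ΔB) ∪ (B2 ⋈ ΔA)
SELECT a.k, a.v, db.w
FROM a_at_t1 a
JOIN delta_b db ON a.k = db.k      -- old A rows joined with new B rows
UNION ALL
SELECT da.k, da.v, b.w
FROM delta_a da
JOIN b_at_t2 b ON da.k = b.k;      -- new A rows joined with B as of T2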
FIG.11is a diagram1100of a view expansion and a stream expansion performed by the streams manager128in connection with multiple source tables, in accordance with some embodiments of the present disclosure. Referring toFIG.11, the streams manager128may process a query1101including a project operation1102and stream S1104on a view V. The view V can include an example view1105based on a project operation1106and a join operation1108. The join operation1108is associated with a table scan operation1110for a source table A and a table scan operation1112for a source table B. The streams manager128may perform a view expansion1103and include a project operation1114and a join operation1116in place of the stream S1104. The join operation1116is associated with a stream1118on source table A and a stream1120on source table B. The streams manager128may perform a stream expansion1107to replace the join operation1116with a union operation1122. The union operation1122may be performed on join operations1124and1126. The join operation1124may be performed on table scan operations1128and1130, and join operation1126may be performed on table scan operations1132and1134.

Example Configurations of the Row-Wise Views Manager228

In some embodiments associated with example syntax, a new property on views may be configured for the row-wise views manager228to enable change-tracking features based on the following syntax in Table 3:

TABLE 3
CREATE [ OR REPLACE ] SECURE VIEW <name>
CHANGE_TRACKING=true
AS <select>

In some embodiments, a new syntax to create streams on change-tracking views may be configured for the row-wise views manager228based on the following syntax in Table 4:

TABLE 4
CREATE [ OR REPLACE ] STREAM [ IF NOT EXISTS ] <name>
[ COPY GRANTS ]
[ COMMENT = '<string_literal>' ]
ON { TABLE <table_name> | VIEW <view_name> }
[ APPEND_ONLY = TRUE | FALSE ]
[ { AT | BEFORE } { TIMESTAMP => <timestamp>
| OFFSET => <time_difference>
| STATEMENT => <id> } ]

In some embodiments, a CHANGES clause can be applied by the row-wise views manager228to change-tracking views, using the following syntax: SELECT . . . FROM <view> CHANGES . . . . In some embodiments, new columns in the output of SHOW STREAMS may be added by the row-wise views manager228, as shown in Table 5:

TABLE 5
source_domain = { TABLE | STREAM | . . . }
source_name: name of the table, stream, etc. that the stream reads from.
Contains the name of the view whose DML updates are tracked by the stream.

In some embodiments associated with supported views, the row-wise views manager228may support views containing the following operations:(a) Filter: “SELECT . . . WHERE <predicate>”.(b) Project: “SELECT c1, c2, . . . FROM v”.(c) Deterministic scalar functions. The following functions may be explicitly disallowed: context functions like CURRENT_TIME (semi-deterministic context functions like CURRENT_SCHEMA may be supported) and system functions like SYSTEM$EXPLAIN_PLAN_JSON (SYSTEM$TYPEOF may be supported).(d) UNION ALL. This operation may be useful for materialization workflows. In some aspects, queries on secure views may block optimizations from moving operators outside of the view into the view. In some embodiments associated with altering view operations, the row-wise views manager228is configured to support altering views as follows.(a) RENAME TO <new_name>. This operation may be used for changing the user-visible name of the view, but not its ID. The stream may continue working as if nothing changed. Future SHOW STREAMS queries may show the new view name.(b) {SET|UNSET} SECURE. This operation may be used for changing the view to be {secure|not-secure}. Future reads from the stream will {apply|not apply} secure constraints to the query.(c) {SET|UNSET} COMMENT.
This operation may be used for associating a string with the view. In some aspects associated with a stream on view creation, the row-wise views manager228can be configured for processing the following syntax formats:(a) allow the VIEW keyword;(b) handle the VIEW table kind in SqlParser::generateParseTreeCreateDDL. The view may be expanded and validated at create time. This can be done in one of the following ways:(b.1) use SqlObjectRef::setDontExpandView and let the validator visit the parse tree;(b.2) manually fetch the view definition and use Statement.compile to validate it;(b.3) both of these methods may include notifying the SqlValidator visitor that it is working within a change query, which can be configured with a boolean field SqlValidator::inChangeQuery. The source table(s) for the view may be extracted using a DefaultSqlNodeVisitor. Using these table(s), processing can tie into the existing code that checks whether change tracking is enabled on them (and tries to enable it, or fails) and looks up and stores the most recent table versions as the current stream state. For secure views, the row-wise views manager228may check that the stream's initial offset is not before the time the view was shared. In aspects associated with the CHANGES clause, the row-wise views manager228may be configured to handle this query type in SqlObjectNameResolver::visitPost(SqlFrom), from which Stream::expandImplementation is invoked. The expansion may be guarded by a check that the FROM reference is a physical table. In some embodiments, this guard may be relaxed to allow views. In some embodiments associated with Show and Describe, the following two columns may be added to SHOW STREAMS in ExecShow: source_kind is the domain of the stream's source (table or view), and source_name is the name of that source. In some aspects, a row containing the source's retention time may be added. In some embodiments associated with stream expansion, the row-wise views manager228may be configured with an extended Stream::expandImplementation to expand the view using SqlObjectNameResolver::expandViewDefinition. Once the view is expanded, the processing code may proceed normally into expandGeneratedViewText, where a stream is expanded. The table stream expansion may be configured with immediate access to the leaf table. In the view stream case, the row-wise views manager228may traverse the view and perform stream expansion on each leaf table. Because of the algebraic properties of row-wise views, the current stream expansion may also be performed on the interior of the view. The row-wise views manager228may be configured to distinguish between the source table, which is the view targeted by the change query, and the offset table(s), which are the leaf tables inside of the expanded view. Several exceptions may be handled by the row-wise views manager228during query expansion: (a) if the stream's source view is replaced, the table lookup will return null just as with a table; and (b) if the stream's source tables are beyond their retention time, the expanded change query will fail during stream expansion when fetching EP files. In some embodiments associated with query validation, query validation may be performed by the row-wise views manager228using an SqlValidator tree visitor. The validation may run over a fully expanded tree. In some aspects, the validation may be modeled on materialized view validation and reuse the same utilities. In some aspects, the same checks may not be reused.
In some aspects, the validation logic may be refactored. In some embodiments, view stream validation may be enabled if the visitor is traversing a descendant of either of the following:(a) a SqlCreateStream node, which indicates the tree being traversed is the stream's source. To inform the visitor of this context, a boolean field may be added to SqlValidator, similar to SqlValidator::inMaterializedViewCreation.(b) a SqlFrom node on a stream or with a changes clause. The queryBlocks deque may be inspected to determine if the current node has a from-with-changes node as an ancestor. In some embodiments, this information may be materialized in the QueryBlockVisitState for performance. In some embodiments, when a visitor is in a change query context, the following constraints may be checked:(a) In visit(SqlObjectRef): for example, no built-in views, materialized views, table functions, etc.(b) In visit(SqlQueryBlock): for example, no window functions; no distinct; no group by; no limit; no order by; the FROM clause has a single, physical source table or . . . , a join tree with only UNION ALL, and no CHANGES clauses.(c) In visit(SqlSessionVariableRef), error.(d) In visitPre(SqlFunction): only deterministic functions and no aggregate functions. In some embodiments associated with offset generalization, a stream's offset may be stored as a pair of a table ID and a timestamp. This configuration may work for single-table views but may be generalized to handle multi-table views. In some embodiments, a class Frontier is introduced to represent a stream's progress abstractly. This class may have the following subtypes: TableFrontier (for table streams, the frontier is a single table and timestamp), ViewFrontier (for view streams, the frontier is a map from table IDs to timestamps, containing exactly the tables in the expanded view), and <external source>Frontier. An extensible frontier type may facilitate extending streams to cover such use cases. In some embodiments, Frontier may be encoded using one or more of: JavaScript Object Notation (JSON), a simple binary format consisting of an array of pairs of integers, or a binary format using another encoding. For storage, the STREAM_OFFSET, STREAM_ADVANCE, and STREAM_TABLE_ID values may be migrated to a single string containing an encoded frontier. Specifically:(a.1) In STREAM_STATE_SLICE, deprecate STREAM_OFFSET and STREAM_TABLE_ID, and add CURRENT_FRONTIER.(a.2) In TXN_STREAM_STATE_SLICE, deprecate STREAM_OFFSET, STREAM_ADVANCE, and TABLE_VERSIONS_COUNT, and add CURRENT_FRONTIER and NEXT_FRONTIER.(a.3) In CHANGELOG_SLICE, deprecate STREAM_OFFSET and STREAM_TABLE_ID, and add CURRENT_FRONTIER.

Example Configurations of the Join Views Manager230

In some embodiments associated with multi-table stream state, the join views manager230may be configured to store a stream's offset in three fields: table ID, offset, and watermark. This model may be used for single-table views but may be generalized to represent multiple offsets. A class Frontier may be introduced to represent a stream's progress abstractly. In some aspects, this class may have a single subtype TableFrontier. For streams on regular tables, external tables, directory tables, and views, the frontier is a map from table IDs to an offset and watermark, containing the tables in the expanded view. In some aspects, an extensible frontier type may facilitate extending streams.
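The need for a multi-offset frontier can be illustrated with a hedged sketch that uses the ON VIEW syntax of Table 4; the view, table, and stream names are hypothetical:

CREATE VIEW order_details AS
SELECT o.order_id, o.amount, c.name
FROM orders o
JOIN customers c ON o.customer_id = c.customer_id;

CREATE STREAM order_details_s ON VIEW order_details;
-- The stream's frontier must map both base tables (orders and
-- customers) to offsets, since either table may change between reads.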
In some aspects associated with frontier advance, when a stream is queried, a new offset may be computed referring to the base table's latest version before the given END( ) time (or the current time if not provided). The new offset may be referred to as the stream's advance. This calculation may be generalized to compute an advanced frontier across all base tables. In some embodiments, Frontier may be encoded using one or more of: JavaScript Object Notation (JSON), a simple binary format consisting of an array of pairs of integers, or a binary format using another encoding. For storage, the STREAM_OFFSET, STREAM_ADVANCE, and STREAM_TABLE_ID values may be migrated to a single string containing an encoded frontier. Specifically:(a.1) In STREAM_STATE_SLICE, add CURRENT_FRONTIER and deprecate STREAM_OFFSET, STREAM_WATERMARK, and STREAM_TABLE_ID.(a.2) In TXN_STREAM_STATE_SLICE, add CURRENT_FRONTIER and NEXT_FRONTIER, and deprecate STREAM_OFFSET, STREAM_WATERMARK, STREAM_ADVANCE, STREAM_ADVANCE_WATERMARK, and TABLE_VERSIONS_COUNT.(a.3) In CHANGELOG_SLICE, add CURRENT_FRONTIER and deprecate STREAM_OFFSET, STREAM_WATERMARK, and STREAM_TABLE_ID. In some aspects associated with stream DDL, DML, and system functions, when creating a stream on a single-table view, the view may be expanded and its base table ID may be extracted. With multi-table views, all such table IDs may be found and stored. For user-provided initial offsets, a Stream state may be created with an initial offset stored in the StreamDPO, which may be replaced by a frontier. In some embodiments, single-object clones may be configured to resolve the current frontier instead of the base table's current version. For schema and database clones, the source view and base tables may need to be resolved using the mapping from old to new in case any of them were cloned. In some embodiments, the SHOW and DESCRIBE STREAM commands have columns indicating whether the stream is currently stale and when it will become stale. This calculation may be updated to find the minimum stale time across all base tables. In some embodiments, the STREAM_HAS_DATA system function returns true whenever the stream's base table has unconsumed table versions. This function may be generalized to return true when any of the base tables have unconsumed table versions (a usage sketch follows at the end of this passage). Stream refresh occurs during calls to STREAM_HAS_DATA, a process that advances the stream's offset and watermark when the stream is empty. This process may be generalized to refresh a frontier over multiple base tables. Because views can filter a large fraction of the data in a table, while STREAM_HAS_DATA only checks whether any base tables have changed, checking whether a stream on a view has data is more prone to false positives than for streams on tables. In some embodiments, query pruning on the whole stream query may be used to reduce the false-positive rate. In some embodiments, the STREAM_GET_TABLE_TIMESTAMP system function fetches a stream's current offset as a timestamp. For streams with multiple base tables, this function may return an error. In aspects associated with GET_DDL( ), the DDL that defines a stream may be independent of the definition of the view it is defined on. In aspects associated with DML and transactions, interactions between streams and the join views manager230may be unaffected because the stream state encapsulates implementation details. However, changes may be made to the code in StreamStateDAO that stores the uncommitted stream state in the stream's transaction workspace. This may support storing frontiers as well as (table id, offset, advance).
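Returning to the STREAM_HAS_DATA behavior discussed above, the following is a minimal polling-and-consumption sketch; the object names and the exact invocation form of the system function are illustrative assumptions:

-- Poll: true when any base table of the view has unconsumed versions
-- (prone to false positives for selective views, as noted above).
SELECT STREAM_HAS_DATA('order_details_s');

-- Consume: reading the stream inside a DML statement transactionally
-- advances the stream's frontier across all base tables.
INSERT INTO order_details_history
SELECT order_id, amount, name FROM order_details_s;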
Furthermore, stream retention extension may be generalized to call EpFileCompactStatus::setStreamOldestTableVersionToKeep for each base table in the stream. In aspects associated with multi-table change queries, streams may be expanded into a change query during name resolution (before type checking and optimization). This expansion creates and expands a built-in view to compute the changes in the stream, and it resolves the micro-partitions that the plan needs to scan. Expanding a built-in view during name resolution may be used in connection with streams that only compute changes in base tables and simple select/project views. In this case, the expansion may be performed similarly to expanding a regular built-in view. However, when the query is more complicated, the entire view may need to be rewritten. Doing this during name resolution may present challenges that may be addressed based on refactoring stream expansion and moving change query rewrites into the optimizer. In some embodiments and in connection with providing a compiler overview, the compiler phases used by the join views manager230can include:(a) Parsing (SqlParser): turns query text into the parse tree representing query syntax.(b) Typechecking (SqlTypechecker): Name resolution (SqlObjectNameResolver::resolve) resolves names in the parse tree into dictionary entities, checking permissions, and expanding views. Type resolution (SqlTypechecker::typeResolution) infers types of expressions, raising type errors upon encountering contradictions. Validation (SqlValidator::validityCheck) ensures the parse tree conforms to constraints other than type checking, such as which operations are supported by MVs or change queries. Parse tree optimizations (SqlTypechecker::optimizeTypecheckStatement) perform an initial battery of rewrites on the parse tree, including expression rewrites (which rewrite scalar expressions in simpler terms), and early rounds of constant folding, filter pushdown, and pruning.(c) Query planning (QueryPlanner::generateQueryPlan): Parse tree translation (SqlQueryBlockTranslator) translates SQL syntactic constructs into a uniform representation of operators accepting inputs and producing outputs. For example, a SELECT expression is split into nodes for scanning, selection, projection, joining, and aggregation. Plan optimization (QueryPlanRewriter) does a more comprehensive set of rewrites on the query plan, including more constant folding, filter pushdown, pruning, aggregation pushdown, and cost-based optimization.(d) Code generation (CodeGenerator::generateSDL): takes the optimized query plan and produces an SDL program for XP to execute. In some embodiments associated with stream expansion refactoring, moving stream expansion into the query plan optimizer may provide the following processing efficiencies:(a) Simpler representation. The parse tree directly represents SQL constructs in a class called SqlQueryBlock. The query plan represents the same logic more uniformly, in terms of operators with inputs and outputs. Rewrites are easier to implement on the query plan because of that uniformity.(b) Code reuse. The plan rewrite rules framework may be used to implement many optimizer rewriting rules.(c) Flexibility. Rewriting in the optimizer may provide control over the order in which the rewrites are invoked, which allows for improved integration with existing optimizations like filter pushdown, pruning, and join permutation.
For the refactoring, the following configurations may be provided for the join views manager230:(a) In name resolution, rewrite a stream into a view with a CHANGES clause (may be called the change view). The change view is created by: selecting all columns from the source; defining the change-metadata columns (ROW_ID, ACTION, ISUPDATE) as system functions to resolve later; and reading from the stream's source. Create the change view's object ref based on: setting the view query block to the change view; setting the AT and END times to the stream's offset and advance frontiers; and setting the CHANGES clause to the stream type. The stream state may be put into the DmlTargetCollector.(b) During validation, only supported operations may be allowed inside the change view.(c) During parse tree optimizations, optimizations may be prevented from doing rewrites incompatible with the change view. For example, a filter containing a constant subquery may not be pushed down into the change view. Also, column expressions with change-metadata columns may not be moved from the query block with the CHANGES clause. The change view can be identified by the presence of the CHANGES clause in the SqlObjectRef.(d) During parse tree translation, the change view may be transformed to have an additional QueryPlanNode that indicates that the subtree beneath it should be rewritten as a change query. A new class, QueryPlanNodeChanges, may be created, which contains the change type and frontiers to resolve. The change type can be MIN_DELTA, APPEND_ONLY, and INSERT ONLY, with an additional internal type NON_MIN_DELTA, which can produce an INSERT and DELETE for the same row. The column expressions containing change-metadata system functions may be moved from the query block into the plan node, which adds these columns to its output row vector.(e) During plan optimization, incompatible rewrites may be prevented from corrupting the change view. In some embodiments associated with View Evolution/AT(Stream=> . . . ), when a view changes, or when a Stream is used in an AT clause, the set of tables in the current state may not match the set of tables being requested. In these cases, a table offset for the new tables may be selected. To solve this issue, a transactional timestamp may be stored in the frontier each time the stream advances. Then, new tables can use the most recent version older than that timestamp. Additionally, for Streams created with SHOW_INITIAL_ROWS=true, the state may be remembered and initial rows may be included for new tables added by view evolution. In some embodiments associated with change rewrites, the join views manager230is configured to perform change rewrites to push the QueryPlanNodeChanges node down through the plan tree until there are no more Changes nodes in the tree. The join views manager230may be configured to rewrite queries that request changes into queries that compute the changes. The algorithm proceeds by applying rewrite rules that pattern match to the plan beneath the Changes node, as shown inFIG.12. FIG.12illustrates a diagram1200of a change rewrite in connection with a union all query, in accordance with some embodiments of the present disclosure. Referring toFIG.12, query1202is transformed into query1210via a change rewrite operation1211. Node1203represents a join of two sub-queries, where one of the sub-queries is represented by the changes node1204. The changes node1204is associated with a union all node1206for sub-queries1208.
During the change rewrite operation1211, all nodes/operations below the changes node1204may be rewritten to produce changes. More specifically, the transformed query1210includes the union all node1206moved up, and the changes node1204is now duplicated as changes nodes1212into the branches for sub-queries1208. In some embodiments, the join views manager230may be configured with the following rewrite rules.(a) Quick overview of the notation: Δ represents a Changes node. Δmin, Δapp, Δins, and Δnon specify the change type. σ+ and σ− are operators that select insertions and deletions. T+ and T− are modified scan nodes that produce rows that may have been inserted or deleted during the change interval. πμ is a projection that adds change-metadata columns. ⋈ denotes a join, and ∪ denotes a union all.(b) If the change type is MIN_DELTA, a full outer join is produced to eliminate rows that were both inserted and deleted, with a NON_MIN_DELTA node below: Δmin(Q) ⇒ ξ(Δnon(Q)), where ξ is a table function that consolidates redundant deltas and determines whether a change is an update. This consolidation can be optimized away in some cases when all base tables have only had insertions.(c) If the change node is directly above a scan, then the rewrite is similar to a stream expansion. Code in MetaPhysicalCache may be reused to generate scans with the correct pruning information. When the change type is NON_MIN_DELTA, a multiset of rows is produced that may have been inserted or deleted from the base table during the change time interval. This may include rows that cancel out. For example, Δnon(T) ⇒ πμ(T+ ∪ T−). When the change type is APPEND_ONLY, the multiset of rows that were inserted during the change time interval may be produced. For example, Δapp(T) ⇒ σ+(πμ(T+)). When the change type is INSERT ONLY, a multiset of rows that may have been inserted from the base table during the change time interval may be produced. In some aspects, this mode is intended for external tables. For example, Δins(T) ⇒ πμ(T+).(d) If the change node is above a selection with no subqueries in the column list, the operators are commuted. For example, Δ(σφ(Q)) ⇒ σφ(Δ(Q)).(e) If the change node is above a projection with no subqueries in the column list, the operators are commuted after adding the change-metadata columns: Δ(πC(Q)) ⇒ πC×μ(Δ(Q)).(f) If the change node is above a union all, the change node is distributed among the branches: Δ(Q ∪ . . . ∪ R) = ΔQ ∪ . . . ∪ ΔR.(g) If the change node is above an inner join, join distributivity is applied to push it down as provided in Table 6 below.

TABLE 6
Δ(Q ⋈ R) = (Q0 ∪ ΔQ) ⋈ (R0 ∪ ΔR) − Q0 ⋈ R0
= ((Q0 ⋈ R0) ∪ (ΔQ ⋈ R0) ∪ (Q0 ⋈ ΔR) ∪ (ΔQ ⋈ ΔR)) − (Q0 ⋈ R0)
⇒ (ΔQ ⋈ R0) ∪ (Q ⋈ ΔR),
where Q0 and R0 are Q and R at the start of the change time interval.

(h) Otherwise, the change query may not be rewritable, and an error is produced. The logic to add change-metadata columns to plan node column lists, which is only sketched above using the πμ operator, is as follows:(h.1) The change node conceptually includes a projection that adds the change-metadata columns to its output column list.(h.2) It keeps a list of the change-metadata columns that are required by operations above it in the query plan. When first created, these are the standard user-visible change-metadata columns: ACTION, ISUPDATE, and ROW_ID.(h.3) Whenever the change node is pushed down through an operator, its change-metadata columns are added to that operator's column list, defined in different ways depending on the rewrite.
The MIN_DELTA rewrite defines ISUPDATE using the row-comparison in the full-outer join and removes it from the change node's set of change-metadata columns. Join rewrites define ROW_ID by combining the ROW_IDs of the two branches that joined. The combining function may be commutative (e.g., addition) for join-reordering optimizations to preserve IDs. To deal with tables that are joined or unioned with themselves, a salt may be inserted into the row-id computation for tables that occur more than once in a plan. Join rewrites define ACTION as the product of the multiplicities they represent: INSERT=1, DELETE=−1. In this regard, two inserts remain an insert, an insert and a delete become a delete, and two deletes become an insert. Table scans may define visible change-metadata columns in terms of physical change-tracking columns. Other operators may pass through ACTION and ROW_ID transparently.(i) In some embodiments, rewrite rules may be implemented for other operators, which may fit into the structure disclosed herein. In some embodiments, streams on materialized views may be supported for such use cases. In aspects associated with consolidating redundant changes, in existing streams, redundant changes may be eliminated using a full outer join on the row ID to find inserts and deletes that cancel out. However, when there is more than one insert or more than one delete for a row ID, this computes an incorrect delta. For example, consider what happens if a 0 is inserted and then incremented by 1. The changes to the table can be represented as: {+0, −0, +1}. This delta can be consolidated into {+1}. If a full-outer join is used that pairs up inserts and deletes, the result is {(+0, −0), (+1, −0)} ⇒ {+1, −0}, where the first pair is filtered out because it is unchanged. Since this processing is not correct, for streams on tables and streams on row-wise views, a single row may be considered to not have more than one insert or more than one delete. But for streams on joins, this assumption may no longer hold. FIG.13is a flow diagram illustrating operations of a database system in performing a method1300for processing a stream object on a view, in accordance with some embodiments of the present disclosure. Method1300may be embodied in computer-readable instructions for execution by one or more hardware components (e.g., one or more processors) such that the operations of the method1300may be performed by components of network-based database system102, such as components of the compute service manager108(e.g., the streams manager128) and/or the execution platform110(e.g., which may be implemented as machine1400ofFIG.14). Accordingly, method1300is described below, by way of example with reference thereto. However, it shall be appreciated that method1300may be deployed on various other hardware configurations and is not intended to be limited to deployment within the network-based database system102. At operation1302, a first stream object is detected on a view, where the view may include a query associated with a source table. For example, with reference toFIG.10, the streams manager128detects a stream object1004on a view1005. At operation1304, a syntax tree of the query is determined based on a definition of the view (e.g., based on the project-filter-table scan operations associated with the view1005). At operation1306, the view is expanded based on replacing the first stream object with the syntax tree, the syntax tree comprising a second stream object on the source table.
For example, the view expansion operation1003is performed by replacing stream object1004with the syntax tree of view1005. The syntax tree of view1005includes a second stream object (e.g.,1016) on the source table T. At operation1308, stream expansion of the second stream object is performed based on computing changes on the source table. For example, a stream expansion operation1007is performed based on computing the changes on the source table T (e.g., based on applying the join operation1018). FIG.14illustrates a diagrammatic representation of a machine1400in the form of a computer system within which a set of instructions may be executed for causing the machine1400to perform any one or more of the methodologies discussed herein, according to an example embodiment. Specifically,FIG.14shows a diagrammatic representation of the machine1400in the example form of a computer system, within which instructions1416(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine1400to perform any one or more of the methodologies discussed herein may be executed. For example, instructions1416may cause machine1400to execute any one or more operations of method1300(or any other technique discussed herein, for example in connection withFIG.4-FIG.13). As another example, instructions1416may cause machine1400to implement one or more portions of the functionalities discussed herein. In this way, instructions1416may transform a general, non-programmed machine into a particular machine1400(e.g., the compute service manager108or a node in the execution platform110) that is specially configured to carry out any one of the described and illustrated functions in the manner described herein. In yet another embodiment, instructions1416may configure the compute service manager108and/or a node in the execution platform110to carry out any one of the described and illustrated functions in the manner described herein. In alternative embodiments, the machine1400operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine1400may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine1400may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a smartphone, a mobile device, a network router, a network switch, a network bridge, or any machine capable of executing the instructions1416, sequentially or otherwise, that specify actions to be taken by the machine1400. Further, while only a single machine1400is illustrated, the term “machine” shall also be taken to include a collection of machines1400that individually or jointly execute the instructions1416to perform any one or more of the methodologies discussed herein. Machine1400includes processors1410, memory1430, and input/output (I/O) components1450configured to communicate with each other such as via a bus1402. 
In some example embodiments, the processors1410(e.g., a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a radio-frequency integrated circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor1412and a processor1414that may execute the instructions1416. The term “processor” is intended to include multi-core processors1410that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions1416contemporaneously. AlthoughFIG.14shows multiple processors1410, the machine1400may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The memory1430may include a main memory1432, a static memory1434, and a storage unit1436, all accessible to the processors1410such as via the bus1402. The main memory1432, the static memory1434, and the storage unit1436store the instructions1416embodying any one or more of the methodologies or functions described herein. The instructions1416may also reside, completely or partially, within the main memory1432, within the static memory1434, within machine storage medium1438of the storage unit1436, within at least one of the processors1410(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine1400. The I/O components1450include components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components1450that are included in a particular machine1400will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components1450may include many other components that are not shown inFIG.14. The I/O components1450are grouped according to functionality merely for simplifying the following discussion and the grouping is in no way limiting. In various example embodiments, the I/O components1450may include output components1452and input components1454. The output components1452may include visual components (e.g., a display such as a plasma display panel (PDP), a light-emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), other signal generators, and so forth. The input components1454may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures or other tactile input components), audio input components (e.g., a microphone), and the like. Communication may be implemented using a wide variety of technologies. 
The I/O components1450may include communication components1464operable to couple the machine1400to a network1480or devices1470via a coupling1482and a coupling1472, respectively. For example, the communication components1464may include a network interface component or another suitable device to interface with the network1480. In further examples, the communication components1464may include wired communication components, wireless communication components, cellular communication components, and other communication components to provide communication via other modalities. The device1470may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a universal serial bus (USB)). For example, as noted above, machine1400may correspond to any one of the compute service manager108or the execution platform110, and the devices1470may include the client device114or any other computing device described herein as being in communication with the network-based database system102or the cloud storage platform104. The various memories (e.g.,1430,1432,1434, and/or memory of the processor(s)1410and/or the storage unit1436) may store one or more sets of instructions1416and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. These instructions1416, when executed by the processor(s)1410, cause various operations to implement the disclosed embodiments. As used herein, the terms “machine-storage medium,” “device-storage medium,” and “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms refer to single or multiple storage devices and/or media (e.g., a centralized or distributed database, and/or associated caches and servers) that store executable instructions and/or data. The terms shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media, and/or device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), field-programmable gate arrays (FPGAs), and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium” discussed below. In various example embodiments, one or more portions of the network1480may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local-area network (LAN), a wireless LAN (WLAN), a wide-area network (WAN), a wireless WAN (WWAN), a metropolitan-area network (MAN), the Internet, a portion of the Internet, a portion of the public switched telephone network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. 
For example, the network1480or a portion of the network1480may include a wireless or cellular network, and the coupling1482may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling1482may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth-generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High-Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology. The instructions1416may be transmitted or received over the network1480using a transmission medium via a network interface device (e.g., a network interface component included in the communication components1464) and utilizing any one of several well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, instructions1416may be transmitted or received using a transmission medium via the coupling1472(e.g., a peer-to-peer coupling) to the device1470. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure. The terms “transmission medium” and “signal medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions1416for execution by the machine1400, and include digital or analog communications signals or other intangible media to facilitate communication of such software. Hence, the terms “transmission medium” and “signal medium” shall be taken to include any form of a modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The terms “machine-readable medium,” “computer-readable medium,” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. The terms are defined to include both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Similarly, the methods described herein may be at least partially processor-implemented. For example, at least some of the operations of method1300may be performed by one or more processors. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine but also deployed across several machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment, or a server farm), while in other embodiments the processors may be distributed across several locations. 
Described implementations of the subject matter can include one or more features, alone or in combination as illustrated below by way of examples. Example 1 is a system comprising: at least one hardware processor; and at least one memory storing instructions that cause the at least one hardware processor to perform operations comprising: detecting a first stream object on a view, the view comprising a query associated with a source table; determining a syntax tree of the query based on a definition of the view; expanding the view based on replacing the first stream object with the syntax tree, the syntax tree comprising a second stream object on the source table; and performing stream expansion of the second stream object based on computing changes on the source table. In Example 2, the subject matter of Example 1 includes subject matter where during performing the stream expansion, the instructions further cause the at least one hardware processor to perform operations comprising: converting the second stream object on the source table into a second query; and performing the second query to compute the changes on the source table. In Example 3, the subject matter of Example 2 includes subject matter where during performing the second query, the instructions further cause the at least one hardware processor to perform operations comprising: determining added and deleted micro-partitions in the source table. In Example 4, the subject matter of Example 3 includes subject matter where during performing the second query, the instructions further cause the at least one hardware processor to perform operations comprising: joining rows from the added and deleted micro-partitions to compute the changes in the source table. In Example 5, the subject matter of Examples 1-4 includes subject matter where the query is associated with a plurality of source tables, the plurality of source tables comprising the source table, and at least a second source table. In Example 6, the subject matter of Example 5 includes subject matter where the instructions further cause the at least one hardware processor to perform operations comprising: expanding the view based on replacing the first stream object on the view with the syntax tree, the syntax tree comprising the second stream object on the source table and at least a third stream object on the at least a second source table. In Example 7, the subject matter of Example 6 includes subject matter where the instructions further cause the at least one hardware processor to perform operations comprising: converting the second stream object on the source table into a second query; and converting the third stream object on the at least a second source table into a third query. In Example 8, the subject matter of Example 7 includes subject matter where the instructions further cause the at least one hardware processor to perform operations comprising: performing the stream expansion of the second stream object based on performing the second query to compute the changes on the source table; and performing stream expansion of the third stream object based on performing the third query to compute changes on the at least a second source table. 
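To make the flow recited in Examples 1-4 concrete, the following minimal sketch walks a syntax tree, performs view expansion of a stream on a view, and then performs stream expansion on the resulting table streams. All of the names here (TableRef, View, Stream, ChangeScan, push_stream) are hypothetical illustrations rather than the API of any actual engine, and the recursion into joins, filters, and projections is elided.

```python
from dataclasses import dataclass

@dataclass
class TableRef:
    """Leaf node: a reference to a source table."""
    name: str

@dataclass
class View:
    """A named view together with the syntax tree of its defining query."""
    name: str
    body: object

@dataclass
class Stream:
    """A stream object on a view or table, with the offset it last consumed."""
    source: object
    offset: int

@dataclass
class ChangeScan:
    """Query node computing a table's changes by joining rows from the
    micro-partitions added and deleted since `offset` (Examples 3 and 4)."""
    table: str
    offset: int

def expand(node):
    """Expand stream objects in a syntax tree (Examples 1 and 2)."""
    if isinstance(node, Stream) and isinstance(node.source, View):
        # View expansion: replace the stream-on-view with the view's syntax
        # tree, pushing the stream down onto each source-table reference.
        return expand(push_stream(node.source.body, node.offset))
    if isinstance(node, Stream) and isinstance(node.source, TableRef):
        # Stream expansion: convert the stream on the table into a query
        # that computes the table's changes.
        return ChangeScan(node.source.name, node.offset)
    return node

def push_stream(node, offset):
    if isinstance(node, TableRef):
        return Stream(node, offset)
    return node  # a full implementation would recurse into joins, filters, etc.

# A stream on view V (defined over source table T) expands to a change scan on T.
print(expand(Stream(View("V", TableRef("T")), offset=42)))
# -> ChangeScan(table='T', offset=42)
```

The essential point of the sketch is that expansion is purely a rewrite of the query's syntax tree: the stream-on-view disappears, and what remains are ordinary change-computing queries over the source tables.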
In Example 9, the subject matter of Example 8 includes subject matter where the syntax tree comprises a join operation between the second stream object and the at least a third stream object, and wherein the instructions further cause the at least one hardware processor to perform operations comprising: replacing the join operation associated with the expanding of the view with a union of two join operations associated with the stream expansion of the second stream object and the stream expansion of the third stream object. In Example 10, the subject matter of Examples 8-9 includes subject matter where the second query or the third query comprises a change node requesting a change operation, and the instructions further cause the at least one hardware processor to perform operations comprising: during the stream expansion of the second stream object or the stream expansion of the third stream object: rewriting the second query or the third query to compute changes associated with the change operation. Example 11 is a method comprising: detecting a first stream object on a view, the view comprising a query associated with a source table; determining a syntax tree of the query based on a definition of the view; expanding the view based on replacing the first stream object with the syntax tree, the syntax tree comprising a second stream object on the source table; and performing stream expansion of the second stream object based on computing changes on the source table. In Example 12, the subject matter of Example 11 includes subject matter where performing the stream expansion further comprises: converting the second stream object on the source table into a second query; and performing the second query to compute the changes on the source table. In Example 13, the subject matter of Example 12 includes subject matter where performing the second query further comprises: determining added and deleted micro-partitions in the source table. In Example 14, the subject matter of Example 13 includes subject matter where performing the second query further comprises: joining rows from the added and deleted micro-partitions to compute the changes in the source table. In Example 15, the subject matter of Examples 11-14 includes subject matter where the query is associated with a plurality of source tables, the plurality of source tables comprising the source table, and at least a second source table. In Example 16, the subject matter of Example 15 includes, expanding the view based on replacing the first stream object on the view with the syntax tree, the syntax tree comprising the second stream object on the source table, and at least a third stream object on the at least a second source table. In Example 17, the subject matter of Example 16 includes, converting the second stream object on the source table into a second query; and converting the third stream object on the at least a second source table into a third query. In Example 18, the subject matter of Example 17 includes, performing the stream expansion of the second stream object based on performing the second query to compute the changes on the source table; and performing stream expansion of the third stream object based on performing the third query to compute changes on the at least a second source table. 
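Example 9's replacement of the view's join with a union of two join operations matches a standard change-propagation identity for joins. One common formulation is sketched below in LaTeX; which side sees the old table version and which sees the new one is an assumption made for illustration, since the examples specify only that the single join becomes a union of two joins.

```latex
% A, B   : prior versions of the two source tables
% A', B' : current versions, with deltas \Delta A = A' - A and \Delta B = B' - B
% (deltas carry signed inserted/deleted rows, so deletions are handled uniformly)
\Delta(A \bowtie B) \;=\; (\Delta A \bowtie B') \;\cup\; (A \bowtie \Delta B)
```

Each join on the right-hand side corresponds to the stream expansion of one of the two stream objects, and their union yields the change set of the joined view.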
In Example 19, the subject matter of Example 18 includes subject matter where the syntax tree comprises a join operation between the second stream object and the at least a third stream object, and wherein the method further comprises: replacing the join operation associated with the expanding of the view with a union of two join operations associated with the stream expansion of the second stream object and the stream expansion of the third stream object. In Example 20, the subject matter of Examples 18-19 includes subject matter where the second query or the third query comprises a change node requesting a change operation, and wherein the method further comprises: during the stream expansion of the second stream object or the stream expansion of the third stream object: rewriting the second query or the third query to compute changes associated with the change operation. Example 21 is a computer-storage medium comprising instructions that, when executed by one or more processors of a machine, configure the machine to perform operations comprising: detecting a first stream object on a view, the view comprising a query associated with a source table; determining a syntax tree of the query based on a definition of the view; expanding the view based on replacing the first stream object with the syntax tree, the syntax tree comprising a second stream object on the source table; and performing stream expansion of the second stream object based on computing changes on the source table. In Example 22, the subject matter of Example 21 includes subject matter where the operations for performing the stream expansion further comprise: converting the second stream object on the source table into a second query; and performing the second query to compute the changes on the source table. In Example 23, the subject matter of Example 22 includes subject matter where the operations for performing the second query further comprise: determining added and deleted micro-partitions in the source table. In Example 24, the subject matter of Example 23 includes subject matter where the operations for performing the second query further comprise: joining rows from the added and deleted micro-partitions to compute the changes in the source table. In Example 25, the subject matter of Examples 21-24 includes subject matter where the query is associated with a plurality of source tables, the plurality of source tables comprising the source table, and at least a second source table. In Example 26, the subject matter of Example 25 includes, the operations further comprising: expanding the view based on replacing the first stream object on the view with the syntax tree, the syntax tree comprising the second stream object on the source table, and at least a third stream object on the at least a second source table. In Example 27, the subject matter of Example 26 includes, the operations further comprising: converting the second stream object on the source table into a second query; and converting the third stream object on the at least a second source table into a third query. In Example 28, the subject matter of Example 27 includes, the operations further comprising: performing the stream expansion of the second stream object based on performing the second query to compute the changes on the source table; and performing stream expansion of the third stream object based on performing the third query to compute changes on the at least a second source table. 
In Example 29, the subject matter of Example 28 includes subject matter where the syntax tree comprises a join operation between the second stream object and the at least a third stream object, and wherein the operations further comprise: replacing the join operation associated with the expanding of the view with a union of two join operations associated with the stream expansion of the second stream object and the stream expansion of the third stream object. In Example 30, the subject matter of Examples 28-29 includes subject matter where the second query or the third query comprises a change node requesting a change operation, and wherein the operations further comprise: during the stream expansion of the second stream object or the stream expansion of the third stream object: rewriting the second query or the third query to compute changes associated with the change operation. Example 31 is at least one machine-readable medium including instructions that, when executed by processing circuitry, cause the processing circuitry to perform operations to implement any of Examples 1-30. Example 32 is an apparatus comprising means to implement any of Examples 1-30. Example 33 is a system to implement any of Examples 1-30. Example 34 is a method to implement any of Examples 1-30. Although the embodiments of the present disclosure have been described concerning specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader scope of the inventive subject matter. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof show, by way of illustration, and not of limitation, specific embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. Such embodiments of the inventive subject matter may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is disclosed. Thus, although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent, to those of skill in the art, upon reviewing the above description. 
In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim.
11860851
SUMMARY
Various aspects of this disclosure provide systems and methods for enhancing operation of entities in a network of moving things. As non-limiting examples, various aspects of this disclosure provide systems and methods to guarantee data integrity when building data analytics in a network of moving things.
DETAILED DESCRIPTION OF VARIOUS ASPECTS OF THE DISCLOSURE
As utilized herein, the terms “circuits” and “circuitry” refer to physical electronic components (i.e., hardware) and any software and/or firmware (“code”) that may configure the hardware, be executed by the hardware, and/or otherwise be associated with the hardware. As used herein, for example, a particular processor and memory (e.g., a volatile or non-volatile memory device, a general computer-readable medium, etc.) may comprise a first “circuit” when executing a first one or more lines of code and may comprise a second “circuit” when executing a second one or more lines of code. Additionally, a circuit may comprise analog and/or digital circuitry. Such circuitry may, for example, operate on analog and/or digital signals. It should be understood that a circuit may be in a single device or chip, on a single motherboard, in a single chassis, in a plurality of enclosures at a single geographical location, in a plurality of enclosures distributed over a plurality of geographical locations, etc. As utilized herein, circuitry is “operable” to perform a function whenever the circuitry comprises the necessary hardware and code (if any is necessary) to perform the function, regardless of whether performance of the function is disabled, or not enabled (e.g., by a user-configurable setting, factory setting or trim, etc.). As utilized herein, “and/or” means any one or more of the items in the list joined by “and/or”. As an example, “x and/or y” means any element of the three-element set {(x), (y), (x, y)}. That is, “x and/or y” means “one or both of x and y.” As another example, “x, y, and/or z” means any element of the seven-element set {(x), (y), (z), (x, y), (x, z), (y, z), (x, y, z)}. That is, “x, y, and/or z” means “one or more of x, y, and z.” As utilized herein, the terms “e.g.,” “for example,” “exemplary,” and the like set off lists of one or more non-limiting examples, instances, or illustrations. The terminology used herein is for the purpose of describing particular examples only and is not intended to be limiting of the disclosure. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “includes,” “comprising,” “including,” “has,” “have,” “having,” and the like when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. Thus, for example, a first element, a first component or a first section discussed below could be termed a second element, a second component or a second section without departing from the teachings of the present disclosure.
Similarly, various spatial terms, such as “upper,” “lower,” “side,” and the like, may be used in distinguishing one element from another element in a relative manner. It should be understood, however, that components may be oriented in different manners, for example an electronic device may be turned sideways so that its “top” surface is facing horizontally and its “side” surface is facing vertically, without departing from the teachings of the present disclosure. With the proliferation of the mobile and/or static things (e.g., devices, machines, people, etc.) and logistics for such things to become connected to each other (e.g., in the contexts of smart logistics, transportation, environmental sensing, etc.), a platform that is, for example, always-on, robust, scalable, and secure, and that is capable of providing connectivity, services, and Internet access to such things (or objects) anywhere and anytime is desirable. Efficient power utilization within the various components of such a system is also desirable. Accordingly, various aspects of the present disclosure provide a fully-operable, always-on, responsive, robust, scalable, secure platform/system/architecture to provide connectivity, services and Internet access to all mobile things and/or static things (e.g., devices, machines, people, access points, end user devices, sensors, etc.) anywhere and anytime, while operating in an energy-efficient manner. Various aspects of the present disclosure provide a platform that is flexibly configurable and adaptable to the various requirements, features, and needs of different environments, where each environment may be characterized by a respective level of mobility and density of mobile and/or static things, and the number and/or types of access to those things. Characteristics of various environments may, for example, include high mobility of nodes (e.g., causing contacts or connections to be volatile), high number of neighbors, high number of connected mobile users, mobile access points, availability of multiple networks and technologies (e.g., sometimes within a same area), etc. For example, the mode of operation of the platform may be flexibly adapted from environment to environment, based on each environment's respective requirements and needs, which may be different from other environments. Additionally for example, the platform may be flexibly optimized (e.g., at design/installation time and/or in real-time) for different purposes (e.g., to reduce the latency, increase throughput, reduce power consumption, load balance, increase reliability, make more robust with regard to failures or other disturbances, etc.), for example based on the content, service or data that the platform provides or handles within a particular environment. In accordance with various aspects of the present disclosure, many control and management services (e.g., mobility, security, routing, etc.) are provided on top of the platform (e.g., directly, using control overlays, using containers, etc.), such services being compatible with the services currently deployed on top of the Internet or other communication network(s). The communication network (or platform), in whole or in part, may for example be operated in public and/or private modes of operation, depending on the use case (e.g., public Internet access, municipal environment sensing, fleet operation, etc.).
Additionally for example, in an implementation in which various network components are mobile, the transportation and/or signal control mechanisms may be adapted to serve the needs of the particular implementation. Also for example, wireless transmission power and/or rate may be adapted (e.g., to mitigate interference, to reduce power consumption, to extend the life of network components, etc.). Various example implementations of a platform, in accordance with various aspects of the present disclosure, are capable of connecting different subsystems, even when various other subsystems that may normally be utilized are unavailable. For example, the platform may comprise various built-in redundancies and fail-recovery mechanisms. For example, the platform may comprise a self-healing capability, self-configuration capability, self-adaptation capability, etc. The protocols and functions of the platform may, for example, be prepared to be autonomously and smoothly configured and adapted to the requirements and features of different environments characterized by different levels of mobility and density of things (or objects), and the number/types of access to those things. For example, various aspects of the platform may gather context parameters that can influence any or all decisions. Such parameters may, for example, be derived locally, gathered from a neighborhood, fixed APs, the Cloud, etc. Various aspects of the platform may also, for example, ask for historical information to feed any of the decisions, where such information can be derived from historical data, from surveys, from simulators, etc. Various aspects of the platform may additionally, for example, probe or monitor decisions made throughout the network, for example to evaluate the network and/or the decisions themselves in real-time. Various aspects of the platform may further, for example, enforce the decisions in the network (e.g., after evaluating the probing results). Various aspects of the platform may, for example, establish thresholds to avoid any decision that is to be constantly or repeatedly performed without any significant advantage (e.g., technology change, certificate change, IP change, etc.). Various aspects of the platform may also, for example, learn locally (e.g., with the decisions performed) and dynamically update the decisions. In addition to (or instead of) failure robustness, a platform may utilize multiple connections (or pathways) that exist between distinct sub-systems or elements within the same sub-system, to increase the robustness and/or load-balancing of the system. The following discussion will present examples of the functionality performed by various example subsystems of the communication network. It should be understood that the example functionality discussed herein need not be performed by the particular example subsystem or by a single subsystem. For example, the subsystems presented herein may interact with each other, and data or control services may be deployed either in a centralized way or with their functionalities distributed among the different subsystems, for example leveraging the cooperation between the elements of each subsystem. Various aspects of the present disclosure provide a communication network (e.g., a city-wide vehicular network, a shipping port-sized vehicular network, a campus-wide vehicular network, etc.) that utilizes vehicles (e.g., automobiles, buses, trucks, boats, forklifts, etc.) as Wi-Fi hotspots.
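The gather/decide/probe/enforce/learn cycle described above can be pictured as a small control loop. The sketch below is purely illustrative: the context sources, candidate decisions, scoring scheme, and threshold are invented for the example and do not come from the disclosure.

```python
import random

# Learned value of each candidate decision (e.g., which access technology to use).
scores = {"wifi": 0.5, "cellular": 0.5}
THRESHOLD = 0.05  # skip changes that offer no significant advantage

def gather_context():
    # Context parameters may be derived locally or gathered from neighbors,
    # Fixed APs, or the Cloud (stubbed here with random values).
    return {"neighbors": random.randint(0, 20), "load": random.random()}

def probe(decision, context):
    # Monitor the enforced decision's outcome in the network (stubbed).
    return random.random()

def loop_once(current):
    context = gather_context()
    best = max(scores, key=scores.get)  # decide, fed by learned history
    if best != current and scores[best] - scores[current] < THRESHOLD:
        best = current                  # below threshold: keep the current decision
    outcome = probe(best, context)      # evaluate the decision in real time
    scores[best] = 0.9 * scores[best] + 0.1 * outcome  # learn locally and update
    return best

decision = "wifi"
for _ in range(10):
    decision = loop_once(decision)
```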
Note that Wi-Fi is generally used throughout this discussion as an example, but the scope of various aspects of this disclosure is not limited thereto. For example, other wireless LAN technologies, PAN technologies, MAN technologies, etc., may be utilized. Such utilization may, for example, provide cost-effective ways to gather substantial amounts of urban data, and provide for the efficient offloading of traffic from congested cellular networks (or other networks). In controlled areas (e.g., ports, harbors, etc.) with many vehicles, a communication network in accordance with various aspects of this disclosure may expand the wireless coverage of existing enterprise Wi-Fi networks, for example providing for real-time communication with vehicle drivers (e.g., human, computer-controlled, etc.) and other mobile employees without the need for SIM cards or cellular (or other network) data plans. Vehicles may have many advantageous characteristics that make them useful as Wi-Fi (or general wireless) hotspots. For example, vehicles generally have at least one battery; vehicles are generally densely spread over the city at street level and/or are able to establish many contacts with each other in a controlled space; and vehicles can communicate with 10× the range of normal Wi-Fi in the 5.9 GHz frequency band, reserved for intelligent transportation systems in the EU, the U.S., and elsewhere. Note that the scope of this disclosure is not limited to such 5.9 GHz wireless communication. Further, vehicles are able to effectively expand their coverage area into a swath over a period of time, enabling a single vehicle access point to interact with substantially more data sources over that period. In accordance with various aspects of the present disclosure, an affordable multi-network on-board unit (OBU) is presented. Note that the OBU may also be referred to herein as a mobile access point, Mobile AP, MAP, etc. The OBU may, for example, comprise a plurality of networking interfaces (e.g., Wi-Fi, 802.11p, 4G, Bluetooth, UWB, etc.). The OBU may, for example, be readily installed in or on private and/or public vehicles (e.g., individual user vehicles, vehicles of private fleets, vehicles of public fleets, etc.). The OBU may, for example, be installed in transportation fleets, waste management fleets, law enforcement fleets, emergency services, road maintenance fleets, taxi fleets, aircraft fleets, etc. The OBU may, for example, be installed in or on a vehicle or other structure with free mobility or relatively limited mobility. The OBU may also, for example, be carried by a person or service animal, mounted to a bicycle, mounted to a moving machine in general, mounted to a container, etc. The OBUs may, for example, operate to connect passing vehicles to the wired infrastructure of one or more network providers, telecom operators, etc. In accordance with the architecture, hardware, and software functionality discussed herein, vehicles and fleets can be connected not just to the cellular networks (or other wide area or metropolitan area networks, etc.) and existing Wi-Fi hotspots spread over a city or a controlled space, but also to other vehicles (e.g., utilizing multi-hop communications to a wired infrastructure, single or multi-hop peer-to-peer vehicle communication, etc.).
The vehicles and/or fleets may, for example, form an overall mesh of communication links, for example including the OBUs and also fixed Access Points (APs) connected to the wired infrastructure (e.g., a local infrastructure, etc.). Note that OBUs herein may also be referred to as “Mobile APs,” “mobile hotspots,” “MAPs,” etc. Also note that fixed access points may also be referred to herein as Road Side Units (RSUs), Fixed APs, FAPs, etc. In an example implementation, the OBUs may communicate with the Fixed APs utilizing a relatively long-range protocol (e.g., 802.11p, etc.), and the Fixed APs may, in turn, be hard wired to the wired infrastructure (e.g., via cable, tethered optical link, etc.). Note that Fixed APs may also, or alternatively, be coupled to the infrastructure via wireless link (e.g., 802.11p, etc.). Additionally, clients or user devices may communicate with the OBUs using one or more relatively short-range protocols (e.g., Wi-Fi, Bluetooth, UWB, etc.). The OBUs, for example having a longer effective wireless communication range than typical Wi-Fi access points or other wireless LAN/PAN access points (e.g., at least for links such as those based on 802.11p, etc.), are capable of substantially greater coverage areas than typical Wi-Fi or other wireless LAN/PAN access points, and thus fewer OBUs are necessary to provide blanket coverage over a geographical area. The OBU may, for example, comprise a robust vehicular networking module (e.g., a connection manager) which builds on long-range communication protocol capability (e.g., 802.11p, etc.). For example, in addition to comprising 802.11p (or other long-range protocol) capability to communicate with Fixed APs, vehicles, and other nodes in the network, the OBU may comprise a network interface (e.g., 802.11a/b/g/n, 802.11ac, 802.11af, any combination thereof, etc.) to provide wireless local area network (WLAN) connectivity to end user devices, sensors, fixed Wi-Fi access points, etc. For example, the OBU may operate to provide in-vehicle Wi-Fi Internet access to users in and/or around the vehicle (e.g., a bus, train car, taxi cab, public works vehicle, etc.). The OBU may further comprise one or more wireless backbone communication interfaces (e.g., cellular network interfaces, etc.). Though in various example scenarios, a cellular network interface (or other wireless backbone communication interface) might not be the preferred interface for various reasons (e.g., cost, power, bandwidth, etc.), the cellular network interface may be utilized to provide connectivity in geographical areas that are not presently supported by a Fixed AP, may be utilized to provide a fail-over communication link, may be utilized for emergency communications, may be utilized to subscribe to local infrastructure access, etc. The cellular network interface may also, for example, be utilized to allow the deployment of solutions that are dependent on the cellular network operators. An OBU, in accordance with various aspects of the present disclosure, may for example comprise a smart connection manager that can select the best available wireless link(s) (e.g., Wi-Fi, 802.11p, cellular, vehicle mesh, etc.) with which to access the Internet. The OBU may also, for example, provide geo-location capabilities (e.g., GPS, etc.), motion detection sensors to determine if the vehicle is in motion, and a power control subsystem (e.g., to ensure that the OBU does not deplete the vehicle battery, etc.). 
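As one illustration of the kind of policy such a smart connection manager might apply, the sketch below scores each available link by estimated throughput penalized by relative cost and picks the best. The link names, metrics, and weighting are invented for the example and are not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Link:
    name: str          # e.g., "802.11p", "wifi", "cellular", "mesh"
    available: bool
    throughput: float  # estimated Mbit/s
    cost: float        # relative monetary/power cost

def best_link(links, cost_weight=0.5):
    """Pick the best available link by throughput penalized by cost."""
    candidates = [link for link in links if link.available]
    if not candidates:
        raise RuntimeError("no uplink available")
    return max(candidates, key=lambda l: l.throughput - cost_weight * l.cost)

links = [
    Link("802.11p", True, 6.0, 0.0),     # Fixed AP backhaul, no per-MB fee
    Link("cellular", True, 20.0, 30.0),  # fast but expensive fail-over
    Link("mesh", False, 3.0, 0.0),       # multi-hop via other vehicles
]
print(best_link(links).name)  # -> "802.11p" under these example numbers
```

A real connection manager would fold in many more signals (vehicle motion, battery state, per-application requirements), but the basic structure of filtering to available links, scoring them, and selecting remains the same.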
The OBU may, for example, comprise any or all of the sensors (e.g., environmental sensors, etc.) discussed herein. The OBU may also, for example, comprise a manager that manages machine-to-machine data acquisition and transfer (e.g., in a real-time or delay-tolerant fashion) to and from the cloud. For example, the OBU may log and/or communicate information of the vehicles. The OBU may, for example, comprise a connection and/or routing manager that operates to perform routing of communications in a vehicle-to-vehicle/vehicle-to-infrastructure multi-hop communication. A mobility manager (or controller, MC) may, for example, ensure that communication sessions persist over one or more handoff(s) (also referred to herein as a “handover” or “handovers”) (e.g., between different Mobile APs, Fixed APs, base stations, hot spots, etc.), among different technologies (e.g., 802.11p, cellular, Wi-Fi, satellite, etc.), among different MCs (e.g., in a fail-over scenario, load redistribution scenario, etc.), across different interfaces (or ports), etc. Note that the MC may also be referred to herein as a Local Mobility Anchor (LMA), a Network Controller, etc. Note that the MC, or a plurality thereof, may for example be implemented as part of the backbone, but may also, or alternatively, be implemented as part of any of a variety of components or combinations thereof. For example, the MC may be implemented in a Fixed AP (or distributed system thereof), as part of an OBU (or a distributed system thereof), etc. Various non-limiting examples of system components and/or methods are provided in U.S. Provisional Application No. 62/222,098, filed Sep. 22, 2015, and titled “Systems and Method for Managing Mobility in a Network of Moving Things,” the entire contents of which are hereby incorporated herein by reference. Note that in an example implementation including a plurality of MCs, such MCs may be co-located and/or may be geographically distributed. Various aspects of the present disclosure also provide a cloud-based service-oriented architecture that handles the real-time management, monitoring and reporting of the network and clients, the functionalities required for data storage, processing and management, the Wi-Fi client authentication and Captive Portal display, etc. A communication network (or component thereof) in accordance with various aspects of the present disclosure may, for example, support a wide range of smart city applications (or controlled scenarios, or connected scenarios, etc.) and/or use-cases, as described herein. For example, an example implementation may operate to turn each vehicle (e.g., both public and private taxis, buses, trucks, etc.) into a Mobile AP (e.g., a mobile Wi-Fi hotspot), offering Internet access to employees, passengers and mobile users travelling in the city, waiting in bus stops, sitting in parks, etc. Moreover, through an example vehicular mesh network formed between vehicles and/or fleets of vehicles, an implementation may be operable to offload cellular traffic through the mobile Wi-Fi hotspots and/or fixed APs (e.g., 802.11p-based APs) spread over the city and connected to the wired infrastructure of public or private telecom operators in strategic places, while ensuring the widest possible coverage at the lowest possible cost. 
An example implementation (e.g., of a communication network and/or components thereof) may, for example, be operable as a massive urban scanner that gathers large amounts of data (e.g., continuously) on-the-move, actionable or not, generated by a myriad of sources spanning from the in-vehicle sensors or On Board Diagnostic System port (e.g., OBD2, etc.), external Wi-Fi/Bluetooth-enabled sensing units spread over the city, devices of vehicles' drivers and passengers (e.g., information characterizing such devices and/or passengers, etc.), positioning system devices (e.g., position information, velocity information, trajectory information, travel history information, etc.), etc. Depending on the use case, the OBU may for example process (or compute, transform, manipulate, aggregate, summarize, etc.) the data before sending the data from the vehicle, for example providing the appropriate granularity (e.g., value resolution) and sampling rates (e.g., temporal resolution) for each individual application. The OBU may, for example, process the data in any manner deemed advantageous by the system. The OBU may, for example, send the collected data (e.g., raw data, preprocessed data, information of metrics calculated based on the collected data, etc.) to the Cloud (e.g., to one or more networked servers coupled to any portion of the network) in an efficient and reliable manner to improve the efficiency, environmental impact and social value of municipal city operations and transportation services. Various example use cases are described herein. In an example scenario in which public buses are moving along city routes and/or taxis are performing their private transportation services, the OBU is able to collect large quantities of real-time data from the positioning systems (e.g., GPS, etc.), from accelerometer modules, etc. The OBU may then, for example, communicate such data to the Cloud, where the data may be processed, reported and viewed, for example to support such public or private bus and/or taxi operations, for example supporting efficient remote monitoring and scheduling of buses and taxis, respectively. In an example implementation, small cameras (or other sensors) may be coupled to small single-board computers (SBCs) that are placed above the doors of public buses to allow capturing image sequences of people entering and leaving buses, and/or on stops along the bus routes in order to estimate the number of people waiting for a bus. Such data may be gathered by the OBU in order to be sent to the Cloud. With such data, public transportation systems may detect peaks; overcrowded buses, routes and stops; underutilized buses, routes and stops; etc., enabling action to be taken in real-time (e.g., reducing bus periodicity to decrease fuel costs and CO2 emissions where and when passenger flows are smaller, etc.) as well as detecting systematic transportation problems. An OBU may, for example, be operable to communicate with any of a variety of Wi-Fi-enabled sensor devices equipped with a heterogeneous collection of environmental sensors. Such sensors may, for example, comprise noise sensors (microphones, etc.), gas sensors (e.g., sensing CO, NO2, O3, volatile organic compounds (or VOCs), CO2, etc.), smoke sensors, pollution sensors, meteorological sensors (e.g., sensing temperature, humidity, luminosity, particles, solar radiation, wind speed (e.g., anemometer), wind direction, rain (e.g., a pluviometer), etc.), optical scanners, biometric scanners, cameras, microphones, etc.
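As a concrete, purely hypothetical picture of tailoring value and temporal resolution per application, the sketch below aggregates raw samples into per-period averages before upload; the rates and data shapes are invented for the example.

```python
def resample(samples, period_s):
    """Aggregate (timestamp, value) samples into one average per period,
    trading temporal resolution for a smaller upload volume."""
    buckets = {}
    for t, v in samples:
        buckets.setdefault(int(t // period_s), []).append(v)
    return [(k * period_s, sum(vs) / len(vs)) for k, vs in sorted(buckets.items())]

# e.g., 10 Hz accelerometer readings reduced to one averaged value per second
raw = [(i / 10.0, 0.1 * i) for i in range(50)]
print(resample(raw, period_s=1.0))  # 5 aggregated samples instead of 50
```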
Such sensors may also comprise sensors associated with users (e.g., vehicle operators or passengers, passersby, etc.) and/or their personal devices (e.g., smart phones or watches, biometrics sensors, wearable sensors, implanted sensors, etc.). Such sensors may, for example, comprise sensors and/or systems associated with on-board diagnostic (OBD) units for vehicles. Such sensors may, for example, comprise positioning sensors (e.g., GPS sensors, Galileo sensors, GLONASS sensors, etc.). Such sensors may, for example, comprise container sensors (e.g., garbage can sensors, shipping container sensors, container environmental sensors, container tracking sensors, etc.). Once a vehicle enters the vicinity of such a sensor device, a wireless link may be established, so that the vehicle (or OBU thereof) can collect sensor data from the sensor device and upload the collected data to a database in the Cloud. The appropriate action can then be taken. In an example waste management implementation, several waste management (or collection) trucks may be equipped with OBUs that are able to periodically communicate with sensors installed on containers in order to gather information about waste level, time passed since last collection, etc. Such information may then be sent to the Cloud (e.g., to a waste management application coupled to the Internet, etc.) through the vehicular mesh network, in order to improve the scheduling and/or routing of waste management trucks. Note that various sensors may always be in range of the Mobile AP (e.g., vehicle-mounted sensors). Note that the sensor may also (or alternatively) be mobile (e.g., a sensor mounted to another vehicle passing by a Mobile AP or Fixed AP, a drone-mounted sensor, a pedestrian-mounted sensor, etc.). In an example implementation, for example in a controlled space (e.g., a port, harbor, airport, factory, plantation, mine, etc.) with many vehicles, machines and employees, a communication network in accordance with various aspects of the present disclosure may expand the wireless coverage of enterprise and/or local Wi-Fi networks, for example without resorting to a Telco-dependent solution based on SIM cards or cellular fees. In such an example scenario, apart from avoiding expensive cellular data plans, limited data rate and poor cellular coverage in some places, a communication network in accordance with various aspects of the present disclosure is also able to collect and/or communicate large amounts of data, in a reliable and real-time manner, where such data may be used to optimize harbor logistics, transportation operations, etc. For example, in a port and/or harbor implementation, by gathering real-time information on the position, speed, fuel consumption and CO2 emissions of the vehicles, the communication network allows a port operator to improve the coordination of the ship loading processes and increase the throughput of the harbor. Also for example, the communication network enables remote monitoring of drivers' behaviors, trucks' positions and engines' status, and can then provide real-time notifications to drivers (e.g., to turn on/off the engine, follow the right route inside the harbor, take a break, etc.), thus reducing the number and duration of the harbor services and trips. Harbor authorities may, for example, quickly detect malfunctioning trucks and abnormal truck circulation, thus avoiding accidents in order to increase harbor efficiency, security, and safety.
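The collect-when-in-range, upload-when-connected pattern described above might look like the following sketch. The sensor and uplink interfaces are assumptions made for illustration; delivery is delay-tolerant in that queued readings simply wait for the next connectivity window.

```python
from collections import deque

upload_queue = deque()  # readings held until a connectivity window opens

def on_sensor_in_range(sensor):
    """Called when the OBU establishes a wireless link to a nearby sensor."""
    readings = sensor.read_all()  # e.g., waste level, time since last collection
    upload_queue.append((sensor.id, readings))

def flush(uplink):
    """Drain queued readings to the Cloud while an uplink is available."""
    while upload_queue and uplink.connected():
        sensor_id, readings = upload_queue.popleft()
        uplink.send({"sensor": sensor_id, "data": readings})
```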
Additionally, the vehicles can also connect to Wi-Fi access points from harbor local operators, and provide Wi-Fi Internet access to vehicles' occupants and surrounding harbor employees, for example allowing pilots to save time by filing reports via the Internet while still on the water. FIG.1shows a block diagram of a communication network100, in accordance with various aspects of this disclosure. Any or all of the functionality discussed herein may be performed by any or all of the example components of the example network100. Also, the example network100may, for example, share any or all characteristics with the other example networks and/or network components200,300,400,500-570, and600, discussed herein. The example network100, for example, comprises a Cloud that may, for example, comprise any of a variety of network-level components. The Cloud may, for example, comprise any of a variety of server systems executing applications that monitor and/or control components of the network100. Such applications may also, for example, manage the collection of information from any of a large array of networked information sources, many examples of which are discussed herein. The Cloud (or a portion thereof) may also be referred to, at times, as an API. For example, the Cloud (or a portion thereof) may provide one or more application programming interfaces (APIs) which other devices may use for communicating/interacting with the Cloud. An example component of the Cloud may, for example, manage interoperability with various multi-cloud systems and architectures. Another example component (e.g., a Cloud service component) may, for example, provide various cloud services (e.g., captive portal services, authentication, authorization, and accounting (AAA) services, API Gateway services, etc.). An additional example component (e.g., a DevCenter component) may, for example, provide network monitoring and/or management functionality, manage the implementation of software updates, etc. A further example component of the Cloud may manage data storage, data analytics, data access, etc. A still further example component of the Cloud may include any of a variety of third-party applications and services. The Cloud may, for example, be coupled to the Backbone/Core Infrastructure of the example network100via the Internet (e.g., utilizing one or more Internet Service Providers). Though the Internet is provided by way of example, it should be understood that the scope of the present disclosure is not limited thereto. The Backbone/Core may, for example, comprise any one or more different communication infrastructure components. For example, one or more providers may provide backbone networks or various components thereof. As shown in the example network100illustrated inFIG.1, a Backbone provider may provide wireline access (e.g., PSTN, fiber, cable, etc.). Also for example, a Backbone provider may provide wireless access (e.g., Microwave, LTE/Cellular, 5G/TV Spectrum, etc.). The Backbone/Core may also, for example, comprise one or more Local Infrastructure Providers. The Backbone/Core may also, for example, comprise a private infrastructure (e.g., run by the network100implementer, owner, etc.). The Backbone/Core may, for example, provide any of a variety of Backbone Services (e.g., AAA, Mobility, Monitoring, Addressing, Routing, Content services, Gateway Control services, etc.). The Backbone/Core Infrastructure may comprise any of a variety of characteristics, non-limiting examples of which are provided herein.
For example, the Backbone/Core may be compatible with different wireless or wired technologies for backbone access. The Backbone/Core may also be adaptable to handle public (e.g., municipal, city, campus, etc.) and/or private (e.g., ports, campus, etc.) network infrastructures owned by different local providers, and/or owned by the network implementer or stakeholder. The Backbone/Core may, for example, comprise and/or interface with different Authentication, Authorization, and Accounting (AAA) mechanisms. The Backbone/Core Infrastructure may, for example, support different modes of operation (e.g., L2 in port implementations, L3 in on-land public transportation implementations, utilizing any one or more of a plurality of different layers of digital IP networking, any combinations thereof, equivalents thereof, etc.) or addressing pools. The Backbone/Core may also, for example, be agnostic to the Cloud provider(s) and/or Internet Service Provider(s). Additionally for example, the Backbone/Core may be agnostic to requests coming from any or all subsystems of the network100(e.g., Mobile APs or OBUs (On Board Units), Fixed APs or RSUs (Road Side Units), MCs (Mobility Controllers) or LMAs (Local Mobility Anchors) or Network Controllers, etc.) and/or third-party systems. The Backbone/Core Infrastructure may, for example, comprise the ability to utilize and/or interface with different data storage/processing systems (e.g., MongoDB, MySql, Redis, etc.). The Backbone/Core Infrastructure may further, for example, provide different levels of simultaneous access to the infrastructure, services, data, etc. The example network100may also, for example, comprise a Fixed Hotspot Access Network. Various example characteristics of such a Fixed Hotspot Access Network200are shown atFIG.2. The example network200may, for example, share any or all characteristics with the other example networks and/or network components100,300,400,500-570, and600, discussed herein. In the example network200, the Fixed APs (e.g., the proprietary APs, the public third party APs, the private third party APs, etc.) may be directly connected to the local infrastructure provider and/or to the wireline/wireless backbone. Also for example, the example network200may comprise a mesh between the various APs via wireless technologies. Note, however, that various wired technologies may also be utilized depending on the implementation. As shown, different fixed hotspot access networks can be connected to a same backbone provider, but may also be connected to different respective backbone providers. An example implementation utilizing wireless technology for backbone access may be relatively fault-tolerant. For example, a Fixed AP may utilize wireless communications to the backbone network (e.g., cellular, 3G, LTE, other wide or metropolitan area networks, etc.) if the backhaul infrastructure is down. Also for example, such an implementation may provide for relatively easy installation (e.g., a Fixed AP with no cable power source that can be placed virtually anywhere). In the example network200, the same Fixed AP can simultaneously provide access to multiple Fixed APs, Mobile APs (e.g., vehicle OBUs, etc.), devices, user devices, sensors, things, etc. For example, a plurality of mobile hotspot access networks (e.g., OBU-based networks, etc.) may utilize the same Fixed AP.
Also for example, the same Fixed AP can provide a plurality of simultaneous accesses to another single unit (e.g., another Fixed AP, Mobile AP, device, etc.), for example utilizing different channels, different radios, etc. Note that a plurality of Fixed APs may be utilized for fault-tolerance/fail-recovery purposes. In an example implementation, a Fixed AP and its fail-over AP may both be normally operational (e.g., in a same switch). Also for example, one or more Fixed APs may be placed in the network at various locations in an inactive or monitoring mode, and ready to become operational when needed (e.g., in response to a fault, in response to an emergency services need, in response to a data surge, etc.). Referring back toFIG.1, the example Fixed Hotspot Access Network is shown with a wireless communication link to a backbone provider (e.g., to one or more Backbone Providers and/or Local Infrastructure Providers), to a Mobile Hotspot Access Network, to one or more End User Devices, and to the Environment. Also, the example Fixed Hotspot Access Network is shown with a wired communication link to one or more Backbone Providers, to the Mobile Hotspot Access Network, to one or more End User Devices, and to the Environment. The Environment may comprise any of a variety of devices (e.g., in-vehicle networks, devices, and sensors; autonomous vehicle networks, devices, and sensors; maritime (or watercraft) and port networks, devices, and sensors; general controlled-space networks, devices, and sensors; residential networks, devices, and sensors; disaster recovery & emergency networks, devices, and sensors; military and aircraft networks, devices, and sensors; smart city networks, devices, and sensors; event (or venue) networks, devices, and sensors; underwater and underground networks, devices, and sensors; agricultural networks, devices, and sensors; tunnel (auto, subway, train, etc.) networks, devices, and sensors; parking networks, devices, and sensors; security and surveillance networks, devices, and sensors; shipping equipment and container networks, devices, and sensors; environmental control or monitoring networks, devices, and sensors; municipal networks, devices, and sensors; waste management networks, devices, and sensors; road maintenance networks, devices, and sensors; traffic management networks, devices, and sensors; advertising networks, devices, and sensors; etc.). The example network100ofFIG.1also comprises a Mobile Hotspot Access Network. Various example characteristics of such a Mobile Hotspot Access Network300are shown atFIG.3. Note that various fixed network components (e.g., Fixed APs) are also illustrated. The example network300may, for example, share any or all characteristics with the other example networks and/or network components100,200,400,500-570, and600discussed herein. The example network300comprises a wide variety of Mobile APs (or hotspots) that provide access to user devices, provide for sensor data collection, provide multi-hop connectivity to other Mobile APs, etc. For example, the example network300comprises vehicles from different fleets (e.g., aerial, terrestrial, underground, (under)water, etc.).
For example, the example network300comprises one or more mass distribution/transportation fleets, one or more mass passenger transportation fleets, private/public shared-user fleets, private vehicles, urban and municipal fleets, maintenance fleets, drones, watercraft (e.g., boats, ships, speedboats, tugboats, barges, etc.), emergency fleets (e.g., police, ambulance, firefighter, etc.), etc. The example network300, for example, shows vehicles from different fleets directly connected and/or mesh connected, for example using same or different communication technologies. The example network300also shows fleets simultaneously connected to different Fixed APs, which may or may not belong to different respective local infrastructure providers. As a fault-tolerance mechanism, the example network300may for example comprise the utilization of a long-range wireless communication network (e.g., cellular, 3G, 4G, LTE, etc.) in vehicles if the local network infrastructure is down or otherwise unavailable. A same vehicle (e.g., Mobile AP or OBU) can simultaneously provide access to multiple vehicles, devices, things, etc., for example using a same communication technology (e.g., shared channels and/or different respective channels thereof) and/or using a different respective communication technology for each. Also for example, a same vehicle can provide multiple accesses to another vehicle, device, thing, etc., for example using a same communication technology (e.g., shared channels and/or different respective channels thereof, and/or using a different communication technology). Additionally, multiple network elements may be connected together to provide for fault-tolerance or fail recovery, increased throughput, or to achieve any of a variety of a client's networking needs, many examples of which are provided herein. For example, two Mobile APs (or OBUs) may be installed in a same vehicle, etc. Referring back toFIG.1, the example Mobile Hotspot Access Network is shown with a wireless communication link to a backbone provider (e.g., to one or more Backbone Providers and/or Local Infrastructure Providers), to a Fixed Hotspot Access Network, to one or more End User Devices, and to the Environment (e.g., to any one or more of the sensors or systems discussed herein, any other device or machine, etc.). Though the Mobile Hotspot Access Network is not shown having a wired link to the various other components, there may (at least at times) be such a wired link, at least temporarily. The example network100ofFIG.1also comprises a set of End-User Devices. Various example end user devices are shown atFIG.4. Note that various other network components (e.g., Fixed Hotspot Access Networks, Mobile Hotspot Access Network(s), the Backbone/Core, etc.) are also illustrated. The example network400may, for example, share any or all characteristics with the other example networks and/or network components100,200,300,500-570, and600, discussed herein. The example network400shows various mobile networked devices. Such network devices may comprise end-user devices (e.g., smartphones, tablets, smartwatches, laptop computers, webcams, personal gaming devices, personal navigation devices, personal media devices, personal cameras, health-monitoring devices, personal location devices, monitoring panels, printers, etc.). Such networked devices may also comprise any of a variety of devices operating in the general environment, where such devices might not for example be associated with a particular user (e.g.
any or all of the sensor devices discussed herein, vehicle sensors, municipal sensors, fleet sensors, road sensors, environmental sensors, security sensors, traffic sensors, waste sensors, meteorological sensors, any of a variety of different types of municipal or enterprise equipment, etc.). Any of such networked devices can be flexibly connected to distinct backbone, fixed hotspot access networks, mobile hotspot access networks, etc., using the same or different wired/wireless technologies. A mobile device may, for example, operate as an AP to provide simultaneous access to multiple devices/things, which may then form ad hoc networks, interconnecting devices ultimately connected to distinct backbone networks, fixed hotspot, and/or mobile hotspot access networks. Devices (e.g., any or all of the devices or network nodes discussed herein) may, for example, have redundant technologies to access distinct backbone, fixed hotspot, and/or mobile hotspot access networks, for example for fault-tolerance and/or load-balancing purposes (e.g., utilizing multiple SIM cards, etc.), as sketched below. A device may also, for example, simultaneously access distinct backbone, fixed hotspot access networks, and/or mobile hotspot access networks, belonging to the same provider or to different respective providers. Additionally for example, a device can provide multiple accesses to another device/thing (e.g., via different channels, radios, etc.). Referring back toFIG.1, the example End-User Devices are shown with a wireless communication link to a backbone provider (e.g., to one or more Backbone Providers and/or Local Infrastructure Providers), to a Fixed Hotspot Access Network, to a Mobile Hotspot Access Network, and to the Environment. Also for example, the example End-User Devices are shown with a wired communication link to a backbone provider, to a Fixed Hotspot Access Network, to a Mobile Hotspot Access Network, and to the Environment. The example network100illustrated inFIG.1has a flexible architecture that is adaptable at implementation time (e.g., for different use cases) and/or adaptable in real-time, for example as network components enter and leave service.FIGS.5A-5Cillustrate such flexibility by providing example modes (or configurations). The example networks500-570may, for example, share any or all characteristics with the other example networks and/or network components100,200,300,400, and600, discussed herein. For example and without limitation, any or all of the communication links (e.g., wired links, wireless links, etc.) shown in the example networks500-570are generally analogous to similarly positioned communication links shown in the example network100ofFIG.1. For example, various aspects of this disclosure provide communication network architectures, systems, and methods for supporting a dynamically configurable communication network comprising a complex array of both static and moving communication nodes (e.g., the Internet of moving things).
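Before turning to the example modes, the redundant-access idea noted above (e.g., multiple SIM cards used for fault tolerance and load balancing) can be sketched as a simple rotation over healthy uplinks. This is a minimal illustration only; the Uplink class, the link names, and the round-robin policy are assumptions made for the example.

```python
from itertools import cycle


class Uplink:
    def __init__(self, name, healthy=True):
        self.name = name          # e.g., a SIM on a given carrier, or a Wi-Fi link
        self.healthy = healthy


links = [Uplink("sim-carrier-a"), Uplink("sim-carrier-b"), Uplink("fixed-ap-wifi")]
rotation = cycle(links)


def next_healthy_uplink():
    """Round-robin over healthy links (load balancing); skip failed ones (fault tolerance)."""
    for _ in range(len(links)):
        link = next(rotation)
        if link.healthy:
            return link
    raise RuntimeError("no healthy uplink available")


links[0].healthy = False           # carrier A outage
print(next_healthy_uplink().name)  # traffic continues over the remaining links
```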
For example, a communication network implemented in accordance with various aspects of the present disclosure may operate in one of a plurality of modalities comprising various fixed nodes, mobile nodes, and/or a combination thereof, which are selectable to yield any of a variety of system goals (e.g., increased throughput, reduced latency and packet loss, increased availability and robustness of the system, extra redundancy, increased responsiveness, increased security in the transmission of data and/or control packets, reduced number of configuration changes by incorporating smart thresholds (e.g., change of technology, change of certificate, change of IP, etc.), providing connectivity in dead zones or zones with difficult access, reducing the costs for maintenance and accessing the equipment for updating/upgrading, etc.). At least some of such modalities may, for example, be entirely comprised of fixed-position nodes, at least temporarily if not permanently. For illustrative simplicity, many of the example aspects shown in the example system or network100ofFIG.1(and other Figures herein) are omitted fromFIGS.5A-5C, but may be present. For example, the Cloud, Internet, and ISP aspects shown inFIG.1and in other Figures are not explicitly shown inFIGS.5A-5C, but may be present in any of the example configurations (e.g., as part of the backbone provider network or coupled thereto, as part of the local infrastructure provider network or coupled thereto, etc.). For example, the first example mode500is presented as a normal execution mode, for example a mode (or configuration) in which all of the components discussed herein are present. For example, the communication system in the first example mode500comprises a backbone provider network, a local infrastructure provider network, a fixed hotspot access network, a mobile hotspot access network, end-user devices, and environment devices. As shown inFIG.5A, and inFIG.1in more detail, the backbone provider network may be communicatively coupled to any or all of the other elements present in the first example mode500(or configuration) via one or more wired (or tethered) links. For example, the backbone provider network may be communicatively coupled to the local infrastructure provider network (or any component thereof), fixed hotspot access network (or any component thereof), the end-user devices, and/or environment devices via a wired link. Note that such a wired coupling may be temporary. Also note that in various example configurations, the backbone provider network may also, at least temporarily, be communicatively coupled to the mobile hotspot access network (or any component thereof) via one or more wired (or tethered) links. Also shown inFIG.5A, and inFIG.1in more detail, the backbone provider network may be communicatively coupled to any or all of the other elements present in the first example mode500(or configuration) via one or more wireless links (e.g., RF link, non-tethered optical link, etc.). For example, the backbone provider network may be communicatively coupled to the fixed hotspot access network (or any component thereof), the mobile hotspot access network (or any component thereof), the end-user devices, and/or environment devices via one or more wireless links. Also note that in various example configurations, the backbone provider network may also be communicatively coupled to the local infrastructure provider network via one or more wireless (or non-tethered) links. 
Though not shown in the first example mode500(or any of the example modes ofFIGS.5A-5C), one or more servers may be communicatively coupled to the backbone provider network and/or the local infrastructure network.FIG.1provides an example of cloud servers being communicatively coupled to the backbone provider network via the Internet. As additionally shown inFIG.5A, and inFIG.1in more detail, the local infrastructure provider network may be communicatively coupled to any or all of the other elements present in the first example mode500(or configuration) via one or more wired (or tethered) links. For example, the local infrastructure provider network may be communicatively coupled to the backbone provider network (or any component thereof), fixed hotspot access network (or any component thereof), the end-user devices, and/or environment devices via one or more wired links. Note that such a wired coupling may be temporary. Also note that in various example configurations, the local infrastructure provider network may also, at least temporarily, be communicatively coupled to the mobile hotspot access network (or any component thereof) via one or more wired (or tethered) links. Also, though not explicitly shown, the local infrastructure provider network may be communicatively coupled to any or all of the other elements present in the first example mode500(or configuration) via one or more wireless links (e.g., RF link, non-tethered optical link, etc.). For example, the local infrastructure provider network may be communicatively coupled to the backbone provider network (or any component thereof), the fixed hotspot access network (or any component thereof), the mobile hotspot access network (or any component thereof), the end-user devices, and/or environment devices via one or more wireless links. Note that the communication link shown in the first example mode500ofFIG.5Abetween the local infrastructure provider network and the fixed hotspot access network may be wired and/or wireless. The fixed hotspot access network is also shown in the first example mode500to be communicatively coupled to the mobile hotspot access network, the end-user devices, and/or environment devices via one or more wireless links. Many examples of such wireless coupling are provided herein. Additionally, the mobile hotspot access network is further shown in the first example mode500to be communicatively coupled to the end-user devices and/or environment devices via one or more wireless links. Many examples of such wireless coupling are provided herein. Further, the end-user devices are also shown in the first example mode500to be communicatively coupled to the environment devices via one or more wireless links. Many examples of such wireless coupling are provided herein. Note that in various example implementations any of such wireless links may instead (or in addition) comprise a wired (or tethered) link. In the first example mode500(e.g., the normal mode), information (or data) may be communicated between an end-user device and a server (e.g., a computer system) via the mobile hotspot access network, the fixed hotspot access network, the local infrastructure provider network, and/or the backbone provider network. 
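A minimal sketch of this pathway flexibility, assuming a simple weighted score over availability, latency, and cost, follows; the Pathway fields, the weights, and the path names are illustrative assumptions rather than elements of the disclosure.

```python
from dataclasses import dataclass


@dataclass
class Pathway:
    name: str
    available: bool
    latency_ms: float   # expected delivery latency
    cost_per_mb: float  # relative monetary cost


def pick_pathway(paths, latency_weight, cost_weight):
    """Choose the best available path for the given priority trade-off."""
    candidates = [p for p in paths if p.available]
    if not candidates:
        return None
    return min(candidates,
               key=lambda p: latency_weight * p.latency_ms + cost_weight * p.cost_per_mb)


paths = [
    Pathway("mobile+fixed+local+backbone", True, 120.0, 0.01),
    Pathway("fixed+local+backbone (skip mobile)", True, 60.0, 0.02),
    Pathway("backbone direct (cellular)", True, 45.0, 0.10),
]
print(pick_pathway(paths, latency_weight=1.0, cost_weight=0.0).name)    # latency-critical traffic
print(pick_pathway(paths, latency_weight=0.0, cost_weight=100.0).name)  # cost-sensitive bulk data
```

Under these assumed weights, latency-critical traffic prefers the direct cellular path while cost-sensitive bulk data takes the multi-hop path, mirroring the availability/bandwidth/priority/latency/cost trade-offs described above and elaborated in the following paragraphs.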
As will be seen in the various example modes presented herein, such communication may flexibly occur between an end-user device and a server via any of a variety of different communication pathways, for example depending on the availability of a network, depending on bandwidth utilization goals, depending on communication priority, depending on communication time (or latency) and/or reliability constraints, depending on cost, etc. For example, information communicated between an end user device and a server may be communicated via the fixed hotspot access network, the local infrastructure provider network, and/or the backbone provider network (e.g., skipping the mobile hotspot access network). Also for example, information communicated between an end user device and a server may be communicated via the backbone provider network (e.g., skipping the mobile hotspot access network, fixed hotspot access network, and/or local infrastructure provider network). Similarly, in the first example mode500(e.g., the normal mode), information (or data) may be communicated between an environment device and a server via the mobile hotspot access network, the fixed hotspot access network, the local infrastructure provider network, and/or the backbone provider network. Also for example, an environment device may communicate with or through an end-user device (e.g., instead of or in addition to the mobile hotspot access network). As will be seen in the various example modes presented herein, such communication may flexibly occur between an environment device and a server (e.g., communicatively coupled to the local infrastructure provider network and/or backbone provider network) via any of a variety of different communication pathways, for example depending on the availability of a network, depending on bandwidth utilization goals, depending on communication priority, depending on communication time (or latency) and/or reliability constraints, depending on cost, etc. For example, information communicated between an environment device and a server may be communicated via the fixed hotspot access network, the local infrastructure provider network, and/or the backbone provider network (e.g., skipping the mobile hotspot access network). Also for example, information communicated between an environment device and a server may be communicated via the backbone provider network (e.g., skipping the mobile hotspot access network, fixed hotspot access network, and/or local infrastructure provider network). Additionally for example, information communicated between an environment device and a server may be communicated via the local infrastructure provider network (e.g., skipping the mobile hotspot access network and/or fixed hotspot access network). As discussed herein, the example networks presented herein are adaptively configurable to operate in any of a variety of different modes (or configurations). Such adaptive configuration may occur at initial installation and/or during subsequent controlled network evolution (e.g., adding or removing any or all of the network components discussed herein, expanding or removing network capacity, adding or removing coverage areas, adding or removing services, etc.). 
Such adaptive configuration may also occur in real-time, for example in response to real-time changes in network conditions (e.g., networks or components thereof being available or not based on vehicle or user-device movement, network or component failure, network or component replacement or augmentation activity, network overloading, etc.). The following example modes are presented to illustrate characteristics of various modes in which a communication system may operate in accordance with various aspects of the present disclosure. The following example modes will generally be discussed in relation to the first example mode500(e.g., the normal execution mode). Note that such example modes are merely illustrative and not limiting. The second example mode (or configuration)510(e.g., a no backbone available mode) may, for example, share any or all characteristics with the first example mode500, albeit without the backbone provider network and communication links therewith. For example, the communication system in the second example mode510comprises a local infrastructure provider network, a fixed hotspot access network, a mobile hotspot access network, end-user devices, and environment devices. As shown inFIG.5A, and inFIG.1in more detail, the local infrastructure provider network may be communicatively coupled to any or all of the other elements present in the second example mode510(or configuration) via one or more wired (or tethered) links. For example, the local infrastructure provider network may be communicatively coupled to the fixed hotspot access network (or any component thereof), the end-user devices, and/or environment devices via one or more wired links. Note that such a wired coupling may be temporary. Also note that in various example configurations, the local infrastructure provider network may also, at least temporarily, be communicatively coupled to the mobile hotspot access network (or any component thereof) via one or more wired (or tethered) links. Also, though not explicitly shown, the local infrastructure provider network may be communicatively coupled to any or all of the other elements present in the second example mode510(or configuration) via one or more wireless links (e.g., RF link, non-tethered optical link, etc.). For example, the local infrastructure provider network may be communicatively coupled to the fixed hotspot access network (or any component thereof), the mobile hotspot access network (or any component thereof), the end-user devices, and/or environment devices via one or more wireless links. Note that the communication link(s) shown in the second example mode510ofFIG.5Abetween the local infrastructure provider network and the fixed hotspot access network may be wired and/or wireless. The fixed hotspot access network is also shown in the second example mode510to be communicatively coupled to the mobile hotspot access network, the end-user devices, and/or environment devices via one or more wireless links. Many examples of such wireless coupling are provided herein. Additionally, the mobile hotspot access network is further shown in the second example mode510to be communicatively coupled to the end-user devices and/or environment devices via one or more wireless links. Many examples of such wireless coupling are provided herein. Further, the end-user devices are also shown in the second example mode510to be communicatively coupled to the environment devices via one or more wireless links. Many examples of such wireless coupling are provided herein. 
Note that in various example implementations any of such wireless links may instead (or in addition) comprise a wired (or tethered) link. In the second example mode510(e.g., the no backbone available mode), information (or data) may be communicated between an end-user device and a server (e.g., a computer, etc.) via the mobile hotspot access network, the fixed hotspot access network, and/or the local infrastructure provider network. As will be seen in the various example modes presented herein, such communication may flexibly occur between an end-user device and a server via any of a variety of different communication pathways, for example depending on the availability of a network, depending on bandwidth utilization goals, depending on communication priority, depending on communication time (or latency) and/or reliability constraints, depending on cost, etc. For example, information communicated between an end user device and a server may be communicated via the fixed hotspot access network and/or the local infrastructure provider network (e.g., skipping the mobile hotspot access network). Also for example, information communicated between an end user device and a server may be communicated via the local infrastructure provider network (e.g., skipping the mobile hotspot access network and/or fixed hotspot access network). Similarly, in the second example mode510(e.g., the no backbone available mode), information (or data) may be communicated between an environment device and a server via the mobile hotspot access network, the fixed hotspot access network, and/or the local infrastructure provider network. Also for example, an environment device may communicate with or through an end-user device (e.g., instead of or in addition to the mobile hotspot access network). As will be seen in the various example modes presented herein, such communication may flexibly occur between an environment device and a server (e.g., communicatively coupled to the local infrastructure provider network) via any of a variety of different communication pathways, for example depending on the availability of a network, depending on bandwidth utilization goals, depending on communication priority, depending on communication time (or latency) and/or reliability constraints, depending on cost, etc. For example, information communicated between an environment device and a server may be communicated via the fixed hotspot access network and/or the local infrastructure provider network (e.g., skipping the mobile hotspot access network). Also for example, information communicated between an environment device and a server may be communicated via the local infrastructure provider network (e.g., skipping the mobile hotspot access network and/or fixed hotspot access network). The second example mode510may be utilized for any of a variety of reasons, non-limiting examples of which are provided herein. For example, due to security and/or privacy goals, the second example mode510may be utilized so that communication access to the public Cloud systems, the Internet in general, etc., is not allowed. For example, all network control and management functions may be within the local infrastructure provider network (e.g., wired local network, etc.) and/or the fixed access point network. In an example implementation, the communication system might be totally owned, operated, and/or controlled by a local port authority. No extra expense associated with cellular connections need be incurred.
For example, cellular connection capability (e.g., in Mobile APs, Fixed APs, end user devices, environment devices, etc.) need not be provided. Note also that the second example mode510may be utilized in a scenario in which the backbone provider network is normally available but is currently unavailable (e.g., due to server failure, due to communication link failure, due to power outage, due to a temporary denial of service, etc.). The third example mode (or configuration)520(e.g., a no local infrastructure and fixed hotspots available mode) may, for example, share any or all characteristics with the first example mode500, albeit without the local infrastructure provider network, the fixed hotspot access network, and communication links therewith. For example, the communication system in the third example mode520comprises a backbone provider network, a mobile hotspot access network, end-user devices, and environment devices. As shown inFIG.5A, and inFIG.1in more detail, the backbone provider network may be communicatively coupled to any or all of the other elements present in the third example mode520(or configuration) via one or more wired (or tethered) links. For example, the backbone provider network may be communicatively coupled to the end-user devices and/or environment devices via one or more wired links. Note that such a wired coupling may be temporary. Also note that in various example configurations, the backbone provider network may also, at least temporarily, be communicatively coupled to the mobile hotspot access network (or any component thereof) via one or more wired (or tethered) links. Also shown inFIG.5A, and inFIG.1in more detail, the backbone provider network may be communicatively coupled to any or all of the other elements present in the third example mode520(or configuration) via one or more wireless links (e.g., RF link, non-tethered optical link, etc.). For example, the backbone provider network may be communicatively coupled to the mobile hotspot access network (or any component thereof), the end-user devices, and/or environment devices via one or more wireless links. The mobile hotspot access network is further shown in the third example mode520to be communicatively coupled to the end-user devices and/or environment devices via one or more wireless links. Many examples of such wireless coupling are provided herein. Further, the end-user devices are also shown in the third example mode520to be communicatively coupled to the environment devices via one or more wireless links. Many examples of such wireless coupling are provided herein. Note that in various example implementations any of such wireless links may instead (or in addition) comprise a wired (or tethered) link. In the third example mode520(e.g., the no local infrastructure and fixed hotspots available mode), information (or data) may be communicated between an end-user device and a server (e.g., a computer, etc.) via the mobile hotspot access network and/or the backbone provider network. As will be seen in the various example modes presented herein, such communication may flexibly occur between an end-user device and a server via any of a variety of different communication pathways, for example depending on the availability of a network, depending on bandwidth utilization goals, depending on communication priority, depending on communication time (or latency) and/or reliability constraints, depending on cost, etc. 
For example, information communicated between an end user device and a server may be communicated via the backbone provider network (e.g., skipping the mobile hotspot access network). Similarly, in the third example mode520(e.g., the no local infrastructure and fixed hotspots available mode), information (or data) may be communicated between an environment device and a server via the mobile hotspot access network and/or the backbone provider network. Also for example, an environment device may communicate with or through an end-user device (e.g., instead of or in addition to the mobile hotspot access network). As will be seen in the various example modes presented herein, such communication may flexibly occur between an environment device and a server (e.g., communicatively coupled to the backbone provider network) via any of a variety of different communication pathways, for example depending on the availability of a network, depending on bandwidth utilization goals, depending on communication priority, depending on communication time (or latency) and/or reliability constraints, depending on cost, etc. For example, information communicated between an environment device and a server may be communicated via the backbone provider network (e.g., skipping the mobile hotspot access network). In the third example mode520, all control/management functions may for example be implemented within the Cloud. For example, since the mobile hotspot access network does not have a communication link via a fixed hotspot access network, the Mobile APs may utilize a direct connection (e.g., a cellular connection) with the backbone provider network (or Cloud). If a Mobile AP does not have such capability, the Mobile AP may also, for example, utilize data access provided by the end-user devices communicatively coupled thereto (e.g., leveraging the data plans of the end-user devices). The third example mode520may be utilized for any of a variety of reasons, non-limiting examples of which are provided herein. In an example implementation, the third example mode520may be utilized in an early stage of a larger deployment, for example a deployment that will grow into another mode (e.g., the example first mode500, example fourth mode530, etc.) as more communication system equipment is installed. Note also that the third example mode520may be utilized in a scenario in which the local infrastructure provider network and fixed hotspot access network are normally available but are currently unavailable (e.g., due to equipment failure, due to communication link failure, due to power outage, due to a temporary denial of service, etc.). The fourth example mode (or configuration)530(e.g., a no fixed hotspots available mode) may, for example, share any or all characteristics with the first example mode500, albeit without the fixed hotspot access network and communication links therewith. For example, the communication system in the fourth example mode530comprises a backbone provider network, a local infrastructure provider network, a mobile hotspot access network, end-user devices, and environment devices. As shown inFIG.5B, and inFIG.1in more detail, the backbone provider network may be communicatively coupled to any or all of the other elements present in the fourth example mode530(or configuration) via one or more wired (or tethered) links.
For example, the backbone provider network may be communicatively coupled to the local infrastructure provider network (or any component thereof), the end-user devices, and/or environment devices via one or more wired links. Note that such a wired coupling may be temporary. Also note that in various example configurations, the backbone provider network may also, at least temporarily, be communicatively coupled to the mobile hotspot access network (or any component thereof) via one or more wired (or tethered) links. Also shown inFIG.5B, and inFIG.1in more detail, the backbone provider network may be communicatively coupled to any or all of the other elements present in the fourth example mode530(or configuration) via one or more wireless links (e.g., RF link, non-tethered optical link, etc.). For example, the backbone provider network may be communicatively coupled to the mobile hotspot access network (or any component thereof), the end-user devices, and/or environment devices via one or more wireless links. Also note that in various example configurations, the backbone provider network may also be communicatively coupled to the local infrastructure provider network via one or more wireless (or non-tethered) links. As additionally shown inFIG.5B, and inFIG.1in more detail, the local infrastructure provider network may be communicatively coupled to any or all of the other elements present in the fourth example mode530(or configuration) via one or more wired (or tethered) links. For example, the local infrastructure provider network may be communicatively coupled to the backbone provider network (or any component thereof), the end-user devices, and/or environment devices via one or more wired links. Note that such a wired coupling may be temporary. Also note that in various example configurations, the local infrastructure provider network may also, at least temporarily, be communicatively coupled to the mobile hotspot access network (or any component thereof) via one or more wired (or tethered) links. Also, though not explicitly shown, the local infrastructure provider network may be communicatively coupled to any or all of the other elements present in the fourth example mode530(or configuration) via one or more wireless links (e.g., RF link, non-tethered optical link, etc.). For example, the local infrastructure provider network may be communicatively coupled to the backbone provider network (or any component thereof), the mobile hotspot access network (or any component thereof), the end-user devices, and/or environment devices via one or more wireless links. The mobile hotspot access network is further shown in the fourth example mode530to be communicatively coupled to the end-user devices and/or environment devices via one or more wireless links. Many examples of such wireless coupling are provided herein. Further, the end-user devices are also shown in the fourth example mode530to be communicatively coupled to the environment devices via one or more wireless links. Many examples of such wireless coupling are provided herein. In the fourth example mode530(e.g., the no fixed hotspots mode), information (or data) may be communicated between an end-user device and a server via the mobile hotspot access network, the local infrastructure provider network, and/or the backbone provider network. 
As will be seen in the various example modes presented herein, such communication may flexibly occur between an end-user device and a server via any of a variety of different communication pathways, for example depending on the availability of a network, depending on bandwidth utilization goals, depending on communication priority, depending on communication time (or latency) and/or reliability constraints, depending on cost, etc. For example, information communicated between an end user device and a server may be communicated via the local infrastructure provider network and/or the backbone provider network (e.g., skipping the mobile hotspot access network). Also for example, information communicated between an end user device and a server may be communicated via the backbone provider network (e.g., skipping the mobile hotspot access network and/or local infrastructure provider network). Similarly, in the fourth example mode530(e.g., the no fixed hotspots available mode), information (or data) may be communicated between an environment device and a server via the mobile hotspot access network, the local infrastructure provider network, and/or the backbone provider network. Also for example, an environment device may communicate with or through an end-user device (e.g., instead of or in addition to the mobile hotspot access network). As will be seen in the various example modes presented herein, such communication may flexibly occur between an environment device and a server (e.g., communicatively coupled to the local infrastructure provider network and/or backbone provider network) via any of a variety of different communication pathways, for example depending on the availability of a network, depending on bandwidth utilization goals, depending on communication priority, depending on communication time (or latency) and/or reliability constraints, depending on cost, etc. For example, information communicated between an environment device and a server may be communicated via the local infrastructure provider network and/or the backbone provider network (e.g., skipping the mobile hotspot access network). Also for example, information communicated between an environment device and a server may be communicated via the backbone provider network (e.g., skipping the mobile hotspot access network and/or local infrastructure provider network). Additionally for example, information communicated between an environment device and a server may be communicated via the local infrastructure provider network (e.g., skipping the mobile hotspot access network and/or backbone provider network). In the fourth example mode530, in an example implementation, some of the control/management functions may for example be implemented within the local backbone provider network (e.g., within a client premises). For example, communication to the local infrastructure provider may be performed through the backbone provider network (or Cloud). Note that in a scenario in which there is a direct communication pathway between the local infrastructure provider network and the mobile hotspot access network, such communication pathway may be utilized. For example, since the mobile hotspot access network does not have a communication link via a fixed hotspot access network, the Mobile APs may utilize a direct connection (e.g., a cellular connection) with the backbone provider network (or Cloud). 
If a Mobile AP does not have such capability, the Mobile AP may also, for example, utilize data access provided by the end-user devices communicatively coupled thereto (e.g., leveraging the data plans of the end-user devices). The fourth example mode530may be utilized for any of a variety of reasons, non-limiting examples of which are provided herein. In an example implementation, the fourth example mode530may be utilized in an early stage of a larger deployment, for example a deployment that will grow into another mode (e.g., the example first mode500, etc.) as more communication system equipment is installed. The fourth example mode530may, for example, be utilized in a scenario in which there is no fiber (or other) connection available for Fixed APs (e.g., in a maritime scenario, in a plantation scenario, etc.), or in which a Fixed AP is difficult to access or connect. For example, one or more Mobile APs of the mobile hotspot access network may be used as gateways to reach the Cloud. The fourth example mode530may also, for example, be utilized when a vehicle fleet and/or the Mobile APs associated therewith are owned by a first entity and the Fixed APs are owned by another entity, and there is no present agreement for communication between the Mobile APs and the Fixed APs. Note also that the fourth example mode530may be utilized in a scenario in which the fixed hotspot access network is normally available but is currently unavailable (e.g., due to equipment failure, due to communication link failure, due to power outage, due to a temporary denial of service, etc.). The fifth example mode (or configuration)540(e.g., a no mobile hotspots available mode) may, for example, share any or all characteristics with the first example mode500, albeit without the mobile hotspot access network and communication links therewith. For example, the communication system in the fifth example mode540comprises a backbone provider network, a local infrastructure provider network, a fixed hotspot access network, end-user devices, and environment devices. As shown inFIG.5B, and inFIG.1in more detail, the backbone provider network may be communicatively coupled to any or all of the other elements present in the fifth example mode540(or configuration) via one or more wired (or tethered) links. For example, the backbone provider network may be communicatively coupled to the local infrastructure provider network (or any component thereof), fixed hotspot access network (or any component thereof), the end-user devices, and/or environment devices via one or more wired links. Note that such a wired coupling may be temporary. Also shown inFIG.5B, and inFIG.1in more detail, the backbone provider network may be communicatively coupled to any or all of the other elements present in the fifth example mode540(or configuration) via one or more wireless links (e.g., RF link, non-tethered optical link, etc.). For example, the backbone provider network may be communicatively coupled to the fixed hotspot access network (or any component thereof), the end-user devices, and/or environment devices via one or more wireless links. Also note that in various example configurations, the backbone provider network may also be communicatively coupled to the local infrastructure provider network via one or more wireless (or non-tethered) links.
As additionally shown inFIG.5B, and inFIG.1in more detail, the local infrastructure provider network may be communicatively coupled to any or all of the other elements present in the fifth example mode540(or configuration) via one or more wired (or tethered) links. For example, the local infrastructure provider network may be communicatively coupled to the backbone provider network (or any component thereof), fixed hotspot access network (or any component thereof), the end-user devices, and/or environment devices via one or more wired links. Note that such a wired coupling may be temporary. Also note that in various example configurations, the local infrastructure provider network may also, at least temporarily, be communicatively coupled to the mobile hotspot access network (or any component thereof) via one or more wired (or tethered) links. Also, though not explicitly shown, the local infrastructure provider network may be communicatively coupled to any or all of the other elements present in the fifth example mode540(or configuration) via one or more wireless links (e.g., RF link, non-tethered optical link, etc.). For example, the local infrastructure provider network may be communicatively coupled to the backbone provider network, the fixed hotspot access network (or any component thereof), the end-user devices, and/or environment devices via one or more wireless links. Note that the communication link(s) shown in the fifth example mode540ofFIG.5Bbetween the local infrastructure provider network and the fixed hotspot access network may be wired and/or wireless. The fixed hotspot access network is also shown in the fifth example mode540to be communicatively coupled to the end-user devices and/or environment devices via one or more wireless links. Many examples of such wireless coupling are provided herein. Further, the end-user devices are also shown in the fifth example mode540to be communicatively coupled to the environment devices via one or more wireless links. Many examples of such wireless coupling are provided herein. In the fifth example mode540(e.g., the no mobile hotspots available mode), information (or data) may be communicated between an end-user device and a server via the fixed hotspot access network, the local infrastructure provider network, and/or the backbone provider network. As will be seen in the various example modes presented herein, such communication may flexibly occur between an end-user device and a server via any of a variety of different communication pathways, for example depending on the availability of a network, depending on bandwidth utilization goals, depending on communication priority, depending on communication time (or latency) and/or reliability constraints, depending on cost, etc. For example, information communicated between an end user device and a server may be communicated via the local infrastructure provider network, and/or the backbone provider network (e.g., skipping the fixed hotspot access network). Also for example, information communicated between an end user device and a server may be communicated via the backbone provider network (e.g., skipping the fixed hotspot access network and/or local infrastructure provider network). Similarly, in the fifth example mode540(e.g., the no mobile hotspots available mode), information (or data) may be communicated between an environment device and a server via the fixed hotspot access network, the local infrastructure provider network, and/or the backbone provider network. 
Also for example, an environment device may communicate with or through an end-user device (e.g., instead of or in addition to the fixed hotspot access network). As will be seen in the various example modes presented herein, such communication may flexibly occur between an environment device and a server (e.g., communicatively coupled to the local infrastructure provider network and/or backbone provider network) via any of a variety of different communication pathways, for example depending on the availability of a network, depending on bandwidth utilization goals, depending on communication priority, depending on communication time (or latency) and/or reliability constraints, depending on cost, etc. For example, information communicated between an environment device and a server may be communicated via the local infrastructure provider network and/or the backbone provider network (e.g., skipping the fixed hotspot access network). Also for example, information communicated between an environment device and a server may be communicated via the backbone provider network (e.g., skipping the fixed hotspot access network and/or local infrastructure provider network). Additionally for example, information communicated between an environment device and a server may be communicated via the local infrastructure provider network (e.g., skipping the fixed hotspot access network and/or the backbone provider network). In the fifth example mode540, in an example implementation, the end-user devices and environment devices may communicate directly with Fixed APs (e.g., utilizing Ethernet, Wi-Fi, etc.). Also for example, the end-user devices and/or environment devices may communicate directly with the backbone provider network (e.g., utilizing cellular connections, etc.). The fifth example mode540may be utilized for any of a variety of reasons, non-limiting examples of which are provided herein. In an example implementation in which end-user devices and/or environment devices may communicate directly with Fixed APs, such communication may be utilized instead of Mobile AP communication. For example, the fixed hotspot access network might provide coverage for all desired areas. Note also that the fifth example mode540may be utilized in a scenario in which the mobile hotspot access network is normally available but is currently unavailable (e.g., due to equipment failure, due to communication link failure, due to power outage, due to a temporary denial of service, etc.). The sixth example mode (or configuration)550(e.g., the no fixed/mobile hotspots and local infrastructure available mode) may, for example, share any or all characteristics with the first example mode500, albeit without the local infrastructure provider network, fixed hotspot access network, mobile hotspot access network, and communication links therewith. For example, the communication system in the sixth example mode550comprises a backbone provider network, end-user devices, and environment devices. As shown inFIG.5B, and inFIG.1in more detail, the backbone provider network may be communicatively coupled to any or all of the other elements present in the sixth example mode550(or configuration) via one or more wired (or tethered) links. For example, the backbone provider network may be communicatively coupled to the end-user devices and/or environment devices via one or more wired links. Note that such a wired coupling may be temporary.
Also shown inFIG.5B, and inFIG.1in more detail, the backbone provider network may be communicatively coupled to any or all of the other elements present in the sixth example mode550(or configuration) via one or more wireless links (e.g., RF link, non-tethered optical link, etc.). For example, the backbone provider network may be communicatively coupled to the end-user devices and/or environment devices via one or more wireless links. The end-user devices are also shown in the sixth example mode550to be communicatively coupled to the environment devices via one or more wireless links. Many examples of such wireless coupling are provided herein. In the sixth example mode550(e.g., the no fixed/mobile hotspots and local infrastructure available mode), information (or data) may be communicated between an end-user device and a server via the backbone provider network. Similarly, in the sixth example mode550(e.g., the no fixed/mobile hotspots and local infrastructure mode), information (or data) may be communicated between an environment device and a server via the backbone provider network. Also for example, an environment device may communicate with or through an end-user device (e.g., instead of or in addition to the mobile hotspot access network). The sixth example mode550may be utilized for any of a variety of reasons, non-limiting examples of which are provided herein. In an example implementation, for example in which an end-user has not yet subscribed to the communication system, the end-user device may subscribe to the system through a Cloud application and by communicating directly with the backbone provider network (e.g., via cellular link, etc.). The sixth example mode550may also, for example, be utilized in rural areas in which Mobile AP presence is sparse, Fixed AP installation is difficult or impractical, etc. Note also that the sixth example mode550may be utilized in a scenario in which the infrastructure provider network, fixed hotspot access network, and/or mobile hotspot access network are normally available but are currently unavailable (e.g., due to equipment failure, due to communication link failure, due to power outage, due to a temporary denial of service, etc.). The seventh example mode (or configuration)560(e.g., the no backbone and mobile hotspots available mode) may, for example, share any or all characteristics with the first example mode500, albeit without the backbone provider network, mobile hotspot access network, and communication links therewith. For example, the communication system in the seventh example mode560comprises a local infrastructure provider network, fixed hotspot access network, end-user devices, and environment devices. As shown inFIG.5C, and inFIG.1in more detail, the local infrastructure provider network may be communicatively coupled to any or all of the other elements present in the seventh example mode560(or configuration) via one or more wired (or tethered) links. For example, the local infrastructure provider network may be communicatively coupled to the fixed hotspot access network (or any component thereof), the end-user devices, and/or environment devices via one or more wired links. Note that such a wired coupling may be temporary. Also, though not explicitly shown, the local infrastructure provider network may be communicatively coupled to any or all of the other elements present in the seventh example mode560(or configuration) via one or more wireless links (e.g., RF link, non-tethered optical link, etc.). 
For example, the local infrastructure provider network may be communicatively coupled to the fixed hotspot access network (or any component thereof), the end-user devices, and/or environment devices via one or more wireless links. Note that the communication link shown in the seventh example mode560ofFIG.5Cbetween the local infrastructure provider network and the fixed hotspot access network may be wired and/or wireless. The fixed hotspot access network is also shown in the seventh example mode560to be communicatively coupled to the end-user devices and/or environment devices via one or more wireless links. Many examples of such wireless coupling are provided herein. Additionally, the end-user devices are also shown in the seventh example mode560to be communicatively coupled to the environment devices via one or more wireless links. Many examples of such wireless coupling are provided herein. In the seventh example mode560(e.g., the no backbone and mobile hotspots available mode), information (or data) may be communicated between an end-user device and a server via the fixed hotspot access network and/or the local infrastructure provider network. As will be seen in the various example modes presented herein, such communication may flexibly occur between an end-user device and a server via any of a variety of different communication pathways, for example depending on the availability of a network, depending on bandwidth utilization goals, depending on communication priority, depending on communication time (or latency) and/or reliability constraints, depending on cost, etc. For example, information communicated between an end user device and a server may be communicated via the local infrastructure provider network (e.g., skipping the fixed hotspot access network). Similarly, in the seventh example mode560(e.g., the no backbone and mobile hotspots available mode), information (or data) may be communicated between an environment device and a server via the fixed hotspot access network and/or the local infrastructure provider network. Also for example, an environment device may communicate with or through an end-user device (e.g., instead of or in addition to the mobile hotspot access network). As will be seen in the various example modes presented herein, such communication may flexibly occur between an environment device and a server (e.g., communicatively coupled to the local infrastructure provider network) via any of a variety of different communication pathways, for example depending on the availability of a network, depending on bandwidth utilization goals, depending on communication priority, depending on communication time (or latency) and/or reliability constraints, depending on cost, etc. For example, information communicated between an environment device and a server may be communicated via the local infrastructure provider network (e.g., skipping the fixed hotspot access network). The seventh example mode560may be utilized for any of a variety of reasons, non-limiting examples of which are provided herein. In an example controlled space implementation, Cloud access might not be provided (e.g., for security reasons, privacy reasons, etc.), and full (or sufficient) coverage of the coverage area is provided by the fixed hotspot access network, and thus the mobile hotspot access network is not needed. For example, the end-user devices and environment devices may communicate directly (e.g., via Ethernet, Wi-Fi, etc.) 
with the Fixed APs. Note also that the seventh example mode560may be utilized in a scenario in which the backbone provider network and/or mobile hotspot access network are normally available but are currently unavailable (e.g., due to equipment failure, due to communication link failure, due to power outage, due to a temporary denial of service, etc.). The eighth example mode (or configuration)570(e.g., the no backbone, fixed hotspots, and local infrastructure available mode) may, for example, share any or all characteristics with the first example mode500, albeit without the backbone provider network, local infrastructure provider network, fixed hotspot access network, and communication links therewith. For example, the communication system in the eighth example mode570comprises a mobile hotspot access network, end-user devices, and environment devices. As shown inFIG.5C, and inFIG.1in more detail, the mobile hotspot access network is shown in the eighth example mode570to be communicatively coupled to the end-user devices and/or environment devices via one or more wireless links. Many examples of such wireless coupling are provided herein. Further, the end-user devices are also shown in the eighth example mode570to be communicatively coupled to the environment devices via one or more wireless links. Many examples of such wireless coupling are provided herein. In the eighth example mode570(e.g., the no backbone, fixed hotspots, and local infrastructure available mode), information (or data) might not (at least currently) be communicated between an end-user device and a server (e.g., a server coupled to the backbone provider network, local infrastructure provider network, etc.). Similarly, information (or data) might not (at least currently) be communicated between an environment device and a server (e.g., a server coupled to the backbone provider network, local infrastructure provider network, etc.). Note that the environment device may communicate with or through an end-user device (e.g., instead of or in addition to the mobile hotspot access network). The eighth example mode570may be utilized for any of a variety of reasons, non-limiting examples of which are provided herein. In an example implementation, the eighth example mode570may be utilized for gathering and/or serving data (e.g., in a delay-tolerant networking scenario), providing peer-to-peer communication through the mobile hotspot access network (e.g., between clients of a single Mobile AP, between clients of respective different Mobile APs, etc.), etc. In another example scenario, the eighth example mode570may be utilized in a scenario in which vehicle-to-vehicle communications are prioritized above vehicle-to-infrastructure communications. In yet another example scenario, the eighth example mode570may be utilized in a scenario in which all infrastructure access is lost (e.g., in tunnels, parking garages, etc.). Note also that the eighth example mode570may be utilized in a scenario in which the backbone provider network, local infrastructure provider network, and/or fixed hotspot access network are normally available but are currently unavailable (e.g., due to equipment failure, due to communication link failure, due to power outage, due to a temporary denial of service, etc.).
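Summarizing the example modes of FIGS. 5A-5C, the mapping from the set of currently reachable subsystems to an operating mode can be sketched as a simple lookup; end-user and environment devices are present in every mode and are therefore omitted from the keys. The subsystem labels, the table encoding, and the fallback string are illustrative assumptions.

```python
MODES = {
    frozenset({"backbone", "local_infra", "fixed_aps", "mobile_aps"}): "500: normal",
    frozenset({"local_infra", "fixed_aps", "mobile_aps"}): "510: no backbone",
    frozenset({"backbone", "mobile_aps"}): "520: no local infrastructure / fixed hotspots",
    frozenset({"backbone", "local_infra", "mobile_aps"}): "530: no fixed hotspots",
    frozenset({"backbone", "local_infra", "fixed_aps"}): "540: no mobile hotspots",
    frozenset({"backbone"}): "550: no fixed/mobile hotspots and local infrastructure",
    frozenset({"local_infra", "fixed_aps"}): "560: no backbone and mobile hotspots",
    frozenset({"mobile_aps"}): "570: no backbone, fixed hotspots, and local infrastructure",
}


def current_mode(available):
    """Map the set of currently reachable subsystems to an example mode."""
    return MODES.get(frozenset(available), "unlisted configuration")


available = {"backbone", "local_infra", "fixed_aps", "mobile_aps"}
print(current_mode(available))  # -> 500: normal
available.discard("backbone")   # backbone outage detected in real time
print(current_mode(available))  # -> 510: no backbone
```

Recomputing the mode whenever a subsystem enters or leaves service captures, in miniature, the real-time adaptive configuration described in the preceding paragraphs.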
As shown and discussed herein, it is beneficial to have a generic platform that allows multi-mode communications of multiple users or machines within different environments, using multiple devices with multiple technologies, connected to multiple moving/static things with multiple technologies, forming wireless (mesh) hotspot networks over different environments, connected to multiple wired/wireless infrastructure/network backbone providers, ultimately connected to the Internet, Cloud or private network infrastructure. FIG. 6 shows yet another block diagram of an example network configuration, in accordance with various aspects of the present disclosure. The example network 600 may, for example, share any or all characteristics with the other example networks and/or network components 100, 200, 300, 400, and 500-570, discussed herein. Notably, the example network 600 shows a plurality of Mobile APs (or OBUs), each communicatively coupled to a Fixed AP (or RSU), where each Mobile AP may provide network access to a vehicle network (e.g., comprising other vehicles or vehicle networks, user devices, sensor devices, etc.). As discussed herein, a network of moving things (e.g., including moving access points, moving sensors, moving user client devices, etc.) may be supported by an infrastructure that comprises a mesh among fixed and mobile APs that can flexibly establish connections with the Internet, the Cloud, private networks, etc. The functionality of the various fixed and mobile elements of a network of moving things may include software or firmware that is executable by processors, and may include data used by such processors that, for example, control the establishment and management of communication over the various wired and wireless links, communicate data between various elements, enable configuration of various elements according to the use of the network portions, provide services to end users, and perform diagnostics and maintenance of network elements. The software, firmware, and data of the various fixed and mobile elements of a network of moving things may, for example, be in the form of programs, modules, functions, and/or subroutines made of any combination of one or more of machine executable instructions, intermediate language instructions, interpreted pseudocode instructions, and/or higher level or script language instructions. In accordance with the present disclosure, a network of moving things may provide functionality that enables the network to continue to evolve after network deployment, enabling the distribution of updated software, firmware, and/or data that provides new features and enhancements in a secure and reliable manner. In accordance with the present disclosure, such update information for updating software, firmware, and/or data may be referred to herein simply as a software update, an “update,” or “update file,” and may include digital information representing a configuration of a network entity, software, firmware, and/or the arrangement of the network entities with respect to one another. Such updates may be created to update the software, firmware, and/or data at any granularity including, by way of example and not limitation, at one or more of the program, module, function, and/or subroutine levels. Such updates may be agnostic of the location and expected behavior, and may be totally adaptable to any constraints and requirements desired by the system operator or users.
FIG. 7 shows a block diagram of an example communication network 700, in accordance with various aspects of the present disclosure. The example network 700 may, for example, share any or all characteristics with the other example methods, networks, and/or network components 100-600, 800, 900, and 1000 discussed herein. As illustrated in FIG. 7, the network 700 includes a number of network components (e.g., Cloud 760; vehicles 741, 742; access points 726, 737, 738; and mobility controller 735). The vehicles 741, 742; the access points 726, 737, 738; and the mobility controller 735 each contain what may be referred to herein as a “network unit” (NU), represented in FIG. 7 as having respective NUs. In the context of a vehicle, the NU may be part of, for example, an OBU, an AP, and an MC, as previously described above. In accordance with aspects of the present disclosure, the mobile NUs may have access to a number of communication methodologies including, for example, a “DIRECT” communication methodology that involves direct communication with the destination entity, an “OPPORTUNISTIC” communication methodology that communicates with the destination entity only when one specific communication technology is available (e.g., one of direct short-range communication (DSRC) connectivity to a specific access point, Bluetooth wireless connectivity, or cellular connectivity), and an “EPIDEMIC” communication methodology that may deliver the message to the next available networking neighbor of the entity sending a message. The networking neighbor that is sent the message is then responsible for continuing the delivery of the message to its own neighbor node(s), thereby transporting the message through various network entities until the final destination is reached. In accordance with various aspects of the present disclosure, a communication methodology may be chosen according to the time criticality and value of the information being transferred, and the cost effectiveness of each of the communication methodologies. In accordance with aspects of the present disclosure, NUs that are “fixed” rather than “mobile” may be configured to rely on “DIRECT” communication methodologies. Additional details may be found, for example, in U.S. Provisional Patent Application No. 62/272,750, entitled “Systems and Methods for Remote Software Update and Distribution in a Network of Moving Things,” filed Dec. 30, 2015; and U.S. Provisional Patent Application No. 62/278,662, entitled “Systems and Methods for Remote Configuration Update and Distribution in a Network of Moving Things,” filed Jan. 14, 2016, the complete subject matter of each of which is hereby incorporated herein by reference, in its respective entirety. A network of moving things in accordance with aspects of the present disclosure is able to communicate data with both mobile and fixed NUs. For example, the mobile NUs 724, 725 in their respective vehicles 742, 741 of FIG. 7, also referred to herein as mobile access points or mobile APs, may not have continuous access or communication with the data storage of Cloud 760. In accordance with various aspects of the present disclosure, such mobile NUs may leverage any existing communication connections that are available. In accordance with aspects of the present disclosure, mobile NUs such as, for example, the NUs 725, 724 of their respective vehicles 741, 742 of FIG. 7 may, for example, communicate with fixed NUs such as, for example, the NUs 726, 737, 738 of FIG. 7, using the EPIDEMIC communication methodology, described above.
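As a non-authoritative illustration of the methodology selection just described, the following Python sketch chooses among the DIRECT, OPPORTUNISTIC, and EPIDEMIC methodologies based on time criticality, information value, and cost. The Message type, parameter names, and cost comparison are illustrative assumptions, not part of the present disclosure.

from dataclasses import dataclass

@dataclass
class Message:
    payload: bytes
    time_critical: bool   # must arrive with low latency
    value: float          # relative value of the carried information

def choose_methodology(msg: Message, dsrc_available: bool,
                       cellular_cost: float) -> str:
    # Time-critical data justifies the cost of immediate, direct delivery.
    if msg.time_critical:
        return "DIRECT"
    # Otherwise, prefer the one specific (inexpensive) technology when present.
    if dsrc_available:
        return "OPPORTUNISTIC"
    # Low-value data can ride neighbor-to-neighbor until it reaches the
    # destination, at essentially no connectivity cost.
    if msg.value < cellular_cost:
        return "EPIDEMIC"
    return "DIRECT"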
In accordance with various aspects of the present disclosure, various sensors (e.g., sensors connected to NU 730) may not have direct communication with the data storage of the Cloud 760, and therefore may leverage the connectivity provided by an NU such as, for example, the “relay” NU 724 of vehicle 742, to which they may connect. Such relay NUs (RNUs) may communicate with any such sensor, in order to enable any such sensors to communicate sensor data with, for example, the Cloud 760. The ever-growing volume of information generated by the huge variety of connected devices raises constant challenges in providing reliable transport for that data. Within a few years, with the continued proliferation of the Internet of Things and further deployment of smart sensors, the transportation of the growing volume of data generated by such devices will present a tremendous challenge not only in terms of the amount of bandwidth required, but also with regard to connectivity costs. A network in accordance with various aspects of the present disclosure, which may be referred to herein as the “Internet of Moving Things” (IoMT), provides a platform that is highly optimized for the transport of data generated by, for example, various sensors in the area served by such a network, in a very scalable way. Additional details regarding interfacing among sensors and a network in accordance with various aspects of the present disclosure may be found, for example, in U.S. Provisional Patent Application No. 62/222,135, entitled “Systems and Methods for Collecting Sensor Data in a Network of Moving Things,” filed Sep. 22, 2015. Additional details regarding adapting the granularity, bandwidth, and priority of sensing and disseminating data may be found, for example, in U.S. Provisional Patent Application No. 62/253,249, entitled “Systems and Methods for Optimizing Data Gathering in a Network of Moving Things,” filed Nov. 10, 2015. The complete subject matter of each of the above-identified provisional patent applications is hereby incorporated herein by reference, in its respective entirety. As will be recognized by those of skill in the art, all of the data collected by elements in a network of moving things is potentially valuable for a wide variety of applications and insights, most of which are yet to be discovered. End-to-end data integrity is important in any network, and is particularly so in a network such as the IoMT of the present disclosure, considering the variety of elements and processes involved in its acquisition. At the present time, just a small fraction of the data collected from connected devices is actually being used. However, network support for the collection of high definition data is of increasing importance. A network in accordance with various aspects of the present disclosure provides the foundations for an analytics system that uses collected sensor and other data to provide, for example, optimizations and predictions in a wide variety of different areas (e.g., transportation, environment, and/or communication). The inventive concepts presented in the present disclosure define an architecture that supports powerful and scalable analysis of the data acquired from a network such as, for example, the IoMT, and defines the methods and best practices to guarantee data integrity in the IoMT. Aspects of the present disclosure define techniques for data filtering, so that only valid data may be used in further analysis.
The concepts disclosed herein may be used in combination with approaches to collecting sensor data such as, for example, those disclosed in U.S. Provisional Patent Application No. 62/222,135, entitled “Systems and Methods for Collecting Sensor Data in a Network of Moving Things,” filed Sep. 22, 2015, the complete subject matter of which is hereby incorporated herein by reference, in its entirety. The concepts disclosed herein may also be used in combination with approaches to optimizing the gathering of data such as, for example, the inventive concepts disclosed in U.S. Provisional Patent Application No. 62/253,249, entitled “Systems and Methods for Optimizing Data Gathering in a Network of Moving Things,” filed Nov. 10, 2015, the complete subject matter of which is hereby incorporated herein by reference, in its entirety. In combination with the above concepts, an implementation in accordance with the present disclosure defines a data analytics system comprising data acquisition, transport, storage, and analysis. In this manner, the inventive concepts discussed herein provide the basic framework and methodology to analyze the performance of, and the data collected by, a network in accordance with various aspects of the IoMT discussed herein. FIG. 8 is an illustration showing an example framework 800 supporting data analytics in a network of moving things, in accordance with various aspects of the present disclosure. In the example of FIG. 8, data may be generated in each one of the network elements shown and described herein (e.g., mobile APs 802, 803, fixed AP 804, Internet 805, and analytics 809 and the Cloud services 810), which may correspond to the OBUs, fixed APs, local infrastructure providers and backbone provider, and cloud network elements, respectively, shown in FIGS. 1-7. In FIG. 8, arrows associated with communication paths are used to indicate the primary flow of data. As shown in FIG. 8, the example framework 800 includes first and second mobile APs 802, 803, a fixed AP 804, cloud services 810, and an analytics entity 809. It should be noted that FIG. 8 is merely an example configuration of network nodes, and that many other configurations having a greater number of network nodes and elements (e.g., sensors, mobile APs, fixed APs, etc.) may be seen in a network of moving things in accordance with various aspects of the present disclosure. In FIG. 8, a “smart city” sensor such as, for example, the sensor A 801 may be linked to the first mobile AP 802 via a wireless communications path (e.g., Wi-Fi, Bluetooth, etc.). Sensor A 801 may be a sensor of any type including, for example, suitable circuitry, software, and logic to continually sense weather conditions (e.g., temperature, humidity, wind speed, precipitation, etc.), air quality (e.g., levels/concentration of carbon monoxide, nitrous oxide(s), pollen, hydrocarbons, etc.), environmental noise or traffic, or even information relating to pickup of garbage (e.g., a sensor indicating a level/amount of refuse in a container), to name only a few types of suitable sensors. The second mobile AP 803 of FIG. 8 is also linked via a wireless communications path (e.g., DSRC) to the first mobile AP 802. The first mobile AP 802, in addition to receiving and forwarding the data or information from other sensors and APs (e.g., sensor A 801 and second mobile AP 803) to a fixed AP such as the fixed AP 804, may also include data from its own operation (e.g., hardware and software debug logs, traffic counters/measurements, etc.)
and from sensors and devices such as, in the example of FIG. 8, an interface or sensor for receiving vehicle on-board diagnostics (OBD) information, a single or multi-axis accelerometer, a gyroscope, an altimeter, and enriched information from a global navigation satellite system/global positioning system (GNSS/GPS) receiver. APs such as, for example, the first mobile AP 802, the second mobile AP 803, and the fixed AP 804 may themselves generate a considerable amount of data during their operation, in addition to any data collected from sensors/devices/sources co-located with or remote from these network elements. In accordance with various aspects of the present disclosure, the data from such sensors/devices/sources may be collected and sent, for example in an opportunistic manner whenever possible, via a communication link to a network element such as, for example, the fixed AP 804, passed on to the Internet 805, and then forwarded to the Cloud services element 810 for storage and immediate or later processing. Alternatively, critical data/information may be sent via communication technologies other than those of the example of FIG. 8. For example, data may be transferred from, for example, mobile AP 802 to the databases 807, 808 of the Cloud services via a cellular data link and the Internet 805. Once the data/information from the various elements of FIG. 8 reaches the Cloud services element 810, the data may then be stored in a suitable database such as, for example, either or both of the two databases 807, 808 of FIG. 8, and may then immediately be available to be analyzed by analytics functionality such as, for example, the analytics element 809 of FIG. 8. Such analysis may result in the production of various reports, and may be available in the form of a “dashboard” or key performance indicators (KPIs), or in the form of an enriched database that may, for example, combine data from various sources to provide greater value to users. It should be noted that each one of the previously described processes may lead to data loss or corruption. In some situations, sensor readings may be faulty although the communication between network elements is correct. In such instances, the information that is stored in the databases may end up being incorrect. In other instances, sensor information may be correct, but communication errors may occur, resulting in erroneous data arriving at the intended destination. In the constantly moving and changing wireless mesh network described herein, such errors are even more likely to happen. In addition to the examples of data loss and/or corruption given above, data loss may also occur at the databases and cloud services, even though data loss there is less likely. A data analytics framework in accordance with the present disclosure employs database transaction operations that provide atomicity, consistency, isolation, and durability (ACID). In addition, database replication (e.g., via the use of different data warehouses) may be employed to avoid the loss of critical information. Additional details regarding suitable forms of data storage and processing may be found, for example, in U.S. Provisional Patent Application No. 62/222,168, entitled “Systems and Methods for Data Storage and Processing in a Network of Moving Things,” filed Sep. 22, 2015, the complete subject matter of which is hereby incorporated herein, in its entirety.
It should be noted that data loss due to communication issues is not limited to the communication of data between elements of a network of moving things, such as between the first mobile AP 802 and the fixed AP 804 of FIG. 8, for example. Communication issues between processes running in the same network element may also occur. The various elements of a network of moving things each rely on a number of different applications that interact with each other. If for some reason (e.g., memory space issues, high CPU usage, software malfunction), a first application that produces information or data is not able to correctly transfer the information or data to a second application that is responsible for uploading the information or data to storage (e.g., a database of the Cloud service 810), the information or data may be lost. Due to the limited computing resources that may be available in some network elements such as, for example, mobile APs, an embodiment according to aspects of the present disclosure incorporates reliable mechanisms to transmit information between processes. For example, an element in a network of moving things in accordance with aspects of the present disclosure may use buffers that hold information during periods of high CPU usage, and may employ acknowledgment mechanisms that ensure that information being transferred from one process to another or one element to another is not cleared from the sending process/element without verification that the receiving process/element has correctly received the information. In addition, the correct use of shared memory is important, not only for the proper functioning of applications and processes, but also to guarantee the integrity of the data or information when the information is shared. Careful analysis may reveal such issues, as long as one can determine that no other cause of error is present. Unit testing mechanisms may be deployed to stress the developed applications and reveal possible issues at a very early stage. A data analytics framework in accordance with various aspects of the present disclosure such as, for example, the example framework 800 of FIG. 8, is designed to be aware of all of these issues, so that the data analysis framework may detect data inconsistency and correct for such data inconsistency. On the analytics level, a data analytics framework according to aspects of the present disclosure is able to distinguish between information that is missing due to communication failures and information that is missing due to processing failures. In a network of moving things, loss of data due to communication failures is understandable, and may occur with greater frequency than the loss of data due to processing failures. It is, however, critical that these two situations be properly identified. Otherwise, determining the source of and correcting any misbehavior may be almost impossible. In accordance with various aspects of the present disclosure, a system as described herein employs multiple correlated sources of information, and both real-time and end-of-day verification of receipt of data samples, to enable the system to determine when data samples are missing, and to use information from a correlated information source as an alternate source of missing data samples.
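A minimal sketch, under assumptions, of the buffered and acknowledged inter-process hand-off described above follows in Python: the producing application retains each item until the uploading application explicitly acknowledges it, so that nothing is cleared without verified receipt. The class and method names are hypothetical.

import queue

class AckedBuffer:
    """Holds produced items until the consuming process acknowledges them."""

    def __init__(self, maxsize: int = 1024):
        self._pending = {}                  # seq -> item awaiting acknowledgment
        self._queue = queue.Queue(maxsize)  # absorbs bursts during high CPU usage
        self._seq = 0

    def put(self, item) -> int:
        # Retain the item until ack() is called for its sequence number.
        self._seq += 1
        self._pending[self._seq] = item
        self._queue.put((self._seq, item))
        return self._seq

    def ack(self, seq: int) -> None:
        # The receiving process confirms receipt; only now is the item dropped.
        self._pending.pop(seq, None)

    def unacknowledged(self) -> list:
        # Items to retransmit, e.g., after the uploading process restarts.
        return list(self._pending.values())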
For example, a system according to the present disclosure may determine in real-time that a sample of data is missing from a source that regularly sends data samples, or the system may verify, at an “end-of-day” or other point in time, that the correct number of samples has been received. In either instance, the system may use an alternate source of information that is correlated with the missing data, to reproduce the missing data sample(s). A network of moving things in accordance with the present disclosure employs certain approaches (that may also be referred to as strategies, mechanisms, procedures, processes, and/or algorithms) to avoid various kinds of data loss and data inconsistencies in the communication and storage of data or information such as, for example, the data sourced and/or received by the various elements of the data analytics framework of FIG. 8. The various approaches employed in a network of moving things in accordance with the present disclosure may act in parallel (i.e., concurrently) and/or sequentially, or in any suitable combination, based upon the requirements of the network and the capabilities of the network nodes. The approaches disclosed herein may be configured for use based on a type of data that is being gathered and/or transferred, or based on the number of available sources of the data of interest. Due to the challenging nature of data communication in a network of moving things, and the attendant reliability issues that may be of concern, the approaches used to provide maximum data integrity in accordance with aspects of the present disclosure may depend on the type of communication mechanism or link used to transfer the data of interest, and a number of other factors. For example, many factors may be used to derive a probability of whether to configure a specific approach in a particular network node or group of nodes in a network of moving things in accordance with aspects of the present disclosure. Such factors include, for example, the speed of the vehicle in which a particular network node (e.g., a mobile AP) is located or of the vehicles served by a particular network node (e.g., a fixed AP), and various characteristics of the roads being traveled by the vehicles in which the network node (e.g., mobile AP) is located or of the roads served by a particular network node (e.g., fixed AP). Additional such factors may include the number of fixed APs and/or mobile APs within a certain physical distance from the particular network node (e.g., a mobile AP), and the line-of-sight/non-line-of-sight wireless communication conditions experienced by the particular network node (e.g., mobile AP or fixed AP), which may also be used to derive a probability of whether to configure a specific approach in a particular network node or group of nodes in a network of moving things in accordance with aspects of the present disclosure. In accordance with various aspects of the present disclosure, the periodicity of employing such approaches locally in particular network nodes or in the Cloud may be determined using, for example, the number or percentage of errors permitted for each type of data being communicated, and/or based on the usually observed number or percentage of errors experienced (i.e., expected reliability).
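The following Python sketch illustrates, under stated assumptions, the “end-of-day” verification just described: given a source expected to emit one sample every period, it counts what actually arrived and reproduces missing samples from a correlated source. The dictionary layout and function name are illustrative only.

def verify_end_of_day(samples: dict, start_s: int, end_s: int,
                      period_s: int, correlated: dict) -> dict:
    # 'samples' and 'correlated' map expected timestamps to values.
    repaired, missing = dict(samples), []
    for t in range(start_s, end_s, period_s):
        if t not in repaired:
            missing.append(t)
            # Reproduce the missing sample from the correlated source, if any.
            if t in correlated:
                repaired[t] = correlated[t]
    expected = (end_s - start_s) // period_s
    print(f"expected {expected}, received {len(samples)}, missing {len(missing)}")
    return repaired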
Some applications running on the network may require that all data sent across the network arrives in a valid, error-free condition, while other applications may function properly or acceptably even when data is known to be valid only a part of the time (e.g., when valid data samples are only needed from time to time). In addition, network elements such as, for example, fixed APs may aid in the process of providing valid data, in that some network elements such as fixed APs and the Cloud may be configured to provide information such as, for example, correction vectors (e.g., provided by a Cloud-based service) for data that is currently being gathered and that needs to be made available to mobile APs such as, for example, the correct number of sessions, received signal strength indicators (RSSI), or geographic location. As mentioned above, a network of moving things in accordance with the present disclosure may employ a variety of approaches in regard to maximizing data integrity. For example, a network of moving things in accordance with the present disclosure may take advantage of the known regularity and reliability of data sources that produce data samples according to a known schedule. Thus, such data sources may have what is referred to herein as “expected reliability.” This approach to data validation may be applied to data sources that produce information that is updated on a regular periodic basis. In this case, if it is known that an element of the network of moving things (e.g., a sensor, mobile AP, fixed AP, etc. of an IoMT) should produce information at certain time intervals, for example, every X seconds while in operation, the failure by a particular network element (e.g., a mobile AP or fixed AP that receives the data transfers) to receive a new transfer of data, or the presence of a gap in time between entries of a database of the respective cloud endpoint (e.g., Cloud services 810) greater than, for example, X seconds, is an indication that information is missing, and corrective action may be warranted and taken. In accordance with various aspects of the present disclosure, the data transferred from a source element to destination elements of a network of moving things according to the present disclosure may include redundant information that permits a receiver (e.g., a network element receiving digital data from a sensor, or a network node receiving a transmission from another network node) to verify the correctness of a received data transmission. Some sources of data may provide a checksum, hash value, or another form of information or code generated by applying a known algorithm, where the information or code is a function of the data transmitted to the receiver, and that can be re-calculated at the receiver using the received data, to verify the accurate reception of the received information. The occurrence of errors at the receiver, detected via the use of such information, may indicate that actions are needed to improve communication links between network elements, or that particular portions of a wireless link are unreliable, and may permit a request to be sent to the source for retransmission of the data exhibiting the error.
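A minimal Python sketch of the real-time “expected reliability” check described above follows: a gap is flagged whenever the spacing between consecutive samples from a periodic source exceeds the known period X plus a small tolerance. The tolerance value is an assumption of this sketch.

def detect_gaps(timestamps_s, period_s, tolerance_s=0.5):
    # Yield (gap_start, gap_end) for every interval in which data is missing.
    for earlier, later in zip(timestamps_s, timestamps_s[1:]):
        if later - earlier > period_s + tolerance_s:
            yield (earlier, later)

# Example: a 1 Hz source that went silent for three seconds.
assert list(detect_gaps([0, 1, 2, 5, 6], period_s=1)) == [(2, 5)]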
In addition, a network of moving things in accordance with the present disclosure may employ checks on what may be referred to herein as “partial integrity.” In a situation in which a data producer component (e.g., sensor A 801) provides incomplete information, such as may be determined by a local check of the information (e.g., by mobile AP 802), a message with the incomplete information may be sent to the respective cloud endpoint (e.g., Cloud services 810), such that the message is plainly marked to indicate that limited or incomplete information was received from the source, so that the database entry for the received information records that the absence of information is due to sensor malfunction, or due to reporting of intentionally incomplete information (e.g., as in the case of a GNSS/GPS receiver having too few satellites in view to enable the GNSS/GPS receiver to produce complete satellite-based geolocation information reports), and not due to communication failures. A network of moving things in accordance with the present disclosure may also employ what may be referred to herein as “data source diversity.” This approach may be used to detect data inconsistencies by determining a certain metric using two different data sources, if available. While this process may require more processing than other approaches, it can reveal problems in one of the data acquisition methods. An example of this is the acquisition of wheel sensor data available via a vehicle data bus accessible using an On-Board Diagnostic (OBD/OBD II) data port, and the use of such information to verify vehicle travel distance by comparison against information available using, for example, GNSS/GPS satellite-based position information. For example, a network of moving things in accordance with the present disclosure may support improved communication reliability through the creation of a non-persistent sequence number for each message. The use of this technique allows elements of such a network to identify data loss during the process of communicating data from one network element to another through the network. The sequence number used by a network element may be reset at certain points in time such as, for example, when the network element is reinitialized. In this manner, communication issues are easily identified by the occurrence of a missing sequence number such as, for example, at an endpoint such as the example cloud services 810 network element of FIG. 8. For example, in accordance with various aspects of the present disclosure, the occurrence of sequence number resets may be cross-checked against an events database that is maintained for each network element, by checking the contents of the events database for the network element that experienced the reset, to confirm that the network element was reset due to, for example, a system reboot, rather than a failure of the network element or inter-element communication. A network of moving things in accordance with the present disclosure may also employ what may be referred to herein as “data source reliability.” For example, in a situation in which a data producing component (e.g., sensor A 801) provides erroneous information, as determined by a local check of the information (e.g., by mobile AP 802), a message without sensor data content may be sent to the respective cloud endpoint (e.g., cloud services 810), so that the database entry records that the absence of information is due to sensor malfunction and not due to a communication failure.
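The non-persistent sequence number check described above may be sketched as follows in Python; the classification of a backward jump as a reset (to be cross-checked against the element's events database) follows the behavior described herein, while the function signature itself is an illustrative assumption.

def check_sequence(prev_seq: int, seq: int):
    # Returns a classification and the count of missing messages.
    if seq == prev_seq + 1:
        return "ok", 0
    if seq <= prev_seq:
        return "reset", 0   # cross-check the events DB (e.g., system reboot)
    return "loss", seq - prev_seq - 1

assert check_sequence(7, 8) == ("ok", 0)
assert check_sequence(7, 1) == ("reset", 0)
assert check_sequence(7, 10) == ("loss", 2)   # messages 8 and 9 were lost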
A network of moving things in accordance with various aspects of the present disclosure may provide support for analytic processes that rely on data that is already stored (e.g., resident in a database), by defining a set of statistical data integrity principles or filtering procedures. Such statistical data integrity principles or filtering procedures may be used to refine the information used in the analysis. This approach identifies outliers that can occur, and evaluates the likelihood of occurrence of various values of the incoming data, taking into account the statistical distribution of historical data, and comparing data from several (e.g., redundant) data sources (e.g., GNSS/GPS, Radio Frequency (RF) Fingerprinting, and/or Dead Reckoning geolocation approaches), and may use geographic boundaries of the information or data that has already been collected and stored. A network of moving things in accordance with various aspects of the present disclosure may, for example, apply a particular subset of a larger set of data validation approaches such as those discussed above, to validate data that may be transferred via the elements of a network of moving things. The following discussion describes an example illustrating the use of some of the above-identified approaches for the acquisition of location-related information in an IoMT. For example, each mobile AP (e.g., mobile APs 802, 803 of FIG. 8) may be equipped with a GNSS/GPS receiver that produces positioning and satellite-related information on a regular basis (e.g., once every X seconds). In accordance with aspects of the present disclosure, such positioning and satellite-related information may be used not only for the correct tracking and visualization of the location of various network elements (e.g., mobile AP location), but may also be used as input to more complex positioning algorithms. Such a GNSS/GPS receiver may also regularly (e.g., once every X seconds) produce a set of “sentences” (e.g., strings of bytes or characters representing various geolocation or time related parameters) that may from time to time be affected by an error. In accordance with aspects of the present invention, a checksum present at the end of each sentence produced by the GNSS/GPS receiver may be verified, so that a network element receiving the data may validate such data and locally discard erroneous location information. In some instances, an information source such as, for example, a GNSS/GPS receiver may produce partial information. Such partial information may occur when, for example, a GNSS/GPS receiver attempts to determine its location and does not have a sufficient number of satellites “in view” (i.e., providing received satellite signals of sufficient strength). In such an instance, although some information may be provided by the GNSS/GPS receiver, the location coordinate information may not be provided. To aid in the analysis of information, a network of moving things in accordance with aspects of the present disclosure may record when partial geolocation information has been received at the database, to enable the network element(s) performing the analysis of received geolocation information to disregard any other issues that may occur such as, for example, communication errors. A network of moving things in accordance with aspects of the present disclosure may employ the concept of “expected reliability,” mentioned above.
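For GNSS receivers that emit NMEA 0183 sentences, the per-sentence checksum verification described above can be sketched in Python as follows. The standard NMEA rule (XOR of all characters between "$" and "*", compared against the two trailing hexadecimal digits) is assumed here, since the disclosure does not mandate a particular sentence format.

from functools import reduce

def nmea_checksum_ok(sentence: str) -> bool:
    # Validate a sentence of the form "$GPGGA,...*47"; malformed or corrupted
    # sentences are reported as bad so the receiver can discard them locally.
    if not sentence.startswith("$") or "*" not in sentence:
        return False
    body, _, received = sentence[1:].partition("*")
    computed = reduce(lambda acc, ch: acc ^ ord(ch), body, 0)
    try:
        return computed == int(received[:2], 16)
    except ValueError:
        return False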
For example, because it may be known that a GNSS/GPS receiver provides a location/receiver information update on a regular periodic basis (e.g., every X seconds), at the point in time when analysis is performed, there should be available a GNSS/GPS receiver update for every regular time period (e.g., for every X seconds) that a mobile AP was in operation. Such a data source that produces information on a regular basis in time allows checks to detect when that source is not operating normally, or when a data communication path through the network to a destination is not operating properly. A network of moving things in accordance with aspects of the present disclosure may use redundant sources of data to corroborate one information source against another. For example, an example system may compare the distance traveled as determined by using GNSS (e.g., GPS) information, with the distance traveled as determined using information available from other means such as, for example, the OBD-accessible sensors/devices of a vehicle, to validate the information from those two data sources. More than two data sources may be used in such data validation. In such an approach, although it may not be expected that the two or more data sources will produce exactly the same distance traveled value, or any other chosen metric, a considerable difference (e.g., a discrepancy above a certain threshold amount) will reveal issues in one of the data acquisition systems (i.e., the GNSS receiver and/or the OBD sensor/device). By integrating the various types of information described above at the point in time when analysis of data is performed, it is therefore possible to determine those locations where, for example, a GNSS/GPS signal is poor and therefore no location information is provided by the GNSS/GPS receiver, and those locations where there are typical data communication issues. By enabling the identification of such issues, the data integrity analysis and validation techniques of the present disclosure are a useful tool to debug a network of moving things (e.g., the IoMT) at different stages and layers, and to improve the reliability of data on an ongoing basis. It should be noted that while the examples given above refer to the acquisition of data related to geolocation, the principles disclosed are applicable to many other sources of data, in many other networks of moving and/or stationary network nodes. The following is an illustrative example of the use of a subset of approaches used for validation of data transferred between nodes of a network of moving things, in accordance with various aspects of the present disclosure. In this example, a subset of approaches selected from the set of example approaches discussed above may include four different data validation approaches. Example data validation approach 1 of the subset may check for data transmission errors by looking at a checksum portion of each packet received from the source. Example data validation approach 2 may check a sequence number of each packet received from a source to detect missing packets. Example data validation approach 3 may check the periodicity of data packets received from a source that is known to be periodic in production of data. Example data validation approach 4 may use one or more other data sources to validate or corroborate a particular metric represented by data in packet(s) received from a source.
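A minimal Python sketch of the data-source-diversity corroboration just described compares the distance traveled according to GNSS against the distance derived from OBD wheel sensors, flagging only a considerable difference; the 10% threshold is an illustrative assumption.

def distances_consistent(gnss_m: float, obd_m: float,
                         threshold_ratio: float = 0.10) -> bool:
    # The two sources will never match exactly; only a discrepancy above
    # the threshold indicates a problem in one of the acquisition systems.
    baseline = max(gnss_m, obd_m, 1e-9)   # guard against division by zero
    return abs(gnss_m - obd_m) / baseline <= threshold_ratio

assert distances_consistent(1000.0, 1030.0)       # about 3% apart: consistent
assert not distances_consistent(1000.0, 1500.0)   # about 33% apart: flag it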
It should be noted that the above four data validation approaches are merely one subset of a large variety of approaches that may be used depending on the nature of the data source, the nature of the communication links in use for data transfer, and the available alternate data sources, and that the present disclosure is not limited merely to the above examples. In accordance with various aspects of the present disclosure, the data validation approaches described above may always be available in the Cloud entities, the FAPs, and/or the MAPs of FIGS. 1-8, and may be turned on (i.e., enabled or activated) or turned off (i.e., disabled or deactivated) where and when desired (e.g., on a per-network node basis) or based on the context of the vehicular environment of the network of moving things. In this way, a network of moving things according to various aspects of the present disclosure may provide an automatic and adaptable system that is able to dynamically adapt the level of data validation checks performed in the network, and to personalize them based on the type of data (e.g., real-time or delay-tolerant), the communication technology or mode (e.g., IEEE 802.11p, cellular, IEEE 802.11a/b/g/n/ac/ad, etc.), or traffic characteristics (e.g., periodic, deterministic, correlated with other sources of information, etc.). Using these data validation approaches, processes, strategies, mechanisms, procedures, and/or algorithms, a network in accordance with aspects of the present disclosure supports quickly and deeply understanding whether a problem actually exists in the network, without introducing high network overhead and processing cost to keep those traffic validation operations always running. The following examples illustrate situations in which it may be advisable to apply one or even a set including several of the approaches described herein. For instance, an example data validation approach 1 may be applied in the Cloud (e.g., Cloud 760, 809/810, 1060), and when application packet losses start to rise above a certain threshold that may, for example, be configurable per communication technology (e.g., DSRC, cellular, Wi-Fi, Bluetooth, etc.) and/or data type, instructions may be sent to, for example, FAPs, MAPs, and/or other network elements, to request them to begin monitoring packet checksum information, or to perform other actions, so that problematic communication links may be identified. In cases in which a MAP is located in a vehicle moving at a very high rate of speed or is performing frequent handovers between communication technologies (e.g., between DSRC, cellular, Wi-Fi, etc.) or switching among different APs, it may be advisable to raise the thresholds for the loss of application packets before beginning to apply example data validation approach 1. For local applications, it may be advisable to apply approach 1 at the mobile AP, because the Cloud is not involved in the process. The application of example data validation approach 2 may be used in situations where network interfaces employing IEEE 802.11p and cellular communication technologies are employed, and may involve the use of different thresholds for the analysis of the periodicity of the data (e.g., milliseconds in the case of IEEE 802.11p and seconds in the case of cellular).
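As a sketch, under assumptions, of the threshold-triggered activation described above for example data validation approach 1: checksum monitoring is requested from a node only when the packet-loss rate for a given technology exceeds a per-technology threshold, and the threshold is raised for nodes at high speed or performing frequent handovers. All threshold values and names below are illustrative.

LOSS_THRESHOLDS = {"dsrc": 0.05, "cellular": 0.02, "wifi": 0.03}

def validation_actions(loss_rate: float, technology: str,
                       high_speed_or_handover: bool) -> list:
    threshold = LOSS_THRESHOLDS[technology]
    if high_speed_or_handover:
        threshold *= 2   # tolerate more loss before reacting
    # Instruct the node to begin monitoring packet checksums only when the
    # observed application packet loss rises above the threshold.
    return ["monitor_packet_checksums"] if loss_rate > threshold else []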
The use of example data validation approach 2 may be appropriate when transfer of delay-tolerant data is involved, because packets containing such data may not arrive at the destination in order, and sequence numbers may be needed to determine whether data packets are missing and/or to restore packet order. The use of example data validation approach 3 may or may not be appropriate when the data being communicated is not periodic in nature (e.g., when the data is delay-tolerant data). However, example data validation approach 3 may be appropriate for the validation of periodic data, and a network element (e.g., MAPs, FAPs, etc.) may be configured according to the expected periodicity of the data transfers. The periodicity of the data transfers to be processed according to this approach may be learned by, for example, reviewing historical data transfer activity patterns gathered during previous data transfer activity, or from known characteristics of the network element that generates the data being transmitted. In accordance with various aspects of the present disclosure, the example data validation approach 4 may be used to correlate metrics/data of interest with other sources of information in order to understand whether any issues of data validity exist. For example, if data received from (or a lack of data received from), for example, network elements such as MAPs or FAPs suggests that there are no user traffic sessions (e.g., user Wi-Fi sessions) on a specific bus, during a particular period of time, or in a particular geographic region, validation of such data may be performed by checking whether the specific bus is in service and actually moving, or by checking whether there are user logins during that particular time period, or user traffic sessions on other buses currently in that geographic region. In addition, records of failed login attempts on, for example, a captive portal serving end users, may be used as a source of information with which to confirm the integrity of session traffic accounting information. It should be noted that in some situations, the use of example data validation approach 4 may involve significant levels of data processing resources, and results may be delayed due to the process of correlating information from different DBs and information sources. For these reasons, the use of example data validation approach 4 may be appropriate when other approaches do not adequately diagnose a problem. Other data validation approaches may, for example, be employed until a certain threshold (e.g., a packet-loss-based threshold) is reached, and then this data validation approach may be applied. It should be noted that the example approaches for data integrity validation given above are provided for illustrative purposes only, and are not to be taken as an exhaustive collection of possible approaches to the validation of various types of data. The four approaches discussed above are simply an example subset of the set of all approaches that may be used to validate data from the wide variety of data sources available. In some instances, only one or two of the example approaches may be appropriate. In other network instances, with other types of data not discussed above, new data validation approaches may be devised, and the need to accommodate such different data validation approaches is anticipated by the concepts disclosed and discussed herein.
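The learning of expected periodicity from historical activity, mentioned above for example data validation approach 3, might be sketched in Python as follows; taking the median of historical inter-arrival times is one reasonable estimator (an assumption of this sketch) because occasional gaps or bursts do not skew it.

import statistics

def learn_period_s(arrival_times_s):
    # Estimate the source's period from historical inter-arrival gaps.
    gaps = [b - a for a, b in zip(arrival_times_s, arrival_times_s[1:])]
    return statistics.median(gaps)

# One long outage (4 s) does not disturb the learned 1 s period.
assert learn_period_s([0, 1.0, 2.1, 3.0, 7.0, 8.0]) == 1.0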
In accordance with various aspects of the present disclosure, the number of different approaches to be used in validating data integrity may be determined from, for example, the frequency or percentage of occurrence of errors that is acceptable, which may be different for each type of data. In some situations, the subset of different approaches to be used in validating data integrity may be based upon the number or percentage of errors that typically occur at a particular location or during a particular period of time (e.g., a “busy hour” such as lunch, dinner, or a popular soccer match) (i.e., expected reliability). As noted above, some applications of the network of moving things may require that all of the data for the application is valid at all times, while other applications may run satisfactorily when only a single sample of data is validated from time to time. For example, in a situation in which a mobile AP is moving at a very high rate of speed, it may be possible to allow a high number of errors when the information is sent through fixed APs. The same may be true when a mobile AP is sharing the wireless medium with a large number of mobile and/or fixed APs. In accordance with various aspects of the present disclosure, the number of integrity checks employed in a network of moving things may be dynamically configured based on, for example, the line-of-sight/non-line-of-sight (LOS/NLOS) wireless communication status of a mobile AP, the number of fixed APs and/or mobile APs near a particular network node or element, and/or the number of running software applications that currently require a specific type of data. When data is not received correctly in a network such as the network examples described herein, particular actions may need to be taken to remediate the errors, instead of simply logging the observation that something is not working correctly. Some example actions that may be taken by elements of a network of moving things according to the present disclosure include, for example, sending notifications to mobile APs to reset the affected software application, notifying local software applications that may be using such erroneous data, or beginning to use other types of communication modes/technologies to send the data to the Cloud. In accordance with various aspects of the present disclosure, a system experiencing loss of data may increase a sampling rate of the data subject to the loss, in order to have more samples reach the destination. In instances where data is lost or corrupted at the source or while traversing the network, a system such as that described herein may resort to the use of historical information stored on, for example, the Cloud (e.g., Cloud 100, 760, 809/810, or 1060 of FIGS. 1, 7, 8 and 10), or using data models that may be available, in order to derive the corrected data that should have been received. After deriving corrected data, such corrected data may be advertised to, for example, mobile APs and/or fixed APs within communication range. In addition, the elements of a network of moving things in accordance with aspects of the present disclosure may use sophisticated methods that look for and correlate other sources of information or, for example, derive additional/alternate metrics, to produce values for lost or corrupted data of a specific type. FIG. 9 shows a flowchart 900 illustrating an example process of validating the integrity of data received by a node in a network of moving things, in accordance with various aspects of the present disclosure.
The actions represented by the flowchart of FIG. 9 may be performed by one or more of the network nodes (e.g., MAPs, FAPs, NCs) of the networks of FIGS. 1-8. The example process of FIG. 9 begins at block 910, where a first node of a network of moving things receives data using a network interface of the first node. In accordance with aspects of the present disclosure, the first node may be, for example, a mobile AP, a fixed AP, or another network element of a network of moving things such as those depicted in FIGS. 1-8 and 10. Next, at block 912, the process of FIG. 9 identifies a subset of data integrity validation approaches from a set of data integrity validation approaches known to the first node, based on one or more of a type of the received data, the network interface used to receive the data, and a collection of configuration parameters for the first node distributed to the first node and a second node of the network. In accordance with aspects of the present disclosure, the type of the received data may be any of the types of data described herein such as, for example, periodic, delay-tolerant, and non-delay-tolerant, to name just a few examples. The type of the network interface used to receive the data may be one of one or more network interfaces of the first node for use in communicating via networks such as, for example, a DSRC, cellular, Wi-Fi, or other wireless network and/or a wired network interface compatible with, for example, an Ethernet (e.g., IEEE 802.3) or other suitable wired network standard. The configuration parameters may be a collection of information representing values for configuring the operation of the first node that may be derived by the first node from a shared configuration file distributed among the nodes of the network of moving things. Additional information about the use of such a configuration file may be found in, for example, U.S. patent application Ser. No. 15/138,370, titled “Systems and Methods for Remote Configuration Update and Distribution in a Network of Moving Things,” filed Apr. 26, 2016, the contents of which are hereby incorporated herein, in their entirety. Next, at block 914, the first node may determine whether the received data integrity is valid using, for example, the subset of data integrity validation techniques, the configuration parameters, the received data, and the type of data received. The process then continues at decision block 916. At decision block 916, if it was determined that the received data is valid, the process of FIG. 9 is directed to continue at block 920, where the first node processes the received data. In accordance with various aspects of the present disclosure, processing the received data may include, for example, forwarding the received data to a particular software application or process in the first node, or to one or more other nodes of the network of moving things, in a suitable form (e.g., packets or bytes), using a network interface of the first node. Processing the received data may also or alternatively include storing the received data at the first node (e.g., in persistent storage), logging the receipt of the data for later reporting, analyzing the information represented by the received data, and/or reporting the receipt of the received data to another network entity (e.g., the Cloud). In this instance, the process of FIG. 9 then ends.
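The flow of blocks 910 through 920 may be sketched in Python as follows, under assumptions: the subset of validation approaches is looked up from the shared configuration by data type and interface, each selected check is run, and the data is then either processed or reported to a monitoring element. The check registry and notification payload are illustrative, not part of the disclosed process.

def on_data_received(data: bytes, data_type: str, interface: str,
                     config: dict, checks: dict, process, notify):
    # Block 912: identify the subset of data integrity validation approaches.
    subset = config.get((data_type, interface), [])
    # Block 914: determine whether the received data integrity is valid.
    valid = all(checks[name](data) for name in subset)
    if valid:
        process(data)   # block 920: forward, store, log, analyze, or report
    else:
        # Block 918: notify a monitoring element, including information
        # usable to aid in resolving the detected data integrity issue.
        notify({"interface": interface, "data_type": data_type,
                "header": data[:32]})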
If, however, it is determined, at block 914, that the received data is not valid, the process of FIG. 9 then continues at block 918, where the first node may notify another element of the network of moving things. This may be a network element responsible for recording or monitoring the occurrence of data integrity issues or the occurrence of a problem at the first node in regard to the received data. Such notification may include, for example, information identifying the first node, the network interface via which the received data was received, the source of the received data (e.g., the identity of the sending network node, sensor, device, or other network element), a portion of the received data (e.g., a packet header), and other information usable to aid in resolving the detected data integrity issue. In this alternate path, the process of FIG. 9 then ends. FIG. 10 is a flowchart 1000 illustrating an example method of validating the integrity of geolocation-related data received by a network node, which may correspond to, for example, the actions of block 914 of FIG. 9, in accordance with various aspects of the present disclosure. The example actions represented by the flowchart 1000 of FIG. 10 may be performed by one or more of the network nodes (e.g., OBUs/MAPs) of the networks of FIGS. 1-8 in regard to data from, in this example, a GNSS receiver. The example method of FIG. 10 begins at block 1010, where a node of a network of moving things determines whether data from a GNSS (e.g., GPS) receiver has been acquired. If no GNSS data has been received, the method of FIG. 10 simply loops, waiting for data from the GNSS receiver. It should be noted that such “looping” may, for example, be implemented in a manner that does not block other actions by the network node, and may be implemented as the cyclic sleeping/awakening of a software process that implements the validation of data from the GNSS receiver, or as any technique of permitting other actions to take place and later return to perform the determination of block 1010 again. In accordance with various aspects of the present disclosure, the network node may be, for example, a mobile AP, a fixed AP, or another network element of a network of moving things such as those depicted in FIGS. 1-8. If it is determined, at block 1010, that GNSS data has been received, the method of FIG. 10 continues at block 1012. At block 1012, the example method of FIG. 10 checks whether the GNSS data was received correctly, that is, without error. In the case of GNSS receivers, for example, many such receivers output “sentences” (e.g., strings of alphanumeric characters containing defined identifiers of the type of “sentence” and character strings representing various parameters and data produced by the GNSS receiver), typically at regular intervals. Such sentences may include information used for the detection of errors in the received string caused by the transmission path between the GNSS receiver and the recipient of such “sentences.” The example decision at block 1012 checks such error detection information (e.g., a checksum or check character) to determine whether the GNSS output (e.g., “sentence”) was received correctly. In accordance with some aspects of the present disclosure, for other types of data sources, the use of such a data validation action may not, for example, be configurable, appropriate, or necessary.
If, at block 1012, it is determined that the GNSS data was not received correctly, the method of FIG. 10 then continues at block 1014, where the occurrence of reception of GNSS data with one or more errors may be logged and, at block 1016, the incorrectly received data may be discarded. The method of FIG. 10 then continues at block 1038, described below. However, if at block 1012 it is determined that the GNSS output was received correctly, the method of FIG. 10 may continue at block 1018. At block 1018, the method of FIG. 10 determines whether the data that was correctly received from the GNSS data source is complete. That is, the action of block 1018 determines whether all components/pieces/parameters of the data (e.g., “sentence”) from the GNSS source that should be present, are present. As discussed above, in the case of GNSS receivers, the data output (e.g., “sentences”) may have multiple components/pieces/parameters. If, at block 1018, it is determined that not all components/pieces/parameters of the received GNSS data are present then, at block 1020, the method may log the occurrence of receiving GNSS data that is incomplete, and may pass control to block 1022. If, however, at block 1018, it is determined that the received GNSS data is complete, then the method of FIG. 10 continues at block 1022. At block 1022, the method determines whether the data from the GNSS receiver (e.g., where “sentences” are sent at regular intervals) was received “on time.” If, at block 1022, it is determined that the GNSS information was not received according to the timing schedule for the GNSS receiver, the method may then, at block 1024, log the occurrence of the GNSS data not being received “on time,” and may then continue at block 1026. Such a timing-related data validation technique may be configurable for various data source timing behaviors, depending upon the nature of the source of data. It should be noted that such timing-related checks may, for example, be performed on an ongoing basis, or may be performed at particular times during the day. For example, in accordance with some aspects of the present disclosure, a data integrity check may detect that data from a data source is late, at some point in time (e.g., within milliseconds, seconds, or minutes) after the data was expected to be received by the network node. In accordance with other aspects of the present disclosure, data integrity checks for such data may, for example, be done at any defined time such as “end-of-day,” by reviewing stored data received from such data source(s), to verify that the proper (e.g., expected) number of data entries or samples have been received or are present in storage. If, however, at block 1022, it is determined that the GNSS information was received according to the timing schedule for the GNSS receiver, the method may then continue at block 1026. At block 1026, the network node performing the example method of FIG. 10 may calculate distance traveled, based on the received GNSS data, if the network node is mounted in a vehicle (e.g., an OBU/MAP). The network node may then access data from other data sources (e.g., wheel rotation sensors) of the vehicle, which may be used, at block 1030, to compute distance traveled over the same time interval as the information output by the GNSS receiver. Such vehicle sensor information may be available, for example, via an onboard diagnostic (e.g., OBD/OBD-II) port available on many motor vehicles.
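Blocks 1012 through 1024 may be sketched in Python as follows, reusing the hypothetical nmea_checksum_ok helper shown earlier; the field count, names, and logging scheme are illustrative assumptions rather than requirements of the method.

def check_gnss_sentence(sentence: str, fields_expected: int,
                        arrival_s: float, last_arrival_s: float,
                        period_s: float, log: list):
    if not nmea_checksum_ok(sentence):              # block 1012: received correctly?
        log.append("gnss_error")                    # block 1014: log the error
        return None                                 # block 1016: discard the data
    if sentence.count(",") + 1 < fields_expected:   # block 1018: data complete?
        log.append("gnss_incomplete")               # block 1020: log incompleteness
    if arrival_s - last_arrival_s > period_s:       # block 1022: received on time?
        log.append("gnss_late")                     # block 1024: log late arrival
    return sentence                                 # continue at block 1026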
Then, at block1032, the method may determine whether the distance traveled according to received GNSS data is consistent with the distance traveled according to received vehicle sensor data. If the distance traveled using the two different data sources is not consistent, the method ofFIG.10may then, at block1034, log the occurrence of the inconsistency, and may continue at block1036. If, however, at block1032, the distance traveled using the two different data sources is found to be consistent, the method ofFIG.10may continue at block1036. At block1036, the method may then make the GNSS (e.g., or other sensor) data available to functionality of, for example, appropriate Cloud-based system(s) or other network elements (e.g., network nodes), and at block1038, may send any log entries generated during the data integrity checks of the method ofFIG.10(e.g., at blocks1014,1020,1024,1034) to appropriate Cloud-based system(s) or other network elements. The example method ofFIG.10is then finished for the data currently received from the GNSS data source. It should be noted that the data integrity checks of the example method ofFIG.10such as, for example, those of blocks1012,1018,1022,1032are examples of only a few of many types of data integrity checks that may be used in systems and methods in accordance with the present disclosure, and that the use of various combinations of such a variety of data integrity verification approaches may be configured for each different type of data source. That is, while the example ofFIG.10uses four data integrity checks, for reasons of simplicity in illustrating these concepts, various combinations of data integrity checks selected from a large variety of data integrity verification approaches may be selected and configured for various types of data sources, in accordance with aspects of the present disclosure. Such configuration may be done at any time, and may be done locally to the network node or remotely from, for example, a Cloud-based system, according to the needs and desires of the operator and/or the customer/clients/users of a network of moving things in accordance with the present disclosure. FIG.11is a block diagram illustrating an example framework for accounting activities that monitor data traffic in a network of moving things, in accordance with various aspects of the present invention. As illustrated in the example ofFIG.11, the network of moving things may include a Cloud1160, one or more fixed APs (FAPs)1110, one or more mobile APs (e.g., MAPs/OBUs)1120, one or more network controllers (NCs)1130, and one or more cellular networks1140, which may correspond to the Clouds, fixed APs, mobile APs, network controllers, and cellular networks shown and discussed above with regard toFIGS.1-8.FIG.11also shows a number of communication links that couple various elements ofFIG.11. The communication link(s)1114couple the fixed APs1110and the mobile APs1120and may be, for example, IEEE 802.11a/b/g/n/ac/af/p (e.g., Wi-Fi, DSRC) wireless communication links. The communication link(s)1116that couple the fixed APs1110and the network controllers1130may be, for example, any suitable wired or wireless communication links discussed herein with regard to communication between those elements. The communication link(s)1128that couple the mobile APs1120and the cellular network(s)1140may include, for example, data communication links using any suitable cellular air interface (e.g., CDMA, TDMA, GSM, UMTS, 4G LTE, 5G, etc.).
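The cross-check of blocks1026-1032may be sketched, for example, as follows, using the haversine great-circle distance between successive GNSS fixes; the relative tolerance and the guard against division by zero when stationary are illustrative assumptions:

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000.0
REL_TOLERANCE = 0.15  # assumed: distances within 15% are deemed consistent

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GNSS fixes."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def distances_consistent(gnss_m, wheel_m):
    """Block1032: are the two independently derived distances consistent?"""
    baseline = max(gnss_m, wheel_m, 1.0)  # guard against division by ~0
    return abs(gnss_m - wheel_m) / baseline <= REL_TOLERANCE

# Example: GNSS fixes imply ~1 km traveled; wheel sensors report 1,060 m.
gnss_distance = haversine_m(41.1579, -8.6291, 41.1669, -8.6291)
assert distances_consistent(gnss_distance, 1_060.0)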
The communication link(s)1132that couple the cellular networks1140and the network controllers1130may be, for example, any suitable wired or wireless communication network. The illustration ofFIG.11also shows a number of different logical streams of traffic accounting data1112,1122,1124,1126,1134,1136that flow from software applications running in the mobile APs1120, the fixed APs1110, and the network controllers1130that communicate with services/applications in the Cloud1160. That traffic accounting data is carried from the mobile APs1120, fixed APs1110, and network controllers1130over the illustrated communication links1114,1116,1128,1132to the Cloud1160during a traffic accounting process described in greater detail, below. A network of moving things in accordance with various aspects of the present disclosure may include what may be referred to herein as a “traffic accounting framework.” Such a traffic accounting framework may include one or more software applications that run on various network nodes (e.g., mobile AP/OBU, fixed AP/RSU, network controller (NC)). A first such software application may, for example, use a set of “rules” or “policies” for each network interface on the network node, to enable it to monitor and periodically report via, for example, logical data streams1122,1134ofFIG.11, the amount of data traffic (e.g., user or system data traffic or “packet traffic”) passing through each network interface of the network node such as, for example, the number of inbound and outbound packets and/or bytes of data being sent/received that match each rule/policy. The term “logical data stream” may be used herein to refer to a series of data transfers that may or may not require or involve the use of a dedicated physical (e.g., wired and/or wireless) link, but may be carried with other digital information such as, for example, various system parameters, control information, and/or end-user data. Rules and/or policies such as those described herein may, for example, be expressed in a form accepted by a firewall utility such as, for example, the IPTables utility that is available for use with various distributions of, for example, the Linux operating system. For example, on a mobile AP, rules/policies may be defined that permit the first software application to differentiate traffic per interface by the type of wireless interface via which the data traffic was communicated such as, for example, a cellular data link, a DSRC data link, and/or a Wi-Fi data link (e.g., IEEE 802.11a/b/g/n/ac/ad/af). A traffic accounting framework in accordance with various aspects of the present disclosure may support monitoring and reporting via, e.g., logical data stream1122, the data traffic transferred between network nodes such as, for example, OBUs and various traffic monitoring applications, which may also be differentiated by the type of network interface used (e.g., via cellular, Wi-Fi, or DSRC). In the case of fixed APs (e.g., fixed APs1110), a particular network of moving things according to aspects of this disclosure may, for example, only monitor (i.e., measure) the data traffic that is forwarded by the fixed AP between the DSRC interface of the mobile APs and the network controllers.
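By way of a hedged sketch, per-interface counting rules of the kind such a first software application might install can be expressed for the IPTables utility named above; the interface names are illustrative stand-ins for names taken from the node's rules/policies, the commands require administrative privileges, and a rule inserted without a -j target merely counts matching packets:

import subprocess

INTERFACES = ["wlan0", "dsrc0", "wwan0"]  # assumed Wi-Fi, DSRC, cellular names

def install_counting_rules():
    """Insert match-only rules; with no -j target, a rule only counts traffic."""
    for chain, flag in (("INPUT", "-i"), ("OUTPUT", "-o")):
        for iface in INTERFACES:
            subprocess.run(["iptables", "-I", chain, flag, iface], check=True)

def read_counters(chain):
    """Parse per-interface (packets, bytes) from `iptables -L <chain> -v -n -x`."""
    out = subprocess.run(["iptables", "-L", chain, "-v", "-n", "-x"],
                         check=True, capture_output=True, text=True).stdout
    counters = {}
    for line in out.splitlines():
        cols = line.split()
        for iface in INTERFACES:
            # Match by interface-name membership so column shifts are harmless.
            if iface in cols and len(cols) >= 2 and cols[0].isdigit():
                counters[iface] = (int(cols[0]), int(cols[1]))
    return counters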
In accordance with aspects of the present disclosure, network controllers such as the NCs1130ofFIG.11may monitor (i.e., measure) and periodically report, via logical data stream1134, all of the data traffic received and sent at certain of their network interfaces (e.g., a “tunnel” interface) such as, for example, DSRC and/or cellular network interfaces, and may be able to differentiate the data traffic sent through them according to each traffic monitoring application/source. In accordance with various aspects of the present disclosure, software applications running on the various network elements (e.g., OBUs/MAPs, FAPs, NCs, etc.) may send information to functionality using various APIs of a Cloud-based resource (e.g., the Cloud ofFIG.1, Cloud760ofFIG.7, or the Cloud elements ofFIG.8) employing the communication methodologies discussed herein. In accordance with some aspects of the present disclosure, some APIs and services may be available in network elements such as, for example, MAPs, FAPs, NCs, etc., in a system employing fog/distributed computing. In accordance with aspects of the present disclosure, information from different information sources may be reported to distinct APIs, thereby isolating problems and easing debugging of design/operational issues. A network of moving things in accordance with various aspects of the present disclosure may include a second software application that runs on a network controller (e.g., NCs1130) and that performs monitoring (i.e., measurement) and periodic reporting of inbound and outbound data traffic via, for example, logical data stream1136, from the network controller (e.g., the NCs1130), per mobile AP. Such a software application may, for example, differentiate data traffic for various network interfaces of a network controller, for each mobile Wi-Fi (e.g., IEEE 802.11a/b/g/n/ac/ad/af) user, and may differentiate that data traffic from data traffic generated by, for example, normal/typical OBU/MAP operation. It should be noted that the monitoring and reporting functionality described herein may be introduced into networks of moving things in which other, different network monitoring and reporting mechanisms may be in use, and that the concepts disclosed herein may be used in combination with those alternate or existing approaches. In accordance with various aspects of the present disclosure, additional traffic accounting applications running on various network elements may also perform various measurements and monitoring of network traffic at various granularities and aggregations, and may report such information via additional logical data streams such as, for example, logical data streams1124,1126, which may also be used by functional elements at, for example, the Cloud1160, as references for the traffic accounting system described herein. For example, these additional logical data streams may be used with other sources of data described herein to perform cross-comparisons and/or other consistency checks. Information identifying geographic locations at which, and/or communication links on which, data loss is detected may be noted, so that such information may be used in scheduling the transfer of data from various network elements to the Cloud1160, in a manner and at a time that helps to avoid transfers of data when in those geographic locations or when in an area served by the communication links on which data loss has previously been detected, to aid in assuring that data loss is minimized and that data integrity is maintained.
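As a hedged illustration of the reporting leg of such logical data streams (e.g., logical data streams1122,1134,1136), the following Python sketch periodically posts per-interface counters to a Cloud-side accounting API; the endpoint URL, node identifier, payload shape, and reporting period are assumptions introduced for illustration only:

import time
import requests

ACCOUNTING_API = "https://cloud.example.net/api/v1/traffic-accounting"  # hypothetical
NODE_ID = "map-0042"      # illustrative mobile-AP identifier
REPORT_PERIOD_S = 60      # assumed reporting period

def report_forever(read_counters):
    """Periodically send this node's counters; read_counters is a callable."""
    while True:
        payload = {
            "node": NODE_ID,
            "ts": time.time(),
            "input": read_counters("INPUT"),
            "output": read_counters("OUTPUT"),
        }
        try:
            requests.post(ACCOUNTING_API, json=payload, timeout=10)
        except requests.RequestException:
            pass  # e.g., queue locally for delay-tolerant upload when offline
        time.sleep(REPORT_PERIOD_S)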
In a network of moving things in accordance with various aspects of the present disclosure, validation of a traffic accounting framework such as described herein may consist of analyzing the traffic accounting measurements provided by the accounting applications located at the different network nodes (e.g., MAPs, FAPs, NCs, etc.) as described above, as well as comparing the results of such traffic accounting measurements against other references that may be available, such as those that may be available from other, more basic traffic validation checks. For instance, in accordance with various aspects of the present disclosure, traffic validation may consist of determining whether aggregated traffic measurements over, e.g., DSRC and cellular network paths measured at all MAPs/OBUs match measurements reported by, e.g., the network controllers of the overall network. In addition, measurements of end-to-end traffic, such as traffic measurements monitored at the OBUs/MAPs, may be compared to the measurements reported by other network traffic monitoring approaches. While the measured traffic using different traffic accounting approaches would ideally be the same, aspects of the present disclosure permit the concurrent employment of such different approaches, which aids in detecting and resolving discrepancies that may be observed due to various sources of packet losses such as, for example, in links employing wireless communication. A traffic accounting system in accordance with various aspects of the present disclosure may be configured to provide traffic differentiation granularity per communication technology (e.g., per wired interface or wireless air interface), per API, per user, per network element, etc. Such a system supports measurement of the overall packet losses of a network of moving things with confidence in the accuracy of the measurements, may be configured to provide a breakdown of network traffic at the per-OBU/MAP level, and permits further analysis to be performed based on the traffic accounting metrics described herein, to permit system operators to evaluate the performance of the various network nodes (e.g., OBUs/MAPs, FAPs/RSUs) and the overall vehicular network in general. The use of a variety of different traffic measurement mechanisms enables an operator of a system in accordance with the present disclosure to compare and validate different communication links of the system, thus permitting the operator to assess the integrity of a network of moving things as a whole. By employing the traffic accounting approach described herein, an operator of a network of moving things, or of other data communication networks as well, can detect and validate problems with any network element in the network, in any specific communication link such as, for example, between mobile APs and fixed APs, between fixed APs and network controllers, between network controllers and the Cloud, or between mobile APs and the Cloud, in any communication technology, in any network interface or API, or with any end-user. This is possible because a traffic accounting system in accordance with the present disclosure can measure data traffic separately in various elements of the network (e.g., MAPs/OBUs, FAPs, NCs, etc.), verifying communication links and data interconnections of the system, including APIs.
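Such a validation step might be sketched as follows, comparing the sum of traffic reported by all MAPs/OBUs over a given path against the total reported by the network controllers for that same path; the relative tolerance is an assumption:

def validate_path(map_reports_bytes, nc_reported_bytes, rel_tolerance=0.02):
    """Return (consistent, discrepancy_fraction) for one communication path."""
    map_total = sum(map_reports_bytes)
    baseline = max(map_total, nc_reported_bytes, 1)
    discrepancy = abs(map_total - nc_reported_bytes) / baseline
    return discrepancy <= rel_tolerance, discrepancy

# Example: MAP-side totals nearly match the NC-side total, so the path validates;
# a larger discrepancy would point at packet loss on, e.g., a wireless link.
ok, d = validate_path([10_240, 20_480, 5_120], 35_700)
assert ok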
It will be recognized by those of skill in the art that a complex system such as those described herein may be subject to many causes of failure, and that a well-designed analytics framework such as that disclosed herein is valuable for the correct operation and debugging of the procedures and interactions between elements. By employing the techniques and approaches described in the present disclosure, reliable and consistent data is collected and stored, and only valid information may be selected as input for complex analytic systems that may include “Big Data” clustering and machine learning, where information reliability is crucial for the correct training of such machine learning mechanisms. A reliable and consistent information database is also critical for many decision processes and key performance indicators. An important aspect of the present disclosure is the ability to configure any chosen subset of data validation approaches for use in validating the integrity of any source of data, and to automatically and dynamically select, adjust, or configure the data validation approaches used and their operation, based on observed conditions in a network of moving things. In accordance with aspects of the present disclosure, inconsistent data and outliers collected by elements of a network of moving things are removed before the analysis process begins. By improving the reliability of collected data, a network of moving things in accordance with the present disclosure (i.e., an IoMT) enables users of the collected data to derive sufficient added-value to deploy their own services, applications, and systems on top of the IoMT. In addition, a network of moving things according to the present disclosure enables the use of sophisticated statistic-related methods to filter data, a very important factor when building machine learning algorithms. Various aspects of the present disclosure may be seen in a method of managing data integrity in a network of moving things. The network may comprise a plurality of network nodes and a cloud-based system supporting services for the network. Such a method may comprise receiving data at a first network node of the plurality of network nodes, using a first data communication interface of the first network node; and identifying a subset of data validation techniques of a set of data validation techniques known to the first network node, using a collection of configuration parameters distributed to the first network node and a second network node of the plurality of network nodes. The method may also comprise determining, at the first network node, whether the data is valid, by applying to the data the data validation techniques in the identified subset; performing a first procedure to process the data, if the data received at the first network node is determined to be valid; and performing a second procedure, if the data received at the first network node is determined to be not valid. In accordance with various aspects of the present disclosure, the plurality of network nodes may comprise a first subset of network nodes that are at fixed geographic locations and a second subset of network nodes that are mobile within a geographic region of service, and wherein each network node in the plurality of network nodes is configured to directly communicate wirelessly with network nodes in the first subset and the second subset. 
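The configuration-driven selection of data validation techniques summarized in the preceding paragraphs may be sketched, under assumed names, as follows; the technique registry, the per-source configuration shape, and the stub validators are illustrative stand-ins, not the claimed implementation:

def check_checksum(data, params):      # stand-in for an error-detection check
    return True

def check_completeness(data, params):  # stand-in for a completeness check
    return True

def check_timing(data, params):
    # e.g., compare arrival time against params["periodicity"] and
    # params["delay_tolerance"], as mentioned in the text above
    return True

VALIDATORS = {
    "checksum": check_checksum,
    "completeness": check_completeness,
    "timing": check_timing,
}

CONFIG = {  # assumed: the designated portion of the distributed configuration
    "gnss":  {"techniques": ["checksum", "completeness", "timing"],
              "periodicity": 1.0, "delay_tolerance": 0.25},
    "wheel": {"techniques": ["timing"],
              "periodicity": 0.1, "delay_tolerance": 0.05},
}

def is_valid(source, data):
    """Apply only the identified subset of validation techniques to the data."""
    params = CONFIG[source]
    return all(VALIDATORS[name](data, params) for name in params["techniques"])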
A network node that is mobile may be configured with interface circuitry for communicating with sensors of a vehicle transporting the network node, and the data validation techniques in the subset may be identified using a designated portion of the collection of configuration parameters that corresponds to the first network node. The first network node may determine whether the data is valid by applying the data validation techniques in the identified subset to the data using a configuration parameter from the collection that represents one or both of a delay tolerance and a periodicity of sending of the data, the first procedure may comprise forwarding the received data to the second network node, and the second procedure may comprise notifying the cloud-based system of the invalidity of the data. The first network node may determine whether the received data is valid by applying the data validation techniques in the identified subset to the received data according to characteristics of the data communication interface used to receive the data. Further aspects of the present disclosure may be found in a non-transitory computer-readable medium having stored thereon a computer program. The computer program may have at least one code section comprising instructions executable by one or more processors for causing the one or more processors to perform steps of a method for managing data integrity in a network of moving things. The network may comprise a plurality of network nodes and a cloud-based system supporting services for the network, and the steps of the method may comprise the steps of the method described above. Additional aspects of the present disclosure may be observed in a system for managing data integrity in a network of moving things. Such a system may comprise one or more processors configured to communicatively couple to a plurality of network nodes and a cloud-based system supporting services for the network. The one or more processors in such a system may be operable to, at least, perform the steps of the method described above. Thus, an Internet of moving things in accordance with aspects of the present disclosure provides support for the application of updates (e.g., software, firmware, and/or data/configuration information) in a variety of device-types and hardware versions. Further, aspects of the present disclosure may be used to leverage an Internet of moving things to epidemically distribute updates at the lowest possible cost using low or zero cost communications technologies, and without the need to rely on cellular links. In accordance with various aspects of the present disclosure, a system may be configured to leverage the best available communication technology to download updates to various system components, and provides support for incremental updates as well as complete/full updates of parts of the operative system. In addition, a system in accordance with various aspects of the present disclosure provides support for geo-fenced updates and configurations. An Internet of moving things in accordance with various aspects of the present disclosure may be used to connect different types of devices that are physically on the move and also statically deployed. Such devices may present different kinds of hardware versions and expected behaviors.
In order to support the evolution of products that have already been deployed, use of an update mechanism such as the one presented herein allows for new features to be installed in already deployed network units, providing higher levels of security, reliability, and functionality. An Internet of moving things in accordance with various aspects of the present disclosure may provide a decentralized authentication mechanism for update validation, and may include a distributed update validation check. Further, such a system and network allows network units to download updates (e.g., software, firmware, and/or data/configuration information) for third-party and external network units. In addition, a system and network as described herein may support a distributed, cluster-based configuration management and decision mechanism for network units. Such a system may select the most plausible network configuration to use in any given situation. Aspects of an Internet of moving things in accordance with various aspects as described herein allow for updates to be downloaded and distributed epidemically in chunks. As provided herein, a communication network and/or node thereof implemented in accordance with various aspects of this disclosure may increase the connectivity between nodes (e.g., between fixed and mobile APs), throughput may increase, range may increase, latency may decrease, packet loss may decrease, overall network performance may increase, etc. Additionally, data communication may be substantially more economical than with other types of networks (e.g., cellular, etc.). Further, a node (e.g., a fixed AP) implemented in accordance with various aspects of this disclosure may be installed at a location that does not have ready access to power and/or to a traditional type of backhaul. Still further, a network implemented in accordance with various aspects of this disclosure may be operated with fewer APs than would otherwise be necessary, reducing overall cost. Additionally, a network implemented in accordance with various aspects of this disclosure, for example having multiple adaptive fixed APs that are collocated, provides immense flexibility to provide differentiation of services, network redundancy, load balancing, high reliability, and dedicated services. In an example implementation, different APs at a same location or serving a same coverage area may utilize different respective channels, thus providing bandwidth allocation flexibility, for example to prioritize particular services or service classes, increasing overall spectrum utilization, etc. In general, increasing the coverage of high-range wireless (e.g., DSRC) technology, which may be utilized as the wireless backbone of the network of moving things, will enhance all that the technology has to offer. In summary, various aspects of this disclosure provide systems and methods for enhancing node operation in a network of moving things. As non-limiting examples, various aspects of this disclosure provide systems and methods for adapting fixed access point coverage and/or power input/output in a network of moving things, adapting fixed access point backhaul communication, etc. While the foregoing has been described with reference to certain aspects and examples, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the disclosure.
In addition, many modifications may be made to adapt a particular situation or material to the teachings of the disclosure without departing from its scope. Therefore, it is intended that the disclosure not be limited to the particular example(s) disclosed, but that the disclosure will include all examples falling within the scope of the appended claims.
177,806
11860852
The Figures depict preferred embodiments for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the systems and methods illustrated herein may be employed without departing from the principles of the invention described herein. DETAILED DESCRIPTION OF THE DRAWINGS The present embodiments may relate to, inter alia, systems and methods for reviewing the veracity of statements provided by individuals. The systems and methods described herein may be used with or integrated with a secondary insurance or loan application review system or a secondary insurance claim review system. As used herein, a statement is an assertion of fact made by a human being that may include associated text, audio, and/or video data. As used herein, an aspect of a statement is a portion of a statement including an assertion of fact that can be proven true or false. The systems and methods described herein may include generating at least one model by analyzing a plurality of historical statements to identify reference indicators correlating to at least one inaccurate aspect included in the plurality of historical statements. The reference indicators may include, for example, inflection or tone of voice correlating to inaccuracy of historical statements, body language correlating to inaccurate historical statements, inconsistencies between multiple associated historical statements (e.g., historical statements regarding the same accident), and/or inconsistencies between a historical statement and forensic evidence associated with the subject matter of the historical statement. Accordingly, the generated models may include, for example, an inflection of voice model, a body language model, an inconsistent statement model, and/or a forensic model. The systems and methods may further include receiving a data stream corresponding to at least one statement to be analyzed for accuracy (sometimes referred to herein as a “current statement”). The systems and methods may further include parsing the data stream using the models to identify at least one candidate indicator included in the current statement matching at least one of the reference indicators. The presence of the candidate indicators in the current statement that match the reference indicators that are correlated with inaccurate statements suggests that the current statement is also potentially inaccurate. In one exemplary embodiment, the process may be performed by a veracity analyzer (“VA”) computing device. As described below, the systems and methods described herein analyze current statements by comparing data included within the statements (e.g., audio, video, or text) to a plurality of reference indicators, wherein the reference indicators are generated based upon an analysis of a plurality of historical statements through, for example, artificial intelligence (AI) and/or machine learning techniques. By so doing, the systems and methods are able to analyze the veracity of the current statements and flag the statements as being likely true or potentially inaccurate. Further, the systems and methods may identify particular aspects of a statement that are potentially inaccurate. In some embodiments, the systems and methods may include a chatbot that is capable of generating text or voice messages that are provided to a user in response to text or voice messages being submitted by the user to simulate a conversation with a human being.
For example, the chatbot may be a robo-advisor that assists a user, for example, in making an insurance claim, in obtaining an insurance policy, and/or in another financial application where the user may submit statements to the robo-advisor. The responses may be generated using, for example, keyword analysis, natural language processing, and/or machine learning techniques. Such questions and generated responses may be, for example, in the form of text (e.g., email, short message service (SMS) messages, or chat messages) or voice. In such embodiments, the chatbot may be configured to analyze the veracity of statements submitted to the chatbot by the user, flag the statements as being likely true or potentially inaccurate, and generate responses (e.g., text or voice messages) based upon the determination that the statements are likely true or potentially inaccurate. Generating Models by Identifying Reference Indicators Associated with False Statements The VA computing device may generate models by analyzing historical statements to identify reference indicators correlated to at least one inaccurate aspect included in the plurality of historical statements. The reference indicators may be portions of data corresponding to statements (e.g., audio, video, or text data) that are correlated with falsity of the statement and/or an aspect of the statement. For example, a determination that an analyzed statement includes aspects that conflict with another statement, or that an analyzed statement includes aspects that conflict with known forensics data, may show that the analyzed statement is likely inaccurate. Other reference indicators may include, for example, tone and inflection of voice and/or body language (e.g., posture, head and eye movement, and/or facial expressions) that are determined to be correlated with falsity of the statement. The VA computing device may generate models corresponding to different types of reference indicators (e.g., an inflection of voice model, a body language model, an inconsistent statement model, and/or a forensic model). The VA computing device may generate the models by analyzing a large number of historical statements. The historical statements may include, for example, audio data (e.g., voice calls and audio records of interviews), video data (e.g., video calls and video records of interviews), and text data (e.g., email, short message service (SMS) messages, online chat messages, or a transcription of audio). In certain embodiments, some of the historical statements may include aspects having a known true or false value. Using, for example, AI and/or machine learning techniques, as described below, the VA computing device may categorize each of the large number of historical statements as either true or false, and identify reference indicators that are correlated with inaccurate aspects of the historical statements. The VA computing device may generate models corresponding to the analysis of audio and/or video data (e.g., the inflection of voice model and/or the body language model). An individual making a statement may show certain behaviors that potentially indicate that the individual knows the statement is false and is trying to deceive the party receiving the statement. Even if the individual making the statement is not attempting deception, certain behaviors of the individual may indicate the individual's statement may not be accurate (e.g., behavior indicating the individual may have difficulty remembering the subject matter of the statement).
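As one hedged illustration of such model generation, a classifier may be trained on feature vectors extracted from historical statements having known true/false labels; the feature names, toy values, and the use of scikit-learn's LogisticRegression are illustrative assumptions, and real feature extraction from audio/video/text is out of scope here:

import numpy as np
from sklearn.linear_model import LogisticRegression

# rows: historical statements; columns (assumed features):
# [pitch_variance, gaze_shift_rate, hedging_word_rate]
X = np.array([[0.12, 0.4, 0.02],
              [0.55, 1.9, 0.11],
              [0.09, 0.3, 0.01],
              [0.61, 2.2, 0.14]])
y = np.array([0, 1, 0, 1])  # 0 = known true, 1 = known false

model = LogisticRegression().fit(X, y)
# Probability that a new statement's features indicate inaccuracy:
p_false = model.predict_proba([[0.50, 1.7, 0.09]])[0, 1]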
Such reference indicators (e.g., tone and inflection of voice and/or body language), when captured and encoded into audio and video data, are manifested as certain data patterns within the audio and video data. The VA computing device may determine whether the historical statements are true and/or false, and determine that the reference indicators (e.g., the data patterns) are correlated with the inaccurate statements. Accordingly, the VA computing device may determine that future statements containing these audio and/or visual reference indicators are potentially inaccurate. In some embodiments, the large number of historical statements may be stored in a database. When the VA computing device receives additional statements, as described below, the VA computing device may store the additional statements in the database, enabling the VA computing device to continually generate and refine the models using AI and/or machine learning techniques based upon a larger and larger number of historical statements. In some embodiments, the VA computing device may store the generated models, for example, in a database. The VA computing device may then use the database of models when analyzing future statements. Receiving a Data Stream Corresponding to a Current Statement The VA computing device may receive a data stream corresponding to a statement to be analyzed by the VA computing device for accuracy (sometimes referred to herein as a “current statement”). The data stream may include a variety of different types of data corresponding to the current statement (e.g., audio, video, or text). The data may be received from a variety of different sources. In some embodiments, the data stream may be associated with a real-time telephone call or video call. The call may be, for example, a traditional or internet protocol (IP) telephone call, a video call through a video call service or platform, or a voice or video call received directly by the VA computing device, for example, through an online platform or mobile application (“app”). In certain embodiments, the VA computing device may transcribe an audio signal of the call to obtain text data. The VA computing device may analyze the incoming data stream as the call is occurring, enabling the VA computing device to identify potential inaccuracy of the current statement in real time and generate corresponding alerts. Alternatively, the VA computing device may generate or receive a recording of the call and analyze the recording after the call is completed. In some embodiments, the data stream received by the VA computing device may include previously recorded audio, video, and/or text. Text statements may include, for example, previous email, SMS, or online chat messages. In some embodiments where the VA computing device analyzes audio and video records, the VA computing device may transcribe a recorded audio signal to obtain text data. The VA computing device may retrieve such records from, for example, a database or another computing device associated with a third party. In some embodiments, the VA computing device may be a chatbot or robo-advisor, and the VA computing device may receive the statements as, for example, chat messages submitted by a user via a mobile app or website. In such embodiments, the VA computing device may generate and transmit responses to the user in addition to analyzing the veracity of the statements, as described below.
The VA computing device may generate the responses using, for example, keyword analysis, natural language processing, and/or machine learning techniques. In certain embodiments, the generated responses may depend on detection by the VA computing device of inaccuracy in the statements submitted by the user. In some embodiments, the VA computing device may be in communication with a device capable of capturing a stream of data to be analyzed. For example, a representative interviewing an individual may capture audio and video data using a mobile device and/or wearable technology (e.g., smart glasses). This data may be transmitted to the VA computing device. Alternatively, the VA computing device may be implemented as a device capable of capturing such data (e.g., smart glasses). Parsing the Data Stream to Identify Candidate Indicators Matching Reference Indicators The VA computing device may parse the data stream to identify candidate indicators matching a portion of the data stream. Candidate indicators are portions of the data stream that match a reference indicator. Accordingly, the presence of candidate indicators in a data stream including a current statement is correlated with inaccuracy of the current statement. To identify portions of the data stream matching the indicators, the VA computing device may match the portions of the data stream to the reference indicators using the generated models. Accordingly, the VA computing device may determine that statements corresponding to such portions of the data stream are potentially inaccurate. In some embodiments, the VA computing device may parse text data associated with the data stream including the current statement. For example, the VA computing device may parse the text data for words or phrases determined to be correlated with falsity of the statement. Such words or phrases may include examples of candidate indicators that match reference indicators identified by the VA computing device as described above. In some embodiments, the VA computing device may receive a data stream including a story having a plurality of current statements. The VA computing device may analyze the content of each of the plurality of current statements and compare the current statements to each other to identify inconsistencies or conflicts that may indicate falsity of the current statements. Further, where a previous statement has been made regarding, for example, the same event, the VA computing device may compare the current statement to the previous statement to identify inconsistencies or conflicts. The VA computing device may determine that such conflicts indicate the statement is potentially inaccurate. In some embodiments, the VA computing device may parse audio and/or video data associated with the data stream. Candidate indicators such as tone and inflection of voice, head and eye movement, body language, or facial expressions may be manifested as patterns within the audio and/or video data. The VA computing device may parse the data stream using the models (e.g., the inflection of voice model and/or the body language model) for such candidate indicators matching the reference indicators determined to be correlated with falsity of the statement. Flagging Current Statements as Potentially False The VA computing device may further flag current statements as potentially false in response to identifying candidate indicators in the current statement that match reference indicators. 
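The text-parsing path may be illustrated, for example, by the following sketch, in which the list of reference phrases is a hypothetical stand-in for indicators learned from the historical statements:

import re

REFERENCE_PHRASES = ["i think maybe", "to be honest", "i don't remember exactly"]

def find_candidate_indicators(text):
    """Return (phrase, offset) pairs where the stream matches a reference indicator."""
    hits = []
    lowered = text.lower()
    for phrase in REFERENCE_PHRASES:
        for m in re.finditer(re.escape(phrase), lowered):
            hits.append((phrase, m.start()))
    return hits

def flag_if_suspect(text):
    """Flag the current statement as potentially false if any candidate is found."""
    return bool(find_candidate_indicators(text))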
In embodiments where the current statement is analyzed in real time, the VA computing device may display a real-time alert indicating that a potentially false statement has been detected, enabling an interviewer, for example, to ask a follow-up question to attempt to verify the current statement or obtain an accurate statement. In embodiments where the current statement corresponds to a previous recording, the VA computing device may flag the aspect (e.g., a portion) of the current statement that is potentially false (e.g., text of the potentially false statement or a timestamp of an audio and/or video recording), so that the aspect of the current statement may be investigated further. In embodiments where the VA computing device is a chatbot or robo-advisor, the VA computing device may generate responses to statements (e.g., chat messages) submitted by a user that depend on the flagging of the submitted statement as potentially false. The VA computing device may further generate recommendations to a user for obtaining accurate statements in response to determining that the current statement is potentially inaccurate. For example, where the VA computing device identifies a current statement conflicting with forensic evidence and/or a previous statement made by the same or another individual, the VA computing device may display an alert message prompting an interviewer to inquire about the forensic evidence and/or differences between the two statements. The alert message may include, for example, the forensic evidence and/or at least one aspect of the previous statement conflicting with the current statement. Chatbot Applications of the VA Computing Device In some embodiments, the VA computing device may be a chatbot computing device configured to generate text and/or voice messages that are provided to a user in response to text or voice messages being submitted by the user. For example, the chatbot may be a robo-advisor that assists a user, for example, in making an insurance claim, in obtaining an insurance policy, and/or in another financial application where the user may submit statements to the robo-advisor. The VA computing device may generate the responses using, for example, keyword analysis, natural language processing, and/or machine learning techniques. Such questions and generated responses may be, for example, in the form of text (e.g., email, SMS messages, or chat messages) or voice. The VA computing device, being utilized as a chatbot computing device, may receive a current statement from a user. The current statement may be, for example, an audio (e.g., voice), video, or text message (e.g., an email, SMS message, or chat message). Such messages may be submitted to the VA computing device using, for example, a mobile app running on a mobile device. The message may be submitted by the user, for example, in response to a question message transmitted to the user by the VA computing device. The VA computing device may analyze the veracity of the submitted messages and flag the messages as potentially false, as described above. The VA computing device may generate a response message including a response to the current statement submitted by the user. For example, the response message may include a question following up on the current statement. The response message may be generated, for example, using keyword analysis, natural language processing, and/or machine learning techniques. 
In a keyword analysis, keywords (e.g., particular predetermined words or phrases) are identified in the current statement, and the response depends upon the identified keywords. In natural language processing, natural language (e.g., speech) of the user is analyzed to determine the meaning of the natural language. Machine learning techniques may be utilized to generate a response based on identified keywords, meanings, or other patterns in the current statement. The response may further depend upon the determination that the current statement is potentially inaccurate. For example, the response message may identify information that contradicts the current statement of the user (e.g., a previous statement of the user inconsistent with the current statement). Insurance Applications of the VA Computing Device In some embodiments, the VA computing device may be used to verify the veracity of statements made by insured individuals to their insurer. For example, an insured individual may submit one or more statements to the insurer, such as a statement by a claimant describing a loss or a statement by an individual purchasing insurance about the value or condition of property to be covered. An insurance representative receiving the statement may use the VA computing device in order to review the statement. In such embodiments, the statement may be received by the VA computing device in a variety of ways. For example, the insured individual may submit the statement through a phone call, video call, an email, an SMS message, or an online chat message. In certain embodiments, the insurer may provide an app through which the insured individual may submit a statement through a chat message, voice call, or video call. In some embodiments, the VA computing device may be a chatbot or robo-advisor associated with a service provided by the insurer, and the user may submit the statements, for example, as chat messages to the VA computing device. In some embodiments, an insurance representative may carry a wearable technology device (e.g., smart glasses) to capture audio and/or video of a statement made by the insured individual in person. In embodiments where the insurance representative may conduct a real-time interview of the individual making the statement (e.g., in person or through a voice or video call), the VA computing device may detect potentially inaccurate statements in real time and display the alerts to the insurance representative. The alerts may be displayed, for example, through the wearable technology device. In some embodiments, users (e.g., insured individuals making statements to be analyzed) may enroll in services provided by the VA computing device. For example, enrolling in insurance services utilizing the VA computing device may include an opt-in procedure where users “opt-in” (e.g., provide informed consent) to having statements made to the insurer by the users recorded and analyzed by the VA computing device. This allows any services using the VA computing device to be in compliance with consumer protection laws and privacy regulations. Thus, a user consents to having the user's statements recorded and analyzed by the VA computing device when they enroll in a service that uses the VA computing device. In other embodiments, the user may opt in and provide consent by transmitting an affirmative consent message to the VA computing device. The consent message may indicate the user's consent to having the user's statements recorded and analyzed by the VA computing device.
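A minimal sketch of the keyword-driven response generation described at the start of this passage, conditioned on the veracity flag, might read as follows; the keyword table, wording, and conflicting-fact parameter are illustrative assumptions:

RESPONSES = {
    "accident": "Can you tell me more about where and when the accident occurred?",
    "theft":    "What items were taken, and when did you notice they were missing?",
}

def generate_response(statement, flagged_false, conflicting_fact=None):
    """Pick a follow-up question; surface contradicting information when flagged."""
    if flagged_false and conflicting_fact:
        return f"Our records show: {conflicting_fact}. Could you clarify?"
    for keyword, response in RESPONSES.items():
        if keyword in statement.lower():
            return response
    return "Thank you. Could you describe what happened next?"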
At least one of the technical problems addressed by this system may include: (i) inability of computing devices to identify indicators of inaccuracy of a human statement; (ii) inability of computing devices to detect inaccuracy of a human statement; (iii) delay in verifying human statements; (iv) delay in computer-based processes due to the need for human input; (v) increased security risk due to the need for human input; and (vi) increased risk of error due to reliance on human input. A technical effect of the systems and processes described herein may be achieved by performing at least one of the following steps: (i) generating at least one model by analyzing a plurality of historical statements to identify a plurality of reference indicators correlating to at least one inaccurate aspect included in the plurality of historical statements; (ii) receiving a data stream corresponding to a current statement; (iii) parsing the data stream using the at least one model to identify at least one candidate indicator included in the current statement matching at least one of the plurality of reference indicators; and (iv) flagging, in response to identifying the at least one candidate indicator, the current statement as potentially false. The technical effect achieved by this system may be at least one of: (i) ability for computing devices to identify and generate indicators of inaccuracy of a human statement; (ii) ability for computing devices to detect inaccuracy of a human statement; (iii) ability of a computing device receiving a statement to generate real-time alerts that the statement is potentially inaccurate; (iv) increased efficiency of a computerized insurance claims process by verifying the accuracy of statements submitted to an insurer in real time; (v) increased security due to reduced reliance on human input; and (vi) fewer system processing errors due to reduced reliance on human input. Exemplary System for Analyzing Statement Veracity FIG.1depicts an exemplary VA system100. VA system100may include a VA computing device102. VA computing device102may be in communication with a server computing device104having a database server106. VA computing device102may communicate with, for example, one or more of a database108, a mobile device110, a wearable technology device112, or a third party computing device114via server computing device104. VA computing device102may generate models by analyzing historical statements to identify reference indicators correlated to at least one inaccurate aspect included in the plurality of historical statements. The reference indicators may be portions of data corresponding to statements (e.g., audio, video, or text data) that are correlated with falsity of the statement and/or an aspect of the statement. For example, a determination that an aspect of an analyzed statement conflicts with another statement, or that an aspect of an analyzed statement conflicts with known forensics data, may show that the analyzed statement is likely inaccurate. Other reference indicators may include, for example, tone and inflection of voice, posture, head and eye movement, and/or facial expressions that are determined to be correlated with falsity of the statement. VA computing device102may generate models corresponding to different types of reference indicators (e.g., an inflection of voice model, a body language model, an inconsistent statement model, and/or a forensic model).
VA computing device102may generate the models by analyzing a large number of historical statements. The historical statements may include, for example, audio data (e.g., voice calls and audio records of interviews), video data (e.g., video calls and video records of interviews), and text data (e.g., email, short message service (SMS) messages, online chat messages, or a transcription of audio). In certain embodiments, some of the historical statements may include aspects having a known true or false value. Using, for example, AI and/or machine learning techniques, as described below, VA computing device102may categorize each of the large number of historical statements as either true or false, and identify reference indicators that are correlated with inaccurate aspects of the historical statements. VA computing device102may generate models corresponding to the analysis of audio and/or video data (e.g., the inflection of voice model and/or the body language model). An individual making a statement may show certain behaviors that potentially indicate that the individual knows the statement is false and is trying to deceive the party receiving the statement. Even if the individual making the statement is not attempting deception, certain behaviors of the individual may indicate the individual's statement may not be accurate (e.g., behavior indicating the individual may have difficulty remembering the subject matter of the statement). Such reference indicators (e.g., tone and inflection of voice, posture, head and eye movement, or facial expressions), when captured and encoded into audio and video data, are manifested as certain data patterns within the audio and video data. VA computing device102may determine whether the historical statements are true and/or false, and determine that the reference indicators (e.g., the data patterns) are correlated with the inaccurate statements. Accordingly, VA computing device102may determine that future statements containing these audio and/or visual reference indicators are potentially inaccurate. In some embodiments, the large number of historical statements may be stored in database108. When VA computing device102receives additional statements, as described below, VA computing device102may store the additional statements in database108, enabling VA computing device102to continually generate and refine the models using AI and/or machine learning techniques based upon a larger and larger number of historical statements. In some embodiments, VA computing device102may store the generated models, for example, in database108. VA computing device102may then use models stored in database108when analyzing future statements. VA computing device102may receive a data stream corresponding to a statement to be analyzed by the VA computing device102for accuracy (sometimes referred to herein as a “current statement”). The data stream may include a variety of different types of data corresponding to the current statement (e.g., audio, video, or text). The data may be received from a variety of different sources. In some embodiments, the data stream may be associated with a real-time telephone call or video call. The call may be, for example, a traditional or internet protocol (IP) telephone call, a video call through a video call service or platform, or a voice or video call received directly by VA computing device102, for example, through an online platform or mobile application (“app”).
In certain embodiments, VA computing device102may transcribe an audio signal of the call to obtain text data. VA computing device102may analyze the incoming data stream as the call is occurring, enabling VA computing device102to identify potential inaccuracy of the current statement in real time and generate corresponding alerts. Alternatively, VA computing device102may generate or receive a recording of the call and analyze the recording after the call is completed. In some embodiments, the data stream received by VA computing device102may be of previously recorded audio, video, and/or text. Text statements may include, for example, email, SMS, or online chat messages. In embodiments where VA computing device102analyzes audio and video records, VA computing device102may transcribe a recorded audio signal to obtain text data. VA computing device102may retrieve such records from, for example, database108or third party computing device114associated with a third party. In some embodiments, VA computing device102may be a chatbot or robo-advisor, and the VA computing device may receive the statements as, for example, chat messages submitted by a user via a mobile app or website (e.g., using mobile device110). In such embodiments, VA computing device102may generate and transmit responses to the user (e.g., via mobile device110) in addition to analyzing the veracity of the statements. VA computing device102may generate the responses using, for example, keyword analysis, natural language processing, and/or machine learning techniques. In certain embodiments, the generated responses may depend on detection by VA computing device102of inaccuracy in the statements submitted by the user. In some embodiments, VA computing device102may be in communication with a device capable of capturing a stream of data to be analyzed. For example, a representative interviewing an individual may capture audio and video data using a device such as wearable technology device112(e.g., smart glasses). This data may be transmitted to VA computing device102. Alternatively, VA computing device102may be implemented as a device capable of capturing such data (e.g., smart glasses). VA computing device102may parse the data stream to identify candidate indicators matching a portion of the data stream. Candidate indicators are portions of the data stream that match a reference indicator. Accordingly, the presence of candidate indicators in a data stream including a current statement is correlated with inaccuracy of the current statement. To identify portions of the data stream matching the indicators, VA computing device102may match the portions of the data stream to the reference indicators using the generated models. Accordingly, VA computing device102may determine that statements corresponding to such portions of the data stream are potentially inaccurate. In some embodiments, VA computing device102may parse text data associated with the data stream including the current statement. For example, VA computing device102may parse the text data for words or phrases determined to be correlated with falsity of the statement. Such words or phrases may include examples of candidate indicators that match reference indicators identified by the VA computing device102as described above. In some embodiments, VA computing device102may receive a data stream including a story having a plurality of current statements.
VA computing device102may analyze the content of each of the plurality of current statements and compare the current statements to each other to identify inconsistencies or conflicts that may indicate falsity of the current statements. Further, where a previous statement has been made regarding, for example, the same event, VA computing device102may compare the current statement to the previous statement to identify inconsistencies or conflicts. VA computing device102may determine that such conflicts indicate the statement is potentially inaccurate. In some embodiments, VA computing device102may parse audio and/or video data associated with the data stream. Candidate indicators such as tone and inflection of voice, head and eye movement, body language, or facial expressions may be manifested as patterns within the audio and/or video data. VA computing device102may parse the data stream using the models (e.g., the inflection of voice model and/or the body language model) for such candidate indicators matching the reference indicators determined to be correlated with falsity of the statement. VA computing device102may further flag current statements as potentially false in response to identifying candidate indicators in the current statement that match reference indicators. In embodiments where the current statement is analyzed in real time, VA computing device102may display a real-time alert (e.g., using wearable technology device112) indicating that a potentially false statement has been detected, enabling an interviewer, for example, to ask a follow-up question to attempt to verify the current statement or obtain an accurate statement. In embodiments where the current statement corresponds to a previous recording, VA computing device102may flag the aspect (e.g., a portion) of the current statement that is potentially false (e.g., text of the potentially false statement or a timestamp of an audio and/or video recording), so that the aspect of the current statement may be investigated further. In embodiments where VA computing device102is a chatbot or robo-advisor, VA computing device102may generate responses to statements (e.g., chat messages) submitted by a user that depend on the flagging of the submitted statement as potentially false. VA computing device102may further generate recommendations to a user for obtaining accurate statements in response to determining that the current statement is potentially inaccurate. For example, where VA computing device102identifies a current statement conflicting with forensic evidence and/or a previous statement made by the same or another individual, VA computing device102may display an alert message (e.g., using wearable technology device112) prompting an interviewer to inquire about the forensic evidence and/or differences between the two statements. The alert message may include, for example, the forensic evidence and/or at least one aspect of the previous statement conflicting with the current statement. In some embodiments, VA computing device102may be a chatbot computing device configured to generate text and/or voice messages that are provided to a user in response to text or voice messages being submitted by the user. For example, VA computing device102may be a robo-advisor that assists a user, for example, in making an insurance claim, in obtaining an insurance policy, and/or in another financial application where the user may submit statements to the robo-advisor.
VA computing device102may generate the responses using, for example, keyword analysis, natural language processing, and/or machine learning techniques. Such questions and generated responses may be, for example, in the form of text (e.g., email, SMS messages, or chat messages) or voice.

VA computing device102, being utilized as a chatbot computing device, may receive a current statement from a user. The current statement may be, for example, an audio (e.g., voice), video, or text message (e.g., an email, SMS message, or chat message). Such messages may be submitted to VA computing device102using, for example, a mobile app running on mobile device110. The message may be submitted by the user, for example, in response to a question message transmitted to the user by VA computing device102. VA computing device102may analyze the veracity of the submitted messages and flag the messages as potentially false, as described above.

VA computing device102may generate a response message including a response to the current statement submitted by the user. For example, the response message may include a question following up on the current statement. The response message may be generated, for example, using keyword analysis, natural language processing, and/or machine learning techniques. In a keyword analysis, keywords (e.g., particular predetermined words or phrases) are identified in the current statement, and the response depends upon the identified keywords. In natural language processing, natural language (e.g., speech) of the user is analyzed to determine the meaning of the natural language. Machine learning techniques may be utilized to generate a response based on identified keywords, meanings, or other patterns in the current statement. The response may further depend upon the determination that the current statement is potentially inaccurate. For example, the response message may identify information that contradicts the current statement of the user (e.g., a previous statement of the user inconsistent with the current statement).

In some embodiments, VA computing device102may be used to verify the veracity of statements made by insured individuals to their insurer. For example, an insured individual may submit one or more statements to the insurer, such as a statement by a claimant describing a loss or a statement by an individual purchasing insurance about the value or condition of property to be covered. An insurance representative receiving the statement may use VA computing device102in order to review the statement.

In such embodiments, the statement may be received by VA computing device102in a variety of ways. For example, the insured individual may submit the statement through a phone call, video call, an email, an SMS message, or an online chat message. In certain embodiments, the insurer may provide a mobile app through which the insured individual may submit a statement through a chat message, voice call, or video call on mobile device110. In some embodiments, VA computing device102may be a chatbot or robo-advisor associated with a service provided by the insurer, and the user may submit the statements, for example, as chat messages to VA computing device102. In some embodiments, an insurance representative may carry wearable technology device112(e.g., smart glasses) to capture audio and/or video of a statement made by the insured individual in person.
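Returning to the flag-dependent response generation described above, the following sketch shows one way the chatbot's reply could branch on the veracity flag. The reply templates are illustrative assumptions, not required wording.

# Sketch of flag-dependent chatbot response generation. The canned
# templates below are illustrative assumptions only.
def generate_response(statement, flagged, contradicting_evidence=None):
    if flagged and contradicting_evidence:
        # Surface the contradicting information, per the example above.
        return ("Earlier information suggests otherwise: "
                + contradicting_evidence + ". Could you clarify?")
    if flagged:
        return "Could you walk me through that part again in more detail?"
    return "Thank you. What happened next?"  # ordinary follow-up question

print(generate_response("The TV was brand new.", True,
                        "a prior claim listed the TV as five years old"))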
In embodiments where the insurance representative may conduct a real-time interview of the individual making the statement (e.g., in person or through a voice or video call), VA computing device102may detect potentially false statements in real time and display the alerts to the insurance representative. The alerts may be displayed, for example, through wearable technology device112.

In some embodiments, users (e.g., insured individuals making statements to be analyzed) may enroll in services provided by VA computing device102. For example, enrolling in insurance services utilizing VA computing device102may include an opt-in procedure where users “opt-in” (e.g., provide informed consent or authorization) to having statements made to the insurer by the users recorded and analyzed by VA computing device102. This allows any services using VA computing device102to be in compliance with consumer protection laws and privacy regulations. Thus, a user consents to having the user's statements recorded and analyzed by VA computing device102when they enroll in a service that uses VA computing device102, and in return the user may be entitled to insurance discounts, lower premiums, or other cost-savings. In other embodiments, the user may opt in and provide consent by transmitting an affirmative consent message to VA computing device102. The consent message may indicate consent from the user to having the user's statements recorded and analyzed by VA computing device102.

Exemplary Client Computing Device

FIG.2depicts an exemplary client computing device202that may be used with VA system100shown inFIG.1. Client computing device202may be, for example, at least one of VA computing devices102, mobile device110, wearable technology device112, and/or third party computing device114(all shown inFIG.1). Client computing device202may include a processor205for executing instructions. In some embodiments, executable instructions may be stored in a memory area210. Processor205may include one or more processing units (e.g., in a multi-core configuration). Memory area210may be any device allowing information such as executable instructions and/or other data to be stored and retrieved. Memory area210may include one or more computer readable media.

In the exemplary embodiments, processor205may be configured to generate, based upon statement data corresponding to a plurality of statements, a plurality of indicators, wherein the plurality of indicators are correlated with false statements. Processor205may be further configured to receive a data stream corresponding to at least one statement. Processor205may be further configured to parse the data stream to identify, from the plurality of indicators, at least one indicator matching a portion of the data stream. Processor205may be further configured to flag, in response to identifying the at least one indicator, the statement as potentially false.

In exemplary embodiments, processor205may include and/or be communicatively coupled to one or more modules for implementing the systems and methods described herein. Processor205may include an analytics module230configured to generate at least one model by analyzing a plurality of historical statements to identify a plurality of reference indicators correlating to at least one inaccurate aspect included in the plurality of historical statements. Analytics module230may utilize AI and/or machine learning techniques to generate the model.
Processor205may further include a parsing module232configured to parse a data stream using the at least one model to identify at least one candidate indicator included in the current statement matching at least one of the plurality of reference indicators and flag, in response to identifying the at least one candidate indicator, the current statement as potentially false.

In exemplary embodiments, client computing device202may also include at least one media output component215for presenting information to a user201. Media output component215may be any component capable of conveying information to user201. In some embodiments, media output component215may include an output adapter such as a video adapter and/or an audio adapter. An output adapter may be operatively coupled to processor205and operatively couplable to an output device such as a display device (e.g., a liquid crystal display (LCD), light emitting diode (LED) display, organic light emitting diode (OLED) display, cathode ray tube (CRT) display, “electronic ink” display, or a projected display) or an audio output device (e.g., a speaker or headphones). Media output component215may be configured to, for example, display an alert message identifying a statement as potentially false. In some embodiments, media output component215may include smart glasses (e.g., wearable tech device112) including a display device and/or an audio output device.

Client computing device202may also include an input device220for receiving input from user201. Input device220may include, for example, a keyboard, a pointing device, a mouse, a stylus, a touch sensitive panel (e.g., a touch pad or a touch screen), a gyroscope, an accelerometer, a position detector, or an audio input device (e.g., a microphone). A single component such as a touch screen may function as both an output device of media output component215and input device220. In some embodiments, input device220may include smart glasses (e.g., wearable tech device112) including, for example, an audio input device and/or camera.

Client computing device202may also include a communication interface225, which can be communicatively coupled to a remote device such as VA computing device102(shown inFIG.1). Communication interface225may include, for example, a wired or wireless network adapter or a wireless data transceiver for use with a mobile phone network (e.g., Global System for Mobile communications (GSM), 3G, 4G or Bluetooth) or other mobile data network (e.g., Worldwide Interoperability for Microwave Access (WIMAX)). In exemplary embodiments, communication interface225may enable, for example, VA computing device102to receive a data stream corresponding to a current statement.

Stored in memory area210may be, for example, computer-readable instructions for providing a user interface to user201via media output component215and, optionally, receiving and processing input from input device220. A user interface may include, among other possibilities, a web browser and client application. Web browsers may enable users, such as user201, to display and interact with media and other information typically embedded on a web page or a website. A client application may allow user201to interact with a server application from VA computing device102via server computing device104(both shown inFIG.1).
Memory area210may include, but is not limited to, random access memory (RAM) such as dynamic RAM (DRAM) or static RAM (SRAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and non-volatile RAM (NVRAM). The above memory types are exemplary only, and are thus not limiting as to the types of memory usable for storage of a computer program.

Exemplary Server System

FIG.3depicts an exemplary server system301that may be used with VA system100illustrated inFIG.1. Server system301may be, for example, server computing device104(shown inFIG.1). In exemplary embodiments, server system301may include a processor305for executing instructions. Instructions may be stored in a memory area310. Processor305may include one or more processing units (e.g., in a multi-core configuration) for executing instructions. The instructions may be executed within a variety of different operating systems on server system301, such as UNIX, LINUX, Microsoft Windows®, etc. It should also be appreciated that upon initiation of a computer-based method, various instructions may be executed during initialization. Some operations may be required in order to perform one or more processes described herein, while other operations may be more general and/or specific to a particular programming language (e.g., C, C#, C++, Java, or other suitable programming languages, etc.).

Processor305may be operatively coupled to a communication interface315such that server system301is capable of communicating with VA computing device102, mobile device110, wearable technology device112, and third party computing device114(all shown inFIG.1), or another server system301. For example, communication interface315may receive requests from wearable technology device112via the Internet.

Processor305may also be operatively coupled to a storage device317, such as database120(shown inFIG.1). Storage device317may be any computer-operated hardware suitable for storing and/or retrieving data. In some embodiments, storage device317may be integrated in server system301. For example, server system301may include one or more hard disk drives as storage device317. In other embodiments, storage device317may be external to server system301and may be accessed by a plurality of server systems301. For example, storage device317may include multiple storage units such as hard disks or solid state disks in a redundant array of inexpensive disks (RAID) configuration. Storage device317may include a storage area network (SAN) and/or a network attached storage (NAS) system.

In some embodiments, processor305may be operatively coupled to storage device317via a storage interface320. Storage interface320may be any component capable of providing processor305with access to storage device317. Storage interface320may include, for example, an Advanced Technology Attachment (ATA) adapter, a Serial ATA (SATA) adapter, a Small Computer System Interface (SCSI) adapter, a RAID controller, a SAN adapter, a network adapter, and/or any component providing processor305with access to storage device317.

Memory area310may include, but is not limited to, random access memory (RAM) such as dynamic RAM (DRAM) or static RAM (SRAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and non-volatile RAM (NVRAM).
The above memory types are exemplary only, and are thus not limiting as to the types of memory usable for storage of a computer program.

Exemplary Method for Analyzing Statement Veracity

FIG.4depicts an exemplary computer-implemented method400for reviewing the veracity of statements. Method400may be performed by VA computing device102(shown inFIG.1).

Method400may include generating402at least one model by analyzing a plurality of historical statements to identify a plurality of reference indicators correlating to at least one inaccurate aspect included in the plurality of historical statements. In some embodiments, each of the plurality of historical statements includes at least one of audio data, video data, and text data. In some embodiments, generating402the at least one model may be performed by analytics module230(shown inFIG.2).

Method400may further include receiving404a data stream corresponding to a current statement. In some embodiments, the data stream includes at least one of audio data, video data, and text data. In certain embodiments, the data stream is received from a wearable technology device (e.g., wearable technology device112). In some embodiments, the data stream includes an audio signal and method400further includes generating406text data by transcribing the audio signal. In certain embodiments, wherein VA computing device102is a wearable technology device, method400includes recording408at least one of audio and video of the statement.

Method400may further include parsing410the data stream using the at least one model to identify at least one candidate indicator included in the current statement matching at least one of the plurality of reference indicators. In some embodiments, parsing410the data stream may be performed by parsing module232(shown inFIG.2). Method400may further include flagging412, in response to identifying the at least one candidate indicator, the current statement as potentially false. In some embodiments, flagging412the current statement as potentially false may be performed by parsing module232(shown inFIG.2).

In some embodiments, method400may further include displaying414an alert message identifying the current statement as potentially false in response to identifying the at least one candidate indicator. In some embodiments, displaying414the alert message may be performed using media output component215(shown inFIG.2). Additionally or alternatively, displaying414the alert message may be performed using a device external to VA computing device102, such as wearable technology device112. The method400may include additional, less, or alternate actions, including those discussed elsewhere herein.

Exemplary Method for Identifying Conflicting Statements

FIG.5depicts an exemplary computer-implemented method500for identifying conflicting statements. Method500may be performed by VA computing device102(shown inFIG.1). Method500may include identifying502a conflict by comparing the current statement and a previous statement. In some embodiments, identifying502the conflict may be performed by parsing module232(shown inFIG.2). Method500may further include flagging, in response to identifying the conflict, the current statement as potentially false. In some embodiments, flagging the current statement as potentially false may be performed by parsing module232(shown inFIG.2).
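Tying together the steps of method400described above, a compact Python sketch of the pipeline might look as follows. All helper names are hypothetical stand-ins for the components described above, and the tiny indicator model is invented.

# Hypothetical end-to-end sketch of method 400 (steps 402-414).
def generate_model(historical_statements):            # step 402
    # Keep phrases that appeared in statements later found inaccurate
    # (a drastic simplification of the analytics module's model building).
    return {phrase for phrase, was_false in historical_statements if was_false}

def method_400(historical_statements, current_text, display_alert):
    model = generate_model(historical_statements)
    candidates = [p for p in model if p in current_text.lower()]  # step 410
    if candidates:                                                # step 412
        display_alert("Potentially false statement: matched "
                      + ", ".join(candidates))                    # step 414

method_400([("i swear", True), ("per the receipt", False)],
           "I swear the laptop was in the car.", print)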
In some embodiments, method500may further include displaying506a conflict alert message identifying the current statement as potentially false in response to identifying the conflict, wherein the conflict alert message includes the previous statement. In some embodiments, displaying506the conflict alert message may be performed using media output component215(shown inFIG.2). Additionally or alternatively, displaying506the alert message may be performed using a device external to VA computing device102, such as wearable technology device112. The method500may include additional, less, or alternate actions, including those discussed elsewhere herein.

Exemplary Method for Implementing a Chatbot

FIG.6depicts an exemplary computer-implemented method600for implementing a chatbot capable of analyzing the veracity of a statement submitted by a user. Method600may be performed by VA computing device102(shown inFIG.1).

Method600may include generating602at least one model by analyzing a plurality of historical statements to identify a plurality of reference indicators correlating to at least one inaccurate aspect included in the plurality of historical statements. In some embodiments, each of the plurality of historical statements includes at least one of audio data, video data, and text data. In some embodiments, generating602the at least one model may be performed by analytics module230(shown inFIG.2).

Method600may further include receiving604, from a user computing device (e.g., mobile device110) associated with a user, a data stream corresponding to a current statement. In some embodiments, the data stream includes at least one of audio data, video data, and text data. In some embodiments, the data stream includes an audio signal and method600further includes generating text data by transcribing the audio signal.

Method600may further include parsing606the data stream using the at least one model to identify at least one candidate indicator included in the current statement matching at least one of the plurality of reference indicators. In some embodiments, parsing606the data stream may be performed by parsing module232(shown inFIG.2). Method600may further include flagging608, in response to identifying the at least one candidate indicator, the current statement as potentially false. In some embodiments, flagging608the current statement as potentially false may be performed by parsing module232(shown inFIG.2).

Method600may further include generating610a response message based upon the current statement and the flag and transmitting612the response message to the user computing device. In some embodiments, the response message is at least one of an audio message, a video message, and a text message. In some embodiments, generating610the response message may be performed by analytics module230(shown inFIG.2).

Machine Learning and Other Matters

The computer-implemented methods discussed herein may include additional, less, or alternate actions, including those discussed elsewhere herein. The methods may be implemented via one or more local or remote processors, transceivers, servers, and/or sensors (such as processors, transceivers, servers, and/or sensors mounted on vehicles or mobile devices, or associated with smart infrastructure or remote servers), and/or via computer-executable instructions stored on non-transitory computer-readable media or medium.
Additionally, the computer systems discussed herein may include additional, less, or alternate functionality, including that discussed elsewhere herein. The computer systems discussed herein may include or be implemented via computer-executable instructions stored on non-transitory computer-readable media or medium.

A processor or a processing element may be trained using supervised or unsupervised machine learning, and the machine learning program may employ a neural network, which may be a convolutional neural network, a deep learning neural network, or a combined learning module or program that learns in two or more fields or areas of interest. Machine learning may involve identifying and recognizing patterns in existing data in order to facilitate making predictions for subsequent data. Models may be created based upon example inputs in order to make valid and reliable predictions for novel inputs.

Additionally or alternatively, the machine learning programs may be trained by inputting sample data sets or certain data into the programs, such as images, object statistics and information, audio and/or video records, text, and/or actual true or false values. The machine learning programs may utilize deep learning algorithms that may be primarily focused on pattern recognition, and may be trained after processing multiple examples. The machine learning programs may include Bayesian program learning (BPL), voice recognition and synthesis, image or object recognition, optical character recognition, and/or natural language processing, either individually or in combination. The machine learning programs may also include natural language processing, semantic analysis, automatic reasoning, and/or other types of machine learning or artificial intelligence.

In supervised machine learning, a processing element may be provided with example inputs and their associated outputs, and may seek to discover a general rule that maps inputs to outputs, so that when subsequent novel inputs are provided the processing element may, based upon the discovered rule, accurately predict the correct output. In unsupervised machine learning, the processing element may be required to find its own structure in unlabeled example inputs.

As described above, the systems and methods described herein may use machine learning, for example, for pattern recognition. That is, machine learning algorithms may be used by VA computing device102to identify patterns within a large number of historical statements to generate models including reference indicators correlated with inaccuracy of a statement. Accordingly, the systems and methods described herein may use machine learning algorithms for both pattern recognition and predictive modeling.

Exemplary Embodiments

The present embodiments may relate to secondary systems that verify potential fraud or the absence thereof. Artificial intelligence, machine learning, and/or chatbots may be employed to verify veracity of statements used in connection with insurance or loan applications, and/or insurance claims.
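To make the supervised pattern-recognition approach described above concrete, here is a minimal training sketch using scikit-learn. The four-statement corpus and its labels are fabricated purely for illustration; a production model would be trained on a large labeled history of statements.

# Minimal supervised-learning sketch: learn text patterns predictive of
# falsity from labeled historical statements (corpus is invented).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

statements = [
    "I swear the car was parked there all night",
    "The receipt shows the purchase date",
    "Honestly, I never saw the email",
    "The police report lists both drivers",
]
labels = [1, 0, 1, 0]  # 1 = later found inaccurate, 0 = accurate

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(statements, labels)
print(model.predict_proba(["I swear I mailed the form"])[0][1])

In practice, the same fitted pipeline could back the parsing module's matching step, with the predicted probability compared against a flagging threshold.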
For instance, a veracity analyzer (VA) computing device includes a processor in communication with a memory device, and may be configured to: (1) generate at least one model by analyzing a plurality of historical statements to identify a plurality of reference indicators correlating to at least one inaccurate aspect included in the plurality of historical statements; (2) receive a data stream corresponding to a current statement; (3) parse the data stream using the at least one model to identify at least one candidate indicator included in the current statement matching at least one of the plurality of reference indicators; and/or (4) flag, in response to identifying the at least one candidate indicator, the current statement as potentially false. The VA computing device may be a chatbot or robo-advisor in some embodiments. The computing device may include additional, less, or alternate functionality, including that discussed elsewhere herein.

In another aspect, a computer-implemented method for reviewing veracity of statements may be provided. The computer-implemented method may be performed by a veracity analyzer (VA) computing device, which may be a chatbot or a robo-advisor in some embodiments, that includes at least one processor in communication with a memory device. The computer-implemented method may include: (1) generating, by the VA computing device, at least one model by analyzing a plurality of historical statements to identify a plurality of reference indicators correlating to at least one inaccurate aspect included in the plurality of historical statements; (2) receiving, by the VA computing device, a data stream corresponding to a current statement; (3) parsing, by the VA computing device, the data stream using the at least one model to identify at least one candidate indicator included in the current statement matching at least one of the plurality of reference indicators; and/or (4) flagging, by the VA computing device, in response to identifying the at least one candidate indicator, the current statement as potentially false. The method may include additional, less, or alternate actions, including those discussed elsewhere herein.

In another aspect, a non-transitory computer-readable media having computer-executable instructions embodied thereon may be provided that, when executed by a veracity analyzer (VA) computing device including a processor in communication with a memory device, cause the processor to: (1) generate at least one model by analyzing a plurality of historical statements to identify a plurality of reference indicators correlating to at least one inaccurate aspect included in the plurality of historical statements; (2) receive a data stream corresponding to a current statement; (3) parse the data stream using the at least one model to identify at least one candidate indicator included in the current statement matching at least one of the plurality of reference indicators; and/or (4) flag, in response to identifying the at least one candidate indicator, the current statement as potentially false. The instructions may direct additional, less, or alternate functionality, including that discussed elsewhere herein.

In another aspect, a chatbot computing device comprising at least one processor in communication with a memory device may be provided.
The processor may be configured to (1) generate at least one model by analyzing a plurality of historical statements to identify a plurality of reference indicators correlating to at least one inaccurate aspect included in the plurality of historical statements; (2) receive, from a user computing device associated with a user, a data stream corresponding to a current statement; (3) parse the data stream using the at least one model to identify at least one candidate indicator included in the current statement matching at least one of the plurality of reference indicators; (4) flag, in response to identifying the at least one candidate indicator, the current statement as potentially false; (5) generate a response message based upon the current statement and the flag; and/or (6) transmit the response message to the user computing device. The chatbot computing device may include additional, less, or alternate functionality, including that discussed elsewhere herein.

ADDITIONAL CONSIDERATIONS

As will be appreciated based upon the foregoing specification, the above-described embodiments of the disclosure may be implemented using computer programming or engineering techniques including computer software, firmware, hardware or any combination or subset thereof. Any such resulting program, having computer-readable code means, may be embodied or provided within one or more computer-readable media, thereby making a computer program product, i.e., an article of manufacture, according to the discussed embodiments of the disclosure. The computer-readable media may be, for example, but is not limited to, a fixed (hard) drive, diskette, optical disk, magnetic tape, semiconductor memory such as read-only memory (ROM), and/or any transmitting/receiving medium such as the Internet or other communication network or link. The article of manufacture containing the computer code may be made and/or used by executing the code directly from one medium, by copying the code from one medium to another medium, or by transmitting the code over a network.

These computer programs (also known as programs, software, software applications, “apps,” or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The “machine-readable medium” and “computer-readable medium,” however, do not include transitory signals. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor.

As used herein, a processor may include any programmable system including systems using micro-controllers, reduced instruction set circuits (RISC), application specific integrated circuits (ASICs), logic circuits, and any other circuit or processor capable of executing the functions described herein.
The above examples are example only, and are thus not intended to limit in any way the definition and/or meaning of the term “processor.”

As used herein, the terms “software” and “firmware” are interchangeable, and include any computer program stored in memory for execution by a processor, including RAM memory, ROM memory, EPROM memory, EEPROM memory, and non-volatile RAM (NVRAM) memory. The above memory types are example only, and are thus not limiting as to the types of memory usable for storage of a computer program.

In one embodiment, a computer program is provided, and the program is embodied on a computer readable medium. In an exemplary embodiment, the system is executed on a single computer system, without requiring a connection to a server computer. In a further embodiment, the system is run in a Windows® environment (Windows is a registered trademark of Microsoft Corporation, Redmond, Washington). In yet another embodiment, the system is run on a mainframe environment and a UNIX® server environment (UNIX is a registered trademark of X/Open Company Limited located in Reading, Berkshire, United Kingdom). The application is flexible and designed to run in various different environments without compromising any major functionality.

In some embodiments, the system includes multiple components distributed among a plurality of computing devices. One or more components may be in the form of computer-executable instructions embodied in a computer-readable medium. The systems and processes are not limited to the specific embodiments described herein. In addition, components of each system and each process can be practiced independent and separate from other components and processes described herein. Each component and process can also be used in combination with other assembly packages and processes.

As used herein, an element or step recited in the singular and preceded by the word “a” or “an” should be understood as not excluding plural elements or steps, unless such exclusion is explicitly recited. Furthermore, references to “example embodiment” or “one embodiment” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.

The patent claims at the end of this document are not intended to be construed under 35 U.S.C. § 112(f) unless traditional means-plus-function language is expressly recited, such as “means for” or “step for” language being expressly recited in the claim(s).

This written description uses examples to disclose the disclosure, including the best mode, and also to enable any person skilled in the art to practice the disclosure, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the disclosure is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
DETAILED DESCRIPTION

In other systems, data health management is accomplished using low level programming, which requires information technology (IT) and programming skills to implement. Other approaches may provide some text description about data health, but do not enable any enforcement of data health standards.

Aspects of the present disclosure are directed to automatic data health management. In other systems, rules to validate data health and triggered alerts are hand coded. Hand coding these rules and triggers requires IT and programming skills, and also delays the addition of new data streams to a repository until such hand coding can be completed. In a system that receives heterogeneous data from a large number of sources, the hand coding of rules and triggers can quickly become unmanageable. In some alternative approaches, text descriptions of data health requirements are provided to external data providers, and the external data providers are required to adhere to the data health requirements expressed in those text descriptions. In a larger system that has a significant number of data providers, the task of ensuring that each data provider has correctly interpreted the text descriptions and is adhering to its respective data health requirements is very challenging and time consuming to manage.

Aspects of the present disclosure address the above and other deficiencies by providing a system for automatic data health reasoning, which derives data health metrics automatically from received data, creates rules to validate incoming data, and triggers data health management events automatically.

A collection of data received from a data source is referred to as a data set. A data set shares the same characteristics, or data metrics. A data set consists of multiple data batches, received over time. A data batch is a data partition or data version. When the data batch is only part of a data set, it is defined as a data set partition. When the data batch is a complete set of data, it is defined as a version of the data set. In one embodiment, the system maintains metadata for each data batch, as well as for the data set.

The system utilizes a metamodel language describing any characteristics that may be applicable to any data set. The metamodel defined by the metamodel language includes one or more predefined characteristics that are general and may be applicable to any data set. The metamodel is extendible, and can be used to characterize any data set. The metamodel defines the characteristics that data health metrics are based on. Data health metrics are the measured characteristics of the data, which can include data schema, size, frequency, arrival time, and other characteristics of the expected data batches for a particular data set. The data health rules, also referred to as data validation assertions, are the characteristics that are expected to be met by data batches of a particular data set. The data health management events can include alerts to systems using the data, when one or more of the data validation assertions are not met. In some embodiments, the data health management events can trigger automatic actions, such as the exclusion of a data set from a particular calculation.

The disclosed technologies in one embodiment further provide the ability to define custom data health triggers for users, such that users receive notifications when one or more of the data health conditions are not met by data that they use.
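As an illustration only, the derived metrics and validation assertions for one data set might be held in a structure like the following Python sketch. The field names and the batch dictionary layout are assumptions for illustration, not the disclosed metamodel.

# Illustrative sketch of data health metrics held as validation assertions.
from dataclasses import dataclass
from datetime import time

@dataclass
class DataHealthAssertion:
    expected_schema: str     # data schema the batch must match
    min_rows: int            # lower bound on batch size
    frequency: str           # e.g., "daily"
    arrival_deadline: time   # latest expected time of arrival

def validate(batch, assertion):
    """Return the list of assertions violated by a new data batch."""
    violations = []
    if batch["schema"] != assertion.expected_schema:
        violations.append("schema")
    if batch["rows"] < assertion.min_rows:
        violations.append("size")
    if batch["arrived"] > assertion.arrival_deadline:
        violations.append("arrival_time")
    return violations  # non-empty -> trigger a data health management event

assertion = DataHealthAssertion("v1", 1_000_000, "daily", time(9, 0))
print(validate({"schema": "v1", "rows": 950_000, "arrived": time(9, 30)},
               assertion))  # ['size', 'arrival_time']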
The disclosed technologies, in one embodiment, additionally enable the data providers to perform pre-tests to determine whether any proposed changes to characteristics of data provided by the data providers would violate any custom-defined data health conditions.

The present system is designed to handle a large number of data sets and data volumes of terabytes of data. The system is designed to regularly receive new data sets, from new sources. The new data sets may not have a known set of characteristics. The ability of the system to derive the data health metrics from the observed data streams enables handling of large scale data and heterogeneous data sets. In one embodiment, the present system is utilized with a data repository that has exabytes of data from hundreds of different data sources, and billions of events per hour. Traditional systems that require setting up and verifying assertions on a per data stream basis cannot handle this type of volume. However, the present system is capable of addressing such data streams in a scalable way.

FIG.1illustrates an example computing system100that includes a data health reasoner150in accordance with some embodiments of the present disclosure. In the embodiment ofFIG.1, computing system100includes a user system110, a network120, an application software system130, a data store140, a data health reasoner150, and a data health repository160.

User system110includes at least one computing device, such as a personal computing device, a server, a mobile computing device, or a smart appliance. User system110includes at least one software application, including a user interface112, installed on or accessible by a network to a computing device. For example, user interface112can be or include a front-end portion of application software system130.

For simplicity, the present application will use as an example a social application system. Social application systems include but are not limited to connections network software, such as professional and/or general social media platforms, and systems that are or are not based on connections network software, such as digital content distribution services, general-purpose search engines, job search software, recruiter search software, sales assistance software, advertising software, learning and education software, or any combination of any of the foregoing. However, the present system can be used with any application that utilizes large data sets.

User interface112is any type of user interface as described above. User interface112can be used to input search queries and view or otherwise perceive output that includes data produced by application software system130. For example, user interface112can include a graphical user interface and/or a conversational voice/speech interface that includes a mechanism for entering a search query and viewing query results and/or other digital content. Examples of user interface112include web browsers, command line interfaces, and mobile apps. User interface112as used herein can include application programming interfaces (APIs).

Data store140is a data repository. Data store140stores a plurality of heterogeneous data sets, each data set including a plurality of data batches, received from external data sources170. Heterogeneous data sets include data sets that have different content, schemas, delivery frequencies, and times, and/or other differentiators. The data sets can be from different providers, e.g., various third parties.
The data may be received from heterogeneous external data sources170, which include different systems. The systems can be within the same company. In some embodiments, the systems generating the data may be running on the same computing system100. But the data source is considered external, in one embodiment, when it originates outside the data store140.

In the social application system example provided, the heterogeneous data sources can include, for example, data sources of information about user social connections, data sources of information indicating user posts on the social application, and data sources that collect user interactions on third party websites, such as media sites that are affiliated with the social application system. Some of these data sets are generated by the social application. However, they would still be considered external data sources because they are not generated by the data repository management system of computing system100. The data sets provided as examples are heterogeneous because they are provided on a different schedule, for different data, with different data schemas. However, any one of those differentiators is sufficient to consider data sets heterogeneous. In one embodiment, there can be different heterogeneous external data sources170which provide different types, quantities, and frequencies of data to the data store140.

Data store140can reside on at least one persistent and/or volatile storage device that can reside within the same local network as at least one other device of computing system100and/or in a network that is remote relative to at least one other device of computing system100. Thus, although depicted as being included in computing system100, portions of data store140can be part of computing system100or accessed by computing system100over a network, such as network120.

Application software system130is any type of application software system that includes or utilizes functionality provided by the data health reasoner150. Examples of application software system130include but are not limited to connections network software, such as social media platforms, and systems that are or are not based on connections network software, such as general-purpose search engines, job search software, recruiter search software, sales assistance software, advertising software, learning and education software, or any combination of any of the foregoing. The application software system130can include a system that provides data to network software such as social media platforms or systems.

While not specifically shown, it should be understood that any of user system110, application software system130, data store140, data health reasoner150, and data health repository160includes an interface embodied as computer programming code stored in computer memory that when executed causes a computing device to enable bidirectional communication with any other of user system110, application software system130, data store140, data health reasoner150and data health repository160using a communicative coupling mechanism. Examples of communicative coupling mechanisms include network interfaces, inter-process communication (IPC) interfaces and application program interfaces (APIs).

A client portion of application software system130can operate in user system110, for example as a plugin or widget in a graphical user interface of a software application or as a web browser executing user interface112.
In an embodiment, a web browser can transmit an HTTP request over a network (e.g., the Internet) in response to user input that is received through a user interface provided by the web application and displayed through the web browser. A server running application software system130and/or a server portion of application software system130can receive the input, perform at least one operation using the input, and return output using an HTTP response that the web browser receives and processes.

Each of user system110, application software system130, data store140, data health reasoner150, and data health repository160is implemented using at least one computing device that is communicatively coupled to electronic communications network120. Any of user system110, application software system130, data store140, data health reasoner150, and data health repository160can be bidirectionally communicatively coupled by network120, in some embodiments. User system110as well as one or more different user systems (not shown) can be bidirectionally communicatively coupled to application software system130.

A typical user of user system110can be an administrator or end user of application software system130, data health reasoner150, and data health repository160. User system110is configured to communicate bidirectionally with any of application software system130, data store140, data health reasoner150, and data health repository160over network120, in one embodiment. In another embodiment, the user system110communicates with application software system130and data health reasoner150, but does not directly communicate with the data health repository160.

The features and functionality of user system110, application software system130, data store140, data health reasoner150, and data health repository160are implemented using computer software, hardware, or software and hardware, and can include combinations of automated functionality, data structures, and digital data, which are represented schematically in the figures. User system110, application software system130, data store140, data health reasoner150, and data health repository160are shown as separate elements inFIG.1for ease of discussion but the illustration is not meant to imply that separation of these elements is required. The illustrated systems, services, and data stores (or their functionality) can be divided over any number of physical systems, including a single physical computer system, and can communicate with each other in any appropriate manner.

Network120can be implemented on any medium or mechanism that provides for the exchange of data, signals, and/or instructions between the various components of computing system100. Examples of network120include, without limitation, a Local Area Network (LAN), a Wide Area Network (WAN), an Ethernet network or the Internet, or at least one terrestrial, satellite or wireless link, or a combination of any number of different networks and/or communication links, as well as wired networks, or computer buses when the system100is implemented on a single computer system. The various elements can be connected with different networks and/or types of networks.

The computing system100includes a data health reasoner component150that can evaluate the data health of data from external data sources170, and a data health repository160which can be queried by user systems110associated with data consumers. In some embodiments, the application software system130includes at least a portion of the data health reasoner150.
As shown inFIG.9, the data health reasoner150can be implemented as instructions stored in a memory, and a processing device902can be configured to execute the instructions stored in the memory to perform the operations described herein. The data health reasoner150can automatically create data validation assertions for heterogeneous data sets, and apply those data validation assertions to verify the quality of new data in the data store140. The data health repository160stores this metadata about the data.

The disclosed technologies can be described with reference to the large number of types of data utilized in a social graph application such as a professional social network application. The disclosed technologies are not limited to data associated with social graph applications but can be used to perform data quality validation more generally. The disclosed technologies can be used by many different types of network-based applications which consume large heterogeneous data sets. For example, any predictive system which receives large volumes of different types of data which change over time could take advantage of such a system.

The data health repository160stores the metadata generated by the data health reasoner150. Further details with regards to the operations of the data health reasoner150and the data health repository160are described below.

FIG.2is a data flow diagram of an example method200to provide data health reasoning in accordance with some embodiments of the present disclosure. The external data sources170can be any data source that provides data202to data store140. In one embodiment, the external data source170provides data202to application software system130, to process the data into data store140. The data202is additionally provided to the data health reasoner150.

The data health reasoner150utilizes the data batches202in a data set to identify the data characteristics, based on a metamodel which represents the predefined characteristics for a data set. A metamodel is a schema representing a collection of existing metrics, one or more of which apply to any data set. The metamodel language provides a formal language that enables semantic description of data health and data validation assertions. In one embodiment, the metamodel language is human readable, as well as computer parseable. The metamodel can be implemented using XML. In one embodiment, the metamodel provides a set of predefined characteristics that are collected for the data set, for example frequency, time of arrival, schema, etc. The metamodel may be extended with additional characteristics, based on business needs and/or the specifics of the data sets received. The measured values of these characteristics are used to formulate data health metrics for the data set.

Embodiments use the collected data about the data set to determine which subset of the predefined data characteristics applies to the data set. In one embodiment, the system uses a statistical analysis of multiple data batches in the data set to determine the data characteristics. In one embodiment, a machine learning system can be used to derive the data characteristics. Once these characteristics are identified, the system derives the data health metrics for the data. In one embodiment, the system includes a set of predefined characteristics, defined by the metamodel, and compares the actual characteristics of the data batches received in the data set to those predefined characteristics.
For example, a characteristic may be “time of arrival.” The system observes the actual time of arrival of the data batches, and based on that observation determines the data health metrics for the data model. In one embodiment, the system initially sets the data health metrics based on the observed conditions of the first data batch. As subsequent data batches are received the values are refined. In one embodiment, the system continuously refines these values. In one embodiment, the system collects data over multiple data batches before defining the initial values. In one embodiment, the initial values are defined after three data batches have been received. In one embodiment, the system may use a standard statistical model to exclude outliers. In one embodiment, data which is more than two standard deviations outside the expected value range is dropped as outlying data.

In one embodiment, the metamodel language provides the ability to define additional characteristics for data sets, beyond the predefined characteristics. In one embodiment, such added characteristics may be based on existing characteristics, in which case they may be applied to existing data batches in the data set. In another embodiment, such added characteristics may be new, in which case the above process of generating the data health metrics based on the observed characteristics is used. For example, a new characteristic may be the presence or absence of a particular field, the use of Unicode characters, or any other aspect of that data which can be checked and stored as metadata.

The data health metrics are the data characteristics that must be met by each data batch. The data health metrics describe, for example, the format, frequency, size, and other characteristics of the data set. For example, the data health metrics for a data set can be that the data is partitioned, has a defined data scheme, is provided daily, and is available no later than 9 am each day. These data health metrics can be stored as data204in data health repository160.

In one embodiment, each of the predefined characteristics that is consistent for the data set can be considered as a candidate for a data health metric. In one embodiment, the predefined characteristics include data format, data volume, data frequency, and time of arrival for the data. The system initially creates data health metrics based on the predefined characteristics in the metamodel. In one embodiment, users may create additional data metrics for individual data sets and/or groups of data sets. Such additional data metrics may be added to the metamodel, and become predefined characteristics.

There may be characteristics that vary among each data batch in the data set. Those characteristics would not be used as data health metrics. For example, if the data arrives at various times throughout the day, but not consistently, the time of arrival characteristic may not be used as a data health metric.

The system monitors subsequent data batches of the data set to determine whether they meet the data health metrics. In the above example, when a new data batch is received the system verifies that the new data batch is partitioned, matches the defined data scheme, and was received by 9 am. The measured data health metrics for the data batch are metadata that can be stored as data204in data health repository160.
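A hedged sketch of how such a metric could be derived from observations follows, using the three-batch warm-up and two-standard-deviation outlier rule mentioned above. Arrival times are expressed as minutes after midnight, and the sample values are invented.

# Sketch: derive an arrival-time metric from observed batches, excluding
# observations more than two standard deviations from the mean.
from statistics import mean, stdev

def derive_arrival_metric(observed_minutes):
    if len(observed_minutes) < 3:        # wait for three batches first
        return None
    mu, sigma = mean(observed_minutes), stdev(observed_minutes)
    kept = [t for t in observed_minutes if abs(t - mu) <= 2 * sigma]
    return max(kept)  # one choice: latest non-outlier arrival observed

# 900 (15:00) is dropped as an outlier; the derived deadline is 530 (8:50).
print(derive_arrival_metric([520, 525, 530, 522, 528, 900]))

Taking the latest non-outlier arrival as the deadline is one possible design choice; a percentile or a margin above the mean would be equally plausible.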
When the data health reasoner150indicates that a particular data batch does not meet the data health metrics, the data health reasoner150can send an alert, data206, via application software system130. As will be described below, the systems that utilize the data can customize their preferred metrics for alerts. For example, a data consuming application that accesses the data at 10 am may wish to be alerted only when the data expected at 9 am doesn't arrive by 9:45 am, since the 45 minute delay does not impact their use. Thus, the user can set up a custom alert, based on their own needs. This reduces unnecessary warnings and alerts.

The alert can be used by data consuming applications and/or users to determine whether to use the data from data store140. For example, for some uses, if the current data is missing (e.g., a data batch expected is not received) the user may choose to exclude the data, or use a prior data batch in their processing. For some uses, the user may delay the processing until the data becomes available. If the current data batch has a different schema, this can make the data unusable for some systems. Thus, the user may choose to exclude the data from their processing, or verify that the schema change does not impact their use. Other options for addressing unavailable or unusable data can be chosen. However, by having this information, the user can affirmatively choose how to react. In this way, the system automatically generates and provides data health information to users of the data, simplifying data processing and reducing wasted processing of out-of-date data.

FIG.3is a flow diagram of an example method300to provide data health reasoning in accordance with some embodiments of the present disclosure. At operation302, data is received for a new data set over time. In one embodiment, new data sets can be added to the repository at any time. In one embodiment, new data sets are added to the repository without pre-processing or setting up rules. The data health reasoner collects and uses the metadata, and does not interfere with data in the data repository.

In one embodiment, the system collects information about the data batch to establish the data health characteristics. In one embodiment, the system initially collects data but does not provide alerts. The system may provide information initially. In one embodiment, the system analyzes the data periodically, until sufficient data is collected to have consistent data characteristics. In one embodiment, the system calculates a confidence interval for the data characteristic, and when the confidence interval is above a threshold, alerts are sent. In one embodiment, the confidence interval is 95%.

At operation304, data validation assertions are generated for the data set based on the data health metrics. As discussed above, the data health metrics can include partitioning, schema, size, and timing of the data batches. These data validation assertions are characteristics of the data that should be met by each new data set.

At operation306, the data validation assertions are applied to new data batches received into the data set. The data health reasoner automatically tests the new data batch against the data validation assertions. That is, the system determines whether the new data batch meets all of the data validation assertions for the data set. In one embodiment, the data health reasoner stores the result of the testing in the data health repository as metadata associated with the data batch.
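The consumer-specific trigger in the example above could be layered on the derived metric as in this small sketch; the 9:45 cut-off and the function name are illustrative assumptions.

# Sketch of a consumer-defined alert trigger: the data set's derived
# deadline is 9:00 am, but this consumer only wants an alert if the
# batch is still missing at its own 9:45 am cut-off.
from datetime import time

def should_alert(now, batch_arrived, consumer_cutoff=time(9, 45)):
    return (not batch_arrived) and now >= consumer_cutoff

print(should_alert(time(9, 30), False))  # False: within this consumer's tolerance
print(should_alert(time(9, 50), False))  # True: custom trigger fires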
In one embodiment, the data health reasoner also updates the metadata associated with the data set, based on the data batch evaluation results. In one embodiment, the data set status indicates the results of the applied data validation assertions against the latest data batch.

At operation308, an alert is generated if one or more of the data validation assertions are not met by the data batch. The alert can be sent via a user interface. The alert can be an email. Other ways of providing the alert can be used. In one embodiment, the alert can be received by an automatic system that utilizes the data.

This method300is used continuously as new data sets are added to the repository. In one embodiment, the method300continuously monitors new data batches added to the data repository, and applies the data validation assertions against new data batches. In another embodiment, the data repository can notify the method300that a new data batch is received, and trigger the application of the data validation assertions. For existing data sets, the method300utilizes operations306and308only.

The methods described above can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, one or more of the methods above are performed by the data health reasoner component150ofFIG.1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

FIG.4is a flow diagram of an example method400to create data validation assertions for data health reasoning in accordance with some embodiments of the present disclosure. At operation402, data for a new data set is received. The new data set has unknown characteristics. At operation404, the metamodel representing the predefined data set characteristics is retrieved. The predefined characteristics in one embodiment are a set of metadata characteristics that describe some of the characteristics associated with a data set. The metadata characteristics in one embodiment are based on an XML schema.
One exemplary XML schema is defined based on data validation assertions:

<xs:complexType name="DataHealthAssertion">
  <xs:sequence>
    <xs:element name="event" type="dhm:EventType" minOccurs="1" maxOccurs="unbounded" />
    <xs:element name="dataset" type="dhm:DatasetType" minOccurs="1" maxOccurs="1" />
    <xs:element name="metric" type="dhm:HealthMetricType" minOccurs="1" maxOccurs="unbounded" />
    <xs:element name="alert" type="dhm:AlertType" minOccurs="1" maxOccurs="unbounded" />
  </xs:sequence>
  <xs:attribute name="dataAssertionID" type="xs:ID" />
  <xs:attribute name="dataAssertionName" type="xs:string" />
  <xs:attribute name="dataAssertionDescription" type="xs:string" />
  <xs:attribute name="startingDate" type="xs:dateTime" />
  <xs:attribute name="expirationDate" type="xs:dateTime" />
</xs:complexType>

A data validation assertion in one embodiment is further defined by a data assertion identifier (ID), a data assertion name, a starting date and expiration date, a definition of the dataset, a collection of events, a collection of metrics, and a collection of alerts. The dataset type in one embodiment is further defined as:

<xs:complexType name="DatasetType">
  <xs:attribute name="databaseName" type="xs:string" use="required" />
  <xs:attribute name="tableName" type="xs:string" use="required" />
  <xs:attribute name="storageLocation" type="xs:string" use="optional" />
  <xs:attribute name="updateFrequency" type="xs:string" use="required" />
  <xs:attribute name="retention" type="xs:duration" use="optional" />
  <xs:attribute name="schema" type="xs:string" use="optional" />
</xs:complexType>

The health metric type in one embodiment is further defined as:

<xs:complexType name="HealthMetricType">
  <xs:sequence>
    <xs:element name="MetricIdentification">
      <xs:complexType>
        <xs:attribute name="metricName" type="xs:string" use="required" />
        <xs:attribute name="metricScope" type="dhm:MetricScopeType" />
      </xs:complexType>
    </xs:element>
    <xs:element name="metricScopeID" type="dhm:FunctionType" minOccurs="1" maxOccurs="1" />
    <xs:element name="metricCalculation" type="dhm:FunctionType" minOccurs="1" maxOccurs="1" />
  </xs:sequence>
</xs:complexType>

In the definition of the health metric type, both metricScopeID and metricCalculation are in one embodiment further defined as the Metric Calculation Function Type, which is further defined as:

<xs:complexType name="MetricCalculationFunctionType">
  <xs:choice>
    <xs:element name="FormulaBasedFunction" type="dhm:FormulaBasedFunctionType" />
    <xs:element name="ExternalFunction" type="dhm:ExternalFunctionType" />
  </xs:choice>
</xs:complexType>

The Metric Calculation Function Type in one embodiment can be either a Formula Based Function or an External Function.
In one embodiment, a Formula Based Function is further defined as:

<xs:complexType name="FormulaBasedFunctionType">
  <xs:complexContent>
    <xs:extension base="dhm:FunctionType">
      <xs:attribute name="expression" type="xs:string" use="required" />
    </xs:extension>
  </xs:complexContent>
</xs:complexType>

and an External Function is further defined as:

<xs:complexType name="ExternalFunctionType">
  <xs:complexContent>
    <xs:extension base="dhm:FunctionType">
      <xs:sequence>
        <xs:element name="InputParameter" minOccurs="0" maxOccurs="unbounded">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="ValueRetrieval" type="dhm:MetricCalculationFunctionType" />
            </xs:sequence>
            <xs:attribute name="name" type="xs:string" use="required" />
          </xs:complexType>
        </xs:element>
      </xs:sequence>
      <xs:attribute name="methodName" type="xs:string" />
      <xs:attribute name="serviceURL" type="xs:string" />
      <xs:attribute name="serviceType" type="xs:string" />
      <xs:attribute name="serviceSpecificationURL" type="xs:string" />
    </xs:extension>
  </xs:complexContent>
</xs:complexType>

In one embodiment, the parameter of an External Function is recursively defined as a Metric Calculation Function Type. Both the Metric Calculation Function Type and the External Function in one embodiment extend the Function Property type, which in one embodiment is further defined as:

<xs:complexType name="FunctionPropertyType">
  <xs:attribute name="returnType" type="xs:string" use="required" />
  <xs:attribute name="functionName" type="xs:string" />
  <xs:attribute name="functionID" type="xs:string" />
</xs:complexType>

The collection of events in a data validation assertion defines the source information, which in one embodiment is defined as the Event Type:

<xs:complexType name="EventType">
  <xs:sequence>
    <xs:element name="attribute" type="dhm:EventAttributeType" minOccurs="1" maxOccurs="unbounded" />
  </xs:sequence>
  <xs:attribute name="eventName" type="xs:string" use="required" />
</xs:complexType>

There is a collection of attributes in an event, in one embodiment, which is further defined as:

<xs:complexType name="EventAttributeType">
  <xs:attribute name="attributeName" type="xs:string" />
  <xs:attribute name="attributeType" type="xs:string" />
  <xs:attribute name="isID" type="xs:boolean" default="false" />
  <xs:attribute name="isArray" type="xs:boolean" default="false" />
</xs:complexType>

With events and metrics, alerts in one embodiment can be defined by specifying the condition and action:

<xs:complexType name="AlertType">
  <xs:sequence>
    <xs:element name="condition" type="dhm:ConditionType" />
    <xs:element name="action" type="dhm:ActionType" />
  </xs:sequence>
  <xs:attribute name="assertionName" type="xs:string" use="required" />
  <xs:attribute name="assertionID" type="xs:ID" use="required" />
</xs:complexType>

The condition type in one embodiment is further defined as one of three different kinds of types, namely: a simple condition, a unary operation on a condition, and a binary operation with two operands.

<xs:complexType name="ConditionType">
  <xs:choice>
    <xs:element name="simpleCondition" type="dhm:SimpleConditionType" />
    <xs:sequence>
      <xs:element name="unaryOperation" type="xs:string" />
      <xs:element name="condition" type="dhm:ConditionType" />
    </xs:sequence>
    <xs:sequence>
      <xs:element name="booleanOperation" type="xs:string" />
      <xs:element name="leftCondition" type="dhm:ConditionType" />
      <xs:element name="rightCondition" type="dhm:ConditionType" />
    </xs:sequence>
  </xs:choice>
</xs:complexType>

The simple condition type in one embodiment can be further defined as one of three conditions, namely a point condition, an enumeration condition, and a range condition:
<xs:complexType name="SimpleConditionType">
  <xs:choice>
    <xs:element name="PointCondition" type="dhm:PointConditionType" minOccurs="1" maxOccurs="1" />
    <xs:element name="EnumerationCondition" type="dhm:EnumerationConditionType" minOccurs="1" maxOccurs="1" />
    <xs:element name="RangeCondition" type="dhm:RangeConditionType" minOccurs="1" maxOccurs="1" />
  </xs:choice>
  <xs:attribute name="metricName" type="xs:string" />
</xs:complexType>

The point condition, enumeration condition, and range condition in one embodiment can be further defined as:

<xs:complexType name="PointConditionType">
  <xs:sequence>
    <xs:element name="compareOperation" type="xs:string" />
    <xs:element name="leftOperand" type="dhm:FunctionType" />
    <xs:element name="rightOperand" type="dhm:ValueType" />
  </xs:sequence>
</xs:complexType>

<xs:complexType name="RangeConditionType">
  <xs:sequence>
    <xs:element name="LowerBound" minOccurs="0">
      <xs:complexType>
        <xs:complexContent>
          <xs:extension base="dhm:FunctionType">
            <xs:attribute name="inclusive" type="xs:boolean" use="required" />
          </xs:extension>
        </xs:complexContent>
      </xs:complexType>
    </xs:element>
    <xs:element name="UpperBound" minOccurs="0">
      <xs:complexType>
        <xs:complexContent>
          <xs:extension base="dhm:FunctionType">
            <xs:attribute name="inclusive" type="xs:boolean" use="required" />
          </xs:extension>
        </xs:complexContent>
      </xs:complexType>
    </xs:element>
  </xs:sequence>
</xs:complexType>

<xs:complexType name="EnumerationConditionType">
  <xs:sequence>
    <xs:element name="EnumerationElement" type="dhm:FunctionType" maxOccurs="unbounded" />
  </xs:sequence>
</xs:complexType>

In one embodiment, part of the Alert Type is the action, which is defined as the Action Type:

<xs:complexType name="ActionType">
  <xs:sequence>
    <xs:element name="alertMessage" type="dhm:AlertMessageType" minOccurs="1" maxOccurs="1" />
    <xs:element name="action" type="dhm:ExternalFunctionType" minOccurs="1" maxOccurs="1" />
  </xs:sequence>
</xs:complexType>

The alert message can be defined in one embodiment as:

<xs:complexType name="AlertMessageType">
  <xs:sequence>
    <xs:element name="messageTitle" type="dhm:FunctionType" maxOccurs="1" />
    <xs:element name="messageStrFunction" type="dhm:FunctionType" maxOccurs="1" />
  </xs:sequence>
  <xs:attribute name="emailAddress" type="xs:string" />
</xs:complexType>

At operation406, the system collects the measured values of the predefined characteristics. As noted above, measured values are associated with each of the predefined characteristics in the metamodel. Some of the characteristics may not have a value. For example, some data is not partitioned; for such data, the partitioning metamodel characteristic is null, e.g., unvalued. The predefined characteristics include the set of meta-characteristics that are available, and thus can be used to describe the data. As shown inFIG.6, characteristics can include arrival frequency, partition type, expected arrival time, and schema. Other characteristics can include data structure, data size, and any other characteristic describing the data batch. In one embodiment, the metamodel is extendible using the metamodel language, and thus additional characteristics may be added to the metamodel.

At operation408, data health metrics are determined based on the measured values. The health metrics define the characteristics of the data. In one embodiment, the data health metrics are based on the range of values for the various characteristics, and are chosen so that the data batches evaluated meet those health metrics.
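A minimal Python sketch of operations406through408, collecting measured values and deriving health metrics from the observed value ranges (which operation410below then turns into checkable assertions), could look like the following; the dictionary layout and function names are illustrative assumptions:

    def derive_health_metrics(observed_batches):
        """Derive health metrics from the range of observed characteristic values."""
        metrics = {}
        for key in ("size_kb", "arrival_minutes"):
            values = [b[key] for b in observed_batches if b.get(key) is not None]
            if values:  # characteristics with no value (e.g., unpartitioned data) are skipped
                metrics[key] = (min(values), max(values))
        return metrics

    def formulate_assertions(metrics):
        """Turn health metrics into validation assertions every new batch must meet."""
        low, high = metrics["size_kb"]
        deadline = metrics["arrival_minutes"][1]  # an "arrives before 8:59 am" style assertion
        return [
            ("size_in_range", lambda b: low <= b["size_kb"] <= high),
            ("arrives_by_deadline", lambda b: b["arrival_minutes"] <= deadline),
            # Compound assertion combining two health metrics:
            ("big_enough_and_on_time",
             lambda b: b["size_kb"] > 50 and b["arrival_minutes"] <= deadline),
        ]

    history = [{"size_kb": 500, "arrival_minutes": 525},
               {"size_kb": 1024, "arrival_minutes": 539}]
    metrics = derive_health_metrics(history)
    print(metrics)  # {'size_kb': (500, 1024), 'arrival_minutes': (525, 539)}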
For example, if data batches were received between 8:45 am and 8:59 am, the health metric can be that the data batch arrives between 8:45 am and 8:59 am. As another example, if the data batches were between 500 KB and 1 MB in size, the health metric would indicate that size range as being appropriate.

At operation410, data validation assertions are formulated, based on the health metrics. The data validation assertions are the guarantees about the format and availability of the data. They reflect the consistent data health metrics that can be used to evaluate whether the data meets expectations. In one embodiment, the data validation assertions are formulated to set the range of sizes, arrival times, and other characteristics based on the health metrics of the data set. For example, if the health metric is that the data batch arrives between 8:45 am and 8:59 am, the data validation assertion can be that the data arrives before 8:59 am. Data validation assertions may include a combination of multiple data health metrics. For example, the data validation assertion may be that a data batch larger than 50 KB of data was received by 9 am. Data validation assertions for a data set may include metrics based on more than one data batch. For example, a data validation assertion may be “each data batch received today is more than 50 KB of data.” This is referred to as a compound metric.

Operations402through410set up the data validation assertions for a data set. In one embodiment, this process is initially performed when a new data set is added into the data repository. In one embodiment, the data validation assertions are refined as more data batches are received. Operations412through420describe the use of the data validation assertions.

At operation412, the data validation assertions are applied to the data set. That is, the method400tests the data validation assertions against the data batches received in the data set. None of the data batches should fail the data validation assertions. This is used to verify that the data validation assertions are accurate. Subsequently, the data validation assertions are applied to each data batch as it is received, in one embodiment. The characteristics of the data batch are compared to the data validation assertions, and if any data validation assertions are not met, they are flagged. As noted above, the method400can monitor the data repository, detect a new data batch, and apply the data validation assertions, in one embodiment. In another embodiment, the data repository can notify the data health reasoner that new data has been received, which can trigger the data health reasoner to apply the data validation assertions to the new data. In one embodiment, the data validation assertions can be applied during a processing period before the data batch is made available to users. In another embodiment, the assertions can be applied as the data streams are received into the repository. The data health reasoner does not alter the operation of the data repository, but rather utilizes the metadata to provide information about the data in the repository.

At operation414, the system determines whether a data request is received. A data request is a request to access one or more data batches of the data set, and can be originated by an application, system, or user. In one embodiment, a data request is a pull request. The data request can be in any format, such as HTTP, JSON, an application programming interface (API) formatted request, etc.
The data request can be a request to send data, or a data pull, pulling the data from the repository. When a data request is received, in one embodiment, at operation416, the data validation assertions are provided to the requesting user, device, or system. The process then returns to operation412to continue applying the data validation assertions to the new data batches in the data set.

If no data request was received at operation414, at operation418the system determines whether an export request was received by the data health reasoner. Export requests, in one embodiment, come from an administrator to propagate the data validation assertions to another system. The export request can be received via an application programming interface (API) or in another format. An export request enables the system to propagate the data validation assertions to another repository which receives the data set. This increases efficiency because it allows the first system to determine the data validation assertions, and the other systems to take advantage of this determination.

If an export request is received, at operation420the data validation assertions are exported in a transferable format, such as XML. In one embodiment, the portable format makes the transfer of data seamless. The process then returns to applying the data validation assertions to data sets. In this way, the present system determines the appropriate assertions, and applies them to data streams.
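As one way such an export could be realized, the following Python sketch serializes derived assertions into an XML document using the standard library; the flat element layout is a simplification of the full schema shown earlier, and the attribute values are illustrative:

    import xml.etree.ElementTree as ET

    def export_assertions(dataset_name, assertions):
        """Serialize data validation assertions into a transferable XML document."""
        root = ET.Element("DataHealthAssertion", dataAssertionName=dataset_name)
        ET.SubElement(root, "dataset", databaseName="warehouse",
                      tableName=dataset_name, updateFrequency="daily")
        for name, expected in assertions.items():
            ET.SubElement(root, "metric", metricName=name, expectedValue=str(expected))
        return ET.tostring(root, encoding="unicode")

    print(export_assertions("daily_events",
                            {"arrival_deadline": 539, "min_size_kb": 500}))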
The methods described above can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, one or more of the methods above are performed by the data health reasoner component150ofFIG.1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

FIG.5is a flow diagram of an example method500to edit customized data validation assertions in accordance with some embodiments of the present disclosure. At operation502, the data validation assertions for the data set are displayed. In one embodiment, the data validation assertions are accessed via the data health reasoner150.FIG.6is an example of a user interface600to enable editing of customized data validation assertions in accordance with some embodiments of the present disclosure.

At operation504, a proposed customization can be received. The proposed customization changes one or more of the data validation assertions for the data set, in one embodiment. The proposed customization may also add a new data validation assertion. This change only applies to the assertions for the particular system, device, or user making the request. As seen inFIG.6, in one embodiment, each of the data validation assertions602,610has an editing option606,614associated with it.

If no customization is received, the user can activate or deactivate alarms at operation518. As seen inFIG.6, some changes have alerts set612, while others have no alert604. The user can change the alert settings. As discussed above, when one or more of the data validation assertions are not met by a data batch, an alert is sent to those systems, applications, and/or users that have alerts set for the missed data validation assertions. In one embodiment, there is a default set of alarms for each data set. In one embodiment, the default setting is to send an alarm for any unmet data validation assertion.

Returning toFIG.5, if a proposed customization was received at operation504, at operation506, the system verifies that the proposed customization complies with the existing data validation assertions. For example, a customized data validation assertion can require that data be available by 9 am Pacific Time when the existing data validation assertion states that the data should arrive by 8:30 am, because the customization only relaxes the existing requirement. However, a data validation assertion requiring arrival by 7:30 am for data that has as its default data validation assertion an arrival by 8:30 am would not be permitted. If the user requests a customization that is outside the existing parameters, it is rejected in one embodiment. In one embodiment, the user interface may limit the options to, for example, moving the time forward.

At operation508, the customized data validation assertion is stored. At operation510, the alarms for the requesting system, application, and/or user are updated with the customized assertion. The alarms correspond to the user's customized assertions. Thus, if the user sets the customized data validation assertion to data availability at 9:00 am Pacific time, and the data misses its calculated data validation assertion of 8:30 am Pacific but arrives prior to the 9:00 am deadline, the user will not receive an alert.

At operation512, the system determines whether there are any duplicate alarms. Duplicate alarms can arise, for example, if the user had an existing alarm for the default data validation assertion which they then customized. If there are duplicate alarms, at operation514, the alarms are deduplicated. The customized alarms are given priority over default alarms, in one embodiment. The customization method500then ends at operation516.
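The compliance check and the alarm deduplication could be sketched in Python as follows; deadlines are expressed in minutes after midnight, and the data layout is an assumption made for illustration only:

    def validate_customization(default_deadline, proposed_deadline):
        """A customized deadline may relax the default (move later), never tighten it."""
        if proposed_deadline < default_deadline:
            raise ValueError("customization is stricter than the existing assertion")
        return proposed_deadline

    def deduplicate_alarms(alarms):
        """Keep one alarm per (consumer, metric); customized alarms win over defaults."""
        chosen = {}
        for alarm in alarms:
            key = (alarm["consumer"], alarm["metric"])
            if key not in chosen or alarm["customized"]:
                chosen[key] = alarm
        return list(chosen.values())

    validate_customization(510, 540)  # default 8:30 am, custom 9:00 am: permitted
    alarms = [{"consumer": "u1", "metric": "arrival", "customized": False},
              {"consumer": "u1", "metric": "arrival", "customized": True}]
    print(deduplicate_alarms(alarms))  # only the customized alarm remains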
The methods described above can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, one or more of the methods above are performed by the data health reasoner component150ofFIG.1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

FIG.7is a flow diagram of an example method700to provide data health reasoning in accordance with some embodiments of the present disclosure. At operation702, data validation assertions are created for the data set. Method400described above can be used to create the data validation assertions. At operation704, the data validation assertions are applied to each instance of the data set. As described above, the data validation assertions can be applied to each data batch when it is added to the data repository.

At operation706, a visualization of the data state is provided. In one embodiment, the visualization can provide the status of one or more data sets. In one embodiment, the visualization can include data sets selected by a user, and provide a status indicator for each of the data sets selected. The status indicator can indicate whether each data set is currently meeting its data validation assertions.

At operation710, the method700determines whether a customization has been received for one or more of the data validation assertions. If no customization was received, at operation712, the alerts are set based on the baseline data validation assertions. If customizations have been received, then at operation714alerts are set based on the customized assertions.

At operation716, the data validation assertions are applied to the new data batches as they are received. At operation718, alerts are sent when one or more assertions are not met by the data set, after a new data batch is received. The visualization is also updated based on the data validation assertions. In this configuration, the system provides two types of notice to data consumers: the visualization of the data state and individual alerts. In one embodiment, the individual alerts can be sent via a dashboard, email, text, or in another format.

The methods described above can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, one or more of the methods above are performed by the data health reasoner component150ofFIG.1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

FIG.8is a flow diagram of an example method800to evaluate proposed changes to data characteristics in accordance with some embodiments of the present disclosure. This method800in one embodiment is made available to external data providers who already have data in the data repository, with associated data validation assertions. At operation802, the data validation assertions for the data set are displayed.

At operation804, the method800determines whether a proposed change to the data set was received. The proposed change can be a change to any data characteristic, including format, frequency, time of arrival, schema, etc. If no change is proposed, at operation814, the method800in one embodiment enables review of the existing customized data validation assertions. In one embodiment, the system provides the default data validation assertions, and indicates any which have been customized by any users, applications, or devices.

If there is a proposed change received, the proposed change is compared to all data validation assertions, at operation806. The method800at operation808determines whether the proposed change violates any of the data validation assertions. If so, a warning against the proposed change is provided at operation810. In one embodiment, the violation is specified, so that the change can be adjusted to comply with the assertions. If no violation is found, the method800indicates that the proposed change is acceptable, at operation812.
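A minimal sketch of this dry-run check in Python follows; representing assertions as named predicate functions is an assumption made for illustration:

    def evaluate_proposed_change(proposed, assertions):
        """Compare proposed data characteristics against every assertion."""
        violations = []
        for name, check in assertions:
            if not check(proposed):
                violations.append(name)  # specify the violation so it can be adjusted
        return violations

    assertions = [
        ("arrives_by_9am", lambda c: c["arrival_minutes"] <= 540),
        ("schema_v2", lambda c: c["schema"] == "v2"),
    ]
    proposed = {"arrival_minutes": 600, "schema": "v2"}  # provider proposes a 10 am delivery
    print(evaluate_proposed_change(proposed, assertions))  # ['arrives_by_9am'] -> warn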
This method800enables data providers to test their proposed changes before implementing them and potentially breaking systems that rely on their data. This further provides an improvement to the functioning of the computer systems which utilize such data.

The methods described above can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, one or more of the methods above are performed by the data health reasoner component150ofFIG.1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

FIG.9illustrates an example machine of a computer system900within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system900can correspond to a component of a networked computer system (e.g., the computer system100ofFIG.1) that includes, is coupled to, or utilizes a machine to execute an operating system to perform operations corresponding to the data health reasoner150ofFIG.1. Data health reasoner150and data health repository160are shown as part of instructions912to illustrate that at times, portions of data health reasoner150and/or data health repository160are executed by processing device902. However, it is not required that data health reasoner150and/or data health repository160be included in instructions912at the same time, and any portions of data health reasoner150and/or data health repository160are stored in other components of computer system900at other times, e.g., when not executed by processing device902.

The machine can be connected (e.g., networked) to other machines in a local area network (LAN), an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine can be a personal computer (PC), a smart phone, a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
The example computer system900includes a processing device902, a main memory904(e.g., read-only memory (ROM), flash memory, dynamic random-access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a memory906(e.g., flash memory, static random-access memory (SRAM), etc.), an input/output system910, and a data storage system940, which communicate with each other via a bus930.

Data health reasoner150and data health repository160are shown as part of instructions914to illustrate that at times, portions of data health reasoner150and/or data health repository160can be stored in main memory904. However, it is not required that data health reasoner150and/or data health repository160be included in instructions914at the same time, and any portions of data health reasoner150and/or data health repository160can be stored in other components of computer system900.

Processing device902represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device902can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device902is configured to execute instructions912for performing the operations and steps discussed herein.

The computer system900can further include a network interface device908to communicate over the network920. Network interface device908can provide a two-way data communication coupling to a network. For example, network interface device908can be an integrated-services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, network interface device908can be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links can also be implemented. In any such implementation, network interface device908can send and receive electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.

The network link can provide data communication through at least one network to other data devices. For example, a network link can provide a connection to the world-wide packet data communication network commonly referred to as the “Internet,” for example through a local network to a host computer or to data equipment operated by an Internet Service Provider (ISP). Local networks and the Internet use electrical, electromagnetic, or optical signals that carry digital data to and from computer system900. Computer system900can send messages and receive data, including program code, through the network(s) and network interface device908. In the Internet example, a server can transmit a requested code for an application program through the Internet and network interface device908. The received code can be executed by processing device902as it is received, and/or stored in data storage system940, or other non-volatile storage for later execution.
Data health reasoner150and data health repository160are shown as part of instructions944to illustrate that at times, portions of data health reasoner150and/or data health repository160can be stored in data storage system940. However, it is not required that data health reasoner150and/or data health repository160be included in instructions944at the same time, and any portions of data health reasoner150and/or data health repository160can be stored in other components of computer system900.

The input/output system910can include an output device, such as a display, for example a liquid crystal display (LCD) or a touchscreen display, for displaying information to a computer user, or a speaker, a haptic device, or another form of output device. The input/output system910can include an input device, for example, alphanumeric keys and other keys configured for communicating information and command selections to processing device902. An input device can, alternatively or in addition, include a cursor control, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processing device902and for controlling cursor movement on a display. An input device can, alternatively or in addition, include a microphone, a sensor, or an array of sensors, for communicating sensed information to processing device902. Sensed information can include voice commands, audio signals, geographic location information, and/or digital imagery, for example.

The data storage system940can include a machine-readable storage medium942(also known as a computer-readable medium) on which is stored one or more sets of instructions944or software embodying any one or more of the methodologies or functions described herein. The instructions912,914,944can also reside, completely or at least partially, within the main memory904and/or within the processing device902during execution thereof by the computer system900, the main memory904and the processing device902also constituting machine-readable storage media. In one embodiment, the instructions944include instructions to implement functionality corresponding to the data health reasoner (e.g., the data health reasoner150ofFIG.1).

While the machine-readable storage medium942is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.

Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities.
Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.

The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. For example, a computer system or other data processing system, such as the computing system100, can carry out the computer-implemented method of generating data validation assertions and verifying that data batches meet these data validation assertions, in response to its processor executing a computer program (e.g., a sequence of instructions) contained in a memory or other non-transitory machine-readable storage medium. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.

The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc.
In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
DETAILED DESCRIPTION

Example methods and systems for an anomaly detection system are described. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of example embodiments. It will be evident, however, to one of ordinary skill in the art that embodiments of the invention may be practiced without these specific details.

Online transactions typically consume resources of one or more servers. Such resources include memory allocated to various services hosted by the servers. The amount of memory that each server can allocate is typically physically limited. As such, over time, the servers may need to be rebooted in order to re-allocate the memory resources. Rebooting the servers too often can increase lag experienced by the end users or services that use the servers, as data needs to be retrieved and allocated to the memory more often. However, rebooting the servers infrequently can keep stale data allocated in memory, which also slows down the end-user experience, as less memory is available to the services the users consume. Typical systems are configured to be automatically or manually rebooted at periodic intervals to address these shortcomings. However, such periodic rebooting operations may not account for certain abnormal or unexpected behavior of the server resources, such as if memory is being allocated at a greater than expected rate.

The disclosed embodiments provide systems and methods to identify resource utilization anomalies to automatically or manually trigger an anomaly detection operation, such as a rebooting operation. The disclosed embodiments are discussed in relation to structured query language (SQL) type servers and are similarly applicable to any other type of server or resource. Specifically, the disclosed embodiments access a data set that has been collected over a given time interval and compute an angle representing a data growth rate of the data set at least within a given time interval relative to a reference data point. The disclosed embodiments determine that the angle representing the data growth rate of the data set within the given time interval exceeds a specified threshold and, in response, trigger an anomaly detection operation. In this way, rather than waiting for a periodic reboot operation to be performed, the disclosed embodiments can detect abnormal behaviors of the server resources ahead of the periodic reboot operation and can generate an anomaly detection operation to address the abnormal behavior. This can allow resources of a server to be reallocated more quickly and efficiently, which can improve the quality of service that an end user experiences.

FIG.1is a block diagram showing an example system100according to various exemplary embodiments. The system100can be a server system that allocates memory resources to one or more services for consumption by one or more client devices110. The system100includes one or more client devices110, a database operator device120, an anomaly detection system150, and one or more servers140that are communicatively coupled over a network130(e.g., Internet, telephony network). As used herein, the term “client device” may refer to any machine that interfaces with a communications network (such as network130) to obtain resources from one or more servers140.
The client device110may be, but is not limited to, a mobile phone, desktop computer, laptop, portable digital assistants (PDAs), smart phones, a wearable device (e.g., a smart watch), tablets, ultrabooks, netbooks, laptops, multi-processor systems, microprocessor-based or programmable consumer electronics, game consoles, set-top boxes, or any other communication device that a user may use to access a network or a service hosted by the servers140.

The network130may include, or operate in conjunction with, an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless network, a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, a network or a portion of a network may include a wireless or cellular network and the coupling may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other type of cellular or wireless coupling. In this example, the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, fifth generation wireless (5G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard setting organizations, other long range protocols, or other data transfer technology.

The servers140host one or more services that are accessed by the client devices110. For example, the servers140can host a teleconference or video conference service that enables multiple client devices110to communicate with each other. To instantiate and enable the teleconference or video conference service, the teleconference or video conference service can be allocated memory resources of the server140. The amount of memory resources that are allocated to the teleconference or video conference service can grow over time. In some instances, when many client devices110request access to the services hosted by the servers140, the memory allocations of the server140can grow rapidly beyond an expected rate. In some instances, memory leaks of the server140grow when services hosted by the servers140incorrectly manage memory allocations and fail to release unused memory. Any of these can be classified as abnormal behaviors of the resource utilization of the servers140which can be addressed in many ways. One way to address these abnormal behaviors is to reboot the servers140before the scheduled reboot operations or to allow the servers140to allocate more memory than previously configured and/or to resolve memory leaks. Other ways to address these abnormal behaviors can be contemplated. The anomaly detection system150monitors the resource utilization of the servers140.
The anomaly detection system150analyzes the resource utilization over a given time interval to detect the abnormal behaviors. For example, the anomaly detection system150can access a data set that has been collected over a given time interval and can compute an angle representing a data growth rate of the data set at least within a given time interval relative to a reference data point. In some instances, the data growth rate represents memory leaks (e.g., memory allocated to services that have not been recently used or for which the services are no longer running). If the angle representing the data growth rate of the data set within the given time interval exceeds a specified threshold, the anomaly detection system150can trigger an anomaly detection operation. As an example, the anomaly detection system150can transmit a communication to a database operator device120that identifies the server140on which the abnormal behavior was detected. The database operator device120can present a prompt to a database operator (user) that identifies the server140on which the abnormal behavior was detected and provides an option to address the abnormal behavior. For example, the prompt provided by the database operator device120can include a reboot option that causes the server140on which the abnormal behavior was detected to be rebooted by the anomaly detection system150.

In some embodiments, the anomaly detection system150analyzes the growth rate of the resource utilization across each server140on an individual basis. Specifically, the anomaly detection system150can obtain a data point that indicates the current memory that is being utilized or allocated by a particular server140. The anomaly detection system150can also obtain a data point representing a previous utilization of the resource on the server140, such as in the past 24 hours. Namely, the anomaly detection system150can maintain a history of resource utilizations over a 24-hour period, or any other suitable time interval, in a database152. The anomaly detection system150can compute an angle representing a resource utilization growth rate (e.g., a data growth rate) by drawing a line from the previous utilization data point to the current utilization data point. The angle of the line relative to a common axis (e.g., the x-axis) is determined. The anomaly detection system150can obtain a resource utilization threshold (e.g., a maximum angle of 5 degrees) for the specific server140or for the collection of servers140from the database152. The anomaly detection system150can compare the computed angle with the threshold to determine if the resource utilization exceeds the threshold. In response to determining that the resource utilization exceeds the threshold, the anomaly detection system150triggers the anomaly detection operation.

In some embodiments, the anomaly detection system150generates a data set that includes a history of resource utilizations, such as a plurality of resource utilizations of server140over a given time interval. For example, the anomaly detection system150captures a collection of data at a capture rate. Specifically, at each point in time within a capture rate (e.g., every ten minutes), the anomaly detection system150can query a given server140to obtain the current resource utilization of the server140. The anomaly detection system150can store the current resource utilization of the given server140in the database152.
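The angle computation just described might be sketched in Python as follows; the axis units (hours on the x-axis, gigabytes on the y-axis) are an assumption, and the resulting angle depends on that scaling:

    import math

    def growth_angle(ref_point, cur_point):
        """Angle, in degrees relative to the x-axis, of the line drawn from the
        reference utilization data point to the current one."""
        (t0, u0), (t1, u1) = ref_point, cur_point
        return math.degrees(math.atan2(u1 - u0, t1 - t0))

    # Reference: 24 hours ago at 100 GB allocated; current sample: 103 GB.
    angle = growth_angle((0.0, 100.0), (24.0, 103.0))
    print(f"{angle:.2f} degrees")  # ~7.13, exceeding a 5-degree threshold
    if angle > 5.0:
        print("trigger anomaly detection operation")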
In some cases, rather than capturing the resource utilization data every ten minutes, the anomaly detection system150obtains the resource utilization of the given server140once every minute. The anomaly detection system150may obtain the resource utilization of the server140at other time intervals as appropriate (e.g., once every five minutes, once every 20 minutes, and the like), all of which are within the scope of the present disclosure. The anomaly detection system150may then collect a set of, e.g., ten data points that have been collected over the past ten minutes and aggregate them (e.g., compute a sum and/or average of the past ten data points) into a given data point. In this example, a single data point in a collection represents resource utilization of a server140across a ten-minute interval.

In some implementations, the anomaly detection system150applies a first order central gradient to the collected data (e.g., the past ten data points collected over the past 10 minutes) to smooth the data and reduce noise. Specifically, each time the memory utilization of a server is captured as part of the set of data points, its value can vary in a zig-zag pattern, except at reboot time, when the data drops significantly. In an example, the value of an event that has been captured in the past minute can be on the higher side of the zig-zag pattern, while the value that was captured three minutes prior (e.g., 3 minutes ago) can lean towards the lower side of the zig-zag pattern. Calculating the difference between an event on the low side of the zig-zag and another event on the high side can therefore be misleading (e.g., such a difference can be characterized as noise), and in some cases such a comparison can be avoided. One way to reduce the noise is to use a first order central gradient method. In the first order central gradient method, for every event (e.g., each point of the ten data points), the average utilization value of its preceding and succeeding events is selected, and that average value is used in further analysis (e.g., instead of the actual value that was observed). This process smooths the zig-zag behavior of the data and prepares the data for more accurate analysis.

Another technique that can be used for smoothing the data (in addition to or instead of the first order central gradient method) is a moving average technique. In this technique, the average of “n” prior events can be used as a substitute for the value of each event (e.g., each point of the past ten data points) to reduce the noise. This method only looks at past events and, depending on the chosen value of “n” (e.g., 2, 3, 4, 5, 10, . . . ), the average may favor the higher or lower zig-zag trended values.
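Both smoothing techniques can be sketched in a few lines of Python; whether the moving-average window includes the current sample is an assumption here, since the description refers only to “n” prior events:

    def central_gradient_smooth(values):
        """First order central gradient: replace each interior value with the
        average of its preceding and succeeding values."""
        smoothed = list(values)
        for i in range(1, len(values) - 1):
            smoothed[i] = (values[i - 1] + values[i + 1]) / 2
        return smoothed

    def moving_average(values, n):
        """Replace each value with the average over a trailing window of n samples."""
        out = []
        for i in range(len(values)):
            window = values[max(0, i - n + 1):i + 1]
            out.append(sum(window) / len(window))
        return out

    samples = [100, 104, 99, 105, 98, 106]  # zig-zag memory readings (GB)
    print(central_gradient_smooth(samples))
    print(moving_average(samples, 3))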
In some embodiments, the anomaly detection system150obtains a reference data point that corresponds to a start of a given time interval. For example, the anomaly detection system150queries the database152to obtain the smoothed and aggregated data point that represents the resource utilization of a given server140at a point in time 24 hours prior to the current time (e.g., if the time interval is 24 hours). The anomaly detection system150can then obtain a window of samples that includes a plurality of consecutive data samples collected over a period of time following the given time interval. For example, the anomaly detection system150aggregates, in ten-minute intervals, data points that represent the current resource utilization of the server140. In one case, the anomaly detection system150can obtain data every minute representing the resource utilization of the server140and then aggregates (sums and/or averages) ten data points into a single data point. In another case, the anomaly detection system150can obtain data representing current resource utilization every ten minutes and store that data as a data point representing resource utilization of the server140.

In an embodiment, to compute or determine a growth rate of resource utilization (e.g., the memory leaks), the anomaly detection system150computes a first angle representing a first growth rate of the data at the server140between the reference data point and a first data point in the window of samples. For example, the current time may be 10 AM and the anomaly detection system150retrieves the resource utilization at 10 AM on the prior day as the reference data point. The anomaly detection system150can obtain a first data point that represents the current resource utilization by aggregating ten minutes' worth of resource utilization following 10 AM. Namely, the first data point represents the resource utilization of the server from 10 AM-10:10 AM. The anomaly detection system150can compute a first angle between the 10 AM reference data point and the first data point that represents the resource utilization of the server from 10 AM-10:10 AM. The anomaly detection system150can compare the computed angle to a threshold associated with the server140. For example, the anomaly detection system150determines if the computed angle exceeds a threshold of 5 degrees.

Next, the anomaly detection system150can continue computing a plurality of additional angles for at least ten more ten-minute intervals (e.g., for a total of 100 minutes following the given time interval). For example, the anomaly detection system150obtains a second data point that represents the current resource utilization by aggregating ten minutes' worth of resource utilization following 10:10 AM. Namely, the second data point represents the resource utilization of the server from 10:10 AM-10:20 AM. The anomaly detection system150can compute a second angle between the 10 AM reference data point and the second data point that represents the resource utilization of the server from 10:10 AM-10:20 AM. The anomaly detection system150can compare the computed angle to a threshold associated with the server140. For example, the anomaly detection system150determines if the computed angle exceeds 5 degrees. If the anomaly detection system150determines that a majority of the plurality of computed angles relative to the reference point (e.g., six or seven or more) exceed the 5-degree angle, then the anomaly detection system150can trigger an anomaly detection operation.

In some cases, instead of using a common reference point (e.g., the point corresponding to 10 AM) for computing the plurality of angles for the 100-minute time window, the anomaly detection system150can continuously adjust the reference data point by 10 minutes. In this example, the current data point under consideration is always 24 hours (or a threshold time interval) away from the reference data point.
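A sketch of this windowed comparison in Python, under the same axis-scaling assumption as before and using the fixed-reference variant described first, might look like:

    import math

    def window_angles(reference_gb, samples_gb, minutes_per_sample=10, span_hours=24.0):
        """Angles from the fixed reference data point to each sample in the window."""
        angles = []
        for i, sample in enumerate(samples_gb):
            dt_hours = span_hours + (i + 1) * minutes_per_sample / 60.0
            angles.append(math.degrees(math.atan2(sample - reference_gb, dt_hours)))
        return angles

    def majority_exceeds(angles, threshold_deg=5.0):
        """Trigger when a majority of the window's angles exceed the threshold."""
        return sum(1 for a in angles if a > threshold_deg) > len(angles) / 2

    reference = 100.0                              # utilization (GB) 24 hours ago
    window = [103.0 + 0.4 * i for i in range(10)]  # ten 10-minute aggregated samples
    if majority_exceeds(window_angles(reference, window)):
        print("trigger anomaly detection operation")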
In some embodiments, the anomaly detection system150detects a negative angle in the set of computed angles. The anomaly detection system150can determine whether the negative angle is immediately adjacent to and precedes the positive growth rate. In response to determining that the computed angle representing the positive data growth rate of the data set is adjacent to the negative angle, the anomaly detection system150can determine whether the computed angle is greater than the specified threshold by more than a reboot amount. Specifically, prior to invariably triggering the anomaly detection operation whenever the growth rate exceeds a specified threshold (e.g., exceeds 5 degrees), the anomaly detection system150can determine whether a prior condition (e.g., a prior reboot operation) occurred before the growth rate reached the value that exceeded the threshold. Namely, if a server140is rebooted, it can be expected that following the reboot, the growth rate of data will be very high in the beginning and then slowly stabilize. Such a condition may be determined to be normal, and the anomaly detection system150can be configured to prevent detecting such a condition as abnormal and triggering the anomaly detection operation. To determine this condition, the anomaly detection system150can determine that a negative growth rate immediately precedes a growth rate that exceeds the growth rate threshold for a server. If so, a further comparison between the growth rate and a reboot growth rate threshold (e.g., 79 degrees) can be performed to determine whether to prevent triggering the anomaly detection operation. Namely, if the positive growth rate following the negative growth rate exceeds the reboot growth rate threshold (e.g., the positive growth rate has an angle that exceeds 79 degrees), the anomaly detection system150can prevent triggering the anomaly detection operation. In some embodiments, the anomaly detection system150counts a number of times within a specified time period that the anomaly detection operation has been triggered. For example, the anomaly detection system150tracks how often the angle (or a majority of a plurality of angles) representing the data growth rate of the data set within a given time interval exceeds a specified threshold. Each time the data growth rate exceeds the specified threshold, the anomaly detection system150can trigger the anomaly detection operation. However, if the anomaly detection system150determines that more than a specified number (e.g., more than 3) of anomaly detection operations have been triggered within a set period of time (e.g., 1 day), the anomaly detection system150may temporarily and selectively discontinue determining that the angle representing the data growth rate exceeds the specified threshold. FIG.2is an example database152that may be deployed within the system ofFIG.1, according to some embodiments. As shown, the database152includes resource utilization data210and growth rate threshold220. The resource utilization data210can store a collection of data points representing resource utilization, such as memory allocations, on each server140on a per server basis. The resource utilization data210can include aggregated data points, such as data collected every minute and averaged and/or summed into a single data point. The resource utilization data210stores seventy-two hours' worth of resource utilization, but a greater or lesser amount of data can be maintained and tracked. The growth rate threshold220can store one or more thresholds for all of the servers140or for each individual server140. The growth rate threshold220can also store the reboot growth rate threshold for all of the servers140or for each individual server140.
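The reboot suppression, the trigger budget, and the per-server thresholds stored in growth rate threshold220can be combined into a single sketch. All constants below are example values and the per-server table is hypothetical, chosen to match the worked example that follows:

```python
# Example values only; real thresholds would come from growth rate threshold 220.
GROWTH_THRESHOLD_DEG = {"server-1": 4.5, "server-2": 5.0}
REBOOT_THRESHOLD_DEG = 79.0
MAX_TRIGGERS_PER_DAY = 3

def should_trigger(server_id, angles, triggers_today):
    """Scan consecutive angles: suppress a steep climb that immediately
    follows a negative angle (treated as a post-reboot ramp), and stop
    triggering once the daily trigger budget is exhausted."""
    if triggers_today >= MAX_TRIGGERS_PER_DAY:
        return False
    threshold = GROWTH_THRESHOLD_DEG[server_id]
    previous = 0.0
    for angle in angles:
        # Reboot signature: a negative dip immediately followed by a very
        # steep rise; such a rise is treated as normal, not anomalous.
        is_reboot_ramp = previous < 0 and angle > REBOOT_THRESHOLD_DEG
        if angle > threshold and not is_reboot_ramp:
            return True
        previous = angle
    return False
```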
As an example, a first server140may be assigned and associated with a first growth rate threshold220(e.g., 4.5 degrees), and a second server140may be assigned and associated with a second growth rate threshold220(e.g., 5 degrees). In such circumstances, resource utilization of the first server may be more likely to trigger anomaly detection operations than that of the second server, because a slower growth of data on the first server relative to the second server may be characterized as exceeding the threshold that controls and triggers the anomaly detection operation. For example, if within a given time interval, resource utilization grows at the same rate on the first and second servers, corresponding to a resource utilization angle of 4.8 degrees, an anomaly detection operation will be triggered for the first server (which has the first growth rate threshold of 4.5 degrees) and not for the second server (which has the second growth rate threshold of 5 degrees). FIG.3illustrates exemplary resource utilization data300collected by the system ofFIG.1, according to some embodiments. For example, the resource utilization data300includes a plurality of points each representing current resource utilization (e.g., memory allocations) for a particular server140. In some embodiments, the anomaly detection system150obtains a reference point310corresponding to a point at a start of a time interval (e.g., a data point representing resource utilization 24 hours prior to the current time). The anomaly detection system150obtains a first data point330corresponding to the current resource utilization (e.g., a point that aggregates 10 minutes' worth of resource utilization of the server140). The anomaly detection system150computes an angle relative to a common axis320between the reference point310and the first data point330. In response to determining that the angle exceeds a specified threshold (e.g., is greater than 5 degrees), the anomaly detection system150can trigger an anomaly detection operation. The anomaly detection system150can also detect a negative data growth rate340at a particular time point. This may have occurred because of an automated or manual reboot operation performed on the server140. In such cases, the anomaly detection system150can compute an angle of the data growth rate immediately adjacent to the negative growth rate340. Namely, the anomaly detection system150can compute the angle of the growth rate350. The anomaly detection system150can determine if the angle of the growth rate350exceeds a reboot threshold (which may be greater than the specified threshold of 5 degrees set for triggering an anomaly detection operation). If the angle of the growth rate350exceeds the reboot threshold (e.g., exceeds 79 degrees), the anomaly detection system150can prevent triggering the anomaly detection operation. FIG.4illustrates an example data growth rate400that is determined by the system ofFIG.1, according to some embodiments. In some embodiments, the anomaly detection system150computes a plurality of angles410. The anomaly detection system150can trigger the anomaly detection operation if a majority or supermajority (e.g., 7 out of 10 angle comparisons) exceed the specified threshold for the server140. Specifically, the anomaly detection system150can obtain a reference point (corresponding to resource utilization at a start of a time interval) and compute multiple angles between the reference point and a collection of samples (e.g., 10 samples) obtained following the time interval.
For example, the anomaly detection system150computes a first angle between the reference point and ten minutes past the end of the time interval (e.g., if the reference point corresponds to 10 AM on a prior day, the first angle represents data growth from 10 AM on the prior day to 10:10 AM on the current day). The anomaly detection system150can compute a second angle between the reference point and a period between ten minutes past the end of the time interval and twenty minutes past the end of the time interval (e.g., if the reference point corresponds to 10 AM on a prior day, the second angle represents data growth from 10 AM on the prior day to 10:20 AM on the current day, or from 10:10 AM on the prior day to 10:20 AM on the current day). The anomaly detection system150can continue computing a plurality of angles (e.g., ten or more angles) for sequential time periods following the given time interval corresponding to the reference point. The anomaly detection system150can then determine whether a majority (e.g., seven or more) of the angles that have been computed exceed the specified threshold (e.g., whether seven or more angles are greater than 5 degrees). If so, the anomaly detection system150can trigger an anomaly detection operation. FIG.5is an example anomaly detection alert500generated by the system ofFIG.1, according to example embodiments. For example, when the anomaly detection system150triggers an anomaly detection operation, it may transmit a communication to the database operator device120to generate the anomaly detection alert500in a graphical user interface of the database operator device120. The anomaly detection alert500may be presented to a system or database administrator. The anomaly detection alert500may include information that identifies resource utilization across a collection of servers140. In response to receiving the anomaly detection operation communication from the anomaly detection system150, the database operator device120can visually highlight or visually distinguish, in the anomaly detection alert500, the information for the server140associated with the anomaly detection operation. For example, the anomaly detection system150may determine that server number2out of the servers140has a data growth rate that is growing at an angle that exceeds a specified threshold. In response, the anomaly detection system150identifies server number2to the database operator device120. The database operator device120draws a red or blue border around, or flashes, a region510of the display corresponding to server number2. The database operator device120also presents a textual alert indicating a recommended action to address the detected anomaly for server number2. The region510corresponding to server number2may include a reboot option512and a change threshold option514. In response to receiving input that selects the reboot option512, the database operator device120can send a message to the server140associated with the region510instructing the server140to perform a reboot operation. In response to receiving input that selects the change threshold option514, the database operator device120can retrieve from the database152the growth rate threshold220associated with server number2. The database operator device120presents the current growth rate threshold (e.g., 5 degrees) and allows the user to increase or decrease the currently set growth rate threshold.
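The two region510actions can be sketched as a small dispatch. The messaging call and the database update below are stand-ins (the disclosure does not specify these interfaces), and the threshold table is the hypothetical one from the earlier sketch:

```python
def send_reboot_message(server_id):
    """Stand-in for the message instructing the server to reboot."""
    print(f"reboot message sent to {server_id}")

def update_growth_rate_threshold(server_id, degrees):
    """Stand-in for persisting the new threshold to database 152."""
    GROWTH_THRESHOLD_DEG[server_id] = degrees

def on_alert_action(action, server_id, new_threshold=None):
    """Dispatch the reboot option 512 or the change threshold option 514."""
    if action == "reboot":
        send_reboot_message(server_id)
    elif action == "change_threshold":
        update_growth_rate_threshold(server_id, new_threshold)
```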
Once the user is satisfied with the newly set value, the database152can be updated to cause future anomaly detection operations to be triggered based on the newly set growth rate threshold. The user can similarly change thresholds set for other servers by selecting the corresponding change threshold option presented in a region for the other servers. Other servers can also similarly be rebooted using reboot options presented in their respective user interface regions, even though anomaly detection operations have not been triggered for the other servers. FIG.6is a flowchart illustrating example operations of the anomaly detection system in performing process600, according to example embodiments. The process600may be embodied in computer-readable instructions for execution by one or more processors such that the operations of the process600may be performed in part or in whole by the functional components of the system100; accordingly, the process600is described below by way of example with reference thereto. However, in other embodiments, at least some of the operations of the process600may be deployed on various other hardware configurations. Some or all of the operations of process600can be performed in parallel, out of order, or entirely omitted. At operation601, the anomaly detection system150accesses a data set that has been collected over a given time interval. For example, the anomaly detection system150collects data that represents memory utilization or allocation across the servers140. At operation602, the anomaly detection system150selects a reference data point from the data set. For example, the anomaly detection system150selects a data point at the start of a 24 hour time interval (e.g., a point representing memory allocations on the immediately prior day relative to the current time). At operation603, the anomaly detection system150computes an angle representing the data growth rate of the data set at least within the given time interval relative to the reference data point. At operation604, the anomaly detection system150determines that the angle representing the data growth rate of the data set within the given time interval exceeds a specified threshold. For example, the anomaly detection system150determines whether, within a 24 hour time interval, data grew at an angle of more than 5 degrees. At operation605, the anomaly detection system150triggers an anomaly detection operation. For example, the anomaly detection system150presents the anomaly detection alert500on a user interface of the database operator device120. FIG.7is a block diagram illustrating an example software architecture706, which may be used in conjunction with various hardware architectures herein described.FIG.7is a non-limiting example of a software architecture, and it will be appreciated that many other architectures may be implemented to facilitate the functionality described herein. The software architecture706may execute on hardware such as machine800ofFIG.8that includes, among other things, processors804, memory814, and input/output (I/O) components818. A representative hardware layer752is illustrated and can represent, for example, the machine800ofFIG.8. The representative hardware layer752includes a processing unit754having associated executable instructions704. Executable instructions704represent the executable instructions of the software architecture706, including implementation of the methods, components, and so forth described herein.
The hardware layer752also includes memory/storage756(memory and/or storage devices), which also has executable instructions704. The hardware layer752may also comprise other hardware758. The software architecture706may be deployed in any one or more of the components shown inFIG.1or2. The software architecture706can be utilized to detect anomalies on one or more servers when a data growth rate exceeds a specified threshold and trigger an anomaly detection operation. In the example architecture ofFIG.7, the software architecture706may be conceptualized as a stack of layers where each layer provides particular functionality. For example, the software architecture706may include layers such as an operating system702, libraries720, frameworks/middleware718, applications716, and a presentation layer714. Operationally, the applications716and/or other components within the layers may invoke API calls708through the software stack and receive messages712in response to the API calls708. The layers illustrated are representative in nature, and not all software architectures have all layers. For example, some mobile or special purpose operating systems may not provide a frameworks/middleware718, while others may provide such a layer. Other software architectures may include additional or different layers. The operating system702may manage hardware resources and provide common services. The operating system702may include, for example, a kernel722, services724, and drivers726. The kernel722may act as an abstraction layer between the hardware and the other software layers. For example, the kernel722may be responsible for memory management, processor management (e.g., scheduling), component management, networking, security settings, and so on. The services724may provide other common services for the other software layers. The drivers726are responsible for controlling or interfacing with the underlying hardware. For instance, the drivers726include display drivers, camera drivers, Bluetooth® drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), Wi-Fi® drivers, audio drivers, power management drivers, and so forth depending on the hardware configuration. The libraries720provide a common infrastructure that is used by the applications716and/or other components and/or layers. The libraries720provide functionality that allows other software components to perform tasks in an easier fashion than interfacing directly with the underlying operating system702functionality (e.g., kernel722, services724and/or drivers726). The libraries720may include system libraries744(e.g., C standard library) that may provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. In addition, the libraries720may include API libraries746such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG), graphics libraries (e.g., an OpenGL framework that may be used to render two-dimensional and three-dimensional graphic content on a display), database libraries (e.g., SQLite that may provide various relational database functions), web libraries (e.g., WebKit that may provide web browsing functionality), and the like. The libraries720may also include a wide variety of other libraries748to provide many other APIs to the applications716and other software components/devices.
The frameworks/middleware718(also sometimes referred to as middleware) provide a higher-level common infrastructure that may be used by the applications716and/or other software components/devices. For example, the frameworks/middleware718may provide various graphic user interface functions, high-level resource management, high-level location services, and so forth. The frameworks/middleware718may provide a broad spectrum of other APIs that may be utilized by the applications716and/or other software components/devices, some of which may be specific to a particular operating system702or platform. The applications716include built-in applications738and/or third-party applications740. Examples of representative built-in applications738may include, but are not limited to, a contacts application, a browser application, a book reader application, a location application, a media application, a messaging application, and/or a game application. Third-party applications740may include an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform, and may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or other mobile operating systems. The third-party applications740may invoke the API calls708provided by the mobile operating system (such as operating system702) to facilitate functionality described herein. The applications716may use built-in operating system functions (e.g., kernel722, services724, and/or drivers726), libraries720, and frameworks/middleware718to create UIs to interact with users of the system. Alternatively, or additionally, in some systems, interactions with a user may occur through a presentation layer, such as presentation layer714. In these systems, the application/component “logic” can be separated from the aspects of the application/component that interact with a user. FIG.8is a block diagram illustrating components of a machine800, according to some example embodiments, able to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically,FIG.8shows a diagrammatic representation of the machine800in the example form of a computer system, within which instructions810(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine800to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions810may be executed by the anomaly detection system150to access data collected over a given interval to detect a data growth anomaly and trigger an anomaly detection operation. As such, the instructions810may be used to implement devices or components described herein. The instructions810transform the general, non-programmed machine800into a particular machine800programmed to carry out the described and illustrated functions in the manner described. In alternative embodiments, the machine800operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine800may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. 
The machine800may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, an STB, a PDA, an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions810, sequentially or otherwise, that specify actions to be taken by machine800. Further, while only a single machine800is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions810to perform any one or more of the methodologies discussed herein. The machine800may include processors804, memory/storage806, and I/O components818, which may be configured to communicate with each other such as via a bus802. In an example embodiment, the processors804(e.g., a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a radio-frequency integrated circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor808and a processor812that may execute the instructions810. The term “processor” is intended to include multi-core processors804that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. AlthoughFIG.8shows multiple processors804, the machine800may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The memory/storage806may include a memory814, such as a main memory or other memory storage (e.g., database110), and a storage unit816, both accessible to the processors804such as via the bus802. The storage unit816and memory814store the instructions810embodying any one or more of the methodologies or functions described herein. The instructions810may also reside, completely or partially, within the memory814, within the storage unit816, within at least one of the processors804(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine800. Accordingly, the memory814, the storage unit816, and the memory of processors804are examples of machine-readable media. The I/O components818may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, and capture measurements. The specific I/O components818that are included in a particular machine800will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components818may include many other components that are not shown inFIG.8. The I/O components818are grouped according to functionality merely for simplifying the following discussion, and the grouping is in no way limiting.
In various example embodiments, the I/O components818may include output components826and input components828. The output components826may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components828may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further example embodiments, the I/O components818may include biometric components839, motion components834, environmental components836, or position components838among a wide array of other components. For example, the biometric components839may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram based identification), and the like. The motion components834may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environmental components836may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components838may include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components818may include communication components840operable to couple the machine800to a network837or devices829via coupling824and coupling822, respectively. For example, the communication components840may include a network interface component or other suitable device to interface with the network837.
In further examples, communication components840may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices829may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). Moreover, the communication components840may detect identifiers or include components operable to detect identifiers. For example, the communication components840may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components840, such as location via Internet Protocol (IP) geo-location, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth.
Glossary
“CARRIER SIGNAL” in this context refers to any intangible medium that is capable of storing, encoding, or carrying transitory or non-transitory instructions for execution by the machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of such instructions. Instructions may be transmitted or received over the network using a transitory or non-transitory transmission medium via a network interface device and using any one of a number of well-known transfer protocols. “CLIENT DEVICE” in this context refers to any machine that interfaces to a communications network to obtain resources from one or more server systems or other client devices. A client device may be, but is not limited to, a mobile phone, desktop computer, laptop, PDA, smart phone, tablet, ultra book, netbook, multi-processor system, microprocessor-based or programmable consumer electronics, game console, set-top box, or any other communication device that a user may use to access a network. “COMMUNICATIONS NETWORK” in this context refers to one or more portions of a network that may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a LAN, a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, a network or a portion of a network may include a wireless or cellular network, and the coupling may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other type of cellular or wireless coupling.
In this example, the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, Third Generation Partnership Project (3GPP) technology including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), the Long Term Evolution (LTE) standard, others defined by various standard setting organizations, other long range protocols, or other data transfer technology. “MACHINE-READABLE MEDIUM” in this context refers to a component, device, or other tangible media able to store instructions and data temporarily or permanently and may include, but is not limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, optical media, magnetic media, cache memory, other types of storage (e.g., Electrically Erasable Programmable Read-Only Memory (EEPROM)), and/or any suitable combination thereof. The term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions. The term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions (e.g., code) for execution by a machine, such that the instructions, when executed by one or more processors of the machine, cause the machine to perform any one or more of the methodologies described herein. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” excludes signals per se. “COMPONENT” in this context refers to a device, physical entity, or logic having boundaries defined by function or subroutine calls, branch points, APIs, or other technologies that provide for the partitioning or modularization of particular processing or control functions. Components may be combined via their interfaces with other components to carry out a machine process. A component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components. A “hardware component” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various example embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations as described herein. A hardware component may also be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations.
A hardware component may be a special-purpose processor, such as a Field-Programmable Gate Array (FPGA) or an ASIC. A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware component may include software executed by a general-purpose processor or other programmable processor. Once configured by such software, hardware components become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations. Accordingly, the phrase “hardware component” (or “hardware-implemented component”) should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one instance in time. For example, where a hardware component comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware components) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware component at one instance of time and to constitute a different hardware component at a different instance of time. Hardware components can provide information to, and receive information from, other hardware components. Accordingly, the described hardware components may be regarded as being communicatively coupled. Where multiple hardware components exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware components. In embodiments in which multiple hardware components are configured or instantiated at different times, communications between such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access. For example, one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein. 
As used herein, “processor-implemented component” refers to a hardware component implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented components. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an API). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the processors or processor-implemented components may be distributed across a number of geographic locations. “PROCESSOR” in this context refers to any circuit or virtual circuit (a physical circuit emulated by logic executing on an actual processor) that manipulates data values according to control signals (e.g., “commands,” “op codes,” “machine code,” etc.) and which produces corresponding output signals that are applied to operate a machine. A processor may, for example, be a CPU, a RISC processor, a CISC processor, a GPU, a DSP, an ASIC, an RFIC, or any combination thereof. A processor may further be a multi-core processor having two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. “TIMESTAMP” in this context refers to a sequence of characters or encoded information identifying when a certain event occurred, for example giving date and time of day, sometimes accurate to a small fraction of a second. Changes and modifications may be made to the disclosed embodiments without departing from the scope of the present disclosure. These and other changes or modifications are intended to be included within the scope of the present disclosure, as expressed in the following claims. The Abstract of the Disclosure is provided to comply with 37 C.F.R. § 1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may lie in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
While embodiments are described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that embodiments are not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit embodiments to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including, but not limited to.
DETAILED DESCRIPTION
Various embodiments of methods and apparatus for providing a distributed data storage service that automatically performs data transformation are described herein. According to some embodiments, a system includes a distributed data storage service. The distributed data storage service includes multiple physical storage devices configured to store multiple data objects for clients of the distributed data storage service in logical storage locations. For example, a logical storage location may be implemented on multiple ones of the physical storage devices or a single physical storage device. Also, for example, a physical storage device may implement portions of multiple logical storage locations, and different ones of the logical storage locations may be allocated to different clients of the distributed data storage service. The distributed data storage service also includes one or more computing devices configured to receive instructions specifying one or more transformations to be applied for one or more data objects stored in a particular logical storage location when data representing the one or more data objects is made available outside of the particular logical storage location. The one or more computing devices of the distributed data storage service are also configured to, in response to an event causing data representing the one or more data objects stored in the particular logical storage location to be made available outside of the particular logical storage location, automatically cause the one or more transformations to be performed prior to the data being made available outside of the particular logical storage location. The one or more transformations may be any process that takes one or more of the data sets stored in the particular storage location as an input and provides an altered version of the one or more data sets stored in the particular storage location as an output of the transformation. For example, a transformation may filter data in a data set, redact data in a data set, obfuscate information in a data set, add data to a data set, encrypt a data set, concatenate information of a data set or data sets, de-concatenate information of a data set or data sets, extract a subset of data from a data set, perform a computation based on data in a data set and return results, or involve various other pre-defined or user-defined operations.
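In code terms, a transformation of this kind can be thought of as any callable from stored data to altered data. The sketch below is a minimal illustration under that assumption; the type alias, the function names, and the toy cipher are all hypothetical, not part of the disclosed service:

```python
import re
from typing import Callable

# Any callable that takes stored data and returns an altered version of it.
Transformation = Callable[[bytes], bytes]

def redact_ssns(data: bytes) -> bytes:
    """Example transformation: redact nine-digit SSN-like patterns."""
    return re.sub(rb"\d{3}-\d{2}-\d{4}", b"[REDACTED]", data)

def toy_encrypt(data: bytes) -> bytes:
    """Placeholder for encryption with a client's key; real code would use
    an actual cryptographic library rather than this illustrative XOR."""
    return bytes(b ^ 0x5A for b in data)
```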
According to some embodiments, a method includes receiving, by a storage service storing a plurality of data sets in a plurality of storage locations, instructions specifying one or more transformations that are to be applied for one or more data sets stored in a particular storage location of the storage service when data representing the one or more data sets stored in the particular storage location is made available outside of the particular storage location. The method also includes automatically performing, by the storage service, prior to making data representing the one or more data sets available outside of the particular storage location, the one or more transformations. According to some embodiments, a non-transitory computer-readable storage medium stores program instructions that, when executed by a computing device of a storage service, cause the computing device to receive instructions specifying one or more transformations that are to be applied for one or more data sets stored in a particular storage location of the storage service when data representing the one or more data sets is made available outside of the particular storage location; and prior to making data representing the one or more data sets available outside of the particular storage location, cause the one or more transformations to be performed. Distributed storage systems often include an interface that allows users of the distributed storage system to store data in the distributed storage system and retrieve data stored in the distributed storage system. Some distributed storage systems may further store metadata along with stored data sets, such as metadata including a time-stamp indicating when a data set was last stored or accessed. However, in such distributed storage systems, if a user desires to transform data stored in the distributed storage system, the user is required to retrieve the data stored in the distributed storage system via the interface of the distributed storage system, transform the data using an outside computing resource, and then store the transformed data back in the distributed storage system. Such an approach may be time consuming and difficult to manage for users of a distributed storage system. Furthermore, such an approach may be expensive in terms of resource consumption. For example, if transformations are to be performed on a large quantity of data sets stored in a distributed storage system, considerable network resources may be consumed transmitting data sets across a network of the distributed storage system to an interface of the distributed storage system, transmitting the data sets from the interface of the distributed storage system to computing resources that perform the transformations, transmitting the transformed data sets from the computing resources back to the interface of the distributed storage system, and transmitting the transformed data sets across the network of the distributed storage system to a new storage location in the distributed storage system. Also, in such distributed storage systems, performing transformations may be logistically complicated for users of the distributed storage system and/or for an operator of the distributed storage system. 
For example, different users of a distributed storage system may choose to have transformations performed on multiple types of computing resources with different interfaces, such that a user or an operator of the distributed storage system may be required to develop customized solutions for each different user to support differences in interfaces between an interface of the computing resources performing the transformations and an interface of the distributed storage system. Also, some users of a distributed storage system may not be large enough to justify automating the performance of transformations, thus these users may manually execute transformations on data retrieved from a distributed storage system. For example, a human operator may be required to open a data set and manually remove data that is not desired to be shared. In some embodiments, a distributed data storage service with event triggered transformations may allow a user or client of the distributed data storage service to define or select one or more transformations that are to be applied to data stored in particular storage locations of the distributed data storage service. The transformations may be applied when the data is made available outside of the particular storage location. A user or client may also specify different triggering events to trigger different transformations that are to be applied to data stored in the particular storage location. For example, triggering events may include the data being made available outside of the particular storage location. For instance, if data stored in a particular storage location is to be copied to a particular destination location, a first user-defined or user-selected transformation may be applied, and if data stored in the particular storage location is to be copied to a different particular destination location, a different user-defined or user-selected transformation may be applied. In some embodiments, a distributed data storage service may manage execution of user-defined or user-selected transformations in response to defined triggering events without any user interaction subsequent to defining or selecting the transformations that are to be applied. For example, a user or client may store data in a particular storage location, and may also specify a transformation, associated with the particular storage location, that removes sensitive data from the data set any time the data set or a portion of the data set is made available outside of the particular storage location. Once the transformation is defined or selected and associated with the particular storage location, a data storage service may cause the transformation to be performed on data being made available outside of the particular storage location each time the data is made available outside of the particular storage location. Thus, the user does not need to manage transforming the data each time it is shared. Furthermore, in some embodiments, hardware resources such as field programmable gate arrays (FPGAs), reduced instruction set computer (RISC) processors, or other suitable types of processors may be distributed throughout a distributed data storage service such that respective ones of the processors are located proximate to respective sets of physical storage devices that store data sets in particular storage locations of the distributed data storage service.
Thus, transformations may be performed locally within a distributed data storage service, which may reduce network traffic as compared to transformations that are performed outside of the distributed data storage service. Also, in some embodiments, a distributed data storage service may be part of a larger provider network that offers additional services in addition to data storage services. For example, a provider network may also offer computing services in addition to data storage services. In such embodiments, a distributed data storage service may perform transformations using hardware included in the distributed data storage service or may manage coordination with another service of the provider network, such as a computing service, to automatically perform transformations on data being made available outside of a particular storage location using hardware resources of the other service of the provider network. For example, the distributed data storage service may recognize that a triggering event has taken place and may coordinate with a computing service to provision a computing resource of the computing service to execute a transformation stored by the distributed data storage service and assigned to be applied to data being made available outside of a particular storage location. The distributed data storage service may coordinate with the computing service to perform the transformation using the provisioned computing resource. The distributed data storage service may then provide a transformed version of the data stored in the particular storage location outside of the particular storage location, wherein the transformed version of the data has been transformed according to an assigned transformation for the particular storage location. In such embodiments, no user interaction may be required to execute a transformation in response to a triggering event once the transformation is assigned to be performed for data made available from a particular storage location. Also, in such embodiments, multiple services of the provider network, such as a distributed data storage service and a computing service, may be implemented on physical resources that are geographically proximate to one another, for example in a same data center. Thus, network traffic may be reduced in such embodiments as compared to transformations being managed by a user outside of a data storage service. In some embodiments, multiple transformations may be assigned to be performed for data sets stored in a particular storage location when made available outside of the particular storage location. For example, in some embodiments, different transformations may be assigned when data is being made available at different destination locations. For example, if data is moved to another storage location within the same account of a client of a distributed data storage service, a first transformation may be applied, and if data is moved to a storage location outside of the client's account, a different transformation may be applied. As discussed in more detail below, transformations to be applied to data being made available outside of a particular storage location of a distributed data storage service may be defined in many ways and may be automatically performed by a distributed data storage service in response to a variety of triggering events.
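Destination-dependent selection, such as the in-account versus out-of-account example above, reduces to a lookup keyed on where the data is going. A sketch under that assumption, reusing the hypothetical redact_ssns transformation from the earlier snippet:

```python
def add_access_note(data: bytes) -> bytes:
    """Lighter-touch transformation for in-account moves."""
    return data + b"\n[accessed in-account]"

def select_transformation(source_account: str, destination_account: str):
    """Apply one transformation for moves within the client's account and a
    stricter one when data leaves the account."""
    if source_account == destination_account:
        return add_access_note
    return redact_ssns
```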
In some embodiments, event triggered data transformations may be automatically performed by various types of distributed data storage systems. For example, an object based distributed data storage system may be configured to automatically perform event triggered data transformations when an object is made available outside of a given storage location. Also, other types of distributed data storage systems, such as block storage systems, relational database data storage systems, file-structured storage systems, and other various types of distributed data storage systems may be configured to automatically perform event triggered data transformations. In some embodiments, an event triggered data transformation may be user-defined or may be selected by a user from a set of pre-defined transformations. For example, in some embodiments, a user may submit a transformation that is to be applied to data made available from a particular storage location or may select an already defined transformation from a set of transformations. Some example transformations that may be applied include, but are not limited to:
Filtering at least some data out of data stored in a particular storage location;
Redacting at least some information from a document stored in a particular storage location;
Obfuscating at least some information in a document stored in a particular storage location;
Adding a watermark to a document stored in a particular storage location;
Adding information such as a date or time of access to a document stored in a particular storage location;
Encrypting at least some data being made available outside of a particular storage location with a client's particular encryption key;
Enforcing a data privacy policy by removing or redacting data;
Concatenating documents stored in a particular storage location;
De-concatenating data stored in a particular storage location. For example, de-concatenating data may include splitting an indexed table into separate tables. Also, in some embodiments, different transformed data sets may be made available for different destination locations. For example, if a transformation causes a table to be split, a first split portion may be made available at a first destination location and another split portion may be made available at another destination location;
Extracting portions of a video that include motion, for example from a surveillance video;
Performing a computation, aggregation, or calculation based on the data stored in the particular storage location and providing the result with the data or providing the result without providing the data stored in the particular storage location;
Performing user provided transformations; or
Performing various other types of transformations.
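As a concrete illustration, three of the transformations listed above (watermarking, adding an access date, and concatenation) might look like the following. These are minimal sketches over plain text documents, not the service's actual implementations:

```python
import datetime

def add_watermark(document: str, owner: str) -> str:
    """Prepend a watermark line identifying the document's owner."""
    return f"[watermark: property of {owner}]\n{document}"

def add_access_date(document: str) -> str:
    """Append the date and time at which the document was accessed."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    return f"{document}\n[accessed: {stamp}]"

def concatenate(documents: list[str]) -> str:
    """Concatenate several documents stored in a storage location."""
    return "\n".join(documents)
```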
An example distributed data storage service, configured to perform transformations, based on an object storage model for providing virtualized storage resources to clients as a service, such as a web service, is illustrated inFIG.1. In the illustrated model, storage service interface110is provided as a client-facing interface to storage service140. Storage service interface110may, for example, be implemented as, or alternatively may include, an application programming interface (API). According to the model presented to client105by interface110, the storage service may be organized as an arbitrary number of logical storage locations, such as buckets120aand120bthrough120n, accessible via interface110. Each bucket120may be configured to store an arbitrary number of objects130a,130b,130cthrough130n, which in turn may store data specified by client105of the storage service140. One or more clients105may submit requests106to the storage service interface to store data objects, retrieve data objects, and, as described in more detail below, assign one or more transformations to be performed when data objects are made available outside of a particular storage location. Storage service interface110may provide responses108to the requests, which may include acknowledgements and/or retrieved data, for example. Generally, in addition to storage and retrieval of data objects, the requests or commands that the storage service140may perform may include commands that cause data transformations to be performed within the storage service140, such as commands to move, copy, read, or download data stored in the storage service140, among others. In this way, the clients105are not burdened with removing the data from the storage service140, performing the transformations, and then returning the transformed data to the storage service. This configuration may save network bandwidth and processing resources for the clients105, for example. In some embodiments, storage service interface110may be configured to support interaction between the storage service140and its client(s)105according to a web services model. For example, in one embodiment, interface110may be accessible by clients as a web services endpoint having a Uniform Resource Locator (URL) to which web services calls generated by service clients may be directed for processing. Generally speaking, a web service may refer to any type of computing service that is made available to a requesting client via a request interface that includes one or more Internet-based application layer data transport protocols, such as a version of the Hypertext Transfer Protocol (HTTP) or another suitable protocol. In at least some embodiments, the object storage service140may be configured to internally replicate data objects for data redundancy and resiliency purposes. Storage service140also stores transformations150. Transformations150may be pre-defined transformations offered by storage service140for selection by users of the storage service, such as clients105. The users may select one or more of the pre-defined transformations to be applied to data stored in particular storage locations, such as particular ones of buckets120, when the data stored in the buckets, such as ones of objects130, are made available outside of the particular respective storage locations, for example buckets120. Also, in some embodiments, transformations150may include user-defined transformations submitted by users of storage service140.
The user-defined transformations may be submitted as excerpts of code that are to be executed in response to particular triggering events, such as an object130stored in a particular bucket120being made available outside of the particular bucket120. In some embodiments, a storage service140may include transformation engines implemented on hardware included in the storage service that perform transformations150in response to a triggering event. Also, in some embodiments, a storage service140may coordinate with another service, such as a computing service to perform transformations150in response to a triggering event. For example,FIG.1shows object130abeing made available outside of bucket120a. For example, object130amay be moved or copied to bucket120b. Also, client105may have assigned transformation160to be applied to objects made available outside of bucket120a. For example, transformation160may be a user-selected or a user-defined transformation included in transformations150. In response to determining that object130ais to be made available outside of bucket120a, storage service140may invoke transformation160to be performed on data representing object130aprior to moving object130aor copying object130ato bucket120b. For example, object130amay be a document including patient names, social security numbers, and test results. Transformation160may include instructions to obfuscate patient names and filter out social security numbers. Thus, a transformed object130athat has been transformed by performing transformation160may obfuscate patient names and may not include patient social security numbers, but may still include patient test results. For example, transformed object130aincludes test results for patients “a, b, and c,” whereas object130a(prior to being transformed) includes the names “Ann, Beth, and Cathy” and also includes the patients' respective social security numbers. In the example described inFIG.1, client105may select or define transformation160. The selected or user-defined transformation, for example transformation160, may be applied to objects stored in bucket120aeach time one of the objects stored in bucket120ais made available outside of bucket120a. For example, storage service140may automatically perform transformation160on any object stored in bucket120aprior to making the object available outside of bucket120a. Thus client105does not need to manually remove patient names or social security numbers when sharing data stored in bucket120aand can instead rely on storage service140to automatically transform any data stored in bucket120asuch that patient names are obfuscated and patient social security numbers are removed. This may be true even if additional records with additional patient names and social security numbers are added to or removed from object130aor bucket120a. As described above, various types of transformations may be applied on data sets being made available outside of a particular storage location and obfuscating names and removing social security numbers are given only as example transformations that may be applied from among many possible transformations that may be applied. FIG.2illustrates a more detailed view of a distributed data storage service configured to perform data transformations, according to some embodiments. For example, storage service240illustrated inFIG.2may be the same storage service as storage service140illustrated inFIG.1. 
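A minimal sketch of how such automatic invocation might behave on the copy path is shown below. It is illustrative only, and the names (assigned_transformations, copy_object, and the toy transformation) are assumptions rather than elements of FIG.1 or FIG.2; the key point is that the stored object is untouched and only the outbound copy is transformed.

# Sketch: any transformation assigned to a source bucket is applied to
# the outbound copy of an object; the stored object itself is unchanged.
assigned_transformations = {}  # bucket name -> transformation callable

def assign_transformation(bucket, transformation):
    assigned_transformations[bucket] = transformation

def copy_object(source_bucket, data, destination_bucket):
    transformation = assigned_transformations.get(source_bucket)
    if transformation is not None:
        data = transformation(data)  # transform before leaving the bucket
    destination_bucket.append(data)

bucket_b = []
assign_transformation("bucket-a", lambda doc: doc.replace("Ann", "patient-a"))
copy_object("bucket-a", "Ann,negative", bucket_b)
# bucket_b now holds the transformed copy: ["patient-a,negative"]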
In some embodiments, a user of a distributed data storage service configured to automatically perform data transformations may submit instructions specifying one or more transformations to be applied for one or more data objects stored in a particular logical storage location of the distributed data storage service. For example, client205may submit instructions202specifying transformations to be applied for objects stored in a particular bucket, such as bucket220aor220b. In some embodiments, the instructions may be submitted via an interface of the distributed data storage service, such as via storage service interface210. In response to receiving the instructions, a distributed data storage service, such as storage service240, may provide a response, such as response204, indicating that the instructions have been received and/or enacted for the particular storage location. In some embodiments, a client, such as client205, may submit instructions, such as instructions202, programmatically via an interface of a distributed data storage service, such as storage service interface210. Also, in some embodiments, a storage service, such as storage service240, may provide a graphical user interface (GUI) through which a client, such as client205, may submit instructions specifying one or more transformations to be applied for one or more data objects stored in a particular logical storage location. In some embodiments, the instructions specifying the one or more transformations to be applied for the one or more data objects stored in the particular logical storage location may include user-defined transformations, such as code excerpts included in the instructions, or may include an indication of one or more pre-defined transformations that are stored by a distributed data storage service, such as storage service240, that are to be applied to the one or more data objects stored in the particular storage location when the one or more objects are made available outside of the particular storage location. In some embodiments, user-defined and/or pre-defined transformations may be stored in a transformation depository, such as transformation depository250, of a distributed data storage service, such as storage service240. In some embodiments, multiple transformation depositories may be stored in local storage locations of a distributed data storage service, which are local to storage locations for which the transformations are to be applied. Also, in some embodiments, a transformation depository may be more centrally stored in a distributed data storage service and may store transformations that are to be applied at various storage locations within the distributed data storage service. In some embodiments, transformations may be implemented as application program interfaces (APIs) behind a storage service interface, such as storage service interface210. For example, an object being retrieved from a certain logical storage location, such as bucket220aor220b, may be passed through one or more transformations represented by one or more APIs, such as one or more of APIs252,254,256, or258, before being presented to a storage service interface, such as storage service interface210, to be delivered to a destination location outside of a storage service, such as storage service240.
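The following sketch illustrates that retrieval path, with transformation APIs modeled as plain Python callables applied in order before delivery. In an actual service the APIs would be separately deployed endpoints; all names here are hypothetical.

from datetime import date

# Sketch: pass an object through a chain of transformation APIs before
# it reaches the storage service interface for delivery.
def retrieve_through_apis(data, apis):
    for api in apis:
        data = api(data)
    return data

def watermark(doc):
    return doc + "\n-- CONFIDENTIAL --"

def datestamp(doc):
    return doc + "\naccessed: " + date.today().isoformat()

delivered = retrieve_through_apis("report body", [watermark, datestamp])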
Also, for destination locations within a distributed data storage service, such as storage service240, an object in a particular storage location, for example object222a, may be passed through one or more APIs, such as one or more of APIs252,254,256, or258, before being copied or moved to another storage location (e.g., object222b, object224b) in the distributed data storage service, such as bucket220b. In some embodiments, each storage location, such as each of buckets220, may include an access policy, such as bucket access policies226aand226b. The access policies may define which transformations are to be applied for certain objects stored in the storage location when being made available at particular destination locations outside of the respective storage location. For example, bucket access policy226amay specify that when object222ais made available (for example moved or copied) to bucket220b, the object is to pass through API254to create a transformed version of object222athat will be available at bucket220b. In some embodiments, a bucket access policy may indicate different transformations are to be applied to objects stored in a given bucket when being made available at different destination locations. In some embodiments, an access policy may be updated in response to receiving instructions specifying one or more transformations that are to be applied for one or more data sets stored in a particular storage location of the storage service. For example, bucket access policy226amay be updated in response to instructions202specifying a particular transformation corresponding to a particular one of APIs252,254,256, or258is to be applied when data stored in bucket220ais made available at particular destination locations outside of bucket220a. In some embodiments, APIs corresponding to assigned transformations may be destination specific or general. For example, an API stored in transformation depository250, such as API252, may define both a transformation and a destination location for a transformed object where the transformed version of the object is to be delivered. Also, an API stored in a transformation depository, such as API252, may be general and define a transformation, wherein the API is generic to multiple destination locations. In some embodiments, instead of a transformation being applied to data “leaving” a particular storage location, a transformation may be applied to data being added to a particular storage location. For example, bucket220bmay include bucket access policy226bthat specifies that all incoming objects must pass through API258before being added to bucket220b. As an example, API258may be a virus scan that scans incoming objects that are to be added to bucket220bfor viruses. Other examples may include a bucket access policy that requires incoming objects to be encrypted with a particular encryption key, wherein one of the APIs stored in transformation depository250is configured to encrypt data objects with the particular encryption key, which may be a customer-defined encryption key. In some embodiments, an API, such as one of the APIs stored in transformation depository250, may reject an incoming object that is to be added to a particular storage location, such as bucket220b, for example if a virus scan detects a virus. In response to an object being rejected, a storage service, such as storage service240, may issue a message, such as a message indicating that bucket220bwill not accept the object.
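A bucket access policy of the kind just described might be sketched as follows, with an ordered list of transformation APIs per destination (which also anticipates the linked transformations discussed below) and a rejecting virus scan for incoming objects. The structure and names are illustrative assumptions, not the claimed policy format.

# Sketch: a transformation depository of named APIs and a bucket access
# policy that maps destinations to ordered lists of those APIs.
def virus_scan(data):
    if "EICAR" in data:  # stand-in for a real signature check
        raise ValueError("object rejected: virus detected")
    return data

transformation_depository = {
    "api_virus_scan": virus_scan,
    "api_redact": lambda data: data.replace("secret", "[REDACTED]"),
    "api_encrypt": lambda data: data[::-1],  # toy stand-in for encryption
}

bucket_access_policy = {
    # Objects bound for bucket-b are redacted first, then encrypted.
    "outgoing": {"bucket-b": ["api_redact", "api_encrypt"]},
    # All incoming objects must pass the virus scan first.
    "incoming": ["api_virus_scan"],
}

def apply_apis(data, api_names):
    for name in api_names:
        data = transformation_depository[name](data)
    return data

outbound = apply_apis("secret report", bucket_access_policy["outgoing"]["bucket-b"])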
In some embodiments, APIs stored in a transformation depository, such as transformation depository250, may include both pre-defined APIs and user-defined APIs. For example, instructions202may include a user-defined API, such as a user-specific encryption key that is to be applied, and a pre-defined API, such as a virus scan. The user-defined API may be added to the transformation depository and instructions to invoke the user-defined API for certain triggering events may be added to an access policy for a particular storage location. In the case of a pre-defined API, instructions to invoke an API already stored in the transformation depository for certain triggering events may be added to an access policy for a particular storage location. In some embodiments, a distributed data storage service that is configured to perform event triggered transformations may support linked transformations. For example, instructions202may specify a sequence of transformations that are to be performed for data coming into or being made available from a particular storage location. For example, for incoming data a bucket access policy may specify that API258is to be invoked to perform a virus scan and subsequently API252is to be invoked to encrypt incoming data with a user-defined encryption key. In some embodiments, in addition to or in place of bucket access policies, an application program interface may be configured to accept objects leaving a particular storage location, such as one of buckets220aor220b, and may be linked to a directory of transformations to be applied to objects leaving the particular storage location that are destined for particular destination locations. For example, an API may accept object224afrom bucket220athat is destined for bucket220b. The API may be linked to a directory of transformations to be applied to objects that indicates API256is to be applied to all objects from bucket220adestined for bucket220b. The API may then cause the object to pass through API256, as indicated by the directory of transformations, before the object is made available at bucket220b. In some embodiments, various other combinations of access policies, APIs, etc. may be used by a distributed data storage service to automatically perform transformations on data sets in response to triggering events. FIG.3illustrates a provider network including a distributed data storage service configured to perform data transformations, and that includes additional services, according to some embodiments.FIG.3also illustrates a more detailed view of example destinations for transformed data, according to some embodiments. For example, storage service340illustrated inFIG.3may be the same storage service as storage service140illustrated inFIG.1or storage service240illustrated inFIG.2. In some embodiments, a distributed data storage service, such as any of the distributed data storage services described herein, may be included in a provider network of a service provider. The provider network may provide one or more other services to clients of the service provider network, such as compute services, networking services, etc., in addition to providing storage services. In some embodiments, a client of a distributed data storage service, such as storage service340, may submit instructions specifying one or more transformations to be performed for the client's data, such as instructions specifying one or more of the transformations stored in transformation depository250described inFIG.2.
The transformation may be applied to the client's data in response to certain triggering events. For example, a client or user may submit instructions specifying that objects (e.g., object330aand/or330b) made available from bucket320ato destinations (e.g. bucket320b) within the client's account with storage service340are to have a particular transformation applied when made available outside of bucket320a. For example, a client may specify that transformation A352is to be applied when data from bucket320ais made available within client account342. A client may also submit instructions specifying that objects made available from bucket320ato destinations within storage service340but outside the client's account are to have a different particular transformation applied when made available outside the client's account. For example, a client may specify that transformation B354is to be applied when data from bucket320ais made available within storage service340but outside client account342, for example at bucket325of Client B's account344. As another example, a client may submit instructions specifying that objects made available from bucket320ato destinations such as other services within provider network300are to have one or more other particular transformations applied when made available outside of bucket320a. For example, a client may specify that transformation C356is to be applied when data from bucket320ais made available to additional service360. As yet another example, a client may submit instructions specifying that objects made available from bucket320ato destinations outside of provider network300are to have one or more particular transformations applied when made available outside of bucket320a. For example, a client may specify that transformation D358is to be applied when data from bucket320ais made available, e.g., via intermediate network380, to data consumers370outside of provider network300. In some embodiments, a distributed data storage service, such as storage service340, may generate a pre-signed URL that can be provided to access data stored in the distributed data storage service. For example, a client of a distributed data storage service may provide a pre-signed URL generated by a distributed data storage service to a third party data consumer to allow the data consumer to access a view of the client's data. However, the client may not desire for the third party data consumer to be able to view raw data or all of the data stored for the client in a particular storage location. Instead, the client may desire data stored in the particular storage location be transformed in any of a number of ways before being made available to the third party data consumer. In such a situation, the client may submit instructions specifying that objects made available from a particular bucket, such as bucket320a, via a particular URL, such as a pre-signed URL used by data consumers370, are to have one or more particular transformations applied before being made available outside of bucket320a. For example, a client may specify that transformation D358is to be applied when data from bucket320ais made available to third party data consumers370outside of provider network300via a particular URL. In some embodiments, different transformations may be specified for different URLs. 
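One way to sketch the destination-dependent selection described above (the transformation A through D scheme) is a simple dispatch on destination attributes. The attribute names and categories below are hypothetical assumptions for illustration.

# Sketch: choose a transformation based on where the data is headed.
def select_transformation(destination):
    if destination.get("account") == "client-account":
        return "transformation-A"  # within the client's own account
    if destination.get("within_storage_service"):
        return "transformation-B"  # same service, different account
    if destination.get("within_provider_network"):
        return "transformation-C"  # another service in the provider network
    return "transformation-D"      # outside the provider network, e.g. via a URL

assert select_transformation({"account": "client-account"}) == "transformation-A"
assert select_transformation({}) == "transformation-D"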
In some embodiments, in which a transformation is associated with a particular URL, the transformation may be considered “ephemeral,” meaning that the transformation is performed on a stream of data as the data is being read via the URL. However, the underlying data stored in the particular storage location, such as bucket320a, may not be transformed. In some embodiments, for data that is read multiple times via a URL, a transformed version of the data may be cached for a threshold amount of time to improve efficiency of performing the transformation. FIGS.4A-4Cillustrate an example hardware implementation of a distributed data storage service configured to perform data transformations, according to some embodiments. Any of the distributed data storage services described herein may be implemented on a hardware arrangement as described inFIGS.4A-4Cor may be implemented on other suitable hardware arrangements. In the illustrated embodiment shown inFIG.4A, a data center400is shown including two areas410a-b. Each of areas410a-bincludes a respective coordinator instance420a-b. Areas410a-bmay also include various combinations of storage nodes460and keymap instances440, as well as other components. For example, area410aincludes four storage nodes460, and area410bincludes three storage nodes460and a keymap instance440. In one embodiment, each of areas410a-bmay be considered a locus of independent or weakly correlated failure. That is, the probability of any given area410experiencing a failure may be generally independent from or uncorrelated with the probability of failure of any other given area410, or the correlation of failure probability may be less than a threshold amount. Areas410may include additional levels of hierarchy (not shown). For example, in one embodiment areas410may be subdivided into racks, which may be further subdivided into individual nodes, such as storage nodes460, although any suitable area organization may be employed. Generally speaking, areas410may include computing resources sufficient to implement the storage service system components deployed within the area. For example, each storage node460may be implemented as an autonomous computer system that may include a variety of hardware and software components. Similarly, each keymap instance440may be implemented via a number of computer systems. In addition to computing resources sufficient to implement the storage service system components deployed within the area, areas410may also include computing resources that implement one or more transformation engines. For example, as shown inFIG.4B, a coordinator node, such as one of coordinator nodes420may further include computing resources that implement a transformation engine, such as transformation engine424along with other computing resources that implement a coordination controller422. In some embodiments, a transformation engine may be implemented on a low-cost commodity processor included in a coordinator node to perform transformations. For example, in some embodiments, additional FPGA processors or ARM processors may be included in a coordinator node in addition to computing resources that implement the storage service, such as computing resources that implement coordination controller422, wherein the additional FPGA or ARM processors are configured to perform transformations.
In some embodiments, computing resources included in an area, such as one of areas410aor410bmay have excess capacity that is beyond a capacity sufficient to merely implement a storage service. In some embodiments, the excess capacity may be used to perform transformations. For example, in some embodiments, instead of including additional FPGA or ARM processors in a coordinator, such as coordinator420, to perform transformations, processors that implement a coordination controller, such as coordination controller422, may be sized such that the processors also have capacity to implement a transformation engine, such as transformation engine424. Additionally, different storage service system components may communicate according to any suitable type of communication protocol. For example, where certain components ofFIGS.1-3are implemented as discrete applications or executable processes, they may communicate with one another using standard interprocess communication techniques that may be provided by an operating system or platform (e.g., remote procedure calls, queues, mailboxes, sockets, etc.), or by using standard or proprietary platform-independent communication protocols. Such protocols may include stateful or stateless protocols that may support arbitrary levels of handshaking/acknowledgement, error detection and correction, or other communication features as may be required or desired for the communicating components. For example, in one distributed data storage service embodiment, a substantial degree of inter-component communication may be implemented using a suitable Internet transport layer protocol, such as a version of Transmission Control Protocol (TCP), User Datagram Protocol (UDP) or a similar standard or proprietary transport protocol. However, it is also contemplated that communications among storage service system components may be implemented using protocols at higher layers of protocol abstraction. FIG.4Cillustrates a storage node of a storage system of a distributed data storage service that comprises one or more transformation engines, according to some embodiments. In an example hardware implementation, such as the hardware implementation described inFIGS.4A-4B, storage nodes, such as storage nodes460, may generally operate to provide storage for the various objects, e.g. objects130,230,330, managed by the distributed data storage service, e.g. storage service140,240,340as described inFIGS.1-3. One exemplary embodiment of a storage node460is shown inFIG.4C. In the illustrated embodiment, storage node460includes a storage node management (SNM) controller461configured to interface with a transformation engine463and a logical file input/output (I/O) manager465. Manager465is configured to interface with a file system467, which is in turn configured to manage one or more storage devices469. In various embodiments, any of SNM controller461, transformation engine463, logical file I/O manager465or file system467may be implemented as instructions that may be stored on a computer-accessible medium and executable by a computer to perform the functions described below. Alternatively, any of these components may be implemented by dedicated hardware circuits or devices. In some embodiments, transformation engines may be included in storage nodes, as shown inFIG.4C, may be included in coordinators, such as coordinator420as shown inFIG.4B, or may be included in both.
In some embodiments, a storage node may include a transformation engine, such as transformation engine463, at various levels within the storage node. For example, in some embodiments a transformation engine, such as transformation engine463, may be included at a level such that the transformation engine interacts with logical file input/output (I/O) manager465, or may be included at another level such that the transformation engine interacts with file system467. In some embodiments, transformation engines may be included at other levels in place of or in addition to a transformation engine, such as transformation engine463as pictured inFIG.4C, that is included at a level such that the transformation engine interacts with storage node management controller461. In some embodiments, file system467or logical file input/output (I/O) manager465may be omitted. For example, storage node controller461may interact with storage devices469directly or may interact with file system467directly. In one embodiment, SNM controller461may be configured to provide an object storage API to a client of node460as well as to coordinate the activities of other components of node460to fulfill actions according to the API. For example, a coordinator420may be configured to store and retrieve objects to and from a given node460via the API presented by SNM controller461. While API management is described herein as a feature of SNM controller461, it is contemplated that in some embodiments, the API processing functions of node460may be implemented in a module or component distinct from SNM controller461. The object storage API may support object put, get and release operations. In one such embodiment, an object put operation, which may also be generically referred to as a store operation or a write operation, may specify the data and/or metadata of an object as an argument or parameter of the operation. Upon completion on a given node460, a put operation may return to the requesting client a locator, also referred to herein as an object key, which may be included in a keymap, corresponding to the stored object, which may uniquely identify the object instance on the given node460relative to all other objects stored throughout the storage service system. Conversely, an object get operation, which may also be generically referred to as a read or retrieval operation, may specify a locator of an object, such as an object key of a key map, as a parameter. Upon completion, a get operation may return to the requesting client the object data and/or metadata corresponding to the specified locator. In some embodiments, as part of performing an object get operation, a storage node controller, such as SNM controller461, may ensure that one or more transformations are performed on the object prior to returning the object. The one or more transformations may be applied in accordance with a bucket access policy for a particular bucket in which the object is stored and may be performed by a transformation engine, such as transformation engine463. In the illustrated embodiment, logical file I/O manager465(or, simply, manager465) may be configured to virtualize underlying device or file system characteristics in order to present to SNM controller461and transformation engine463one or more logically contiguous storage spaces in which objects may reside.
For example, a given object may be located within a logical storage space according to its offset within the storage space and its extent from that offset (e.g., in terms of the object size, including data and metadata). By providing such a logical storage space, manager465may present a uniform view of underlying storage to SNM controller461regardless of the implementation details of such underlying storage. In some embodiments, manager465may be configured to execute on multiple different execution platforms including different types of hardware and software. In some such embodiments, one or more additional layers of abstraction may exist between the logical object storage space presented by manager465to SNM controller461and its clients. For example, in the illustrated embodiment, manager465may be configured to implement the logical object storage space as one or more physical files managed by file system467. Generally speaking, file system467may be configured to organize various types of physical storage devices469into logical storage devices that may store data in logical units referred to herein as physical files. Logical storage devices managed by file system467may be hierarchical in nature. For example, file system467may support a hierarchy of directories or folders that may be navigated to store and access physical files. Generally speaking, file system467may be configured to track and manage the relationship between a given physical file and the locations of storage devices469where corresponding data and/or metadata of the physical file are stored. Thus, in one embodiment, manager465may manage the mapping of the logical object storage space to one or more physical files allocated by file system467. In turn, file system467may manage the mapping of these physical files to addressable locations of storage devices469. File system467may generally be integrated within an operating system, although any given operating system may support a variety of different file systems467that offer different features for management of underlying devices469. For example, various versions of the Microsoft Windows® operating system support file systems such as the NT file system (NTFS) as well as the FAT32 (File Allocation Table-32) and FAT16 file systems. Various versions of the Linux and Unix operating systems may support file systems such as the ext/ext2 file systems, the Network File System (NFS), the Reiser File System (ReiserFS), the Fast File System (FFS), and numerous others. Some third-party software vendors may offer proprietary file systems for integration with various computing platforms, such as the VERITAS® File System (VxFS), for example. Different file systems may offer support for various features for managing underlying storage devices469. For example, some file systems467may offer support for implementing device mirroring, striping, snapshotting or other types of virtualization features. Generally speaking, storage devices469may include any suitable types of storage devices that may be supported by file system467and/or manager465. Storage devices469may commonly include hard disk drive devices, such as Small Computer System Interface (SCSI) devices or AT Attachment Programming Interface (ATAPI) devices (which may also be known as Integrated Drive Electronics (IDE) devices).
However, storage devices469may encompass any type of mass storage device including magnetic- or optical-medium-based devices, solid-state mass storage devices (e.g., nonvolatile- or “Flash”-memory-based devices), magnetic tape, etc. Further, storage devices469may be supported through any suitable interface type in addition to those mentioned above, such as interfaces compliant with a version of the Universal Serial Bus or IEEE 1394/Firewire® standards.
Example Provider Network Environment
FIG.5is a block diagram of an example provider network that provides a distributed data storage service, a hardware virtualization service, and one or more additional services to clients, according to at least some embodiments. Hardware virtualization service520provides multiple computation resources524(e.g., VMs) to clients. The computation resources524may, for example, be rented or leased to clients of the provider network500(e.g., to a client that implements client network550). Each computation resource524may be provided with one or more private IP addresses. Provider network500may be configured to route packets from the private IP addresses of the computation resources524to public Internet destinations, and from public Internet sources to the computation resources524. Provider network500may provide a client network550, for example coupled to intermediate network540via local network556, the ability to implement virtual computing systems592and/or virtualized storage598via hardware virtualization service520coupled to intermediate network540and to provider network500. In some embodiments, hardware virtualization service520may provide one or more APIs502, for example a web services interface, via which a client network550may access functionality provided by the hardware virtualization service520. Also, a distributed data storage service included in the provider network500may include one or more additional storage service interfaces, or may use a shared interface such as one or more of APIs502. In at least some embodiments, at the provider network500, each virtual computing system592at client network550may correspond to a computation resource524that is leased, rented, or otherwise provided to client network550. From an instance of a virtual computing system592and/or another client device590, the client may access the functionality of distributed data storage service510, for example via one or more APIs502or an interface of the distributed data storage service510, to access data from and store data to, e.g., storage518of a virtualized data store516provided by the provider network500. While not shown inFIG.5, the virtualization service(s) may also be accessed from resource instances within the provider network500via API(s)502. For example, a client, appliance service provider, or other entity may access a virtualization service from within a respective private network on the provider network500via an API502to request allocation of one or more resource instances within the private network or within another private network.
Example Methods of Implementing Transformations in a Storage Service
FIG.6is a flow diagram for implementing event triggered data transformations in a distributed data storage service, according to some embodiments.
At600a distributed data storage service, such as any of the distributed data storage services described inFIGS.1-5, receives instructions specifying one or more transformations that are to be applied to a data set, for example a data object, stored in a particular logical storage location, such as a bucket, when the data set, a portion of the data set, or a representation of the data set, is made available outside of the particular storage location. The instructions may be received via an interface of the distributed data storage service, such as a web interface, an API, or other type of interface. In some embodiments, a graphical user interface may be provided to a user of a distributed data storage service to allow a user to submit instructions specifying one or more transformations that are to be applied to one or more data sets. In some embodiments, a user may select one or more transformations from a set of pre-defined transformations offered by the distributed data storage service or may provide a user-defined transformation. In some embodiments, transformations may be stored in a transformation directory and user-defined transformations may be added to a transformation directory when received with the instructions. In some embodiments, user-defined transformations may be stored in a separate directory from a directory that stores pre-defined transformations. In some embodiments, the instructions may further specify one or more triggering events for which the specified transformation is to be applied. For example, the instructions may specify that when a data set is made available outside of a particular storage location or made available at a particular destination location one or more specified transformations are to be invoked prior to making the data set available outside of the particular storage location or at the particular destination location. Also, in some embodiments, the instructions may specify one or more transformations that are to be applied prior to data being added to a particular storage location. In some embodiments, the instructions may specify one or more classes of data objects for which the transformations are to be performed, regardless of whether or not the data sets are currently stored in the distributed data storage service. For example, instructions may specify that all documents including social security numbers are to have a transformation applied that removes the social security numbers from the documents. The transformation may be applied for any documents with social security numbers currently stored in the distributed storage service when the documents are made available outside of a particular storage location and/or the transformations may be applied to any documents that may be added to the distributed storage service in the future when the added documents are made available outside of a particular storage location. In some embodiments, a class of data objects may be defined by one or more characteristics of the data objects, such as contents of the data objects, an author of the data object, a creation date associated with the data object, a modification date associated with the data object, various other types of metadata associated with the data object, and the like. At602, it is determined that a triggering event will make a data set available outside of a particular storage location. 
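For illustration, the instructions received at 600 might resemble the following payload, assuming a hypothetical JSON-style interface. The field names are assumptions, but the payload captures the elements described above: the transformation, the triggering events, and a class of objects defined by a content characteristic.

# Hypothetical instruction payload submitted at step 600.
instructions = {
    "storage_location": "bucket-a",
    "transformations": ["remove_social_security_numbers"],
    "triggering_events": ["copy", "move", "presigned_url_read"],
    # Class-based rule: apply to any document containing an SSN-like
    # pattern, whether stored now or added later.
    "object_class": {"content_matches": r"\b\d{3}-\d{2}-\d{4}\b"},
}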
For example, a triggering event may be moving a data set, copying a data set, making a data set available to be accessed via a URL, reading a data set, making a data set available for download, etc. In some embodiments, different transformations may be assigned to be invoked for different types of triggering events, for example a first transformation may be invoked for a data set that is to be made available via a URL and another transformation may be invoked if the data set is copied. Also, different transformations may be invoked depending on a destination location of a data set resulting from a triggering event. For example, different transformations may be invoked if a data set is to be made available within a client's account within a distributed data storage service than are invoked if the data set is to be made available outside of the client's account. Triggering events may be specified by clients of a distributed data storage system via instructions as described at600. At604, in response to determining a triggering event will make the data set available outside of the particular storage location at which the data set is stored, the distributed data storage system causes one or more assigned transformations to be performed on the data set prior to the data set being made available outside of the particular storage location at a destination location. The transformations may be performed by hardware included in the distributed data storage service or may be performed by hardware outside of the distributed data storage service at the direction of the distributed data storage service. For example, from the perspective of a client of the distributed data storage service, the transformations may be automatically performed without intervention from the client once the transformations are assigned via the instructions as described at600. Also, a data consumer of the transformed data may be aware that the data is being transformed prior to the data consumer receiving the transformed data. At606, a transformed version of the data set is made available at a destination location. As described inFIG.2, transformed data may be made available at various destination locations and in some embodiments, transformations may be assigned based on destination location. FIG.7is a flow diagram for implementing a feature for sharing transformed data using a pre-signed URL that provides transformed data from a particular storage location, according to some embodiments. At700a request is received to establish a pre-signed URL for viewing data of a client of a distributed data storage service, such as data transformed by one or more transformations. At702, a pre-signed URL is established and a transformation is assigned to the pre-signed URL. The transformation may be indicated by the client of the distributed data storage service in the request to establish the pre-signed URL as described at700or may be specified in a separate set of instructions specifying one or more pre-defined or user-defined transformations that are to be applied to data accessed via the pre-signed URL. In some embodiments, data stored in more than one logical storage location may be accessed via the pre-signed URL and different transformations may be assigned to be applied to data sets stored in different logical storage locations that are accessed via the pre-signed URL. At704, a request to read or download data from a particular storage location associated with the pre-signed URL is received.
At706, one or more assigned transformations are performed on a data stream from the particular storage location made available via the pre-signed URL. From the perspective of a data consumer receiving data via the pre-signed URL, the transformations may automatically take place without an indication that the data being read via the pre-signed URL is being transformed. At708the transformed data that has passed through the assigned one or more transformations is made available to a data consumer via the pre-signed URL. Also, from the perspective of a client of the distributed data storage service, transformations of data provided via a pre-signed URL may be performed automatically by a distributed data storage service without any interaction from the client subsequent to assigning a given one or more transformations to a pre-signed URL.
Illustrative Computer System
In at least some embodiments, a storage server, processing server, or other computer resource that implements a portion or all of the methods and apparatus described herein may include a computer system that includes or is configured to access one or more computer-accessible media, such as computer system800illustrated inFIG.8. In the illustrated embodiment, computer system800includes one or more processors810coupled to a system memory820via an input/output (I/O) interface830. Computer system800further includes a network interface840coupled to I/O interface830. In various embodiments, computer system800may be a uniprocessor system including one processor810, or a multiprocessor system including several processors810(e.g., two, four, eight, or another suitable number). Processors810may be any suitable processors capable of executing instructions. For example, in various embodiments, processors810may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors810may commonly, but not necessarily, implement the same ISA. System memory820may be configured to store instructions and data accessible by processor(s)810. In various embodiments, system memory820may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory. In the illustrated embodiment, program instructions and data implementing one or more desired functions, such as those methods, techniques, and data described above for the methods and apparatus described herein, are shown stored within system memory820as code825and data826. In one embodiment, I/O interface830may be configured to coordinate I/O traffic between processor810, system memory820, and any peripheral devices in the device, including network interface840or other peripheral interfaces. In some embodiments, I/O interface830may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory820) into a format suitable for use by another component (e.g., processor810). In some embodiments, I/O interface830may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example.
In some embodiments, the function of I/O interface830may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface830, such as an interface to system memory820, may be incorporated directly into processor810. Network interface840may be configured to allow data to be exchanged between computer system800and other devices860attached to a network or networks850, such as other computer systems or devices as illustrated inFIGS.1through7, for example. In various embodiments, network interface840may support communication via any suitable wired or wireless general data networks, such as types of Ethernet network, for example. Additionally, network interface840may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol. In some embodiments, system memory820may be one embodiment of a computer-accessible medium configured to store program instructions and data as described above forFIGS.1through7for implementing embodiments of methods and apparatus as described herein. However, in other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media. Generally speaking, a computer-accessible medium may include non-transitory storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD coupled to computer system800via I/O interface830. A non-transitory computer-accessible storage medium may also include any volatile or non-volatile media such as RAM (e.g. SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computer system800as system memory820or another type of memory. Further, a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface840. Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Generally speaking, a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM, volatile or non-volatile media such as RAM (e.g. SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link. The various methods as illustrated in the figures and described herein represent exemplary embodiments of methods. The methods may be implemented in software, hardware, or a combination thereof. The order of the methods may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure. It is intended to embrace all such modifications and changes and, accordingly, the above description is to be regarded in an illustrative rather than a restrictive sense.
11860856
DETAILED DESCRIPTION
It will be readily understood that the instant components, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of at least one of a method, apparatus, non-transitory computer readable medium and system, as represented in the attached figures, is not intended to limit the scope of the application as claimed but is merely representative of selected embodiments. The instant features, structures, or characteristics as described throughout this specification may be combined or removed in any suitable manner in one or more embodiments. For example, the usage of the phrases “example embodiments”, “some embodiments”, or other similar language, throughout this specification refers to the fact that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment. Thus, appearances of the phrases “example embodiments”, “in some embodiments”, “in other embodiments”, or other similar language, throughout this specification do not necessarily all refer to the same group of embodiments, and the described features, structures, or characteristics may be combined or removed in any suitable manner in one or more embodiments. Further, in the diagrams, any connection between elements can permit one-way and/or two-way communication even if the depicted connection is a one-way or two-way arrow. Also, any device depicted in the drawings can be a different device. For example, if a mobile device is shown sending information, a wired device could also be used to send the information. In addition, while the term “message” may have been used in the description of embodiments, other types of network data, such as a packet, frame, datagram, etc., may also be used. Furthermore, while certain types of messages and signaling may be depicted in exemplary embodiments, they are not limited to a certain type of message and signaling. In one embodiment, the application utilizes a decentralized database (such as a blockchain) that is a distributed storage system, which includes multiple nodes that communicate with each other. The decentralized database includes an append-only immutable data structure resembling a distributed ledger capable of maintaining records between mutually untrusted parties. The untrusted parties are referred to herein as peers or peer nodes. Each peer maintains a copy of the database records and no single peer can modify the database records without a consensus being reached among the distributed peers. For example, the peers may execute a consensus protocol to validate blockchain storage transactions, group the storage transactions into blocks, and build a hash chain over the blocks. This process forms the ledger by ordering the storage transactions, as is necessary, for consistency. In various embodiments, a permissioned and/or a permissionless blockchain can be used. In a public or permissionless blockchain, anyone can participate without a specific identity. Public blockchains can involve native crypto-currency and use consensus based on various protocols such as Proof of Work (PoW). Conversely, a permissioned blockchain database provides secure interactions among a group of entities which share a common goal but which do not fully trust one another, such as businesses that exchange funds, goods, information, and the like.
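As a minimal illustration of the hash chain mentioned above, the following Python sketch links each block header to the previous header's hash, so altering any earlier transaction breaks every later link. The field names and the use of SHA-256 are assumptions made for illustration only.

import hashlib
import json

def sha256_hex(obj):
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    prev = sha256_hex(chain[-1]["header"]) if chain else "0" * 64
    header = {
        "prev_header_hash": prev,             # link to the prior block's header
        "tx_hash": sha256_hex(transactions),  # commitment to this block's transactions
    }
    chain.append({"header": header, "transactions": transactions})

chain = []
append_block(chain, [{"put": "key1", "value": "v1"}])
append_block(chain, [{"put": "key2", "value": "v2"}])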
This application can utilize a blockchain that operates arbitrary, programmable logic, tailored to a decentralized storage scheme and referred to as “smart contracts” or “chaincodes.” In some cases, specialized chaincodes may exist for management functions and parameters which are referred to as system chaincode. The application can further utilize smart contracts that are trusted distributed applications which leverage tamper-proof properties of the blockchain database and an underlying agreement between nodes, which is referred to as an endorsement or endorsement policy. Blockchain transactions associated with this application can be “endorsed” before being committed to the blockchain while transactions, which are not endorsed, are disregarded. An endorsement policy allows chaincode to specify endorsers for a transaction in the form of a set of peer nodes that are necessary for endorsement. When a client sends the transaction to the peers specified in the endorsement policy, the transaction is executed to validate the transaction. After validation, the transactions enter an ordering phase in which a consensus protocol is used to produce an ordered sequence of endorsed transactions grouped into blocks. This application can utilize nodes that are the communication entities of the blockchain system. A “node” may perform a logical function in the sense that multiple nodes of different types can run on the same physical server. Nodes are grouped in trust domains and are associated with logical entities that control them in various ways. Nodes may include different types, such as a client or submitting-client node which submits a transaction-invocation to an endorser (e.g., peer), and broadcasts transaction-proposals to an ordering service (e.g., ordering node). Another type of node is a peer node which can receive client submitted transactions, commit the transactions and maintain a state and a copy of the ledger of blockchain transactions. Peers can also have the role of an endorser. An ordering-service-node or orderer is a node running the communication service for all nodes, and which implements a delivery guarantee, such as a broadcast to each of the peer nodes in the system when committing transactions and modifying a world state of the blockchain. The world state can constitute the initial blockchain transaction which normally includes control and setup information. This application can utilize a ledger that is a sequenced, tamper-resistant record of all state transitions of a blockchain. State transitions may result from chaincode invocations (i.e., transactions) submitted by participating parties (e.g., client nodes, ordering nodes, endorser nodes, peer nodes, etc.). Each participating party (such as a peer node) can maintain a copy of the ledger. A transaction may result in a set of asset key-value pairs being committed to the ledger as one or more operands, such as creates, updates, deletes, and the like. The ledger includes a blockchain (also referred to as a chain) which is used to store an immutable, sequenced record in blocks. The ledger also includes a state database which maintains a current state of the blockchain. This application can utilize a chain that is a transaction log which is structured as hash-linked blocks, and each block contains a sequence of N transactions where N is equal to or greater than one. The block header includes a hash of the block's transactions, as well as a hash of the prior block's header. 
In this way, all transactions on the ledger may be sequenced and cryptographically linked together. Accordingly, it is not possible to tamper with the ledger data without breaking the hash links. A hash of a most recently added blockchain block represents every transaction on the chain that has come before it, making it possible to ensure that all peer nodes are in a consistent and trusted state. The chain may be stored on a peer node file system (i.e., local, attached storage, cloud, etc.), efficiently supporting the append-only nature of the blockchain workload. The current state of the immutable ledger represents the latest values for all keys that are included in the chain transaction log. Since the current state represents the latest key values known to a channel, it is sometimes referred to as a world state. Chaincode invocations execute transactions against the current state data of the ledger. To make these chaincode interactions efficient, the latest values of the keys may be stored in a state database. The state database may be simply an indexed view into the chain's transaction log and can therefore be regenerated from the chain at any time. The state database may automatically be recovered (or generated if needed) upon peer node startup, and before transactions are accepted. Example embodiments provide methods, systems, components, non-transitory computer-readable media, devices, and/or networks which manage distributed ledger storage space. In accordance with one or more embodiments, information is received for storage in a decentralized database (e.g., for storage in a new block of a blockchain). A search is performed to determine whether the information includes a feature that has already been stored in a prior block of the blockchain. If the search indicates that the feature has been stored in a prior block, a replacement operation is performed that includes replacing the feature with an identifier that references or points to the prior block in the blockchain. The received information is then stored in the new block with the identifier instead of the feature. Because the identifier points to a prior block (or information in the prior block where the feature may be found) in the blockchain, the feature may be retrieved from the prior block using the identifier when the new block is subsequently queried. In one embodiment, the replacement operation may be performed based on a decision tree, which is generated and/or maintained by an artificial intelligence manager. The artificial intelligence manager may be implemented based on execution of instructions by one or more processors. In one embodiment, the artificial intelligence manager may execute an unsupervised machine-learning algorithm that generates a model based on the decision tree for use in performing the replacement operation. Each node of the tree may be linked to a recurring feature previously stored in the database, along with an identifier that points to the storage area (e.g., block), or information in the storage area, where the recurring feature is stored. The decision tree may then be used as a basis for generating a dictionary which stores information indicative of the recurring features and their associated identifiers. The dictionary may be used as a basis for determining that a feature in newly received information to be stored is a recurring feature and for determining and accessing the identifier corresponding to that recurring feature. 
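By way of a non-limiting illustration, the replacement operation and dictionary lookup described above may be sketched in Python as follows. This is a minimal sketch under assumed data shapes; the names feature_key, register_feature, and replace_recurring, and the dictionary layout, are illustrative only and do not appear in the embodiments described above.

    import hashlib

    def feature_key(feature: bytes) -> str:
        # Hash the feature so that large payloads (e.g., digital certificates)
        # can be matched without keeping full copies in the dictionary.
        return hashlib.sha256(feature).hexdigest()

    def register_feature(feature: bytes, tx_id: str, dictionary: dict) -> None:
        # First occurrence: remember the identifier (e.g., a transactionID) of
        # the prior block in which the full feature was stored.
        dictionary.setdefault(feature_key(feature), tx_id)

    def replace_recurring(record: dict, dictionary: dict) -> dict:
        # If the record's feature already appears in a prior block, store only
        # an identifier that points back to that block.
        key = feature_key(record["feature"])
        if key in dictionary:
            new_record = dict(record)
            new_record["feature"] = {"ref": dictionary[key]}
            return new_record
        return record

A new block would then be stored with the reference in place of the feature itself, and the full feature recovered from the referenced prior block when the new block is queried.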
The dictionary may be stored as an auxiliary part of the ledger (e.g., world state) or may be stored in another area accessible by the node or other blockchain entity. In one embodiment, the identifier may include a transactionID associated with the prior block storing the recurring feature. In one implementation, the recurring feature may be a <property, value> pair. Because the recurring feature may be substantially large in size, inserting the significantly smaller identifier into the new block in place of the recurring feature reduces the storage requirements of each block, and thereby increases the amount of information that may be stored in the same storage space of the blockchain ledger. This is especially the case when the recurring feature is a digital certificate, digital media, or another form of digital information that consumes a relatively large amount of storage space. In one embodiment, the identifier stored in the new block may be converted to a hash value. Some benefits of one or more of the embodiments described herein include preventing the storage of recurring values in a blockchain. This may be accomplished by storing the original occurrence of a feature in a block of the blockchain, and then storing information (initially received with the same feature) in subsequent blocks of the blockchain with an identifier instead of the feature. This reduces the size of the ledger and/or allows the ledger to store more information in the same storage space, while at the same time allowing the feature to be recovered when the information stored in the subsequent blocks is queried. This is possible because the identifier references the prior block that initially stored the feature, thereby allowing for its recovery. The feature may be, for example, various types of digital information as described herein. FIG.1Aillustrates an embodiment of a system100that manages storage space in a decentralized database. The decentralized database may include a blockchain or another type of decentralized storage area. For illustrative purposes, the database will be discussed as a blockchain. The system100may be included in an entity of the blockchain, including but not limited to a client, a node, an authority, an administrator, a validator, or another entity of the blockchain. The node may be an originating node, peer node, ordering service node, endorsing node, or another type of node. Referring toFIG.1A, the system includes a receiver10, an extractor20, a correlator30, and a manager40. The receiver10may be any type of interface that receives information to be stored in a new block of the blockchain. The information may include any type of information relating to an intended purpose of the blockchain. Examples include transactions, various forms of digital information, financial or data records, media data, sales data, and statistical information, just to name a few. In one embodiment, the received information may include a block, for example, including one or more types of the aforementioned information. The transactions may be cryptocurrency related or ones using other forms of payment, or may be other types of information that require secure storage in a private or public blockchain network. In one embodiment, the information may be received from an external source. The extractor20analyzes the information received by the receiver to extract first information.
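As one possible illustration of this extraction step, the sketch below scans predetermined fields of a received record for payloads that are candidates for the replacement operation. The field names in RECURRING_FIELDS are assumptions for illustration only, not fields required by any embodiment.

    RECURRING_FIELDS = ("digital_certificate", "digital_media", "public_key")

    def extract_first_information(received: dict) -> dict:
        # Collect candidate recurring payloads from predetermined fields of
        # the received information; absent fields are simply skipped.
        return {field: received[field]
                for field in RECURRING_FIELDS
                if field in received}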
The first information may be of a predetermined type or kind that has a likelihood of being repeatedly stored in the blockchain (the first information may be of any type, or it may be of a predetermined type). The first information may, for example, be located in one or more predetermined fields of the received information. In one embodiment, the extractor may perform a keyword search to locate information that is likely to be recurring information stored in the blockchain. The first information may correspond to the entire information received by the receiver or a portion of that information as identified by the extractor. Examples of the first information include, but are not limited to, digital certificates, digital media (e.g., scanned documents, images, video, etc.), key information, or other types of data that have been identified, for example, by a smart contract or other code executed by the extractor as qualifying as possible recurring information subject to the replacement operation described herein. The correlator30performs operations including correlating second information to a first storage area that has previously stored the first information. When the decentralized database is a blockchain, the correlator may correlate second information to a prior block in the blockchain which stores information that matches the first information extracted by the extractor20. In a decentralized database that is not a blockchain, the first storage area may be any storage area existing in the database prior to receipt of the information by the receiver10. In one embodiment (described in greater detail below), the correlator30may include or communicate with an artificial intelligence manager35that maintains a model for generating and/or accessing second information that may be used as a basis for linking the extracted first information to the first storage area (or information in the first storage area) storing the first information. In one embodiment, the artificial intelligence manager35may manage a dictionary38including the second information. The dictionary may be based on a model that includes a decision tree, and the artificial intelligence manager35may generate and/or access the second information from the decision tree. In one embodiment, accessing second information may not require a decision tree. The decision tree may be formulated in various ways. In one embodiment, the decision tree may include a plurality of nodes logically arranged at one or more levels, where each node corresponds to different first information and stores second information in association with that first information. For example, when the first information includes a digital certificate, the nodes of the tree may include different digital certificates for respective ones of a plurality of clients, parties, or participants of transactions stored on the blockchain. When the first information includes digital media, the nodes of the tree may include different digital media for respective ones of a plurality of clients, parties, or participants of transactions stored on the blockchain. (While the term “transaction” is used here, it is understood that the first information may relate to a financial transaction or may relate to a transaction in the sense of storing information unrelated to a financial transaction).
The different first information associated with the nodes of the decision tree constitutes recurring information which, if not for the embodiments described herein, would be redundantly stored in an excessive number of blocks in the blockchain. However, in accordance with one or more embodiments, the correlator30operating in combination with the artificial intelligence manager35performs a replacement operation that substantially reduces the size of blocks (that otherwise would include the first information) to be newly appended to the blockchain and that therefore substantially reduces the size of the overall blockchain ledger. This may allow more information to be stored in the same storage space allocated to the blockchain. Once the artificial intelligence manager35receives the first information (either from the extractor20or the correlator30), the artificial intelligence manager35may perform a search or detection operation that involves analyzing (iterating) the decision tree of the dictionary38in order to determine whether one of the nodes of the tree corresponds to the first information. If a node is not found that corresponds to the first information, the correlator outputs the information received by the receiver10, along with the first information, to the manager40for storage in a new block to be appended to the blockchain. When a node is found that corresponds to the first information, the second information corresponding to that node is retrieved from the dictionary and sent to the correlator30. The correlator then performs a replacement operation which includes replacing the first information with the second information. As previously indicated, the second information may be in a variety of forms. For example, the second information may be a transactionID of a transaction stored in a prior block in the blockchain that includes a full version of the first information. In one embodiment, the second information may include a type of identifier or pointer different from a transactionID. For example, the second information may include an identifier that points to the first storage area (e.g., a block number of a prior block in the blockchain, or a transaction number, address, or field in a prior block) storing the first information. The artificial intelligence manager35may be programmed to retrieve that first information from the first storage area when the newly appended block is subsequently queried in the blockchain network. In one embodiment, the identifier or pointer may be an address of the first storage area in a decentralized database, e.g., a blockchain or another type of decentralized database. FIG.1Bshows a conceptual embodiment of dictionary38which the artificial intelligence manager35may use to correlate second information72to the first storage area storing the first information71. In this conceptual embodiment, the dictionary38includes a variety of types of digital information as first information arranged in relation to corresponding transactionIDs serving as the identifiers corresponding to the second information. Returning toFIG.1A, the replacement operation performed by the correlator30includes modifying the information received by the receiver10, by replacing the first information (e.g., digital certificate) with the second information (e.g., transactionID of a transaction in a prior block of the blockchain storing the digital certificate). The information including the second information is then output to the manager40.
The manager40stores the information received by receiver10in a second storage area with the second information in place of the first information. The second storage area may be a new block45to be added to the blockchain50, or a storage area different from the first storage area when the decentralized database is different from a blockchain database. FIG.1Cshows an example of the first storage area110as Block1and the second storage area as Block2if the replacement operation of the one or more embodiments described herein is not performed. In this case, Block1and Block2both store the first information, which is illustratively shown as a digital certificate. Because Block2stores the first information, the storage space of the decentralized database is reduced. FIG.1Dshows an example of the first storage area120as Block1and the second storage area as Block2when a replacement operation in accordance with one or more embodiments described herein is performed. In this case, Block1has the first information. However, Block2as stored by manager40has stored the information received by receiver10with the first information (e.g., digital certificate) replaced with an identifier75in the form of transaction ID 1TX000001. The transactionID is only a fraction of the size of the digital certificate, and therefore the storage space occupied by Block2subject to the replacement operation is substantially less than the storage size of Block2(containing the digital certificate) when the replacement operation is not performed. This translates into a reduction in the size of the blockchain ledger at least with respect to Block2. In one embodiment, the extractor20, correlator30, artificial intelligence manager35, and/or the manager40may be implemented by one or more processors executing instructions stored in a memory for implementing the aforementioned operations. These features may also perform the operations of the method embodiments, as described in greater detail below. FIG.2Aillustrates a blockchain architecture configuration200, according to example embodiments. Referring toFIG.2A, the blockchain architecture200may include certain blockchain elements, for example, a group of blockchain nodes202. The blockchain nodes202may include one or more nodes204-210(these four nodes are depicted by example only). These nodes participate in a number of activities, such as blockchain transaction addition and validation process (consensus). One or more of the blockchain nodes204-210may endorse transactions based on endorsement policy and may provide an ordering service for all blockchain nodes in the architecture200. A blockchain node may initiate a blockchain authentication and seek to write to a blockchain immutable ledger stored in blockchain layer216, a copy of which may also be stored on the underpinning physical infrastructure214. The blockchain configuration may include one or more applications224which are linked to application programming interfaces (APIs)222to access and execute stored program/application code220(e.g., chaincode, smart contracts, etc.) which can be created according to a customized configuration sought by participants and can maintain their own state, control their own assets, and receive external information. This can be deployed as a transaction and installed, via appending to the distributed ledger, on all blockchain nodes204-210. 
The blockchain base or platform212may include various layers of blockchain data, services (e.g., cryptographic trust services, virtual execution environment, etc.), and underpinning physical computer infrastructure that may be used to receive and store new transactions and provide access to auditors which are seeking to access data entries. The blockchain layer216may expose an interface that provides access to the virtual execution environment necessary to process the program code and engage the physical infrastructure214. Cryptographic trust services218may be used to verify transactions such as asset exchange transactions and keep information private. The blockchain architecture configuration ofFIG.2Amay process and execute program/application code220via one or more interfaces exposed, and services provided, by blockchain platform212. The code220may control blockchain assets. For example, the code220can store and transfer data, and may be executed by nodes204-210in the form of a smart contract and associated chaincode with conditions or other code elements subject to its execution. As a non-limiting example, smart contracts may be created to execute reminders, updates, and/or other notifications subject to the changes, updates, etc. The smart contracts can themselves be used to identify rules associated with authorization and access requirements and usage of the ledger. For example, the information226may include the first information previously described, e.g., the recurring information that may be replaced with the identifier, pointer, or other type of second information as described herein. The information226may be processed by one or more processing entities (e.g., virtual machines) included in the blockchain layer216. The result228may include information to be included in a new block of the blockchain which has been subject to the replacement operation, e.g., the first information has been replaced with the second information to be stored in a new block. The physical infrastructure214may be utilized to retrieve any of the data or information described herein. A smart contract may be created via a high-level application and programming language, and then written to a block in the blockchain. The smart contract may include executable code which is registered, stored, and/or replicated with a blockchain (e.g., distributed network of blockchain peers). A transaction is an execution of the smart contract code which can be performed in response to conditions associated with the smart contract being satisfied. The executing of the smart contract may trigger a trusted modification(s) to a state of a digital blockchain ledger. The modification(s) to the blockchain ledger caused by the smart contract execution may be automatically replicated throughout the distributed network of blockchain peers through one or more consensus protocols. The smart contract may write data to the blockchain in the format of key-value pairs. Furthermore, the smart contract code can read the values stored in a blockchain and use them in application operations. The smart contract code can write the output of various logic operations into the blockchain. The code may be used to create a temporary data structure in a virtual machine or other computing platform. Data written to the blockchain can be public and/or can be encrypted and maintained as private. 
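As a schematic illustration of the key-value read/write behavior just described, the following sketch models contract code operating over a current-state store. It is a simplification under assumed names (WorldState, example_contract) and is not the API of any particular chaincode runtime.

    class WorldState:
        # Minimal stand-in for the ledger's current-state database.
        def __init__(self):
            self._kv = {}
        def get(self, key):
            return self._kv.get(key)
        def put(self, key, value):
            self._kv[key] = value

    def example_contract(state: WorldState, key: str, value: str) -> None:
        # Read a stored value, apply contract logic, and write the output
        # back to the state as a key-value pair.
        if state.get(key) != value:
            state.put(key, value)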
The temporary data that is used/generated by the smart contract is held in memory by the supplied execution environment, then deleted once the data needed for the blockchain is identified. A chaincode may include the code interpretation of a smart contract, with additional features. As described herein, the chaincode may be program code deployed on a computing network, where it is executed and validated by chain validators together during a consensus process. The chaincode receives a hash and retrieves from the blockchain a hash associated with the data template created by use of a previously stored feature extractor. If the hash of the hash identifier and the hash created from the stored identifier template data match, then the chaincode sends an authorization key to the requested service. The chaincode may write to the blockchain data associated with the cryptographic details. FIG.2Billustrates an example of a blockchain transactional flow250between nodes of the blockchain in accordance with an example embodiment. Referring toFIG.2B, the transaction flow may include a transaction proposal291sent by an application client node260to an endorsing peer node281. The endorsing peer281may verify the client signature and execute a chaincode function to initiate the transaction. The output may include the chaincode results, a set of key/value versions that were read in the chaincode (read set), and the set of keys/values that were written in chaincode (write set). The proposal response292is sent back to the client260along with an endorsement signature, if approved. The client260assembles the endorsements into a transaction payload293and broadcasts it to an ordering service node284. The ordering service node284then delivers ordered transactions as blocks to all peers281-283on a channel. Before committal to the blockchain, each peer281-283may validate the transaction. For example, the peers may check the endorsement policy to ensure that the correct allotment of the specified peers has signed the results and authenticated the signatures against the transaction payload293. Referring again toFIG.2B, the client node260initiates the transaction291by constructing and sending a request to the peer node281, which is an endorser. The client260may include an application leveraging a supported software development kit (SDK), which utilizes an available API to generate a transaction proposal. The proposal is a request to invoke a chaincode function so that data can be read and/or written to the ledger (i.e., write new key-value pairs for the assets). The SDK may serve as a shim to package the transaction proposal into a properly architected format (e.g., protocol buffer over a remote procedure call (RPC)) and take the client's cryptographic credentials to produce a unique signature for the transaction proposal. In response, the endorsing peer node281may verify (a) that the transaction proposal is well formed, (b) the transaction has not been submitted already in the past (replay-attack protection), (c) the signature is valid, and (d) that the submitter (client260, in the example) is properly authorized to perform the proposed operation on that channel. The endorsing peer node281may take the transaction proposal inputs as arguments to the invoked chaincode function. The chaincode is then executed against a current state database to produce transaction results including a response value, read set, and write set. However, no updates are made to the ledger at this point.
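The endorsement step just described may be sketched schematically as follows. The checks, data shapes, and the stub signature test are simplified assumptions for illustration, not the behavior of any specific platform.

    authorized_clients = {"client-260"}   # assumed membership list
    seen_tx_ids = set()                   # for replay-attack protection

    def endorse(proposal: dict, state: dict) -> dict:
        # Simplified versions of checks (a)-(d) described above.
        assert proposal.get("fn") and proposal.get("args")   # (a) well formed
        assert proposal["tx_id"] not in seen_tx_ids          # (b) no replay
        assert proposal.get("signature") == "sig(client)"    # (c) stub signature check
        assert proposal["client"] in authorized_clients      # (d) authorized
        seen_tx_ids.add(proposal["tx_id"])
        # Execute the invoked function against current state; the ledger
        # itself is not updated at this point.
        key, value = proposal["args"]
        return {"response": "OK",
                "read_set": {key: state.get(key)},
                "write_set": {key: value}}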
In292, the set of values, along with the endorsing peer node's281signature, is passed back as a proposal response292to the SDK of the client260, which parses the payload for the application to consume. In response, the application of the client260inspects/verifies the endorsing peers' signatures and compares the proposal responses to determine if they are the same. If the chaincode only queried the ledger, the application would inspect the query response and would typically not submit the transaction to the ordering node service284. If the client application intends to submit the transaction to the ordering node service284to update the ledger, the application determines if the specified endorsement policy has been fulfilled before submitting (i.e., whether all peer nodes necessary for the transaction endorsed the transaction). Here, the client may include only one of multiple parties to the transaction. In this case, each client may have its own endorsing node, and each endorsing node will need to endorse the transaction. The architecture is such that even if an application selects not to inspect responses or otherwise forwards an unendorsed transaction, the endorsement policy will still be enforced by peers and upheld at the commit validation phase. After successful inspection, in step293the client260assembles endorsements into a transaction and broadcasts the transaction proposal and response within a transaction message to the ordering node284. The transaction may contain the read/write sets, the endorsing peers' signatures, and a channel ID. The ordering node284does not need to inspect the entire content of a transaction in order to perform its operation; instead, the ordering node284may simply receive transactions from all channels in the network, order them chronologically by channel, and create blocks of transactions per channel. The blocks of the transaction are delivered from the ordering node284to all peer nodes281-283on the channel. The transactions294within the block are validated to ensure any endorsement policy is fulfilled and to ensure that there have been no changes to ledger state for read set variables since the read set was generated by the transaction execution. Transactions in the block are tagged as being valid or invalid. Furthermore, in step295each peer node281-283appends the block to the channel's chain, and for each valid transaction the write sets are committed to the current state database. An event is emitted to notify the client application that the transaction (invocation) has been immutably appended to the chain, as well as to notify whether the transaction was validated or invalidated. FIG.3Aillustrates an example of a permissioned blockchain network300, which features a distributed, decentralized peer-to-peer architecture. In this example, a blockchain user302may initiate a transaction to the permissioned blockchain304. In this example, the transaction can be a deploy, invoke, or query, and may be issued through a client-side application leveraging an SDK, directly through an API, etc. Networks may provide access to a regulator306, such as an auditor. A blockchain network operator308manages member permissions, such as enrolling the regulator306as an “auditor” and the blockchain user302as a “client”. An auditor could be restricted only to querying the ledger, whereas a client could be authorized to deploy, invoke, and query certain types of chaincode. A blockchain developer310can write chaincode and client-side applications.
The blockchain developer310can deploy chaincode directly to the network through an interface. To include credentials from a traditional data source312in chaincode, the developer310could use an out-of-band connection to access the data. In this example, the blockchain user302connects to the permissioned blockchain304through a peer node314. Before proceeding with any transactions, the peer node314retrieves the user's enrollment and transaction certificates from a certificate authority316, which manages user roles and permissions. In some cases, blockchain users must possess these digital certificates in order to transact on the permissioned blockchain304. Meanwhile, a user attempting to utilize chaincode may be required to verify their credentials on the traditional data source312. To confirm the user's authorization, chaincode can use an out-of-band connection to this data through a traditional processing platform318. FIG.3Billustrates another example of a permissioned blockchain network320, which features a distributed, decentralized peer-to-peer architecture. In this example, a blockchain user322may submit a transaction to the permissioned blockchain324. In this example, the transaction can be a deploy, invoke, or query, and may be issued through a client-side application leveraging an SDK, directly through an API, etc. Networks may provide access to a regulator326, such as an auditor. A blockchain network operator328manages member permissions, such as enrolling the regulator326as an “auditor” and the blockchain user322as a “client”. An auditor could be restricted only to querying the ledger whereas a client could be authorized to deploy, invoke, and query certain types of chaincode. A blockchain developer330writes chaincode and client-side applications. The blockchain developer330can deploy chaincode directly to the network through an interface. To include credentials from a traditional data source332in chaincode, the developer330could use an out-of-band connection to access the data. In this example, the blockchain user322connects to the network through a peer node334. Before proceeding with any transactions, the peer node334retrieves the user's enrollment and transaction certificates from the certificate authority336. In some cases, blockchain users must possess these digital certificates in order to transact on the permissioned blockchain324. Meanwhile, a user attempting to utilize chaincode may be required to verify their credentials on the traditional data source332. To confirm the user's authorization, chaincode can use an out-of-band connection to this data through a traditional processing platform338. In some embodiments, the blockchain herein may be a permissionless blockchain. In contrast with permissioned blockchains which require permission to join, anyone can join a permissionless blockchain. For example, to join a permissionless blockchain a user may create a personal address and begin interacting with the network, by submitting transactions, and hence adding entries to the ledger. Additionally, all parties have the choice of running a node on the system and employing the mining protocols to help verify transactions. FIG.3Cillustrates a process350of a transaction being processed by a permissionless blockchain352including a plurality of nodes354. 
A sender356desires to send payment or some other form of value (e.g., a deed, medical records, a contract, a good, a service, or any other asset that can be encapsulated in a digital record) to a recipient358via the permissionless blockchain352. In one embodiment, each of the sender device356and the recipient device358may have digital wallets (associated with the blockchain352) that provide user interface controls and a display of transaction parameters. In response, the transaction is broadcast throughout the blockchain352to the nodes354. Depending on the blockchain's352network parameters, the nodes verify360the transaction based on rules (which may be pre-defined or dynamically allocated) established by the permissionless blockchain352creators. For example, this may include verifying identities of the parties involved, etc. The transaction may be verified immediately, or it may be placed in a queue with other transactions, and the nodes354determine if the transactions are valid based on a set of network rules. In structure362, valid transactions are formed into a block and sealed with a lock (hash). This process may be performed by mining nodes among the nodes354. Mining nodes may utilize additional software specifically for mining and creating blocks for the permissionless blockchain352. Each block may be identified by a hash (e.g., a 256-bit number, etc.) created using an algorithm agreed upon by the network. Each block may include a header, a pointer or reference to a hash of a previous block's header in the chain, and a group of valid transactions. The reference to the previous block's hash is associated with the creation of the secure independent chain of blocks. Before blocks can be added to the blockchain, the blocks must be validated. Validation for the permissionless blockchain352may include a proof-of-work (PoW) which is a solution to a puzzle derived from the block's header. Although not shown in the example ofFIG.3C, another process for validating a block is proof-of-stake. Unlike proof-of-work, where the algorithm rewards miners who solve mathematical problems, with proof-of-stake, a creator of a new block is chosen in a deterministic way, depending on its wealth, also defined as “stake.” Then, a similar proof is performed by the selected/chosen node. With mining364, nodes try to solve the block by making incremental changes to one variable until the solution satisfies a network-wide target. This creates the PoW, thereby ensuring correct answers. In other words, a potential solution must prove that computing resources were drained in solving the problem. In some types of permissionless blockchains, miners may be rewarded with value (e.g., coins, etc.) for correctly mining a block. Here, the PoW process, alongside the chaining of blocks, makes modifications of the blockchain extremely difficult, as an attacker must modify all subsequent blocks in order for the modifications of one block to be accepted. Furthermore, as new blocks are mined, the difficulty of modifying a block increases, and the number of subsequent blocks increases. With distribution366, the successfully validated block is distributed through the permissionless blockchain352, and all nodes354add the block to a majority chain which is the permissionless blockchain's352auditable ledger. Furthermore, the value in the transaction submitted by the sender356is deposited or otherwise transferred to the digital wallet of the recipient device358.
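The mining step described above, in which a single variable is incrementally changed until the block hash satisfies a network-wide target, may be sketched as follows. The header bytes and the difficulty setting are illustrative assumptions.

    import hashlib

    def mine(header: bytes, difficulty_bits: int) -> int:
        # Incrementally change one variable (the nonce) until the block hash
        # falls below the network-wide target, producing the proof-of-work.
        target = 2 ** (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    nonce = mine(b"prev_hash|merkle_root|timestamp", difficulty_bits=16)

Because any change to a block alters its hash, a valid nonce for a modified block (and for every subsequent block) must be found again, which is why tampering becomes harder as the chain grows.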
FIG.4illustrates an embodiment of a system messaging diagram400for performing operations included in one or more of the systems and methods described herein. By way of example, the messaging and its attendant operations are performed by three entities: client410, node420, and a blockchain430. The messaging and operations may be performed by one or more different entities in another embodiment. Referring toFIG.4, the system diagram400includes client410generating information that is to be stored in a new block of a blockchain412. The information includes the first information as previously described, e.g., digital certificates, digital media, or other forms of information that otherwise may be stored on a recurring basis in the blockchain. Once the information has been generated, the information413is sent to node420. The node420extracts, at421, the first information from the information413received from the client410. This operation may be performed by the extractor previously discussed. The first information is then compared to the dictionary, at423, in order to determine whether the first information has been stored in one or more previous blocks of the blockchain, e.g., whether the first information constitutes recurring information. This operation may be performed by the correlator as previously indicated, with the assistance of the artificial intelligence manager. When the first information is determined to exist in the dictionary (e.g., when the first information is determined to be recurring information), then, at425, second information corresponding to the first information is obtained from the dictionary. For example, information indicative of a storage area (e.g., a prior block) in the blockchain that has already stored the first information is obtained as the second information. The second information may include, for example, a pointer or identifier referencing the storage area. In one embodiment, the pointer or identifier may be a transactionID corresponding to a transaction containing the first information stored in the storage area. Once the second information has been obtained, the node420performs an operation, at427, which includes replacing the first information with the second information. The information413received from the client is then sent to the blockchain430with the second information in place of the first information, which is indicated by information429. A new block containing information429is then stored in the blockchain. Because information429includes the second information which has been inserted in place of the first information, the first information may be recovered when information429is accessed in a subsequent blockchain query. At431, the new block is appended to the blockchain, storing the information with the first information replaced by the second information. FIG.5Aillustrates a flow diagram500for managing storage space in a decentralized database (e.g., a blockchain) according to example embodiments. The method may be performed by the system and other embodiments described herein or may be performed by a different system and/or associated network entities. For illustrative purposes, the decentralized database will be discussed as a blockchain. Referring toFIG.5A, the method500may include, at510, receiving information to be stored in the blockchain. This operation may be performed by the receiver10as previously described. The information may correspond, for example, to a transaction received for storage in a new block to be appended to a blockchain.
The transaction may be received, for example, by a node (e.g., a peer node) in the blockchain network from a client participating in the network. The transaction may relate to any information that is to be recorded in the blockchain, including but not limited to various types of financial or cryptocurrency-based transactions. In one embodiment, the information may not relate to a financial transaction but may nevertheless be referred to as a transaction. For example, the information may include any type of information or data (e.g., as described herein) that is to be stored for later retrieval or otherwise. For purposes of discussion, an example is discussed below involving weather-related data. At520, the received information is analyzed to extract first information that may be recurring or repeatedly stored in the decentralized database, e.g., information that may have been previously stored in one or more storage areas of the database prior to receiving the information to be stored in operation510. This operation may be performed by the extractor20as previously described. The first information may be, for example, a predetermined type of information and/or information of a kind likely to be of excessive length, such that repeatedly storing the same information in different storage areas of the database would increase the overall storage requirements of the database and its corresponding ledger to be stored by various nodes within the database network. Examples of the predetermined types or kinds of information that may correspond to the first information include, but are not limited to, digital certificates, various forms of media data (e.g., scanned documents, images, video, etc.), values of asset properties associated with transactions to be recorded, and/or other types of information. This information may be included, for example, in one or more predetermined fields or sections of the received information in order to allow for expedited or efficient extraction from the received information. In one embodiment, the extractor20may include logic that searches the received information (e.g., based on one or more predetermined fields, keywords, extensions, or associated data types) to allow for identification and extraction of the first information from the received information. At530, all or a portion of the storage areas of the database may be searched in order to locate a storage area that stores the first information that was extracted in operation520. When the decentralized database is a blockchain, the one or more storage areas may be a prior block in the blockchain storing the first information. The search in operation530may be performed based on a dictionary, for example, as described in greater detail below. In one embodiment, the search may be performed without using a dictionary, for example, based on a search of the ledger. At540, when a storage area storing the first information has been found by the search, second information may be accessed and correlated to the prior block or information in the prior block (e.g., a transaction). These operations may be performed, for example, by the correlator30of the system with reference to the dictionary described below. In one embodiment, the second information may correlate the first information (as received in operation510) to the earliest or first-occurring block in the blockchain that is found to store the first information, as determined by the search.
In one embodiment, the second information may correlate the first information to any block (e.g., a block that is not the first block) that has been found by the search to store the first information. This latter situation may be applied when, for example, a threshold is used as a basis for determining whether first information constitutes recurring information in the blockchain. For example, there may be instances where only one prior occurrence of the first information in the blockchain is not sufficient to qualify as recurring information. In such a case, the search may be required to determine whether the first information occurs a threshold number of times or more before the first information is deemed to qualify as recurring information. In such a case, the second information may point to any prior block storing the first information, which may not be the first prior block. The second information may be different from the first information and substantially smaller in size (e.g., substantially fewer bits) than the first information. Also, in one embodiment, the second information may include an identifier that links to the storage area found by the search (e.g., the first storage area, but not necessarily the first-occurring storage area previously described forFIG.1A), for example, in order to allow for recovery of the first information when a query of the blockchain produces the new block to be recorded. In one example implementation, the first storage area may be a previous or existing block in the blockchain and the second information may be an identifier or pointer to information stored in the previous block or to the block itself, e.g., in one embodiment the identifier or pointer may identify a transaction (e.g., transactionID) stored in the previous block. The second information, in the form of the transactionID, may therefore be used as a basis for linking information in the new block to the previous block, so that the first information (e.g., digital certificate, media data, or any other type of recurring information) linked by the second information may be retrieved during a subsequent query. The aforementioned correlation operations may be accomplished, for example, using a dictionary as described herein. The dictionary may be previously generated by one or more processors in the decentralized database network. The one or more processors may be associated with a node (e.g., a blockchain peer node) of the database network, an authority or administrator of the database network, or another entity of or coupled to the network, for example, as described herein. In one embodiment, the dictionary may be included with or related to the ledger (e.g., world state) maintained by the database network nodes or may be stored in a storage area external to or coupled to a database node, client, or other entity. In one embodiment, the dictionary may be generated to store a plurality of second information (identifiers) linking different first information to different storage areas (e.g., blocks or transactions in the blocks) of the blockchain. The plurality of identifiers may include an identifier linking the first information extracted in operation520to a prior block of the blockchain identified by the search. The dictionary may store the different information as is or as a derived value.
For example, when the first information corresponds to a digital certificate, the dictionary may generate and store a derived value in the form of a hashed value of the digital certificate in correspondence with an identifier included as the second information in the dictionary. The identifier may be, for example, the transactionID of a transaction stored in the prior block, which transaction includes the digital certificate. In another embodiment, the identifier may be another type of pointer which points to the prior block (e.g., an address or number of the prior block or entry in the prior block) storing the digital certificate. Thus, the identifier corresponding to the second information serves to link the first information (e.g., digital certificate) in the dictionary to a prior block or information stored in the prior block corresponding to the digital certificate. At550, once the second information has been correlated to the first storage area (e.g., by performing a dictionary search), a replacement operation may be performed which involves replacing the first information with the second information. The replacement operation may involve, for example, modifying the information received by receiver10to include the second information in place of the first information. In one embodiment, the replacement operation may involve providing instructions to the manager40to store the received information with the second information instead of the first information. At560, the information received at operation510may be stored in a new storage area of the database (e.g., a new block of a blockchain) in a form where the first information has been replaced with the second information. This operation may be performed by a manager40as previously described, which, for example, may be located in the same node that received the information in operation510or in another node or entity in the blockchain network. Because the second information (e.g., transactionID) is substantially smaller in size than the digital certificate, the replacement operation allows the size of the information stored in the new block to be substantially smaller than it otherwise would have been if the digital certificate were redundantly stored in the new block. Thus, in accordance with one or more embodiments, the method may be implemented to detect repeated or recurring information (e.g., first information) in a blockchain and then access second information (e.g., an identifier in a dictionary) that links, references, or points to information stored in a prior block that includes the first information. A new block may then be stored with the second information in place of the first information, in order to achieve a reduction in the storage requirements of new blocks containing recurring or repeatedly stored information. In one embodiment, the dictionary may serve as an auxiliary part of the ledger (or world state). FIG.5Billustrates a flow diagram570corresponding to an embodiment of a method for generating a dictionary as previously described. The dictionary may be generated and managed by a network entity (e.g., node, authority, administrator, etc.) based on an artificial intelligence manager35programmed to implement a machine-learning algorithm that creates, updates, and retrieves information from the dictionary. The dictionary may then be updated on a continual or periodic basis based on data, asset property values, transactions, or other types of first information in correspondence with second information.
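Before turning to the example of FIG.5B in detail, the dictionary maintenance just described, combining the hashed derived values with the optional occurrence threshold discussed above, may be sketched as follows. The threshold value and the data structures are assumptions for illustration only.

    import hashlib
    from collections import Counter

    THRESHOLD = 2            # assumed occurrences required to qualify as recurring
    occurrences = Counter()  # derived value -> times seen on the ledger
    dictionary = {}          # derived value -> transactionID of a prior block

    def observe(first_information: bytes, tx_id: str) -> None:
        # Store a hashed derived value rather than the full payload, and
        # record a transactionID once the recurrence threshold is met.
        key = hashlib.sha256(first_information).hexdigest()
        occurrences[key] += 1
        if occurrences[key] >= THRESHOLD and key not in dictionary:
            dictionary[key] = tx_id

The flow of FIG.5B, described next, illustrates how such entries may be learned from recorded transaction data.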
In the following example, the second information will be discussed as including one or more transactionIDs. Referring toFIG.5B, generation of the dictionary may initially include, at571, accessing transaction data corresponding to one or more blocks recorded in a blockchain ledger. The transaction data may be accessed in predetermined increments or chunks, e.g., one transaction at a time or multiple transactions or blocks (chunks) of transactions at a time. All of the transactions recorded on the ledger may be accessed and analyzed for purposes of generating or updating the dictionary. In one embodiment, only a predetermined number of transactions (e.g., a latest number of transactions) may be accessed for this purpose. At572, the transaction data is analyzed to identify and extract recurring instances of first information, which in this example may be referred to as property asset features. The analysis may be performed and the dictionary generated, for example, by one or more processors executing instructions corresponding to an unsupervised machine-learning algorithm. In one embodiment, and for purposes of describing the present example, the information extracted from the transaction data (e.g., property asset features) may be expressed in the form of <property, value> pairs. In other embodiments, the property asset values may be expressed in a different manner, e.g., as a combination of tuples of values or data or as a singular value or data. When the property asset features are expressed in the form of <property, value> pairs, the machine-learning algorithm may be used to generate a model that is trained with an initial set of <property, value> pairs derived from multiple transactions accessed in operation571. In training the model, the algorithm may analyze the transaction data, first, to locate portions of the data that have discrete values and, second, to construct the property-value pairs to be used in generating a decision tree for the model. The discrete values may include, but are not limited to, digital signatures, scanned image values, or other values or information corresponding to one or more transaction data features. The discrete values may be used to generate the <property, value> pairs. In the present example, the <property, value> pairs correspond to weather data. Accordingly, the model may be generated through implementation of the algorithm based on a training set of weather data. The training data may correspond, for example, to the dataset indicated in the table501ofFIG.5C. In the example ofFIG.5C, each item of transaction data (property asset features) has four <property, value> pairs, with each <property, value> pair having a property that corresponds to one of weather outlook, temperature, humidity, and wind. The values associated with each property may differ from one another. For example, the values associated with the outlook property include sunny, overcast, and rainy. The values associated with the temperature property include cool, mild, and hot. The values associated with the humidity property include normal and high. The values associated with the wind property include binary values of true and false. The following is an example of <property, value> pairs that may be generated based on the dataset inFIG.5C.
<Outlook, Sunny>, <Outlook, Overcast>, <Outlook, Rainy>, <Temp, Hot>, <Temp, Mild>, <Temp, Cool>, <Humidity, High>, <Windy, False>

An example of a JSON object associated with the property asset features for one transaction recorded in the blockchain includes four <property, value> pairs as set forth below.

{Outlook: sunny, Temperature: hot, Humidity: high, Windy: false}

At573, once the property asset features have been analyzed to extract, or generate, <property, value> pairs, the decision tree may be generated. Each node or level in the decision tree may be assigned a different one of the <property, value> pairs. In one embodiment, this may involve calculating a taxon value for each of the <property, value> pairs based on Equation (1):

\lambda_i = \prod_{j=1}^{n} \frac{|V_j^i|}{|D_j|}    (1)

where \lambda is the taxon value, i corresponds to the set of the taxon value (e.g., the YES set or the NO set), j corresponds to a property in the transaction, n corresponds to the total number of properties per transaction, |V_j^i| corresponds to the number of values in the appropriate subset V_j^i, and |D_j| is the total number of discrete values of property j for all objects (transactions) from a selected chunk of transactions. At574, once the taxon values have been calculated, optimal grouping may be performed by calculating an optimal group value equal to the summation of the taxon values calculated based on the <property, value> pairs. The optimal group value may be calculated based on Equation (2):

g = \sum_{i=1}^{L} \lambda_i    (2)

where g corresponds to the optimal group value, i corresponds to the predetermined set values (e.g., the YES set and the NO set for a given chunk of transaction data, as explained below), L corresponds to the number of sets/labels and in this case will be two (saved/not saved), and \lambda_i is the taxon value as determined in Equation (1). Conceptually, Equations (1) and (2) may be applied in an iterative manner, which may involve comparing the property asset features extracted from the transaction data to each <property, value> pair of all the <property, value> pairs identified in the transaction data. For example, for the first <property, value> pair, which is <Outlook, Sunny>, the transactions may be divided into two sets i, a YES set and a NO set. For each set, in the example under consideration, the comparison may produce the result that five data samples (transactions) from the dataset have the <property, value> pair of outlook=sunny, as shown inFIG.5D. In this case, five data samples are included in a YES set which represents the samples which have sunny as the outlook value. From the data shown in table502ofFIG.5D, the outlook property has only one value in the YES set (sunny), out of the three possible values that it can take (namely sunny, overcast, and rainy). Given this result, additional taxon values may be computed for the other three properties (temperature, humidity, wind) based on their corresponding values as indicated in the dataset ofFIG.5Drelative to the <property, value> pair of outlook=sunny. This results in an aggregate taxon value \lambda_i for the YES set of the <property, value> pair of outlook=sunny indicated inFIG.5D, where the aggregate taxon value for this YES set may be calculated as follows:

\lambda_{Yes} = \prod_{j=1}^{n} \frac{|V_j^i|}{|D_j|} = \frac{1}{3} \cdot \frac{3}{3} \cdot \frac{2}{2} \cdot \frac{2}{2} = 0.333

The fractional values in this equation may be understood as follows. The numerator may indicate the number of different values for a given property in the YES set, and the denominator may indicate the total number of possible values for a given property.
Thus, for the outlook property, the first fractional value (1/3) may have a numerator of 1 because all of the property values in the YES set ofFIG.5Dare the same (sunny) and may have a denominator of 3 because there are three possible values for the property of outlook. The second fractional value (3/3) may have a numerator of 3 because three different values of the property temperature appear in the YES set ofFIG.5D(namely, cool, hot, mild) and may have a denominator of 3 because there are three possible values for the property of temperature. The third fractional value (2/2) may have a numerator of 2 because two different values of the property humidity appear in the YES set ofFIG.5D(namely, normal and high) and may have a denominator of 2 because there are two possible values for the property of humidity. The fourth fractional value (2/2) may have a numerator of 2 because two different values of the property wind appear in the YES set ofFIG.5D(namely, false and true) and may have a denominator of 2 because there are two possible values for the property of wind. Once the grouping value for the YES set has been calculated, the grouping value for the NO set (e.g., outlook≠sunny) may be calculated based on the taxon values derived from the transaction data in the dataset. Equations (1) and (2) may be used to calculate the group value for the NO set in a manner analogous to the YES set. The group value for the NO set may be indicated below based on the transaction data in the dataset ofFIG.5D:

\lambda_{No} = \prod_{j=1}^{n} \frac{|V_j^i|}{|D_j|} = \frac{2}{3} \cdot \frac{2}{2} = 0.667

The first fractional value (2/3) may have a numerator of 2 because two of the values of the property outlook (namely, overcast and rainy) do not appear in the dataset ofFIG.5Dand may have a denominator of 3 because there are three possible values of the property outlook. The second fractional value (2/2) may have a numerator of 2 because both values of the property windy (true and false) appear in the NO set and may have a denominator of 2 because there are two possible values of the property windy. The other two fractional values, (2/2) for the property humidity and (3/3) for the property temperature, each equal 1 and are therefore omitted from the equation above for brevity. Once the taxon value for the YES set and the taxon value for the NO set have been calculated, the value for the optimal grouping may be calculated based on the summation of the taxon values for the YES and NO sets, as indicated below in an expanded version of Equation (2):

g = \sum_{i=1}^{L} \lambda_i = \lambda_{Yes} + \lambda_{No} = 0.333 + 0.667 = 1

At575, one of the <property, value> pairs of the dataset is assigned to a current level or node of the decision tree. In one embodiment, this may be performed based on the smallest value of the optimal grouping generated for the dataset inFIG.5D, which is the <Temperature, Cool> pair. Thus, based on implementation of the machine-learning program, the <Temperature, Cool> pair is assigned at the current level (e.g., level 0) of the decision tree. An example of the first level of the decision tree504is shown inFIG.5F, with YES and NO branches. Calculation of the taxon values and group values may be performed for remaining ones of the properties in the initial chunk of transaction data. This may involve performing the same calculations for all the <property, value> pairs in the chunk of transaction data in the dataset to give the results503shown inFIG.5E.
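The computations above may be reproduced with a short worked example. The value counts are taken directly from the YES/NO split described for FIG.5D; the helper names are illustrative only.

    from math import prod

    # Number of possible discrete values per property (|Dj|), per FIG. 5C.
    domain = {"outlook": 3, "temperature": 3, "humidity": 2, "windy": 2}

    def taxon(values_in_set: dict) -> float:
        # Equation (1): product over properties of |Vj^i| / |Dj|.
        return prod(values_in_set[p] / domain[p] for p in domain)

    # YES set for <Outlook, Sunny>: outlook takes one value; the other
    # properties take all of their possible values (per FIG. 5D).
    lam_yes = taxon({"outlook": 1, "temperature": 3, "humidity": 2, "windy": 2})
    # NO set: outlook takes the remaining two values; the others take all values.
    lam_no = taxon({"outlook": 2, "temperature": 3, "humidity": 2, "windy": 2})

    g = lam_yes + lam_no  # Equation (2): 0.333 + 0.667 = 1.0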
If additional chunks of transaction data are in the training data, the aforementioned operations may be repeated for those additional chunks. At576, once the <property, value> pair is assigned to the decision tree in operation575, a check is performed as to whether all <property, value> pairs that have been extracted from the transaction data have been considered. If all <property, value> pairs have not been considered, then the method returns to operation572and the method is repeated for the remaining <property, value> pairs, the result of which is to add additional nodes to the decision tree at various levels of the tree.FIG.5Gshows an example of the decision tree505generated when all <property, value> pairs have been considered for purposes of generating nodes of the tree at various levels. In this example, the decision tree has three levels.

At577, when all (or a predetermined number of) <property, value> pairs have been considered, the nodes (e.g., "leaves") of the decision tree are categorized as either "saved" or "unsaved." This categorization may be performed, for example, based on the number of occurrences of clusters corresponding to the nodes. In one embodiment, a cluster may be considered to be a grouping of the number of times the corresponding node was generated when additional chunks of transaction data are received and used to train the model. For example, after the first chunk is considered, a number of nodes are generated for the decision tree. When a second chunk of transaction data is considered, a second set of nodes for the decision tree may be generated, some of which may be the same as some of the nodes generated for the tree based on the first chunk of data. As more and more chunks of transaction data are processed by the algorithm, each node may have multiple occurrences, or clusters, which may be taken into consideration when generating the dictionary to be used in replacing the first information with the second information, as in the method ofFIG.5A.

By way of example, when viewed from left to right, the decision tree shown inFIG.5Ghas six clusters of one, one, two, four, four, and two data samples (or occurrences) corresponding to the three levels of nodes. The clusters having occurrences (e.g., which may be based on the number of transactions) that are equal to or greater than a predetermined threshold value may be labelled as "saved." The clusters that have fewer occurrences (the ones having only one occurrence each) are labelled as "not saved."FIG.5Hshows an example of the decision tree506with the labels of "saved" and "unsaved."

At578, the dictionary is created based on the clusters in the decision tree that have been labelled as "saved." This may involve saving the values along the path of the candidate nodes (leaves) in the decision tree marked as "saved" as a linked-list entry in the dictionary. The result is to generate a two-dimensional matrix of linked lists. At579, each node in the linked list (and thus each node in the decision tree and its corresponding item in the dictionary) may be assigned a reference (e.g., an identifier, pointer, etc.) pointing to a transaction in a prior block of the blockchain indicative of an occurrence (e.g., the first occurrence) of a corresponding value for a property in the ledger. In one embodiment, the first occurrence may be designated, for example, as the initial storage area in the blockchain ledger that corresponds to a respective one of the <property, value> pairs of "saved" clusters in the decision tree.
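A compact sketch of operations575through577follows. The per-pair g values and the cluster occurrence counts are illustrative placeholders (the actual numbers would come from calculations such as those behindFIG.5E and from the chunks processed during training); the threshold is likewise an assumption.

    from collections import Counter

    # Operation 575: assign the pair with the smallest optimal-group value
    # g to the current tree level. These g values are placeholders.
    g_values = {("temperature", "cool"): 0.83,
                ("outlook", "sunny"): 1.00,
                ("humidity", "high"): 0.95}
    level_0_pair = min(g_values, key=g_values.get)
    print(level_0_pair)                      # ('temperature', 'cool')

    # Operation 577: label leaf clusters "saved" when their occurrence
    # count across chunks reaches an assumed threshold; counts here match
    # the six clusters (1, 1, 2, 4, 4, 2) described for FIG. 5G.
    SAVE_THRESHOLD = 2
    occurrences = Counter({"leaf-1": 1, "leaf-2": 1, "leaf-3": 2,
                           "leaf-4": 4, "leaf-5": 4, "leaf-6": 2})
    labels = {leaf: ("saved" if n >= SAVE_THRESHOLD else "not saved")
              for leaf, n in occurrences.items()}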
The initial storage areas may be indicated, for example, by a transactionID. After the dictionary has been generated, it may be used to perform blockchain queries. For example, the blockchain may be queried to locate the block, and thus the dictionary entry (corresponding to a relevant one or more decision tree nodes), corresponding to the transactionIDs. The transaction information corresponding to the transactionID may correspond to the first information (e.g., digital certificate, digital media, etc.) that has been found to be recurring in the blockchain. The transactionID (or other form of identifier or pointer) may correspond to the second information and may be used to replace the first information to reduce the size of the blockchain information to be stored in newly appended blocks.

FIG.5Ishows an example of a two-dimensional matrix507which may correspond to the dictionary generated based on the "saved" clusters indicated in the decision tree ofFIG.5H. The matrix includes a vertical column of properties (e.g., outlook, temperature, humidity, wind) and a horizontal row of letters. Each letter may correspond to the first letter of a value corresponding to one of the properties listed in the vertical column. For example, the letter "C" corresponds to the value "cool" for the temperature property. The letter "O" corresponds to the value "overcast" for the property outlook. The letter "R" corresponds to the value "rainy" for the property outlook. The letter "N" corresponds to the value "normal" for the property humidity. And the letter "T" corresponds to the value "true" for the property wind.

The references or identifiers (e.g., second information) are included in the dictionary matrix in the form of transaction IDs. For example, each node in a linked list may be designated with a transactionID that serves as a reference for the first occurrence of a value for the property corresponding to the node on the ledger. These references, or transactionIDs, are shown in the dictionary ofFIG.5I. For example, references for the <property, value> pairs that satisfy the conditions of temperature=cool and windy=true (e.g., enclosed by the curve inFIG.5H) are saved in the dictionary. The reference for the entries in the dictionary that correspond to these conditions is Transaction #7, which indicates the first transaction recorded in the blockchain that satisfies the conditions where temperature=cool and windy=true.

After the machine-learning algorithm has created the dictionary based on the training set of data, the algorithm may continue to update the decision tree with more levels and/or nodes as new <property, value> pairs are received in information to be stored in the blockchain. The dictionary may then be modified to reflect the updates to the decision tree. Additionally, or alternatively, the decision tree and dictionary may be modified based on newly received information to be stored in the blockchain and/or based on edits or revisions indicated, for example, by a consensus of the peer nodes and/or a related policy change.

FIG.5Jshows an embodiment of a method508for writing a transaction in a new block of a blockchain using the dictionary as previously described. At581, when information is received for storage in a new block of a blockchain, logic driving the machine-learning algorithm (e.g., as illustrated inFIG.1A) extracts one or more <property, value> pairs from the received information. The <property, value> pair(s) may correspond to the first information previously described.
At582, for each property, the first character (e.g., letter or number) of the value is identified. At583, a search is then performed to determine whether the first character of the value is in the dictionary. If no, then the received information is stored in a new block with the first information. If yes, at584, the received information is stored in the new block in a format where the first information (e.g., the recurring information previously described) is replaced with the reference (e.g., the transactionID), previously described as the second information, indicated in the corresponding entry (linked list) in the dictionary. At585, the number of occurrences for a corresponding one of the nodes in the decision tree may then be incremented by 1 using the machine-learning algorithm, and a 1 may be appended as the leftmost bit of the reference value to indicate that the first information has been replaced with the second information. In one embodiment, if a similar value is found in one of the referenced transactions, then (1 + the found transactionID) may be written as the value of the property. Otherwise, the value of the property may be written as-is in the ledger.

FIG.5Kshows an embodiment of a method509for reading a transaction previously stored in the blockchain in order to retrieve a property value that has been replaced. The method includes, at591, querying the blockchain to locate a transaction (in an associated block) corresponding to a transactionID associated with the property value. Because the transaction includes the property value (e.g., the first information), the property value may be recovered. As previously indicated, the transactionID may therefore serve as a link that allows the first information (property value) to be retrieved based on the second information (transactionID).

At592, if the flag for the property value in the blockchain is set (e.g., flag=1), the identifier (e.g., transactionID) saved in the dictionary for that property value (e.g., after the flag value is set) is retrieved from the dictionary. At593, a search is performed to determine whether a property value included in the transaction is in the dictionary. In one embodiment, each of the property values in the dictionary may be associated with a flag (e.g., the leftmost bit in the blockchain's reference value), which may also be held in a register or other storage area external to the dictionary. The search may therefore involve locating and checking the flag bit corresponding to the property value while reading from the block to see whether it is set to a logical 1 (saved in the dictionary) or a logical 0 (not saved in the dictionary). In one embodiment, the aforementioned operations may be summarized as follows: perform a read against the ledger, and if any information (property value) carries flag 1, then this information corresponds to the second information and may be used as a basis for accessing the first information using the dictionary.

In accordance with one or more embodiments, replacing the first information with the second information for a transaction or other information to be stored in the blockchain allows for a significant amount of storage savings. For example, for the dataset ofFIG.5CorFIG.5D, if there are four matching properties and each property has a size of 85 KB, storage requirements may be reduced by 99.85%. Equation (3) provides one way of measuring the storage reduction achieved by the replacement operation performed in accordance with one or more of the embodiments described herein.
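The following sketch puts the write path (FIG.5J) and read path (FIG.5K) together. The dictionary is modeled as a mapping from a property and the first letter of a value to a transactionID, mirroringFIG.5I, and the flag is modeled as a bit prepended to a fixed-width reference; this encoding is an assumption made for illustration, as a real ledger would define its own.

    # Assumed encoding: flag bit prepended to a 32-bit reference value.
    FLAG_BITS = 32

    ledger = {7: {"temperature": "cool", "windy": "true"}}   # Transaction #7
    dictionary = {("temperature", "C"): 7, ("windy", "T"): 7}

    def write_value(prop, value):
        """Method 508: replace a recurring value with a flagged reference."""
        ref = dictionary.get((prop, value[0].upper()))
        if ref is None:
            return value                        # store the first information as-is
        return (1 << FLAG_BITS) | ref           # flag=1 plus the reference

    def read_value(prop, stored):
        """Method 509: if the flag bit is set, resolve the reference."""
        if isinstance(stored, int) and (stored >> FLAG_BITS) & 1:
            tx_id = stored & ((1 << FLAG_BITS) - 1)
            return ledger[tx_id][prop]          # recover the first information
        return stored

    stored = write_value("temperature", "cool")
    print(hex(stored))                          # 0x100000007: flag + reference
    print(read_value("temperature", stored))    # 'cool'

And as a rough check of the 99.85% figure, the storage model of Equation (3), set out below, can be evaluated with an assumed reference size; the 85 KB per-property size follows the example above, while the 128-byte reference is an assumption chosen for illustration.

    def total_storage_size(peers, n_txs, m_matching, p_props, ref_size, prop_size):
        """Equation (3), simplified to a uniform per-property size:
        PR * sum over n transactions of (M*R + (P-M)*S)."""
        per_tx = m_matching * ref_size + (p_props - m_matching) * prop_size
        return peers * n_txs * per_tx

    full = total_storage_size(peers=1, n_txs=1, m_matching=0,
                              p_props=4, ref_size=128, prop_size=85_000)
    deduped = total_storage_size(peers=1, n_txs=1, m_matching=4,
                                 p_props=4, ref_size=128, prop_size=85_000)
    print(f"reduction: {1 - deduped / full:.2%}")   # ~99.85%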
Total Storage Size = (PR) × Σ_{i=1}^{n} [ (M)(R) + (P − M) × Σ_{j=1}^{k} (S) ]    (3)

where:
M = number of matching properties
P = total number of properties
R = size of a reference
S = total sizes of properties
PR = number of peers
k = number of properties
n = number of transactions

FIG.6Aillustrates an example system600that includes a physical infrastructure610configured to perform various operations according to example embodiments. Referring toFIG.6A, the physical infrastructure610includes a module612and a module614. The module614includes a blockchain620and a smart contract630(which may reside on the blockchain620) that may execute any of the operational steps608(in module612) included in any of the example embodiments. The steps/operations608may include one or more of the embodiments described or depicted and may represent output or written information that is written or read from one or more smart contracts630and/or blockchains620. The physical infrastructure610, the module612, and the module614may include one or more computers, servers, processors, memories, and/or wireless communication devices. Further, the module612and the module614may be a same module.

FIG.6Billustrates another example system640configured to perform various operations according to example embodiments. Referring toFIG.6B, the system640includes a module612and a module614. The module614includes a blockchain620and a smart contract630(which may reside on the blockchain620) that may execute any of the operational steps608(in module612) included in any of the example embodiments. The steps/operations608may include one or more of the embodiments described or depicted and may represent output or written information that is written or read from one or more smart contracts630and/or blockchains620. The physical infrastructure610, the module612, and the module614may include one or more computers, servers, processors, memories, and/or wireless communication devices. Further, the module612and the module614may be a same module.

FIG.6Cillustrates an example system configured to utilize a smart contract configuration among contracting parties and a mediating server configured to enforce the smart contract terms on the blockchain according to example embodiments. Referring toFIG.6C, the configuration650may represent a communication session, an asset transfer session, or a process or procedure that is driven by a smart contract630which explicitly identifies one or more user devices652and/or656. The execution, operations and results of the smart contract execution may be managed by a server654. Content of the smart contract630may require digital signatures by one or more of the entities652and656which are parties to the smart contract transaction. The results of the smart contract execution may be written to a blockchain620as a blockchain transaction. The smart contract630resides on the blockchain620which may reside on one or more computers, servers, processors, memories, and/or wireless communication devices.

FIG.6Dillustrates a system660including a blockchain, according to example embodiments. Referring to the example ofFIG.6D, an application programming interface (API) gateway662provides a common interface for accessing blockchain logic (e.g., smart contract630or other chaincode) and data (e.g., distributed ledger, etc.). In this example, the API gateway662is a common interface for performing transactions (invoke, queries, etc.) on the blockchain by connecting one or more entities652and656to a blockchain peer (i.e., server654).
Here, the server654is a blockchain network peer component that holds a copy of the world state and a distributed ledger allowing clients652and656to query data on the world state as well as submit transactions into the blockchain network where, depending on the smart contract630and endorsement policy, endorsing peers will run the smart contracts630. The above embodiments may be implemented in hardware, in a computer program executed by a processor, in firmware, or in a combination of the above. A computer program may be embodied on a computer readable medium, such as a storage medium. For example, a computer program may reside in random access memory (“RAM”), flash memory, read-only memory (“ROM”), erasable programmable read-only memory (“EPROM”), electrically erasable programmable read-only memory (“EEPROM”), registers, hard disk, a removable disk, a compact disk read-only memory (“CD-ROM”), or any other form of storage medium. An exemplary storage medium may be coupled to the processor such that the processor may read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application specific integrated circuit (“ASIC”). In the alternative, the processor and the storage medium may reside as discrete components. FIG.7Aillustrates a process700of a new block being added to a distributed ledger720, according to example embodiments, andFIG.7Billustrates contents of a new data block structure730for blockchain, according to example embodiments. Referring toFIG.7A, clients (not shown) may submit transactions to blockchain nodes711,712, and/or713. Clients may be instructions received from any source to enact activity on the blockchain720. As an example, clients may be applications that act on behalf of a requester, such as a device, person or entity to propose transactions for the blockchain. The plurality of blockchain peers (e.g., blockchain nodes711,712, and713) may maintain a state of the blockchain network and a copy of the distributed ledger720. Different types of blockchain nodes/peers may be present in the blockchain network including endorsing peers which simulate and endorse transactions proposed by clients and committing peers which verify endorsements, validate transactions, and commit transactions to the distributed ledger720. In this example, the blockchain nodes711,712, and713may perform the role of endorser node, committer node, or both. The distributed ledger720includes a blockchain which stores immutable, sequenced records in blocks, and a state database724(current world state) maintaining a current state of the blockchain722. One distributed ledger720may exist per channel and each peer maintains its own copy of the distributed ledger720for each channel of which they are a member. The blockchain722is a transaction log, structured as hash-linked blocks where each block contains a sequence of N transactions. Blocks may include various components such as shown inFIG.7B. The linking of the blocks (shown by arrows inFIG.7A) may be generated by adding a hash of a prior block's header within a block header of a current block. In this way, all transactions on the blockchain722are sequenced and cryptographically linked together preventing tampering with blockchain data without breaking the hash links. Furthermore, because of the links, the latest block in the blockchain722represents every transaction that has come before it. 
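The hash-linking just described can be sketched in a few lines. The field names below are illustrative and do not reflect any particular ledger's wire format; the point is only that each header commits to the previous header's hash, so an earlier block cannot change without breaking every later link.

    import hashlib, json

    def header_hash(header):
        """Hash a block header deterministically (illustrative encoding)."""
        return hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()

    chain, prev = [], "0" * 64                      # genesis has no predecessor
    for number, txs in enumerate([["tx1", "tx2"], ["tx3"], ["tx4", "tx5"]]):
        header = {"number": number,
                  "previous_hash": prev,
                  "data_hash": hashlib.sha256("".join(txs).encode()).hexdigest()}
        chain.append({"header": header, "data": txs})
        prev = header_hash(header)

    # Tamper detection: every stored link must match the recomputed hash.
    for earlier, later in zip(chain, chain[1:]):
        assert later["header"]["previous_hash"] == header_hash(earlier["header"])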
The blockchain722may be stored on a peer file system (local or attached storage), which supports an append-only blockchain workload. The current state of the blockchain722and the distributed ledger720may be stored in the state database724. Here, the current state data represents the latest values for all keys ever included in the chain transaction log of the blockchain722. Chaincode invocations execute transactions against the current state in the state database724. To make these chaincode interactions extremely efficient, the latest values of all keys are stored in the state database724. The state database724may include an indexed view into the transaction log of the blockchain722; it can therefore be regenerated from the chain at any time. The state database724may automatically be recovered (or generated if needed) upon peer startup, before transactions are accepted.

Endorsing nodes receive transactions from clients and endorse the transaction based on simulated results. Endorsing nodes hold smart contracts which simulate the transaction proposals. When an endorsing node endorses a transaction, the endorsing node creates a transaction endorsement, which is a signed response from the endorsing node to the client application indicating the endorsement of the simulated transaction. The method of endorsing a transaction depends on an endorsement policy which may be specified within chaincode. An example of an endorsement policy is "the majority of endorsing peers must endorse the transaction." Different channels may have different endorsement policies. Endorsed transactions are forwarded by the client application to the ordering service710.

The ordering service710accepts endorsed transactions, orders them into a block, and delivers the blocks to the committing peers. For example, the ordering service710may initiate a new block when a threshold of transactions has been reached, a timer times out, or another condition is met. In the example ofFIG.7A, blockchain node712is a committing peer that has received a new data block730for storage on blockchain720. The first block in the blockchain may be referred to as a genesis block, which includes information about the blockchain, its members, the data stored therein, etc.

The ordering service710may be made up of a cluster of orderers. The ordering service710does not process transactions or smart contracts, and does not maintain the shared ledger. Rather, the ordering service710may accept the endorsed transactions and specify the order in which those transactions are committed to the distributed ledger720. The architecture of the blockchain network may be designed such that the specific implementation of 'ordering' (e.g., Solo, Kafka, BFT, etc.) becomes a pluggable component.

Transactions are written to the distributed ledger720in a consistent order. The order of transactions is established to ensure that the updates to the state database724are valid when they are committed to the network. Unlike a crypto-currency blockchain system (e.g., Bitcoin, etc.) where ordering occurs through the solving of a cryptographic puzzle, or mining, in this example the parties of the distributed ledger720may choose the ordering mechanism that best suits that network.

When the ordering service710initializes a new data block730, the new data block730may be broadcast to committing peers (e.g., blockchain nodes711,712, and713).
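A minimal sketch of the block-cutting rule mentioned above follows; the batch size and timeout are illustrative parameters, not values prescribed by any particular ordering implementation.

    import time

    BATCH_SIZE = 3          # assumed transaction-count threshold
    BATCH_TIMEOUT_S = 2.0   # assumed batch timer

    class Orderer:
        """Collects endorsed transactions and cuts a block on threshold or timeout."""
        def __init__(self):
            self.pending, self.started = [], None

        def submit(self, endorsed_tx):
            if self.started is None:
                self.started = time.monotonic()
            self.pending.append(endorsed_tx)
            return self._maybe_cut()

        def _maybe_cut(self):
            timed_out = (time.monotonic() - self.started) >= BATCH_TIMEOUT_S
            if len(self.pending) >= BATCH_SIZE or timed_out:
                block, self.pending, self.started = self.pending, [], None
                return block          # ordered block, ready for committing peers
            return None

Each non-None return value corresponds to a new data block handed to the committing peers for validation.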
In response, each committing peer validates the transaction within the new data block730by checking to make sure that the read set and the write set still match the current world state in the state database724. Specifically, the committing peer can determine whether the read data that existed when the endorsers simulated the transaction is identical to the current world state in the state database724. When the committing peer validates the transaction, the transaction is written to the blockchain722on the distributed ledger720, and the state database724is updated with the write data from the read-write set. If a transaction fails, that is, if the committing peer finds that the read-write set does not match the current world state in the state database724, the transaction ordered into a block will still be included in that block, but it will be marked as invalid, and the state database724will not be updated.

Referring toFIG.7B, a new data block730(also referred to as a data block) that is stored on the blockchain722of the distributed ledger720may include multiple data segments such as a block header740, block data750, and block metadata760. It should be appreciated that the various depicted blocks and their contents, such as the new data block730and its contents shown inFIG.7B, are merely examples and are not meant to limit the scope of the example embodiments. The new data block730may store transactional information of N transaction(s) (e.g., 1, 10, 100, 500, 1000, 2000, 3000, etc.) within the block data750. The new data block730may also include a link to a previous block (e.g., on the blockchain722inFIG.7A) within the block header740. In particular, the block header740may include a hash of a previous block's header. The block header740may also include a unique block number, a hash of the block data750of the new data block730, and the like. The block number of the new data block730may be unique and assigned in various orders, such as an incremental/sequential order starting from zero.

The block data750may store transactional information of each transaction that is recorded within the new data block730. For example, the transaction data may include one or more of a type of the transaction, a version, a timestamp, a channel ID of the distributed ledger720, a transaction ID, an epoch, a payload visibility, a chaincode path (deploy tx), a chaincode name, a chaincode version, input (chaincode and functions), a client (creator) identity such as a public key and certificate, a signature of the client, identities of endorsers, endorser signatures, a proposal hash, chaincode events, response status, namespace, a read set (list of key and version read by the transaction, etc.), a write set (list of key and value, etc.), a start key, an end key, a list of keys, a Merkle tree query summary, and the like. The transaction data may be stored for each of the N transactions.

In some embodiments, the block data750may also store new data762which adds additional information to the hash-linked chain of blocks in the blockchain722. The additional information includes one or more of the steps, features, processes and/or actions described or depicted herein. Accordingly, the new data762can be stored in an immutable log of blocks on the distributed ledger720. Some of the benefits of storing such new data762are reflected in the various embodiments disclosed and depicted herein. Although the new data762is depicted in the block data750inFIG.7B, it could also be located in the block header740or the block metadata760.
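The three block segments described forFIG.7Bcan be summarized by the following illustrative type sketch; the field names mirror the prose above, and the types are assumptions rather than any ledger's actual schema.

    from dataclasses import dataclass, field
    from typing import Any, List

    @dataclass
    class BlockHeader:
        number: int              # unique block number
        previous_hash: str       # hash of the prior block's header
        data_hash: str           # hash of this block's data section

    @dataclass
    class BlockData:
        transactions: List[dict] = field(default_factory=list)
        new_data: Any = None     # optional new data 762 described above

    @dataclass
    class BlockMetadata:
        creator_signature: bytes = b""
        last_config_block: int = 0
        transaction_filter: List[int] = field(default_factory=list)  # 1 entry per tx

    @dataclass
    class Block:
        header: BlockHeader
        data: BlockData
        metadata: BlockMetadata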
The block metadata760may store multiple fields of metadata (e.g., as a byte array, etc.). Metadata fields may include a signature on block creation, a reference to a last configuration block, a transaction filter identifying valid and invalid transactions within the block, a last offset persisted of an ordering service that ordered the block, and the like. The signature, the last configuration block, and the orderer metadata may be added by the ordering service710. Meanwhile, a committer of the block (such as blockchain node712) may add validity/invalidity information based on an endorsement policy, verification of read/write sets, and the like. The transaction filter may include a byte array of a size equal to the number of transactions in the block data750and a validation code identifying whether a transaction was valid/invalid.

FIG.7Cillustrates an embodiment of a blockchain770for digital content in accordance with the embodiments described herein. The digital content may include one or more files and associated information. The files may include media, images, video, audio, text, links, graphics, animations, web pages, documents, or other forms of digital content. The immutable, append-only aspects of the blockchain serve as a safeguard to protect the integrity, validity, and authenticity of the digital content, making it suitable for use in legal proceedings where admissibility rules apply or other settings where evidence is taken into consideration or where the presentation and use of digital information is otherwise of interest. In this case, the digital content may be referred to as digital evidence.

The blockchain may be formed in various ways. In one embodiment, the digital content may be included in and accessed from the blockchain itself. For example, each block of the blockchain may store a hash value of reference information (e.g., header, value, etc.) along with the associated digital content. The hash value and associated digital content may then be encrypted together. Thus, the digital content of each block may be accessed by decrypting each block in the blockchain, and the hash value of each block may be used as a basis to reference a previous block. This may be illustrated as follows:

Block 1              Block 2              . . .    Block N
Hash Value 1         Hash Value 2         . . .    Hash Value N
Digital Content 1    Digital Content 2    . . .    Digital Content N

In one embodiment, the digital content may not be included in the blockchain. For example, the blockchain may store the encrypted hashes of the content of each block without any of the digital content. The digital content may be stored in another storage area or memory address in association with the hash value of the original file. The other storage area may be the same storage device used to store the blockchain, or it may be a different storage area or even a separate relational database. The digital content of each block may be referenced or accessed by obtaining or querying the hash value of a block of interest and then looking up that hash value in the storage area, which is stored in correspondence with the actual digital content. This operation may be performed, for example, by a database gatekeeper. This may be illustrated as follows:

Blockchain                Storage Area
Block 1 Hash Value        Block 1 Hash Value . . . Content
. . .                     . . .
Block N Hash Value        Block N Hash Value . . . Content

In the example embodiment ofFIG.7C, the blockchain770includes a number of blocks 778_1, 778_2, . . . , 778_N cryptographically linked in an ordered sequence, where N≥1. The encryption used to link the blocks 778_1, 778_2, . . . ,
778_N may be any of a number of keyed or un-keyed hash functions. In one embodiment, the blocks 778_1, 778_2, . . . , 778_N are subject to a hash function which produces n-bit alphanumeric outputs (where n is 256 or another number) from inputs that are based on information in the blocks. Examples of such a hash function include, but are not limited to, a SHA-type (SHA stands for Secure Hash Algorithm) algorithm, a Merkle-Damgård algorithm, a HAIFA algorithm, a Merkle-tree algorithm, a nonce-based algorithm, and a non-collision-resistant PRF algorithm. In another embodiment, the blocks 778_1, 778_2, . . . , 778_N may be cryptographically linked by a function that is different from a hash function. For purposes of illustration, the following description is made with reference to a hash function, e.g., SHA-2.

Each of the blocks 778_1, 778_2, . . . , 778_N in the blockchain includes a header, a version of the file, and a value. The header and the value are different for each block as a result of hashing in the blockchain. In one embodiment, the value may be included in the header. As described in greater detail below, the version of the file may be the original file or a different version of the original file.

The first block 778_1 in the blockchain is referred to as the genesis block and includes the header 772_1, the original file 774_1, and an initial value 776_1. The hashing scheme used for the genesis block, and indeed in all subsequent blocks, may vary. For example, all the information in the first block 778_1 may be hashed together at one time, or each or a portion of the information in the first block 778_1 may be separately hashed and then a hash of the separately hashed portions may be performed.

The header 772_1 may include one or more initial parameters, which, for example, may include a version number, timestamp, nonce, root information, difficulty level, consensus protocol, duration, media format, source, descriptive keywords, and/or other information associated with the original file 774_1 and/or the blockchain. The header 772_1 may be generated automatically (e.g., by blockchain network managing software) or manually by a blockchain participant. Unlike the headers in the other blocks 778_2 to 778_N in the blockchain, the header 772_1 in the genesis block does not reference a previous block, simply because there is no previous block.

The original file 774_1 in the genesis block may be, for example, data as captured by a device with or without processing prior to its inclusion in the blockchain. The original file 774_1 is received through the interface of the system from the device, media source, or node. The original file 774_1 is associated with metadata, which, for example, may be generated by a user, the device, and/or the system processor, either manually or automatically. The metadata may be included in the first block 778_1 in association with the original file 774_1.

The value 776_1 in the genesis block is an initial value generated based on one or more unique attributes of the original file 774_1. In one embodiment, the one or more unique attributes may include the hash value for the original file 774_1, metadata for the original file 774_1, and other information associated with the file.
In one implementation, the initial value 776_1 may be based on the following unique attributes:
1) the SHA-2 computed hash value for the original file
2) the originating device ID
3) the starting timestamp for the original file
4) the initial storage location of the original file
5) the blockchain network member ID for the software currently controlling the original file and associated metadata

The other blocks 778_2 to 778_N in the blockchain also have headers, files, and values. However, unlike the first block 778_1, each of the headers 772_2 to 772_N in the other blocks includes the hash value of an immediately preceding block. The hash value of the immediately preceding block may be just the hash of the header of the previous block or may be the hash value of the entire previous block. By including the hash value of a preceding block in each of the remaining blocks, a trace can be performed from the Nth block back to the genesis block (and the associated original file) on a block-by-block basis, as indicated by arrows780, to establish an auditable and immutable chain-of-custody.

Each of the headers 772_2 to 772_N in the other blocks may also include other information, e.g., version number, timestamp, nonce, root information, difficulty level, consensus protocol, and/or other parameters or information associated with the corresponding files and/or the blockchain in general.

The files 774_2 to 774_N in the other blocks may be equal to the original file or may be a modified version of the original file in the genesis block depending, for example, on the type of processing performed. The type of processing performed may vary from block to block. The processing may involve, for example, any modification of a file in a preceding block, such as redacting information or otherwise changing the content of, taking information away from, or adding or appending information to the files. Additionally, or alternatively, the processing may involve merely copying the file from a preceding block, changing a storage location of the file, analyzing the file from one or more preceding blocks, moving the file from one storage or memory location to another, or performing some other action relative to the file of the blockchain and/or its associated metadata. Processing which involves analyzing a file may include, for example, appending, including, or otherwise associating various analytics, statistics, or other information associated with the file.

The values 776_2 to 776_N in the other blocks are unique values and are all different as a result of the processing performed. For example, the value in any one block corresponds to an updated version of the value in the previous block. The update is reflected in the hash of the block to which the value is assigned. The values of the blocks therefore provide an indication of what processing was performed in the blocks and also permit a tracing through the blockchain back to the original file. This tracking confirms the chain-of-custody of the file throughout the entire blockchain.

For example, consider the case where portions of the file in a previous block are redacted, blocked out, or pixelated in order to protect the identity of a person shown in the file. In this case, the block including the redacted file will include metadata associated with the redacted file, e.g., how the redaction was performed, who performed the redaction, timestamps where the redaction(s) occurred, etc. The metadata may be hashed to form the value.
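As an illustration, the initial value 776_1 could be computed as a single digest over the five attributes listed above. The attribute names and placeholder values below are hypothetical; only the general shape (a hash over the unique attributes) follows the text.

    import hashlib, json

    attributes = {
        "file_hash": hashlib.sha256(b"original file bytes").hexdigest(),  # 1)
        "device_id": "device-001",                                        # 2)
        "start_timestamp": 1700000000,                                    # 3)
        "storage_location": "/vault/original.bin",                        # 4)
        "controlling_member_id": "member-042",                            # 5)
    }
    initial_value = hashlib.sha256(
        json.dumps(attributes, sort_keys=True).encode()).hexdigest()
    print(initial_value)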
Because the metadata for the block is different from the information that was hashed to form the value in the previous block, the values are different from one another and may be recovered when decrypted. In one embodiment, the value of a previous block may be updated (e.g., a new hash value computed) to form the value of a current block when any one or more of the following occurs. In this example embodiment, the new hash value may be computed by hashing all or a portion of the information noted below:
a) a new SHA-2 computed hash value if the file has been processed in any way (e.g., if the file was redacted, copied, altered, accessed, or some other action was taken)
b) a new storage location for the file
c) new metadata identified as being associated with the file
d) transfer of access or control of the file from one blockchain participant to another blockchain participant

FIG.7Dillustrates an embodiment of a block which may represent the structure of the blocks in the blockchain790in accordance with one embodiment. The block, Block_i, includes a header 772_i, a file 774_i, and a value 776_i.

The header 772_i includes a hash value of a previous block Block_(i-1) and additional reference information, which, for example, may be any of the types of information (e.g., header information including references, characteristics, parameters, etc.) discussed herein. All blocks reference the hash of a previous block except, of course, the genesis block. The hash value of the previous block may be just a hash of the header in the previous block or a hash of all or a portion of the information in the previous block, including the file and metadata.

The file 774_i includes a plurality of data, such as Data 1, Data 2, . . . , Data N in sequence. The data are tagged with metadata Metadata 1, Metadata 2, . . . , Metadata N which describe the content and/or characteristics associated with the data. For example, the metadata for each data may include information indicating a timestamp for the data, processing applied to the data, keywords indicating the persons or other content depicted in the data, and/or other features that may be helpful to establish the validity and content of the file as a whole, particularly its use as digital evidence, for example, as described in connection with an embodiment discussed below. In addition to the metadata, each data may be tagged with a reference REF 1, REF 2, . . . , REF N to a previous data to prevent tampering and gaps in the file, and to provide sequential referencing through the file.

Once the metadata is assigned to the data (e.g., through a smart contract), the metadata cannot be altered without the hash changing, which can easily be identified for invalidation. The metadata, thus, creates a data log of information that may be accessed for use by participants in the blockchain.

The value 776_i is a hash value or other value computed based on any of the types of information previously discussed. For example, for any given block Block_i, the value for that block may be updated to reflect the processing that was performed for that block, e.g., a new hash value, a new storage location, new metadata for the associated file, a transfer of control or access, an identifier, or other action or information to be added. Although the value in each block is shown to be separate from the metadata for the data of the file and header, the value may be based, in part or in whole, on this metadata in another embodiment.
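The update rule in items a) through d) might be sketched as follows; the event structure is hypothetical, and only the general pattern (the new value is a hash over the previous value together with whatever changed) comes from the text.

    import hashlib, json

    def updated_value(previous_value, event):
        """Compute a block's new value from the prior value plus the change."""
        payload = json.dumps({"prev": previous_value, "event": event},
                             sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    v_prev = hashlib.sha256(b"genesis attributes").hexdigest()
    v_next = updated_value(v_prev, {
        "action": "redaction",                                  # item a)
        "performed_by": "participant-A",
        "new_file_hash": hashlib.sha256(b"redacted file").hexdigest(),
    })
    print(v_next != v_prev)   # True: processing always changes the value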
Once the blockchain770is formed, at any point in time, the immutable chain-of-custody for the file may be obtained by querying the blockchain for the transaction history of the values across the blocks. This query, or tracking procedure, may begin with decrypting the value of the block that is most currently included (e.g., the last (Nth) block), and then continuing to decrypt the value of the other blocks until the genesis block is reached and the original file is recovered. The decryption may involve decrypting the headers and files and associated metadata at each block, as well. Decryption is performed based on the type of encryption that took place in each block. This may involve the use of private keys, public keys, or a public key-private key pair. For example, when asymmetric encryption is used, blockchain participants or a processor in the network may generate a public key and private key pair using a predetermined algorithm. The public key and private key are associated with each other through some mathematical relationship. The public key may be distributed publicly to serve as an address to receive messages from other users, e.g., an IP address or home address. The private key is kept secret and used to digitally sign messages sent to other blockchain participants. The signature is included in the message so that the recipient can verify using the public key of the sender. This way, the recipient can be sure that only the sender could have sent this message. Generating a key pair may be analogous to creating an account on the blockchain, but without having to actually register anywhere. Also, every transaction that is executed on the blockchain is digitally signed by the sender using their private key. This signature ensures that only the owner of the account can track and process (if within the scope of permission determined by a smart contract) the file of the blockchain. FIGS.8A and8Billustrate additional examples of use cases for blockchain which may be incorporated and used herein. In particular,FIG.8Aillustrates an example800of a blockchain810which stores machine learning (artificial intelligence) data. Machine learning relies on vast quantities of historical data (or training data) to build predictive models for accurate prediction on new data. Machine learning software (e.g., neural networks, etc.) can often sift through millions of records to unearth non-intuitive patterns. In the example ofFIG.8A, a host platform820builds and deploys a machine learning model for predictive monitoring of assets830. Here, the host platform820may be a cloud platform, an industrial server, a web server, a personal computer, a user device, and the like. Assets830can be any type of asset (e.g., machine or equipment, etc.) such as an aircraft, locomotive, turbine, medical machinery and equipment, oil and gas equipment, boats, ships, vehicles, and the like. As another example, assets830may be non-tangible assets such as stocks, currency, digital coins, insurance, or the like. The blockchain810can be used to significantly improve both a training process802of the machine learning model and a predictive process804based on a trained machine learning model. For example, in802, rather than requiring a data scientist/engineer or other user to collect the data, historical data may be stored by the assets830themselves (or through an intermediary, not shown) on the blockchain810. This can significantly reduce the collection time needed by the host platform820when performing predictive model training. 
For example, using smart contracts, data can be directly and reliably transferred straight from its place of origin to the blockchain810. By using the blockchain810to ensure the security and ownership of the collected data, smart contracts may directly send the data from the assets to the individuals that use the data for building a machine learning model. This allows for sharing of data among the assets830.

The collected data may be stored in the blockchain810based on a consensus mechanism. The consensus mechanism pulls in permissioned nodes to ensure that the data being recorded is verified and accurate. The data recorded is time-stamped, cryptographically signed, and immutable. It is therefore auditable, transparent, and secure. Adding IoT devices which write directly to the blockchain can, in certain cases (e.g., supply chain, healthcare, logistics, etc.), increase both the frequency and accuracy of the data being recorded.

Furthermore, training of the machine learning model on the collected data may take rounds of refinement and testing by the host platform820. Each round may be based on additional data or data that was not previously considered to help expand the knowledge of the machine learning model. In802, the different training and testing steps (and the data associated therewith) may be stored on the blockchain810by the host platform820. Each refinement of the machine learning model (e.g., changes in variables, weights, etc.) may be stored on the blockchain810. This provides verifiable proof of how the model was trained and what data was used to train the model. Furthermore, when the host platform820has achieved a finally trained model, the resulting model may be stored on the blockchain810.

After the model has been trained, it may be deployed to a live environment where it can make predictions/decisions based on the execution of the final trained machine learning model. For example, in804, the machine learning model may be used for condition-based maintenance (CBM) for an asset such as an aircraft, a wind turbine, a healthcare machine, and the like. In this example, data fed back from the asset830may be input to the machine learning model and used to make event predictions such as failure events, error codes, and the like. Determinations made by the execution of the machine learning model at the host platform820may be stored on the blockchain810to provide auditable/verifiable proof. As one non-limiting example, the machine learning model may predict a future breakdown/failure of a part of the asset830and create an alert or a notification to replace the part. The data behind this decision may be stored by the host platform820on the blockchain810. In one embodiment the features and/or the actions described and/or depicted herein can occur on or with respect to the blockchain810.

New transactions for a blockchain can be gathered together into a new block and added to an existing hash value. This is then encrypted to create a new hash for the new block. This is added to the next list of transactions when they are encrypted, and so on. The result is a chain of blocks that each contain the hash values of all preceding blocks. Computers that store these blocks regularly compare their hash values to ensure that they are all in agreement. Any computer that does not agree discards the records that are causing the problem. This approach is good for ensuring tamper-resistance of the blockchain, but it is not perfect.
One way to game this system is for a dishonest user to change the list of transactions in their favor, but in a way that leaves the hash unchanged. This can be done by brute force, in other words by changing a record, encrypting the result, and seeing whether the hash value is the same. If it is not, the attacker tries again and again until a matching hash is found. The security of blockchains is based on the belief that ordinary computers can only perform this kind of brute force attack over time scales that are entirely impractical, such as the age of the universe. By contrast, quantum computers are much faster (thousands of times faster) and consequently pose a much greater threat.

FIG.8Billustrates an example850of a quantum-secure blockchain852which implements quantum key distribution (QKD) to protect against a quantum computing attack. In this example, blockchain users can verify each other's identities using QKD. This sends information using quantum particles such as photons, which cannot be copied by an eavesdropper without destroying them. In this way, a sender and a receiver through the blockchain can be sure of each other's identity.

In the example ofFIG.8B, four users are present:854,856,858, and860. Each pair of users may share a secret key862(i.e., a QKD) between themselves. Since there are four nodes in this example, six pairs of nodes exist, and therefore six different secret keys862are used, including QKD_AB, QKD_AC, QKD_AD, QKD_BC, QKD_BD, and QKD_CD. Each pair can create a QKD by sending information using quantum particles such as photons, which cannot be copied by an eavesdropper without destroying them. In this way, a pair of users can be sure of each other's identity.

The operation of the blockchain852is based on two procedures: (i) creation of transactions, and (ii) construction of blocks that aggregate the new transactions. New transactions may be created similar to a traditional blockchain network. Each transaction may contain information about a sender, a receiver, a time of creation, an amount (or value) to be transferred, a list of reference transactions that justifies that the sender has funds for the operation, and the like. This transaction record is then sent to all other nodes where it is entered into a pool of unconfirmed transactions. Here, two parties (i.e., a pair of users from among854-860) authenticate the transaction by providing their shared secret key862(QKD). This quantum signature can be attached to every transaction, making it exceedingly difficult to tamper with. Each node checks their entries with respect to a local copy of the blockchain852to verify that each transaction has sufficient funds. However, the transactions are not yet confirmed.

Rather than perform a traditional mining process on the blocks, the blocks may be created in a decentralized manner using a broadcast protocol. At a predetermined period of time (e.g., seconds, minutes, hours, etc.), the network may apply the broadcast protocol to any unconfirmed transactions, thereby achieving a Byzantine agreement (consensus) regarding a correct version of the transaction. For example, each node may possess a private value (transaction data of that particular node). In a first round, nodes transmit their private values to each other. In subsequent rounds, nodes communicate the information they received in the previous round from other nodes. Here, honest nodes are able to create a complete set of transactions within a new block. This new block can be added to the blockchain852.
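As a toy illustration of those broadcast rounds (ignoring dishonest nodes and the quantum-authentication layer entirely), each node starts with its own private transactions and relays everything it has seen; after enough rounds, the honest nodes hold the same complete set.

    nodes = {"A": {"tx1"}, "B": {"tx2"}, "C": {"tx3"}, "D": {"tx4"}}

    for _ in range(2):                       # rounds of all-to-all exchange
        seen = set().union(*nodes.values())  # everything broadcast so far
        for name in nodes:
            nodes[name] = set(seen)          # each node relays what it received

    # All honest nodes now agree on one complete transaction set.
    assert all(txs == {"tx1", "tx2", "tx3", "tx4"} for txs in nodes.values())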
In one embodiment the features and/or the actions described and/or depicted herein can occur on or with respect to the blockchain852. FIG.9illustrates an example system900that supports one or more of the example embodiments described and/or depicted herein. The system900comprises a computer system/server902, which is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system/server902include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like. Computer system/server902may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system/server902may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices. As shown inFIG.9, computer system/server902in cloud computing node900is shown in the form of a general-purpose computing device. The components of computer system/server902may include, but are not limited to, one or more processors or processing units904, a system memory906, and a bus that couples various system components including system memory906to processor904. The bus represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus. Computer system/server902typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server902, and it includes both volatile and non-volatile media, removable and non-removable media. System memory906, in one embodiment, implements the flow diagrams of the other figures. The system memory906can include computer system readable media in the form of volatile memory, such as random-access memory (RAM)910and/or cache memory912. Computer system/server902may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system914can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). 
Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to the bus by one or more data media interfaces. As will be further depicted and described below, memory906may include one or more program products having a set (e.g., one or more) of program modules that are configured to carry out the functions of various embodiments of the application.

Program/utility916, having a set (one or more) of program modules918, may be stored in memory906by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Program modules918generally carry out the functions and/or methodologies of various embodiments of the application as described herein.

As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method, or computer program product. Accordingly, aspects of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, aspects of the present application may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.

Computer system/server902may also communicate with one or more external devices920such as a keyboard, a pointing device, a display922, etc.; one or more devices that enable a user to interact with computer system/server902; and/or any devices (e.g., network card, modem, etc.) that enable computer system/server902to communicate with one or more other computing devices. Such communication can occur via I/O interfaces924. Still yet, computer system/server902can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter926. As depicted, network adapter926communicates with the other components of computer system/server902via a bus. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system/server902. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc.

Examples of Technological Innovations

One or more of the aforementioned embodiments provide an enhancement to how blockchain itself works rather than an application that uses blockchain. The advantages introduced by the one or more embodiments may retain and/or enforce blockchain characteristics, including but not limited to provenance and immutability. Moreover, one or more of the embodiments may solve technical problems associated with, and may advance, optimizing how data is stored without compromising blockchain characteristics.
Replacing blockchain with a traditional database would defeat these improvements altogether. One or more of the aforementioned embodiments may also represent an improvement to computer functionality utilizing the blockchain (e.g., improve how data is stored, arranged, and/or retrieved, reduce data that needs to be stored, implement a new storage mechanism, add security to data, improve processing speed, improve security, etc.).

In at least one implementation, the one or more embodiments may improve or optimize the way data is stored on a blockchain ledger. For example, recurring values of fields of transactions stored in a ledger may be detected and references to them may be stored on the ledger, as opposed to storing actual values each time they occur in a new transaction to be committed to the ledger. For large field values (e.g., binary files, photos, and BLOBs), storing references consumes much less storage space than storing the actual values (e.g., the content of the binary file). Also, the speed of data retrieval may be improved using a dictionary of references to the first occurrence of such values. Furthermore, one or more embodiments analyze existing data to determine which field value(s) may be good candidates for applying the algorithm. As a result, improved or maximized benefit may be achieved from a data storage savings perspective.

One or more embodiments may also store at least two new types of data in blocks of a blockchain. The first type of new data may include a reference to a prior or first occurrence of the same field value. The reference is stored in place of the field value which is in the data section of a block. The improvement(s) this new data type brings includes, for example, the saving of storage space in the form of the difference between the size of a BLOB or a large binary file and the size needed to store a reference to the address of a previous occurrence of such an object on the ledger. The second type of new data may include a dictionary that holds the references mentioned above, for example, in a tree-indexed format for faster retrieval. This is stored as data in the data section of a normal data block and is maintained by the blockchain like any other data block, so that it retains the privacy, provenance, and immutability of the references data, too.

Although an exemplary embodiment of at least one of a system, method, and non-transitory computer readable medium has been illustrated in the accompanied drawings and described in the foregoing detailed description, it will be understood that the application is not limited to the embodiments disclosed, but is capable of numerous rearrangements, modifications, and substitutions as set forth and defined by the following claims. For example, the capabilities of the system of the various figures can be performed by one or more of the modules or components described herein or in a distributed architecture and may include a transmitter, receiver or pair of both. For example, all or part of the functionality performed by the individual modules may be performed by one or more of these modules. Further, the functionality described herein may be performed at various times and in relation to various events, internal or external to the modules or components. Also, the information sent between various modules can be sent between the modules via one or more of: a data network, the Internet, a voice network, an Internet Protocol network, a wireless device, a wired device and/or via a plurality of protocols.
Also, the messages sent or received by any of the modules may be sent or received directly and/or via one or more of the other modules. One skilled in the art will appreciate that a “system” could be embodied as a personal computer, a server, a console, a personal digital assistant (PDA), a cell phone, a tablet computing device, a smartphone or any other suitable computing device, or combination of devices. Presenting the above-described functions as being performed by a “system” is not intended to limit the scope of the present application in any way but is intended to provide one example of many embodiments. Indeed, methods, systems and apparatuses disclosed herein may be implemented in localized and distributed forms consistent with computing technology. It should be noted that some of the system features described in this specification have been presented as modules, in order to more particularly emphasize their implementation independence. For example, a module may be implemented as a hardware circuit comprising custom very large-scale integration (VLSI) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, graphics processing units, or the like. A module may also be at least partially implemented in software for execution by various types of processors. An identified unit of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions that may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module. Further, modules may be stored on a computer-readable medium, which may be, for instance, a hard disk drive, flash device, random access memory (RAM), tape, or any other such medium used to store data. Indeed, a module of executable code could be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within modules and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network. It will be readily understood that the components of the application, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the detailed description of the embodiments is not intended to limit the scope of the application as claimed but is merely representative of selected embodiments of the application. One having ordinary skill in the art will readily understand that the above may be practiced with steps in a different order, and/or with hardware elements in configurations that are different than those which are disclosed. 
Therefore, although the application has been described based upon these preferred embodiments, certain modifications, variations, and alternative constructions would be apparent to those of skill in the art. While preferred embodiments of the present application have been described, it is to be understood that the embodiments described are illustrative only, and the scope of the application is to be defined solely by the appended claims when considered with a full range of equivalents and modifications (e.g., protocols, hardware devices, software platforms, etc.) thereto.
128,894
11860857
DETAILED DESCRIPTION The disclosed technology relates generally to a system that receives submissions related to a point of interest and determines whether to approve, or publish, the submissions. For instance, the system may receive a submission including one or more types of content related to the point of interest. The submission may include content that is different than what is currently published or publicly accessible relating to the point of interest. The types of content may include a photo, phone number, address, business hours, etc. relating and/or corresponding to the point of interest. The content may be moderated, or analyzed, to determine whether to accept the content in the submission, reject the content in the submission, or whether more content is necessary to make a determination. The submission may be stored in an input database. An input processor may convert the submission to make the submission compatible with, or consumable by, a machine learning (“ML”) model. The ML model may be trained to determine whether the submission meets a threshold. The threshold may correspond to the quality of the submission. In some examples, the threshold may be a score corresponding to the quality or accuracy of the submission. In other examples, the threshold may comprise a metric that indicates a confidence level that the submission is related to the point of interest. The metric may be determined by reference to other information linked to the point of interest, including images, objects in images, other points of interest within range of a reference point, or other data indicating that the point of interest is associated with the information provided in the submission. The threshold may be met, for example, if the metric indicates that a given number, e.g., three, items of other information associated with the point of interest correlate to information in the submission. The other information may comprise information autonomously accessed by the system from a database or as a result of computing operations associated with data mining publicly available sources, e.g., images, articles, etc. The input features to the ML model may include the types of content within the submission, such as the name, address, phone number, street view image, interior images, business hours, etc. pertaining to the point of interest. The ML model may use the input features to more accurately determine whether the submission meets a threshold. The output of processing using the ML model includes information reflecting a determination whether the submission meets the threshold. According to some examples, the submission may include a type of content unknown to the ML model. The ML model may classify the new type of content. For example, the submission may include a list of specials for a restaurant on a particular day. The ML model may determine the new type of content to be “daily specials.” The ML model may then be updated so that the submission is associated with “daily specials.” An input manager may manage the submission and any notifications regarding the submission. In examples where the ML model determines that the submission meets the threshold, the input manager may transmit a notification indicating that the submission has been published. For example, the notification may include a uniform resource locator (“URL”) or website address corresponding to the point of interest. The content of the submission may be found in that URL or website address. 
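As a rough sketch of the confidence metric described above, the threshold might be evaluated as follows; the equality test standing in for "correlation" and the field names are hypothetical simplifications:

```python
# The submission meets the threshold if at least a given number (e.g., three)
# of items of other information linked to the point of interest correlate
# with the information provided in the submission.

def meets_threshold(submission: dict, linked_info: dict, required: int = 3) -> bool:
    correlated = 0
    for key, known_value in linked_info.items():
        submitted = submission.get(key)
        if submitted is not None and submitted == known_value:
            correlated += 1
    return correlated >= required

submission = {"name": "Bowl-a-Rama", "address": "123 Main St.", "phone": "555-0100"}
linked_info = {"name": "Bowl-a-Rama", "address": "123 Main St.", "phone": "555-0100"}
print(meets_threshold(submission, linked_info))  # True: three items correlate
```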
In some examples, the ML model may determine that the submission does not meet the threshold. In response, the input manager may transmit a notification indicating that the submission does not meet the threshold. According to some examples, the notification may indicate specific additional content needed such that the submission would meet the threshold. The system may receive a secondary submission related to the original submission. The secondary submission may be combined with and/or added to the original submission to create an updated submission. The updated submission may be moderated similar to the original submission. In such an example, the input manager may direct the updated submission to go through remoderation. The updated submission may then be analyzed to determine the type of content within the submission. The types of content may be compared to one or more training examples, or models, within the ML model. If the updated submission meets a threshold corresponding to the point of interest, the information relating to the point of interest is updated based on the content in the submission. If the submission does not meet the threshold, a notification is transmitted requesting an additional submission. Example Systems FIGS.1A and1Billustrate an example system in which the features described above may be implemented. It should not be considered as limiting the scope of the disclosure or usefulness of the features described herein. In this example, system100may include a plurality of computing devices110,120,130, storage system140, server computing device150, and network160. Each computing device110,120,130may include one or more processors111,121,131, memory112,122,132, data114,124,134, and instructions113,123,133. Each of computing devices110,120,130may also include a display115,125,135and user input116,126,136. For purposes of ease, computing devices110,120,130may be collectively or individually referred to as computing device110, the one or more processors111,121,131may be collectively or individually referred to as one or more processors111, memory112,122,132may be collectively or individually referred to as memory112, data114,124,134may be collectively or individually referred to as data114, instructions113,123,133may be collectively or individually referred to as instructions113, display115,125,135may be collectively or individually referred to as display115, and user input116,126,136may be collectively or individually referred to as user input116. Memory112of computing device110may store information that is accessible by processor111. Memory112may also include data that can be retrieved, manipulated or stored by the processor111. The memory112may be of any non-transitory type capable of storing information accessible by the processor111, including a non-transitory computer-readable medium, or other medium that stores data that may be read with the aid of an electronic device, such as a hard-drive, memory card, read-only memory (“ROM”), random access memory (“RAM”), optical disks, as well as other write-capable and read-only memories. Memory112may store information that is accessible by the processors111, including instructions113that may be executed by processors111, and data114. Data114may be retrieved, stored or modified by processors111in accordance with instructions113. 
For instance, although the present disclosure is not limited by a particular data structure, the data114may be stored in computer registers, in a relational database as a table having a plurality of different fields and records, XML documents, or flat files. The data114may also be formatted in a computer-readable format such as, but not limited to, binary values, ASCII or Unicode. By further way of example only, the data114may comprise information sufficient to identify the relevant information, such as numbers, descriptive text, proprietary codes, pointers, references to data stored in other memories (including other network locations) or information that is used by a function to calculate the relevant data. The instructions113can be any set of instructions to be executed directly, such as machine code, or indirectly, such as scripts, by the processor111. In that regard, the terms “instructions,” “application,” “steps,” and “programs” can be used interchangeably herein. The instructions can be stored in object code format for direct processing by the processor, or in any other computing device language including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. Functions, methods and routines of the instructions are explained in more detail below. The one or more processors111may include any conventional processors, such as a commercially available CPU or microprocessor. Alternatively, the processor can be a dedicated component such as an ASIC or other hardware-based processor. Although not necessary, computing devices110may include specialized hardware components to perform specific computing functions faster or more efficiently. AlthoughFIG.1Afunctionally illustrates the processor, memory, and other elements of computing devices110as being within the same respective blocks, it will be understood by those of ordinary skill in the art that the processor or memory may actually include multiple processors or memories that may or may not be stored within the same physical housing. Similarly, the memory may be a hard drive or other storage media located in a housing different from that of the computing devices110. Accordingly, references to a processor or computing device will be understood to include references to a collection of processors or computing devices or memories that may or may not operate in parallel. Display115and other displays described herein may be any type of display, such as a monitor having a screen, a touch-screen, a projector, or a television. The display115of the one or more computing devices110may electronically display information to a user via a graphical user interface (“GUI”) or other types of user interfaces. For example, as will be discussed below, display115may electronically display a notification indicating the status of a submission relating to a point of interest. The user inputs116may be a mouse, keyboard, touch-screen, microphone, or any other type of input. The computing devices110may be located at various nodes of a network160and capable of directly and indirectly communicating with other nodes of network160. Although three (3) computing devices are depicted inFIG.1A, it should be appreciated that a typical system can include one or more computing devices, with each computing device being at a different node of network160. 
The network160and intervening nodes described herein can be interconnected using various protocols and systems, such that the network can be part of the Internet, World Wide Web, specific intranets, wide area networks, or local networks. The network160can utilize standard communications protocols, such as WiFi, as well as protocols that are proprietary to one or more companies. Although certain advantages are obtained when information is transmitted or received as noted above, other aspects of the subject matter described herein are not limited to any particular manner of transmission. In one example, system100may include one or more server computing devices150having a plurality of computing devices, e.g., a load balanced server farm, that exchange information with different nodes of a network for the purpose of receiving, processing and transmitting the data to and from other computing devices. For instance, one or more server computing devices150may be a web server that is capable of communicating with the one or more client computing devices110via the network160. In addition, server computing device150may use network160to transmit and present information to a user of one of the other computing devices110. Server computing device150may include one or more processors151, memory152, instructions153, and data154. These components operate in the same or similar fashion as those described above with respect to computing devices110. Server computing device150may include a moderation channel170. Moderation channel170may receive, store, analyze, etc., a submission related to a point of interest. The submission may include one or more suggested updates related to the point of interest. For example, the submission may include update information corresponding to the hours of operation, the location, photos of the front facade or interior of the point of interest, etc. The moderation channel170may include input database172, dispatcher174, edit state machine176, edit manager178, remoderator180, input processor182, and moderation platform184. AlthoughFIG.1Bfunctionally illustrates the input database172, dispatcher174, edit state machine176, edit manager178, remoderator180, input processor182, and moderation platform184as being within the moderation channel170, it will be understood by those of ordinary skill in the art that the components may not be stored within the same physical housing. It will be understood that each component of the moderation channel may actually include multiple processors or memories that may or may not be stored within the same physical housing. Input database172may store one or more submissions related to a point of interest. Dispatcher174may receive and/or retrieve the submission from input database172and dispatch the submission to the next step of processing. According to some examples, dispatcher174may create a ticket or internal identifier to track the proposed change with respect to the point of interest. Dispatcher174may dispatch submissions in one or more parts to one or more components within moderation channel170. Edit state machine176may change and/or update one or more workflow states of the submission. For example, the one or more workflow states may include pending, approved, more information required, rejected, etc. The workflow state of the submission may change one or more times from the time the submission is received by input database172until the time the submission exits moderation channel170. 
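A minimal sketch of such a workflow state machine appears below; the transition table is an assumption based on the states listed above, not a definitive specification:

```python
# Hypothetical allowed transitions between the workflow states named above.
ALLOWED = {
    "pending": {"approved", "more information required", "rejected"},
    "more information required": {"pending", "rejected"},
    "approved": set(),   # terminal: submission exits the moderation channel
    "rejected": set(),   # terminal
}

class EditStateMachine:
    def __init__(self):
        self.state = "pending"

    def transition(self, new_state: str) -> None:
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state!r} -> {new_state!r}")
        self.state = new_state

sm = EditStateMachine()
sm.transition("more information required")
sm.transition("pending")    # e.g., after a combined submission arrives
sm.transition("approved")
```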
The submission may exit moderation channel170after the submission is published, when the submission is rejected, etc. Edit manager178may move and/or direct the submission to a different component within moderation channel170based on the workflow state. For example, when the workflow state of the submission is pending, edit manager178may direct the submission to moderation platform184. In some examples, when the workflow state of the submission is “more information required,” edit manager178may direct the submission to remoderator180. Edit manager178may provide and/or transmit notifications to computing device110indicating the state of the submission. For example, if edit state machine176updates the submission workflow state to approved, edit manager178may move the submission to storage system140. In such an example, edit manager178may transmit a notification to the computing device110indicating that the submission has been approved. According to some examples, edit state machine176may update the submission workflow state to pending. In such an example, edit manager178may move the submission to moderation platform184and transmit a notification to the computing device110that the submission is pending or currently undergoing review. Remoderator180may combine one or more submissions related to a point of interest. For example, server computing device150may receive a first submission related to a point of interest. The first submission may not include enough information, may not include properly formatted information, may include inaccurate or conflicting information, etc., and, therefore, may not be able to be validated such that the content related to the point of interest is updated. In such an example, edit manager178may transmit a notification to computing device110requesting additional, updated, and/or corrected information. Server computing device150may receive a second submission with the additional, updated, and/or corrected information relating to the point of interest. Remoderator180may combine the first and second submissions into a single combined submission for review by moderation channel170. Input processor182may convert the one or more submissions saved in input database172into a format compatible with, or consumable by, the ML model. Moderation platform184may include model manager186and rule engine188. Model manager186may include the ML model. The ML model may be trained to determine whether the submission meets a threshold corresponding to the quality and/or accuracy of the submission. The ML model may be trained using one or more input features. The input features may correspond to one or more types of content such as the name, address, phone number, street view image, interior images, business hours, etc. pertaining to the point of interest. The ML model may use the input features to more accurately determine whether the submission meets a threshold. The output of processing using the ML model includes information reflecting a determination whether the submission meets the threshold. Model manager186may update the ML model based on new types of content within the submission. For example, model manager186may classify the new type of content within the submission. The classified type of content may be included as an input feature of the ML model for future submissions. Rule engine188may process the submission using one or more heuristics. Each of the one or more heuristics may define a function or rule for processing the submission according to the function or rule. 
For example, if the submission includes an update to the category of the point of interest, a heuristic may be removing submissions or input that include numbers. According to some examples, the heuristics may be based on the type of content submitted. For example, the heuristics may only process a certain type of content. Additionally or alternatively, the heuristics may not process a certain type of content. The processing of the submission by rule engine188may allow the submissions to be more consistent and/or streamlined for input into the ML model. Storage system140may store various types of content or data. For instance, the storage system140may store content related to a point of interest. The content may include, for example, a photo, phone number, address, business hours, etc. relating and/or corresponding to the point of interest. The content related to the point of interest may be changed, or updated, based on one or more submissions related to the point of interest. As shown inFIG.1C, each computing device110may be a personal computing device intended for use by a respective user and have all of the components normally used in connection with a personal computing device, including one or more processors (e.g., a central processing unit (CPU)), memory (e.g., RAM and internal hard drives) storing data and instructions, a display (e.g., a monitor having a screen, a touch-screen, a projector, a television, or other device such as a smart watch display that is operable to display information), and user input devices (e.g., a mouse, keyboard, touchscreen or microphone). The client computing devices may also include a camera for recording video streams, speakers, a network interface device, and all of the components used for connecting these elements to one another. Computing devices110may be capable of wirelessly exchanging and/or obtaining data over the network160. The devices110may each be a mobile computing device capable of wirelessly exchanging data with a server over a network such as the Internet, or a full-sized personal computing device. By way of example only, devices may include mobile phones130, wireless-enabled PDAs, tablet PCs, netbooks that are capable of obtaining information via the Internet or other networks160, wearable computing devices (e.g., a smartwatch, headset, smartglasses, virtual reality player, other head-mounted display, etc.), wireless speakers, home assistants120, gaming consoles, etc. Example Methods In addition to the operations described above and illustrated in the figures, various operations will now be described. It should be understood that the following operations do not have to be performed in the precise order described below. Rather, various steps can be handled in a different order or simultaneously, and steps may be added or omitted. Referring now toFIG.2, computing device130is shown as being used in conjunction with system100. Server computing device150may receive a submission from computing device130. The submission may include update information or content related to a point of interest. The point of interest may be a store, a restaurant or café, a movie theater, bowling alley, landmark, museum, etc. The point of interest may be determined and/or selected using a search206of mapping application202. For example, a search for a point of interest in mapping application202may identify one or more points of interest on map204. In some examples, the point of interest may be selected from a web search. 
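Returning to the rule engine described earlier, the following sketch illustrates one such heuristic, dropping category updates that contain numbers; the rule structure and names are hypothetical:

```python
def category_has_no_numbers(submission: dict) -> bool:
    """Heuristic: reject a category update whose value contains digits."""
    category = submission.get("category")
    if category is None:
        return True  # this rule only applies to category updates
    return not any(ch.isdigit() for ch in category)

HEURISTICS = [category_has_no_numbers]

def apply_rules(submission: dict) -> bool:
    """A submission passes only if every heuristic accepts it."""
    return all(rule(submission) for rule in HEURISTICS)

print(apply_rules({"category": "bowling alley"}))  # True: passes the rule
print(apply_rules({"category": "bowling 123"}))    # False: filtered out
```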
FIG.3illustrates a user interface displaying information corresponding to the selected point of interest. For example, the point of interest may be Bowl-a-Rama310, a nearby bowling alley. Selecting the point of interest may display information and/or content related to the point of interest. As shown, the content may include the address312, hours314, phone number316, website318, and one or more photos320related to Bowl-a-Rama310. According to some examples, the content may be inaccurate or out of date. The user interface may include an option, or input button, to contribute208. Selecting contribute208may cause computing device130to display a user interface for submitting updated content related to the point of interest. In some examples, contribute208may allow a user to check on the status of a submission. FIG.4illustrates an example user interface in contribution mode422. The user interface for contribution mode422may be displayed after contribute208is selected. Contribution mode422may include one or more input fields424. The input fields424may correspond to the types of content displayed when the point of interest is selected by a user. For example, input fields424may include name, category, address, phone number, hours, photos, website, additional information, etc. As shown inFIG.4, update information related to the point of interest, Bowl-a-Rama310, may be entered into input fields424. For example, the category and address may be missing. The category, bowling alley, and address, 123 Main St., may be entered into input fields424and submitted to server computing device150. For example, after update information is entered into input fields424, selecting submit426may transmit the update information to server computing device150via network160. In some examples, contribution mode422may include an input for submission status428. Selecting submission status428may cause computing device130to display a different user interface which provides status updates related to a submission of content for a point of interest. FIGS.5A-5Fillustrate example submission statuses.FIG.5Aillustrates an example submission status after a submission has been submitted to server computing device150. The submission status428after the server computing device150receives the submission may be “successfully submitted”530. After receiving the submission, server computing device150may store the submission in input database172. The submission may additionally or alternatively be stored in storage system140. Dispatcher174may direct the submission to be reviewed by moderation channel170. In some examples, dispatcher174may create a ticket or identify the submission for tracking through the review process. After a submission has been received by server computing device150but before server computing device150begins review of the submission, the submission status428user interface may provide an option to withdraw submission532. A submission may be withdrawn due to errors or typos in the submission, wrong content being submitted, etc. FIG.5Billustrates an example submission status after server computing device150and/or moderation channel170begins review of the submission. For example, after server computing device150begins review of the submission, edit state machine176may change the submission status from “successfully submitted”530to “moderation in progress”534. 
While the submission status is shown as “moderation in progress,” the submission status may be any indication that the submission is currently being reviewed, such as “in review,” “please wait, decision will be updated,” etc. According to some examples, edit manager178may transmit a notification to computing device130. The notification may be displayed on display135of computing device130and may indicate the workflow status of the submission. While moderation is in progress, moderation channel170and, therefore, moderation platform184may review the content within the submission for accuracy, integrity, quality, etc. Moderation channel170and/or moderation platform184may review the content using a ML model and/or one or more rules. For example, input processor182may convert the raw data from the submission into a format that is consumable by the ML model. For example, each of the input fields424may correspond to an input feature of the ML model. The ML model may be trained according to a variety of machine learning techniques, for example using a model trainer configured to train the system and/or ML model. The ML model may be trained according to a supervised learning technique on a training set of input features. The model trainer may pass training input through the system and/or ML model to obtain a forecast corresponding to the training input. For example, the input features, such as the name, address, phone number, street view image, interior images, business hours, etc. pertaining to the point of interest, may be used to determine whether the submission meets a threshold. The output of processing using the ML model may include information reflecting a determination whether the submission meets the threshold. The threshold may correspond to the quality, accuracy, integrity, etc. of the submission. According to some examples, a submission may include a type of content unknown to the ML model. For example, the submission may include update information that was submitted via the “additional information” input field424. In some examples, the content included in the “additional information” input field424may be a new type of content. The new type of content may be content that does not correspond to a predefined input field424or an input feature into the ML model. For example, the new type of content may be related to the type of music played at Bowl-a-Rama310. Content related to the type of music Bowl-a-Rama310plays on a given night may be submitted in the “additional information” input field424. The ML model may determine the new type of content to be “music.” The ML model may be updated such that the new content within the submission is associated with a “music” input feature. The input feature “music” may then be used as part of the ML model for future submissions. After the ML model reviews the submission, the ML model may determine that the submission does not meet a threshold. In such an example, edit state machine176may update the submission status from “moderation in progress”534to “looking for more information”536, as illustrated inFIG.5C. Edit manager178may transmit a notification to computing device130indicating that more information is required for further review of the submission. The notification may include specifics regarding what additional information is required. For example, as shown inFIG.4, an updated address was submitted. The updated address included “123 Main St.” but did not include a city, state, or zip code. 
Edit manager178may transmit a notification indicating that a full address, including the city, state, and zip code, is required for further review of the submission. In response to a “looking for more information”536status, server computing device150may receive a second submission. For example, a user may submit a second submission through the contribution mode422user interface. The second submission may include the content or information indicated as necessary for further review. Remoderator180may combine the second submission with the first submission to create a combined submission. Input processor182may process the combined submission into a format that is consumable by the ML model. The ML model may determine whether the combined submission does or does not meet a threshold. According to some examples, server computing device150may not receive a second submission in response to the “looking for more information”536status. In such an example, edit manager178may transmit one or more additional notifications requesting the information. For example, edit manager178may transmit a notification to client device130indicating that server computing device150is waiting for additional information, that the initial submission is going to expire due to missing information, etc. In some examples, after the ML model reviews the submission, the ML model may determine that the submission does meet a threshold. In such an example, edit state machine176may change the submission status from “moderation in progress”534or “looking for more information”536to “approved and applied”538, as illustrated inFIG.5D. Edit manager178may transmit a notification to computing device130indicating that the submission has been approved and applied. The notification may include a URL or website address corresponding to the point of interest that provides access to the updated content. In examples where the submission meets the threshold, the publicly available content related to the point of interest may be updated. The updated content related to the point of interest may be saved and/or stored in storage system140. According to some examples, after the ML model reviews the first and/or combined submission, the ML model may determine that the submission does not meet a threshold. In such an example, edit state machine176may change the submission status from “moderation in progress”534or “looking for more information”536to “rejected and not applied”540, as illustrated inFIG.5E. A submission may be rejected and not applied due to the quality, integrity, accuracy, etc. of the submission. In some examples, the submission may be rejected for lacking complete information and/or not receiving a secondary submission. Edit manager178may transmit a notification to client computing device130indicating that the submission has not been approved. A reason for the rejection may be provided within the notification. In addition, the notification may include information that alerts a user that further action is required, including requesting more information, e.g., other information associated with the point of interest. According to some examples, a submission may not be approved. For example, the submission may not be approved as it may be duplicative of another submission. For example, the ML model may determine that the submission meets the threshold for approval but the content related to the point of interest already includes that information. In some examples, the same submission may be received by server computing device150twice. 
In examples where the content in the submission is duplicative of the published content related to the point of interest and/or the content in the submission is duplicative of another submission, the submission status428may change from “moderation in progress”534or “looking for more information”536to “duplicate”542, as shown inFIG.5F. FIG.6illustrates a sequence of steps that may occur among device130, server150, moderation channel170, and storage system140. The following operations do not have to be performed in the precise order described below. Rather, various operations can be handled in a different order or simultaneously, and operations may be added or omitted. In block602, device130may submit content, such as update information, via a contribution mode. For example, device130may have a display for outputting a user interface. The user interface may provide input fields corresponding to content types. A user may interact with the user interface to provide content related to a point of interest. The content may be submitted to server150via network160. In block604, server150may receive the content related to the point of interest. For example, the content may include update information corresponding to publicly available content and/or content associated with the point of interest. In some examples, the content may include new information related to the point of interest. New information may be content that is not publicly available and/or associated with the point of interest. In block606, moderation channel170may compare the content of the submission to a model. A ML model may be used to determine whether the content of the submission is associated with the point of interest. Additionally or alternatively, the ML model may compare the content of the submission to a threshold. The threshold may correspond to the quality, accuracy, integrity, associability, etc. of the submission. For example, the submission may be an updated address for the point of interest. The ML model may compare the submitted address to trained models that include address format, address information for other companies, etc. According to some examples, moderation channel170may determine whether the submission is associated with the point of interest by comparing features associated with the point of interest with the content of the submission. In block608, the moderation channel170may determine that the submission meets the threshold and/or that the submission is associable with the point of interest. In block610, based on the determination that the submission meets the threshold, moderation channel170may transmit a notification to device130. The notification may indicate that the submission was approved. Additionally or alternatively, in block612, based on the determination that the submission meets the threshold, the content associated with the point of interest in storage system140may be updated. In block614, the moderation channel170may determine that the submission does not meet the threshold and/or that the submission is not associable with the point of interest. In block616, based on the determination that the submission does not meet the threshold, moderation channel170may transmit a notification to device130. The notification may request additional information. In some examples, the notification may include specific details or content to be submitted. In block618, the server may receive a second submission related to the point of interest and combine the content of the first and second submissions. 
In block620, moderation channel170may compare the content of the submission to a model, similar to block606. The process may repeat itself until moderation channel170updates the content associated with the point of interest or determines not to update the content associated with the point of interest. Moderation channel170may not update the content associated with the point of interest as the submission content may be duplicative, not enough information may have been provided, etc. FIG.7illustrates an example method for updating information relating to a point of interest. The following operations do not have to be performed in the precise order described below. Rather, various operations can be handled in a different order or simultaneously, and operations may be added or omitted. In block710, one or more processors may receive a submission including content related to a point of interest. The content may include update information. For example, the content may include a new address, updated hours of operation, additional photos of the point of interest, etc. In block720, the one or more processors may analyze the submission to determine a type of content. The type of content may correspond to the input fields of the user interface. In some examples, the type of content may be different than the input fields of the user interface. In such an example, the one or more processors may determine the type of content and provide a label or field title. In block730, the one or more processors may compare the type of content in the submission to a model of the type of content. The model may be a ML model. The ML model may be trained using one or more training examples. The training examples may use the input fields of the user interface as input features to the ML model. The ML model may be updated with additional input features when different and/or additional types of content are determined. In some examples, the ML model may, additionally or alternatively, determine whether the update information in the submission is associable with the point of interest. In block740, the one or more processors may determine whether the submission meets a threshold corresponding to the point of interest. The threshold may relate to quality, integrity, format, accuracy, etc. For example, if the submission includes update information relating to the address of the point of interest, the submission may be compared to a model of an address to determine whether the format, accuracy, quality, etc. of the update information meets a threshold. In block750, if the submission meets the threshold, the one or more processors may update the point of interest. In block760, if the submission does not meet the threshold, the one or more processors may transmit a first notification.
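A compact sketch of the method of FIG. 7 (blocks 710-760) follows; the scoring function is a toy stand-in for the trained ML model, and all helper names are hypothetical:

```python
def classify_content(submission: dict) -> str:
    """Block 720: determine the type of content, labeling unknown types."""
    known = {"name", "category", "address", "phone", "hours", "photos", "website"}
    matched = set(submission) & known
    return next(iter(matched), "additional information")

def score(submission: dict) -> float:
    """Block 730: toy stand-in for the ML model's quality/accuracy score."""
    filled = sum(1 for value in submission.values() if value)
    return filled / max(len(submission), 1)

def moderate(submission: dict, threshold: float, published: dict) -> str:
    classify_content(submission)              # block 720
    if score(submission) >= threshold:        # blocks 730-740
        published.update(submission)          # block 750: publish the update
        return "approved and applied"
    return "looking for more information"     # block 760: transmit notification

published = {}
print(moderate({"category": "bowling alley", "address": "123 Main St."},
               0.5, published))  # "approved and applied"
```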
37,597
11860858
DETAILED DESCRIPTION Described herein are systems and methods for decoding distributed ledger transaction records. The decoded transaction records may be utilized for data extraction and visualization, as well as for further processing, such as data analysis, alert triggering, and/or reporting, as described in more detail herein below. An example distributed ledger may include multiple nodes, such that each node may be associated with one or more distributed ledger accounts. The distributed ledger may implement a transaction-based state machine, which transitions to a new state based on a set of inputs represented by transaction records (referred to as “transactions” for conciseness). A distributed ledger may be cryptographically-protected, e.g., by cryptographically encrypting the transaction records, such that reversing a transaction becomes computationally infeasible. In one embodiment, the cryptographically-protected distributed ledger may be implemented by a blockchain. A transaction may encode a message that is sent by a source account to a destination account. The message, which is signed by the private key of the source account, may specify a transfer of a certain amount of a digital asset from the source account to the destination account. Some distributed ledgers (e.g., Ethereum) support a special account type, which is referred to as “contract account.” A message to a contract account activates its executable code implementing a “smart contract,” which may evaluate specified conditions and perform various actions (e.g., transfer cryptocurrency tokens between accounts, write data to internal storage, mint new cryptocurrency tokens, perform calculation, create new smart contracts, etc.). The nodes of the distributed ledger may collectively implement a distributed virtual machine (e.g., the Ethereum Virtual Machine (EVM)) for executing the code implementing smart contracts. A smart contract can be created in a high level programming language (such as Solidity) and then compiled into the EVM bytecode. In various implementations, a transaction may further specify various other parameters, e.g., the amount of a digital asset to be transferred to a node that has successfully processed the transaction. A transaction may be cryptographically signed by the originating node. To cause a state transition of the blockchain, a transaction should be validated by at least one node, which would then include it, together with other transactions, into a block that is appended to the blockchain. The block also includes a “proof of work” value that has been computed by the node that created the block in order to enforce a sequential order of blocks. The proof of work value is produced by solving a computationally intensive task (e.g., computing padding bits to be appended to the block in order to produce a predetermined value of the block hash). Some distributed ledgers, such as Ethereum, may rely on other consensus mechanisms, such as Clique, IBFT, PBFT, or use proof of stake instead of proof of work to order blocks. Thus, a distributed ledger may implement a cryptographically protected distributed immutable database and a distributed virtual machine for executing smart contracts. 
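To make the proof-of-work idea concrete, here is a minimal sketch in Python; SHA-256 and a leading-zero difficulty rule are illustrative assumptions, as actual ledgers differ in hash function and validity condition:

```python
import hashlib

def proof_of_work(block: bytes, difficulty: int = 2) -> int:
    """Search for a value that, appended to the block, yields a hash
    with the required property (here, a leading-zero hex prefix)."""
    nonce = 0
    prefix = "0" * difficulty
    while True:
        digest = hashlib.sha256(block + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

block = b"prev_hash|transactions|timestamp"
nonce = proof_of_work(block)
print(nonce, hashlib.sha256(block + nonce.to_bytes(8, "big")).hexdigest())
```

The computational cost of this search is what enforces the sequential ordering of blocks, since redoing it for a rewritten block (and every block after it) is infeasible.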
A data intake and query system operating in accordance with aspects of the present disclosure can implement a Getting-Data-In (GDI) component (such as a data adapter, monitor, forwarder, connector, or the like) in order to ingest the distributed ledger transaction data, e.g., by reading log files maintained by one or more nodes of the distributed ledger, listening to the blocks, transactions, and events that are broadcasted to all participating nodes of a distributed ledger, and/or performing other actions. The ingested raw data may be aggregated, decoded, visualized, and/or further processed by the data intake and query system. As noted herein above, a transaction can include invocation of a smart contract. Since the bytecode that encodes smart contract invocation does not preserve parameter names, meaningful decoding of such a transaction would require extrinsic knowledge of the meanings of the parameters. Decoding the transaction may be further hindered by potentially overlapping signatures of functions exposed by different smart contracts, i.e., two or more different functions having the same signature that encodes the function name and parameter types. In an illustrative example, each of two smart contracts may expose a function with the signature transfer(address, uint256). In one smart contract, the second argument (having the type of uint256) may refer to an amount of digital currency being transferred, while in another smart contract the second argument may refer to a token identifier. However, both functions would have the same signature, thus presenting a challenge for the transaction decoder. The systems and methods of the present disclosure overcome the above-noted challenges by implementing digital fingerprinting of smart contracts, which facilitates associating a smart contract of interest with a known application binary interface (ABI) definition. “Digital fingerprint” herein refers to a numeric value (represented by a bit string of a predetermined size) which can be unambiguously derived from the smart contract bytecode encoding or ABI definition, such that the probability of two different smart contracts (e.g., smart contracts that have different ABI definitions) having the same digital fingerprint value is very low (e.g., below a predetermined probability threshold). In an illustrative example, the digital fingerprint of a smart contract can be represented by a hash of all function and event signatures exposed by the contract ABI. Thus, a transaction decoder implemented in accordance with aspects of the present disclosure may compute a digital fingerprint for the EVM bytecode implementing a smart contract invoked by a distributed ledger transaction and compare the computed digital fingerprint with known digital fingerprints of ABI definitions of smart contracts, the database of which can be maintained by the data intake and query system. Upon identifying a matching smart contract among the stored ABI definitions, signatures of the functions and events of the identified matching smart contract, along with other pertinent information that can be stored in the smart contract ABI database maintained by the data intake and query system, can be utilized for decoding the distributed ledger transaction. The decoded transaction data, including the function name, the parameter names, types, and values, may be fed to a data intake and query system, such as the SPLUNK® ENTERPRISE system developed by Splunk Inc. 
of San Francisco, California, as described in more detail herein below. The SPLUNK® ENTERPRISE system is the leading platform for providing real-time operational intelligence that enables organizations to collect, index, and search machine data from various websites, applications, servers, networks, and mobile devices that power their businesses. The data intake and query system is particularly useful for analyzing data which is commonly found in system log files, network data, and other data input sources. Although many of the techniques described herein are explained with reference to a data intake and query system similar to the SPLUNK® ENTERPRISE system, these techniques are also applicable to other types of data systems. In the data intake and query system, machine data are collected and stored as “events”. An event comprises a portion of machine data and is associated with a specific point in time. The portion of machine data may reflect activity in an IT environment and may be produced by a component of that IT environment, where the events may be searched to provide insight into the IT environment, thereby improving the performance of components in the IT environment. Events may be derived from “time series data,” where the time series data comprises a sequence of data points (e.g., performance measurements from a computer system, etc.) that are associated with successive points in time. In general, each event has a portion of machine data that is associated with a timestamp that is derived from the portion of machine data in the event. A timestamp of an event may be determined through interpolation between temporally proximate events having known timestamps or may be determined based on other configurable rules for associating timestamps with events. In some instances, machine data can have a predefined format, where data items with specific data formats are stored at predefined locations in the data. For example, the machine data may include data associated with fields in a database table. In other instances, machine data may not have a predefined format (e.g., may not be at fixed, predefined locations), but may have repeatable (e.g., non-random) patterns. This means that some machine data can comprise various data items of different data types that may be stored at different locations within the data. For example, when the data source is an operating system log, an event can include one or more lines from the operating system log containing machine data that includes different types of performance and diagnostic information associated with a specific point in time (e.g., a timestamp). Examples of components which may generate machine data from which events can be derived include, but are not limited to, web servers, application servers, databases, firewalls, routers, operating systems, and software applications that execute on computer systems, mobile devices, sensors, Internet of Things (IoT) devices, distributed ledger nodes, etc. The machine data generated by such data sources can include, for example and without limitation, server log files, activity log files, configuration files, messages, network packet data, performance measurements, sensor measurements, distributed ledger transactions, etc. The data intake and query system uses a flexible schema to specify how to extract information from events. A flexible schema may be developed and redefined as needed. 
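Returning to the digital fingerprinting approach described earlier, the following is a minimal sketch that hashes the function and event signatures exposed by a contract ABI; SHA-256 is used for illustration (the EVM ecosystem conventionally uses Keccak-256), and the ABI layout shown is a simplified JSON-style list:

```python
import hashlib

def abi_signatures(abi: list) -> list:
    """Collect canonical signatures such as 'transfer(address,uint256)'."""
    sigs = []
    for entry in abi:
        if entry.get("type") in ("function", "event"):
            types = ",".join(inp["type"] for inp in entry.get("inputs", []))
            sigs.append(f'{entry["name"]}({types})')
    return sorted(sigs)  # sorted so the fingerprint is order-independent

def fingerprint(abi: list) -> str:
    """Hash of all function and event signatures exposed by the contract ABI."""
    return hashlib.sha256("|".join(abi_signatures(abi)).encode()).hexdigest()

erc20_like = [
    {"type": "function", "name": "transfer",
     "inputs": [{"type": "address"}, {"type": "uint256"}]},
    {"type": "event", "name": "Transfer",
     "inputs": [{"type": "address"}, {"type": "address"}, {"type": "uint256"}]},
]
# Compare this value against a stored database of known ABI fingerprints.
print(fingerprint(erc20_like))
```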
Note that a flexible schema may be applied to events “on the fly,” when it is needed (e.g., at search time, index time, ingestion time, etc.). When the schema is not applied to events until search time, the schema may be referred to as a “late-binding schema.” During operation, the data intake and query system receives machine data from any type and number of sources (e.g., one or more system logs, streams of network packet data, sensor data, application program data, error logs, stack traces, system performance data, etc.). The system parses the machine data to produce events each having a portion of machine data associated with a timestamp. The system stores the events in a data store. The system enables users to run queries against the stored events to, for example, retrieve events that meet criteria specified in a query, such as criteria indicating certain keywords or having specific values in defined fields. As used herein, the term “field” refers to a location in the machine data of an event containing one or more values for a specific data item. A field may be referenced by a field name associated with the field. As will be described in more detail herein, a field is defined by an extraction rule (e.g., a regular expression) that derives one or more values or a sub-portion of text from the portion of machine data in each event to produce a value for the field for that event. The set of values produced are semantically-related (such as IP address), even though the machine data in each event may be in different formats (e.g., semantically-related values may be in different positions in the events derived from different sources). As described above, the system stores the events in a data store. The events stored in the data store are field-searchable, where field-searchable herein refers to the ability to search the machine data (e.g., the raw machine data) of an event based on a field specified in search criteria. For example, a search having criteria that specifies a field name “UserID” may cause the system to field-search the machine data of events to identify events that have the field name “UserID.” In another example, a search having criteria that specifies a field name “UserID” with a corresponding field value “12345” may cause the system to field-search the machine data of events to identify events having that field-value pair (e.g., field name “UserID” with a corresponding field value of “12345”). Events are field-searchable using one or more configuration files associated with the events. Each configuration file includes one or more field names, where each field name is associated with a corresponding extraction rule and a set of events to which that extraction rule applies. The set of events to which an extraction rule applies may be identified by metadata associated with the set of events. For example, an extraction rule may apply to a set of events that are each associated with a particular host, source, or source type. When events are to be searched based on a particular field name specified in a search, the system uses one or more configuration files to determine whether there is an extraction rule for that particular field name that applies to each event that falls within the criteria of the search. If so, the event is considered as part of the search results (and additional processing may be performed on that event based on criteria specified in the search). If not, the next event is similarly analyzed, and so on. 
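As an illustration of the extraction rules described above, a regex rule tied to a field name might look like the following sketch; the rule format and event text are hypothetical:

```python
import re

# Each field name is associated with an extraction rule (here a regular
# expression) that derives the field's value from an event's machine data.
EXTRACTION_RULES = {
    "UserID": re.compile(r"UserID[=:]\s*(?P<value>\d+)"),
    "clientip": re.compile(r"(?P<value>\d{1,3}(?:\.\d{1,3}){3})"),
}

def extract(field_name: str, event: str):
    """Apply the field's extraction rule to the raw machine data, if any."""
    rule = EXTRACTION_RULES.get(field_name)
    match = rule.search(event) if rule else None
    return match.group("value") if match else None

event = "127.0.0.1 - GET /cart UserID=12345 status=200"
print(extract("UserID", event))    # "12345"
print(extract("clientip", event))  # "127.0.0.1"
```

Because the rules live outside the stored events, they can be applied at search time rather than at ingestion, which is the essence of the late-binding schema discussed next.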
As noted above, the data intake and query system utilizes a late-binding schema while performing queries on events. One aspect of a late-binding schema is applying extraction rules to events to extract values for specific fields during search time. More specifically, the extraction rule for a field can include one or more instructions that specify how to extract a value for the field from an event. An extraction rule can generally include any type of instruction for extracting values from events. In some cases, an extraction rule comprises a regular expression, where a sequence of characters forms a search pattern. An extraction rule comprising a regular expression is referred to herein as a regex rule. The system applies a regex rule to an event to extract values for a field associated with the regex rule, where the values are extracted by searching the event for the sequence of characters defined in the regex rule. In the data intake and query system, a field extractor may be configured to automatically generate extraction rules for certain fields in the events when the events are being created, indexed, or stored, or possibly at a later time. Alternatively, a user may manually define extraction rules for fields using a variety of techniques. In contrast to a conventional schema for a database system, a late-binding schema is not defined at data ingestion time. Instead, the late-binding schema can be developed on an ongoing basis until the time a query is actually executed. This means that extraction rules for the fields specified in a query may be provided in the query itself, or may be located during execution of the query. Hence, as a user learns more about the data in the events, the user can continue to refine the late-binding schema by adding new fields, deleting fields, or modifying the field extraction rules for use the next time the schema is used by the system. Because the data intake and query system maintains the underlying machine data and uses a late-binding schema for searching the machine data, it enables a user to continue investigating and learn valuable insights about the machine data. In some embodiments, a common field name may be used to reference two or more fields containing equivalent and/or similar data items, even though the fields may be associated with different types of events that possibly have different data formats and different extraction rules. By enabling a common field name to be used to identify equivalent and/or similar fields from different types of events generated by disparate data sources, the system facilitates use of a “common information model” (CIM) across the disparate data sources (further discussed with respect toFIG.7A). FIG.1is a block diagram of an example networked computer environment100, in accordance with example embodiments. Those skilled in the art would understand thatFIG.1represents one example of a networked computer system and other embodiments may use different arrangements. The networked computer system100comprises one or more computing devices. These one or more computing devices comprise any combination of hardware and software configured to implement the various logical components described herein. 
For example, the one or more computing devices may include one or more memories that store instructions for implementing the various components described herein, one or more hardware processors configured to execute the instructions stored in the one or more memories, and various data repositories in the one or more memories for storing data structures utilized and manipulated by the various components. In some embodiments, one or more client devices102are coupled to one or more host devices106and a data intake and query system108via one or more networks104. Networks104broadly represent one or more LANs, WANs, cellular networks (e.g., LTE, HSPA, 3G, and other cellular technologies), and/or networks using any of wired, wireless, terrestrial microwave, or satellite links, and may include the public Internet. In the illustrated embodiment, the system100includes one or more host devices106. Host devices106may broadly include any number of computers, virtual machine instances, and/or data centers that are configured to host or execute one or more instances of host applications114. In general, a host device106may be involved, directly or indirectly, in processing requests received from client devices102. Each host device106may comprise, for example, one or more of a network device, a web server, an application server, a database server, etc. A collection of host devices106may be configured to implement a network-based service. For example, a provider of a network-based service may configure one or more host devices106and host applications114(e.g., one or more web servers, application servers, database servers, etc.) to collectively implement the network-based service. In general, client devices102communicate with one or more host applications114to exchange information. The communication between a client device102and a host application114may, for example, be based on the Hypertext Transfer Protocol (HTTP) or any other network protocol. Content delivered from the host application114to a client device102may include, for example, HTML documents, media content, etc. The communication between a client device102and host application114may include sending various requests and receiving data packets. For example, in general, a client device102or application running on a client device may initiate communication with a host application114by making a request for a specific resource (e.g., based on an HTTP request), and the application server may respond with the requested content stored in one or more response packets. In the illustrated embodiment, one or more of host applications114may generate various types of performance data during operation, including event logs, network data, sensor data, and other types of machine data. For example, a host application114comprising a web server may generate one or more web server logs in which details of interactions between the web server and any number of client devices102are recorded. As another example, a host device106comprising a router may generate one or more router logs that record information related to network traffic managed by the router. As yet another example, a host application114comprising a database server may generate one or more logs that record information related to requests sent from other host applications114(e.g., web servers or application servers) for data managed by the database server. Client devices102ofFIG.1represent any computing device capable of interacting with one or more host devices106via a network104.
Examples of client devices102may include, without limitation, smart phones, tablet computers, handheld computers, wearable devices, laptop computers, desktop computers, servers, portable media players, gaming devices, and so forth. In general, a client device102can provide access to different content, for instance, content provided by one or more host devices106, etc. Each client device102may comprise one or more client applications110, described in more detail in a separate section hereinafter. In some embodiments, each client device102may host or execute one or more client applications110that are capable of interacting with one or more host devices106via one or more networks104. For instance, a client application110may be or comprise a web browser that a user may use to navigate to one or more websites or other resources provided by one or more host devices106. As another example, a client application110may comprise a mobile application or “app.” For example, an operator of a network-based service hosted by one or more host devices106may make available one or more mobile apps that enable users of client devices102to access various resources of the network-based service. As yet another example, client applications110may include background processes that perform various operations without direct interaction from a user. A client application110may include a “plug-in” or “extension” to another application, such as a web browser plug-in or extension. In some embodiments, a client application110may include a monitoring component112. At a high level, the monitoring component112comprises a software component or other logic that facilitates generating performance data related to a client device's operating state, including monitoring network traffic sent and received from the client device and collecting other device and/or application-specific information. Monitoring component112may be an integrated component of a client application110, a plug-in, an extension, or any other type of add-on component. Monitoring component112may also be a stand-alone process. In some embodiments, a monitoring component112may be created when a client application110is developed, for example, by an application developer using a software development kit (SDK). The SDK may include custom monitoring code that can be incorporated into the code implementing a client application110. When the code is converted to an executable application, the custom code implementing the monitoring functionality can become part of the application itself. In some embodiments, an SDK or other code for implementing the monitoring functionality may be offered by a provider of a data intake and query system, such as a system108. In such cases, the provider of the system108can implement the custom code so that performance data generated by the monitoring functionality is sent to the system108to facilitate analysis of the performance data by a developer of the client application or other users. In some embodiments, the custom monitoring code may be incorporated into the code of a client application110in a number of different ways, such as the insertion of one or more lines in the client application code that call or otherwise invoke the monitoring component112. As such, a developer of a client application110can add one or more lines of code into the client application110to trigger the monitoring component112at desired points during execution of the application. Code that triggers the monitoring component may be referred to as a monitor trigger. 
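As a purely illustrative sketch (not an actual SDK API), the following shows the shape of a monitor trigger: a developer-inserted call that hands a performance data record to a monitoring component at chosen points in the application. All names here are invented:

```python
# Hypothetical monitoring component; `trigger` stands in for the kind of
# call a developer would insert at desired points in the application code.
import time

class MonitoringComponent:
    """Stand-in for an SDK-provided monitoring component."""
    def __init__(self):
        self.records = []

    def trigger(self, action, **fields):
        # Capture a performance data record for later transmission.
        self.records.append({"action": action, "time": time.time(), **fields})

monitoring = MonitoringComponent()

def launch_app():
    monitoring.trigger("app_launch")                      # trigger at startup
    monitoring.trigger("request_sent", url="/api/items")  # and at a network call

launch_app()
print(monitoring.records)
```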
For instance, a monitor trigger may be included at or near the beginning of the executable code of the client application110such that the monitoring component112is initiated or triggered as the application is launched, or included at other points in the code that correspond to various actions of the client application, such as sending a network request or displaying a particular interface. In some embodiments, the monitoring component112may monitor one or more aspects of network traffic sent and/or received by a client application110. For example, the monitoring component112may be configured to monitor data packets transmitted to and/or from one or more host applications114. Incoming and/or outgoing data packets can be read or examined to identify network data contained within the packets, for example, and other aspects of data packets can be analyzed to determine a number of network performance statistics. Monitoring network traffic may enable information particular to the network performance associated with a client application110or set of applications to be gathered. In some embodiments, network performance data refers to any type of data that indicates information about the network and/or network performance. Network performance data may include, for instance, a URL requested, a connection type (e.g., HTTP, HTTPS, etc.), a connection start time, a connection end time, an HTTP status code, request length, response length, request headers, response headers, connection status (e.g., completion, response time(s), failure, etc.), and the like. Upon obtaining network performance data indicating performance of the network, the network performance data can be transmitted to a data intake and query system108for analysis. Upon developing a client application110that incorporates a monitoring component112, the client application110can be distributed to client devices102. Applications generally can be distributed to client devices102in any manner, or they can be pre-loaded. In some cases, the application may be distributed to a client device102via an application marketplace or other application distribution system. For instance, an application marketplace or other application distribution system might distribute the application to a client device based on a request from the client device to download the application. Examples of functionality that enables monitoring performance of a client device are described in U.S. patent application Ser. No. 14/524,748, entitled “UTILIZING PACKET HEADERS TO MONITOR NETWORK TRAFFIC IN ASSOCIATION WITH A CLIENT DEVICE”, filed on 27 Oct. 2014, which is hereby incorporated by reference in its entirety for all purposes. In some embodiments, the monitoring component112may also monitor and collect performance data related to one or more aspects of the operational state of a client application110and/or client device102. For example, a monitoring component112may be configured to collect device performance information by monitoring one or more client device operations, or by making calls to an operating system and/or one or more other applications executing on a client device102for performance information. Device performance information may include, for instance, a current wireless signal strength of the device, a current connection type and network carrier, current memory performance information, a geographic location of the device, a device orientation, and any other information related to the operational state of the client device.
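For concreteness, one possible shape of such a network performance data record is sketched below; the key names simply mirror the items listed above and are not an actual schema:

```python
# Illustrative network performance data record; keys are assumptions that
# mirror the items enumerated in the text, not a real product format.
network_perf_record = {
    "url": "https://example.com/api/items",
    "connection_type": "HTTPS",
    "connection_start_time": "2023-07-01T12:00:00.120Z",
    "connection_end_time": "2023-07-01T12:00:00.480Z",
    "http_status_code": 200,
    "request_length": 512,     # bytes
    "response_length": 20480,  # bytes
    "connection_status": "completed",
}
```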
In some embodiments, the monitoring component112may also monitor and collect other device profile information including, for example, a type of client device, a manufacturer and model of the device, versions of various software applications installed on the device, and so forth. In general, a monitoring component112may be configured to generate performance data in response to a monitor trigger in the code of a client application110or other triggering application event, as described above, and to store the performance data in one or more data records. Each data record, for example, may include a collection of field-value pairs, each field-value pair storing a particular item of performance data in association with a field for the item. For example, a data record generated by a monitoring component112may include a “networkLatency” field (not shown in the Figure) in which a value is stored. This field indicates a network latency measurement associated with one or more network requests. The data record may include a “state” field to store a value indicating a state of a network connection, and so forth for any number of aspects of collected performance data. FIG.2is a block diagram of an example data intake and query system108, in accordance with example embodiments. System108includes one or more forwarders204that receive data from a variety of input data sources202, and one or more indexers206that process and store the data in one or more data stores208. These forwarders204and indexers206can comprise separate computer systems, or may alternatively comprise separate processes executing on one or more computer systems. Each data source202broadly represents a distinct source of data that can be consumed by system108. Examples of data sources202include, without limitation, data files, directories of files, data sent over a network, event logs, registries, etc. During operation, the forwarders204identify which indexers206receive data collected from a data source202and forward the data to the appropriate indexers. Forwarders204can also perform operations on the data before forwarding, including removing extraneous data, detecting timestamps in the data, parsing data, indexing data, routing data based on criteria relating to the data being routed, and/or performing other data transformations. In some embodiments, a forwarder204may comprise a service accessible to client devices102and host devices106via a network104. For example, one type of forwarder204may be capable of consuming vast amounts of real-time data from a potentially large number of client devices102and/or host devices106. The forwarder204may, for example, comprise a computing device which implements multiple data pipelines or “queues” to handle forwarding of network data to indexers206. A forwarder204may also perform many of the functions that are performed by an indexer. For example, a forwarder204may perform keyword extractions on raw data or parse raw data to create events. A forwarder204may generate time stamps for events. Additionally, or alternatively, a forwarder204may perform routing of events to indexers206. Data store208may contain events derived from machine data from a variety of sources all pertaining to the same component in an IT environment, and this data may be produced by the machine in question or by other components in the IT environment. The example data intake and query system108described in reference toFIG.2comprises several system components, including one or more forwarders, indexers, and search heads.
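A minimal sketch of the forwarder behavior just described, assuming invented class names and a naive round-robin routing policy (the real routing criteria can be far richer), might look like the following:

```python
# Hedged sketch of a forwarder: tag incoming raw data with metadata and
# route it to an indexer. Round-robin routing is a simplifying assumption.
import itertools

class Indexer:
    def __init__(self, name):
        self.name, self.blocks = name, []

    def receive(self, block):
        self.blocks.append(block)  # indexing itself is elided here

class Forwarder:
    def __init__(self, indexers):
        self._indexers = itertools.cycle(indexers)  # naive routing policy

    def forward(self, raw, host, source, sourcetype):
        # Annotate the data with metadata before forwarding (see FIG.5A).
        block = {"host": host, "source": source,
                 "sourcetype": sourcetype, "raw": raw}
        next(self._indexers).receive(block)

fwd = Forwarder([Indexer("idx1"), Indexer("idx2")])
fwd.forward("GET / 200", host="web01", source="access.log",
            sourcetype="access_combined")
```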
In some environments, a user of a data intake and query system108may install and configure, on computing devices owned and operated by the user, one or more software applications that implement some or all of these system components. For example, a user may install a software application on server computers owned by the user and configure each server to operate as one or more of a forwarder, an indexer, a search head, etc. This arrangement generally may be referred to as an “on-premises” solution. That is, the system108is installed and operates on computing devices directly controlled by the user of the system. Some users may prefer an on-premises solution because it may provide a greater level of control over the configuration of certain aspects of the system (e.g., security, privacy, standards, controls, etc.). However, other users may instead prefer an arrangement in which the user is not directly responsible for providing and managing the computing devices upon which various components of system108operate. In one embodiment, to provide an alternative to an entirely on-premises environment for system108, one or more of the components of a data intake and query system instead may be provided as a cloud-based service. In this context, a cloud-based service refers to a service hosted by one or more computing resources that are accessible to end users over a network, for example, by using a web browser or other application on a client device to interface with the remote computing resources. For example, a service provider may provide a cloud-based data intake and query system by managing computing resources configured to implement various aspects of the system (e.g., forwarders, indexers, search heads, etc.) and by providing access to the system to end users via a network. Typically, a user may pay a subscription or other fee to use such a service. Each subscribing user of the cloud-based service may be provided with an account that enables the user to configure a customized cloud-based system based on the user's preferences. FIG.3illustrates a block diagram of an example cloud-based data intake and query system. Similar to the system ofFIG.2, the networked computer system300includes input data sources202and forwarders204. These input data sources and forwarders may be in a subscriber's private computing environment. Alternatively, they might be directly managed by the service provider as part of the cloud service. In the example system300, one or more forwarders204and client devices302are coupled to a cloud-based data intake and query system306via one or more networks304. Network304broadly represents one or more LANs, WANs, cellular networks, intranetworks, internetworks, etc., using any of wired, wireless, terrestrial microwave, satellite links, etc., and may include the public Internet, and is used by client devices302and forwarders204to access the system306. Similar to the system ofFIG.2, each of the forwarders204may be configured to receive data from an input source and to forward the data to other components of the system306for further processing. In some embodiments, a cloud-based data intake and query system306may comprise a plurality of system instances308. In general, each system instance308may include one or more computing resources managed by a provider of the cloud-based system306made available to a particular subscriber.
The computing resources comprising a system instance308may, for example, include one or more servers or other devices configured to implement one or more forwarders, indexers, search heads, and other components of a data intake and query system, similar to system108. As indicated above, a subscriber may use a web browser or other application of a client device302to access a web portal or other interface that enables the subscriber to configure an instance308. Providing a data intake and query system as described in reference to system108as a cloud-based service presents a number of challenges. Each of the components of a system108(e.g., forwarders, indexers, and search heads) may at times refer to various configuration files stored locally at each component. These configuration files typically may involve some level of user configuration to accommodate particular types of data a user desires to analyze and to account for other user preferences. However, in a cloud-based service context, users typically may not have direct access to the underlying computing resources implementing the various system components (e.g., the computing resources comprising each system instance308) and may desire to make such configurations indirectly, for example, using one or more web-based interfaces. Thus, the techniques and systems described herein for providing user interfaces that enable a user to configure source type definitions are applicable to both on-premises and cloud-based service contexts, or some combination thereof (e.g., a hybrid system where both an on-premises environment, such as SPLUNK® ENTERPRISE, and a cloud-based environment, such as SPLUNK CLOUD™, are centrally visible). FIG.4shows a block diagram of an example of a data intake and query system108that provides transparent search facilities for data systems that are external to the data intake and query system. Such facilities are available in the Splunk® Analytics for Hadoop® system provided by Splunk Inc. of San Francisco, California. Splunk® Analytics for Hadoop® represents an analytics platform that enables business and IT teams to rapidly explore, analyze, and visualize data in Hadoop® and NoSQL data stores. The search head210of the data intake and query system receives search requests from one or more client devices404over network connections420. As discussed above, the data intake and query system108may reside in an enterprise location, in the cloud, etc.FIG.4illustrates that multiple client devices404a,404b. . .404nmay communicate with the data intake and query system108. The client devices404may communicate with the data intake and query system using a variety of connections. For example, one client device inFIG.4is illustrated as communicating over an Internet (Web) protocol, another client device is illustrated as communicating via a command line interface, and another client device is illustrated as communicating via a software developer kit (SDK). The search head210analyzes the received search request to identify request parameters. If a search request received from one of the client devices404references an index maintained by the data intake and query system, then the search head210connects to one or more indexers206of the data intake and query system for the index referenced in the request parameters. That is, if the request parameters of the search request reference an index, then the search head accesses the data in the index via the indexer.
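Anticipating the External Result Provider (ERP) processes introduced in the following paragraphs, a hedged sketch of this dispatch decision is shown below; the set of native indexes and all names are illustrative assumptions:

```python
# Sketch of the request-parameter dispatch: a native index is searched via
# the system's own indexers, while an external ("virtual") index is handed
# to an ERP process. The index names are invented for the example.
NATIVE_INDEXES = {"main", "test"}

def dispatch(search_request):
    """Route each index referenced in the request parameters."""
    plan = {"indexers": [], "erp": []}
    for index in search_request["indexes"]:
        if index in NATIVE_INDEXES:
            plan["indexers"].append(index)  # locally stored and managed
        else:
            plan["erp"].append(index)       # externally stored; use an ERP
    return plan

print(dispatch({"indexes": ["main", "hadoop_sales"]}))
# {'indexers': ['main'], 'erp': ['hadoop_sales']}
```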
The data intake and query system108may include one or more indexers206, depending on system access resources and requirements. As described further below, the indexers206retrieve data from their respective local data stores208as specified in the search request. The indexers and their respective data stores can comprise one or more storage devices and typically reside on the same system, though they may be connected via a local network connection. If the request parameters of the received search request reference an external data collection, which is not accessible to the indexers206or under the management of the data intake and query system, then the search head210can access the external data collection through an External Result Provider (ERP) process410. An external data collection may be referred to as a “virtual index” (plural, “virtual indices”). An ERP process provides an interface through which the search head210may access virtual indices. Thus, a search reference to an index of the system relates to a locally stored and managed data collection. In contrast, a search reference to a virtual index relates to an externally stored and managed data collection, which the search head may access through one or more ERP processes410,412.FIG.4shows two ERP processes410,412that connect to respective remote (external) virtual indices, which are indicated as a Hadoop or another system414(e.g., Amazon S3, Amazon EMR, other Hadoop® Compatible File Systems (HCFS), etc.) and a relational database management system (RDBMS)416. Other virtual indices may include other file organizations and protocols, such as Structured Query Language (SQL) and the like. The ellipses between the ERP processes410,412indicate optional additional ERP processes of the data intake and query system108. An ERP process may be a computer process that is initiated or spawned by the search head210and is executed by the data intake and query system108. Alternatively, or additionally, an ERP process may be a process spawned by the search head210on the same or a different host system as the search head210resides. The search head210may spawn a single ERP process in response to multiple virtual indices referenced in a search request, or the search head may spawn different ERP processes for different virtual indices. Generally, virtual indices that share common data configurations or protocols may share ERP processes. For example, all search query references to a Hadoop file system may be processed by the same ERP process, if the ERP process is suitably configured. Likewise, all search query references to a SQL database may be processed by the same ERP process. In addition, the search head may provide a common ERP process for common external data source types (e.g., a common vendor may utilize a common ERP process, even if the vendor includes different data storage system types, such as Hadoop and SQL). Common indexing schemes also may be handled by common ERP processes, such as flat text files or Weblog files. The search head210determines the number of ERP processes to be initiated via the use of configuration parameters that are included in a search request message. Generally, there is a one-to-many relationship between an external results provider “family” and ERP processes. There is also a one-to-many relationship between an ERP process and corresponding virtual indices that are referred to in a search request.
For example, using RDBMS, assume two independent instances of such a system by one vendor, such as one RDBMS for production and another RDBMS used for development. In such a situation, it is likely preferable (but optional) to use two ERP processes to maintain the independent operation as between production and development data. Both of the ERPs, however, will belong to the same family, because the two RDBMS system types are from the same vendor. The ERP processes410,412receive a search request from the search head210. The search head may optimize the received search request for execution at the respective external virtual index. Alternatively, the ERP process may receive a search request as a result of analysis performed by the search head or by a different system process. The ERP processes410,412can communicate with the search head210via conventional input/output routines (e.g., standard in/standard out, etc.). In this way, the ERP process receives the search request from a client device such that the search request may be efficiently executed at the corresponding external virtual index. The ERP processes410,412may be implemented as a process of the data intake and query system. Each ERP process may be provided by the data intake and query system, or may be provided by process or application providers who are independent of the data intake and query system. Each respective ERP process may include an interface application installed at a computer of the external result provider that ensures proper communication between the search support system and the external result provider. The ERP processes410,412generate appropriate search requests in the protocol and syntax of the respective virtual indices414,416, each of which corresponds to the search request received by the search head210. Upon receiving search results from their corresponding virtual indices, the respective ERP process passes the result to the search head210, which may return or display the results or a processed set of results based on the returned results to the respective client device. Client devices404may communicate with the data intake and query system108through a network interface420, e.g., one or more LANs, WANs, cellular networks, intranetworks, and/or internetworks using any of wired, wireless, terrestrial microwave, satellite links, etc., and may include the public Internet. The analytics platform utilizing the External Result Provider process is described in more detail in U.S. Pat. No. 8,738,629, entitled “EXTERNAL RESULT PROVIDED PROCESS FOR RETRIEVING DATA STORED USING A DIFFERENT CONFIGURATION OR PROTOCOL”, issued on 27 May 2014, U.S. Pat. No. 8,738,587, entitled “PROCESSING A SYSTEM SEARCH REQUEST BY RETRIEVING RESULTS FROM BOTH A NATIVE INDEX AND A VIRTUAL INDEX”, issued on 25 Jul. 2013, U.S. patent application Ser. No. 14/266,832, entitled “PROCESSING A SYSTEM SEARCH REQUEST ACROSS DISPARATE DATA COLLECTION SYSTEMS”, filed on 1 May 2014, and U.S. Pat. No. 9,514,189, entitled “PROCESSING A SYSTEM SEARCH REQUEST INCLUDING EXTERNAL DATA SOURCES”, issued on 6 Dec. 2016, each of which is hereby incorporated by reference in its entirety for all purposes. The ERP processes described above may include two operation modes: a streaming mode and a reporting mode. The ERP processes can operate in streaming mode only, in reporting mode only, or in both modes simultaneously. Operating in both modes simultaneously is referred to as mixed mode operation.
In a mixed mode operation, the ERP at some point can stop providing the search head with streaming results and only provide reporting results thereafter, or the search head at some point may start ignoring streaming results it has been using and only use reporting results thereafter. The streaming mode returns search results in real time, with minimal processing, in response to the search request. The reporting mode provides results of a search request with processing of the search results prior to providing them to the requesting search head, which in turn provides results to the requesting client device. ERP operation with such multiple modes provides greater performance flexibility with regard to report time, search latency, and resource utilization. In a mixed mode operation, both streaming mode and reporting mode are operating simultaneously. The streaming mode results (e.g., the machine data obtained from the external data source) are provided to the search head, which can then process the results data (e.g., break the machine data into events, timestamp it, filter it, etc.) and integrate the results data with the results data from other external data sources, and/or from data stores of the search head. The search head performs such processing and can immediately start returning interim (streaming mode) results to the user at the requesting client device; simultaneously, the search head is waiting for the ERP process to process the data it is retrieving from the external data source as a result of the concurrently executing reporting mode. In some instances, the ERP process initially operates in a mixed mode, such that the streaming mode operates to enable the ERP quickly to return interim results (e.g., some of the machine data or unprocessed data necessary to respond to a search request) to the search head, enabling the search head to process the interim results and begin providing to the client or search requester interim results that are responsive to the query. Meanwhile, in this mixed mode, the ERP also operates concurrently in reporting mode, processing portions of machine data in a manner responsive to the search query. Upon determining that it has results from the reporting mode available to return to the search head, the ERP may halt processing in the mixed mode at that time (or some later time) by stopping the return of data in streaming mode to the search head and switching to reporting mode only. The ERP at this point starts sending interim results in reporting mode to the search head, which in turn may then present this processed data responsive to the search request to the client or search requester. Typically, the search head switches from using results from the ERP's streaming mode of operation to results from the ERP's reporting mode of operation when the higher bandwidth results from the reporting mode outstrip the amount of data processed by the search head in the streaming mode of ERP operation. A reporting mode may have a higher bandwidth because the ERP does not have to spend time transferring data to the search head for processing all the machine data. In addition, the ERP may optionally direct another processor to do the processing.
The streaming mode of operation does not need to be stopped to gain the higher bandwidth benefits of a reporting mode; the search head could simply stop using the streaming mode results—and start using the reporting mode results—when the bandwidth of the reporting mode has caught up with or exceeded the amount of bandwidth provided by the streaming mode. Thus, a variety of triggers and ways to accomplish a search head's switch from using streaming mode results to using reporting mode results may be appreciated by one skilled in the art. The reporting mode can involve the ERP process (or an external system) performing event breaking, time stamping, filtering of events to match the search query request, and calculating statistics on the results. The user can request particular types of data, such as if the search query itself involves types of events, or the search request may ask for statistics on data, such as on events that meet the search request. In either case, the search head understands the query language used in the received query request, which may be a proprietary language. One exemplary query language is Splunk Processing Language (SPL) developed by the assignee of the application, Splunk Inc. The search head typically understands how to use that language to obtain data from the indexers, which store data in a format used by the SPLUNK® Enterprise system. The ERP processes support the search head, as the search head is not ordinarily configured to understand the format in which data is stored in external data sources such as Hadoop or SQL data systems. Rather, the ERP process performs that translation from the query submitted in the search support system's native format (e.g., SPL if SPLUNK® ENTERPRISE is used as the search support system) to a search query request format that will be accepted by the corresponding external data system. The external data system typically stores data in a different format from that of the search support system's native index format, and it utilizes a different query language (e.g., SQL or MapReduce, rather than SPL or the like). As noted, the ERP process can operate in the streaming mode alone. After the ERP process has performed the translation of the query request and received raw results from the streaming mode, the search head can integrate the returned data with any data obtained from local data sources (e.g., native to the search support system), other external data sources, and other ERP processes (if such operations were required to satisfy the terms of the search query). An advantage of mixed mode operation is that, in addition to streaming mode, the ERP process is also executing concurrently in reporting mode. Thus, the ERP process (rather than the search head) is processing query results (e.g., performing event breaking, timestamping, filtering, possibly calculating statistics if required to be responsive to the search query request, etc.). It should be apparent to those skilled in the art that additional time is needed for the ERP process to perform the processing in such a configuration. Therefore, the streaming mode will allow the search head to start returning interim results to the user at the client device before the ERP process can complete sufficient processing to start returning any search results. The switchover between streaming and reporting mode happens when the ERP process determines that the switchover is appropriate, such as when the ERP process determines it can begin returning meaningful results from its reporting mode. 
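The switchover logic can be sketched as follows, with the concurrency elided and all names invented: the search head relays low-latency streaming results until the reporting side signals that processed results are available, then stops using the stream:

```python
# Hedged sketch of the mixed-mode switchover. Real ERP processes run the
# two modes concurrently; here the reporting "signal" is a simple counter.
def streaming_results():
    # Raw machine data returned in real time with minimal processing.
    for i in range(1000):
        yield {"raw": f"event {i}"}

def reporting_ready(chunks_seen):
    # Stand-in for the ERP determining it has meaningful reporting results.
    return chunks_seen >= 3

def run_mixed_mode():
    interim = []
    for chunks_seen, result in enumerate(streaming_results(), start=1):
        interim.append(result)          # interim (streaming mode) results
        if reporting_ready(chunks_seen):
            break                       # stop using streaming results
    report = {"events_processed": chunks_seen}  # reporting mode takes over
    return interim, report

interim, report = run_mixed_mode()
print(len(interim), report)             # 3 {'events_processed': 3}
```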
The operation described above illustrates the source of operational latency: streaming mode has low latency (immediate results) and usually has relatively low bandwidth (fewer results can be returned per unit of time). In contrast, the concurrently running reporting mode has relatively high latency (it has to perform a lot more processing before returning any results) and usually has relatively high bandwidth (more results can be processed per unit of time). For example, when the ERP process does begin returning report results, it returns more processed results than in the streaming mode, because, e.g., statistics only need to be calculated to be responsive to the search request. That is, the ERP process doesn't have to take time to first return machine data to the search head. As noted, the ERP process could be configured to operate in streaming mode alone and return just the machine data for the search head to process in a way that is responsive to the search request. Alternatively, the ERP process can be configured to operate in the reporting mode only. Also, the ERP process can be configured to operate in streaming mode and reporting mode concurrently, as described, with the ERP process stopping the transmission of streaming results to the search head when the concurrently running reporting mode has caught up and started providing results. The reporting mode does not require the processing of all machine data that is responsive to the search query request before the ERP process starts returning results; rather, the reporting mode usually performs processing of chunks of events and returns the processing results to the search head for each chunk. For example, an ERP process can be configured to merely return the contents of a search result file verbatim, with little or no processing of results. That way, the search head performs all processing (such as parsing byte streams into events, filtering, etc.). The ERP process can be configured to perform additional intelligence, such as analyzing the search request and handling all the computation that a native search indexer process would otherwise perform. In this way, the configured ERP process provides greater flexibility in features while operating according to desired preferences, such as response latency and resource requirements. FIG.5Ais a flow chart of an example method that illustrates how indexers process, index, and store data received from forwarders, in accordance with example embodiments. The data flow illustrated inFIG.5Ais provided for illustrative purposes only; those skilled in the art would understand that one or more of the steps of the processes illustrated inFIG.5Amay be removed or that the ordering of the steps may be changed. Furthermore, for the purposes of illustrating a clear example, one or more particular system components are described in the context of performing various operations during each of the data flow stages. For example, a forwarder is described as receiving and processing machine data during an input phase; an indexer is described as parsing and indexing machine data during parsing and indexing phases; and a search head is described as performing a search query during a search phase. However, other system arrangements and distributions of the processing steps across system components may be used. At block502, a forwarder receives data from an input source, such as a data source202shown inFIG.2. A forwarder initially may receive the data as a raw data stream generated by the input source. 
For example, a forwarder may receive a data stream from a log file generated by an application server, from a stream of network data from a network device, or from any other source of data. In some embodiments, a forwarder receives the raw data and may segment the data stream into “blocks”, possibly of a uniform data size, to facilitate subsequent processing steps. At block504, a forwarder or other system component annotates each block generated from the raw data with one or more metadata fields. These metadata fields may, for example, provide information related to the data block as a whole and may apply to each event that is subsequently derived from the data in the data block. For example, the metadata fields may include separate fields specifying each of a host, a source, and a source type related to the data block. A host field may contain a value identifying a host name or IP address of a device that generated the data. A source field may contain a value identifying a source of the data, such as a pathname of a file or a protocol and port related to received network data. A source type field may contain a value specifying a particular source type label for the data. Additional metadata fields may also be included during the input phase, such as a character encoding of the data, if known, and possibly other values that provide information relevant to later processing steps. In some embodiments, a forwarder forwards the annotated data blocks to another system component (typically an indexer) for further processing. The data intake and query system allows forwarding of data from one data intake and query instance to another, or even to a third-party system. The data intake and query system can employ different types of forwarders in a configuration. In some embodiments, a forwarder may contain the essential components needed to forward data. A forwarder can gather data from a variety of inputs and forward the data to an indexer for indexing and searching. A forwarder can also tag metadata (e.g., source, source type, host, etc.). In some embodiments, a forwarder has the capabilities of the aforementioned forwarder as well as additional capabilities. The forwarder can parse data before forwarding the data (e.g., can associate a time stamp with a portion of data and create an event, etc.) and can route data based on criteria such as source or type of event. The forwarder can also index data locally while forwarding the data to another indexer. At block506, an indexer receives data blocks from a forwarder and parses the data to organize the data into events. In some embodiments, to organize the data into events, an indexer may determine a source type associated with each data block (e.g., by extracting a source type label from the metadata fields associated with the data block, etc.) and refer to a source type configuration corresponding to the identified source type. The source type definition may include one or more properties that indicate to the indexer to automatically determine the boundaries within the received data that indicate the portions of machine data for events. In general, these properties may include regular expression-based rules or delimiter rules where, for example, event boundaries may be indicated by predefined characters or character strings. These predefined characters may include punctuation marks or other special characters including, for example, carriage returns, tabs, spaces, line breaks, etc. 
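As an illustration of delimiter- or regex-based event breaking, the sketch below starts a new event at each line beginning with a timestamp; the boundary rule and sample data are assumptions for the example, not an actual source type definition:

```python
# Sketch of event breaking: event boundaries are found with a regex rule,
# so multi-line entries (e.g., a stack trace) stay within one event.
import re

BOUNDARY = re.compile(r"^\d{4}-\d{2}-\d{2} ", re.MULTILINE)

raw_block = (
    "2023-07-01 12:00:01 ERROR db timeout\n"
    "  retrying connection\n"
    "2023-07-01 12:00:05 INFO db reconnected\n"
)

def break_events(block):
    """Split a raw data block into events at boundary matches."""
    starts = [m.start() for m in BOUNDARY.finditer(block)]
    return [block[s:e].rstrip("\n")
            for s, e in zip(starts, starts[1:] + [len(block)])]

for event in break_events(raw_block):
    print(repr(event))
# Two events; the indented continuation line stays with the first event.
```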
If a source type for the data is unknown to the indexer, an indexer may infer a source type for the data by examining the structure of the data. Then, the indexer can apply an inferred source type definition to the data to create the events. At block508, the indexer determines a timestamp for each event. Similar to the process for parsing machine data, an indexer may again refer to a source type definition associated with the data to locate one or more properties that indicate instructions for determining a timestamp for each event. The properties may, for example, instruct an indexer to extract a time value from a portion of data for the event, to interpolate time values based on timestamps associated with temporally proximate events, to create a timestamp based on a time the portion of machine data was received or generated, to use the timestamp of a previous event, or to use any other rules for determining timestamps. At block510, the indexer associates with each event one or more metadata fields including a field containing the timestamp determined for the event. In some embodiments, a timestamp may be included in the metadata fields. These metadata fields may include any number of “default fields” that are associated with all events, and may also include one or more custom fields as defined by a user. Similar to the metadata fields associated with the data blocks at block504, the default metadata fields associated with each event may include a host, source, and source type field, in addition to a field storing the timestamp. At block512, an indexer may optionally apply one or more transformations to data included in the events created at block506. For example, such transformations can include removing a portion of an event (e.g., a portion used to define event boundaries, extraneous characters from the event, other extraneous text, etc.), masking a portion of an event (e.g., masking a credit card number), removing redundant portions of an event, etc. The transformations applied to events may, for example, be specified in one or more configuration files and referenced by one or more source type definitions. FIG.5Cillustrates an example of how machine data can be stored in a data store in accordance with various disclosed embodiments. In other embodiments, machine data can be stored in a flat file in a corresponding bucket with an associated index file, such as a time series index or “TSIDX.” As such, the depiction of machine data and associated metadata as rows and columns in the table ofFIG.5Cis merely illustrative and is not intended to limit the data format in which the machine data and metadata is stored in various embodiments described herein. In one particular embodiment, machine data can be stored in a compressed or encrypted format. In such embodiments, the machine data can be stored with or be associated with data that describes the compression or encryption scheme with which the machine data is stored. The information about the compression or encryption scheme can be used to decompress or decrypt the machine data, and any metadata with which it is stored, at search time. As mentioned above, certain metadata, e.g., host536, source537, source type538and timestamps535can be generated for each event, and associated with a corresponding portion of machine data539when storing the event data in a data store, e.g., data store208.
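One possible in-memory shape for an event after the steps at blocks508,510, and512is sketched below; the field names follow the text, but the structure itself is an illustrative assumption:

```python
# Illustrative event record: a timestamp, default metadata fields, and the
# (optionally transformed) machine data. Not an actual storage format.
event = {
    "_time": "2023-07-01T12:00:01Z",   # timestamp determined at block 508
    "host": "web01",                   # default metadata fields (block 510)
    "source": "/var/log/access.log",
    "sourcetype": "access_combined",
    # Block 512 transformation applied: credit card number masked.
    "raw": "purchase card=****-****-****-1111 status=200",
}
```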
Any of the metadata can be extracted from the corresponding machine data, or supplied or defined by an entity, such as a user or computer system. The metadata fields can become part of, or be stored with, the event. Note that while the time-stamp metadata field can be extracted from the raw data of each event, the values for the other metadata fields may be determined by the indexer based on information it receives pertaining to the source of the data separate from the machine data. While certain default or user-defined metadata fields can be extracted from the machine data for indexing purposes, all the machine data within an event can be maintained in its original condition. As such, in embodiments in which the portion of machine data included in an event is unprocessed or otherwise unaltered, it is referred to herein as a portion of raw machine data. In other embodiments, the portion of machine data in an event can be processed or otherwise altered. As such, unless certain information needs to be removed for some reason (e.g., extraneous information, confidential information), all the raw machine data contained in an event can be preserved and saved in its original form. Accordingly, the data store in which the event records are stored is sometimes referred to as a “raw record data store.” The raw record data store contains a record of the raw event data tagged with the various default fields. InFIG.5C, the first three rows of the table represent events531,532, and533and are related to a server access log that records requests from multiple clients processed by a server, as indicated by entry of “access.log” in the source column537. In the example shown inFIG.5C, each of the events531-533is associated with a discrete request made from a client device. The raw machine data generated by the server and extracted from a server access log can include the IP address of the client540, the user id of the person requesting the document541, the time the server finished processing the request542, the request line from the client543, the status code returned by the server to the client545, the size of the object returned to the client (in this case, the gif file requested by the client)546and the time spent to serve the request in microseconds544. As seen inFIG.5C, all the raw machine data retrieved from the server access log is retained and stored as part of the corresponding events531-533in the data store. Event534is associated with an entry in a server error log that records errors the server encountered when processing a client request, as indicated by “error.log” in the source column537. Similar to the events related to the server access log, all the raw machine data in the error log file pertaining to event534can be preserved and stored as part of the event534. Saving minimally processed or unprocessed machine data in a data store associated with metadata fields in a manner similar to that shown inFIG.5Cis advantageous because it allows search of all the machine data at search time instead of searching only previously specified and identified fields or field-value pairs. As mentioned above, because data structures used by various embodiments of the present disclosure maintain the underlying raw machine data and use a late-binding schema for searching the raw machine data, it enables a user to continue investigating and learn valuable insights about the raw data. In other words, the user is not compelled to know about all the fields of information that will be needed at data ingestion time.
As a user learns more about the data in the events, the user can continue to refine the late-binding schema by defining new extraction rules, or modifying or deleting existing extraction rules used by the system. At blocks514and516, an indexer can optionally generate a keyword index to facilitate fast keyword searching for events. To build a keyword index, at block514, the indexer identifies a set of keywords in each event. At block516, the indexer includes the identified keywords in an index, which associates each stored keyword with reference pointers to events containing that keyword (or to locations within events where that keyword is located, other location identifiers, etc.). When an indexer subsequently receives a keyword-based query, the indexer can access the keyword index to quickly identify events containing the keyword. In some embodiments, the keyword index may include entries for field name-value pairs found in events, where a field name-value pair can include a pair of keywords connected by a symbol, such as an equals sign or colon. This way, events containing these field name-value pairs can be quickly located. In some embodiments, fields can automatically be generated for some or all of the field names of the field name-value pairs at the time of indexing. For example, if the string “dest=10.0.1.2” is found in an event, a field named “dest” may be created for the event, and assigned a value of “10.0.1.2”. At block518, the indexer stores the events with an associated timestamp in a data store208. Timestamps enable a user to search for events based on a time range. In some embodiments, the stored events are organized into “buckets,” where each bucket stores events associated with a specific time range based on the timestamps associated with each event. This improves time-based searching, as well as allows for events with recent timestamps, which may have a higher likelihood of being accessed, to be stored in a faster memory to facilitate faster retrieval. For example, buckets containing the most recent events can be stored in flash memory rather than on a hard disk. In some embodiments, each bucket may be associated with an identifier, a time range, and a size constraint. Each indexer206may be responsible for storing and searching a subset of the events contained in a corresponding data store208. By distributing events among the indexers and data stores, the indexers can analyze events for a query in parallel. For example, using mapreduce techniques, each indexer returns partial responses for a subset of events to a search head that combines the results to produce an answer for the query. By storing events in buckets for specific time ranges, an indexer may further optimize the data retrieval process by searching buckets corresponding to time ranges that are relevant to a query. In certain embodiments, a bucket can correspond to a file system directory and the machine data, or events, of a bucket can be stored in one or more files of the file system directory. The file system directory can include additional files, such as one or more inverted indexes, high performance indexes, permissions files, configuration files, etc. In some embodiments, each indexer has a home directory and a cold directory. The home directory of an indexer stores hot buckets and warm buckets, and the cold directory of an indexer stores cold buckets.
A hot bucket is a bucket that is capable of receiving and storing events. A warm bucket is a bucket that can no longer receive events for storage but has not yet been moved to the cold directory. A cold bucket is a bucket that can no longer receive events and may be a bucket that was previously stored in the home directory. The home directory may be stored in faster memory, such as flash memory, as events may be actively written to the home directory, and the home directory may typically store events that are more frequently searched and thus are accessed more frequently. The cold directory may be stored in slower and/or larger memory, such as a hard disk, as events are no longer being written to the cold directory, and the cold directory may typically store events that are not as frequently searched and thus are accessed less frequently. In some embodiments, an indexer may also have a quarantine bucket that contains events having potentially inaccurate information, such as an incorrect time stamp associated with the event or a time stamp that appears to be an unreasonable time stamp for the corresponding event. The quarantine bucket may have events from any time range; as such, the quarantine bucket may always be searched at search time. Additionally, an indexer may store old, archived data in a frozen bucket that is not capable of being searched at search time. In some embodiments, a frozen bucket may be stored in slower and/or larger memory, such as a hard disk, and may be stored in offline and/or remote storage. Moreover, events and buckets can also be replicated across different indexers and data stores to facilitate high availability and disaster recovery as described in U.S. Pat. No. 9,130,971, entitled “SITE-BASED SEARCH AFFINITY”, issued on 8 Sep. 2015, and in U.S. patent application Ser. No. 14/266,817, entitled “MULTI-SITE CLUSTERING”, issued on 1 Sep. 2015, each of which is hereby incorporated by reference in its entirety for all purposes. FIG.5Bis a block diagram of an example data store501that includes a directory for each index (or partition) that contains a portion of data managed by an indexer.FIG.5Bfurther illustrates details of an embodiment of an inverted index507B and an event reference array515associated with inverted index507B. The data store501can correspond to a data store208that stores events managed by an indexer206or can correspond to a different data store associated with an indexer206. In the illustrated embodiment, the data store501includes a main directory503associated with a main index and a test directory505associated with a test index. However, the data store501can include fewer or more directories. In some embodiments, multiple indexes can share a single directory or all indexes can share a common directory. Additionally, although illustrated as a single data store501, it will be understood that the data store501can be implemented as multiple data stores storing different portions of the information shown inFIG.5B. For example, a single index or partition can span multiple directories or multiple data stores, and can be indexed or searched by multiple corresponding indexers. In the illustrated embodiment ofFIG.5B, the index-specific directories503and505include inverted indexes507A,507B and509A,509B, respectively. The inverted indexes507A . . .507B, and509A . . .509B can be keyword indexes or field-value pair indexes described herein and can include less or more information than depicted inFIG.5B. In some embodiments, the inverted index507A . . .507B, and509A . .
.509B can correspond to a distinct time-series bucket that is managed by the indexer206and that contains events corresponding to the relevant index (e.g., main index, test index). As such, each inverted index can correspond to a particular range of time for an index. Additional files, such as high performance indexes for each time-series bucket of an index, can also be stored in the same directory as the inverted indexes507A . . .507B, and509A . . .509B. In some embodiments, inverted indexes507A . . .507B, and509A . . .509B can correspond to multiple time-series buckets, or inverted indexes507A . . .507B, and509A . . .509B can correspond to a single time-series bucket. Each inverted index507A . . .507B, and509A . . .509B can include one or more entries, such as keyword (or token) entries or field-value pair entries. Furthermore, in certain embodiments, the inverted indexes507A . . .507B, and509A . . .509B can include additional information, such as a time range523associated with the inverted index or an index identifier525identifying the index associated with the inverted index507A . . .507B, and509A . . .509B. However, each inverted index507A . . .507B, and509A . . .509B can include less or more information than depicted. Token entries, such as token entries511illustrated in inverted index507B, can include a token511A (e.g., “error,” “itemID,” etc.) and event references511B indicative of events that include the token. For example, for the token “error,” the corresponding token entry includes the token “error” and an event reference, or unique identifier, for each event stored in the corresponding time-series bucket that includes the token “error.” In the illustrated embodiment ofFIG.5B, the error token entry includes the identifiers3,5,6,8,11, and12corresponding to events managed by the indexer206and associated with the index main503that are located in the time-series bucket associated with the inverted index507B. In some cases, some token entries can be default entries, automatically determined entries, or user specified entries. In some embodiments, the indexer206can identify each word or string in an event as a distinct token and generate a token entry for it. In some cases, the indexer206can identify the beginning and ending of tokens based on punctuation, spaces, and the like, as described in greater detail herein. In certain cases, the indexer206can rely on user input or a configuration file to identify tokens for token entries511, etc. It will be understood that any combination of token entries can be included as a default, automatically determined, and/or included based on user-specified criteria. Similarly, field-value pair entries, such as field-value pair entries513shown in inverted index507B, can include a field-value pair513A and event references513B indicative of events that include a field value that corresponds to the field-value pair. For example, for a field-value pair sourcetype::sendmail, a field-value pair entry would include the field-value pair sourcetype::sendmail and a unique identifier, or event reference, for each event stored in the corresponding time-series bucket that includes a sendmail sourcetype. In some cases, the field-value pair entries513can be default entries, automatically determined entries, or user specified entries. As a non-limiting example, the field-value pair entries for the fields host, source, sourcetype can be included in the inverted indexes507A . . .507B, and509A . . .509B as a default. As such, all of the inverted indexes507A . . .507B, and509A . .
.509B can include field-value pair entries for the fields host, source, sourcetype. As yet another non-limiting example, the field-value pair entries for the IP address field can be user specified and may only appear in the inverted index507B based on user-specified criteria. As another non-limiting example, as the indexer indexes the events, it can automatically identify field-value pairs and create field-value pair entries. For example, based on the indexer's review of events, it can identify IP address as a field in each event and add the IP address field-value pair entries to the inverted index507B. It will be understood that any combination of field-value pair entries can be included as a default, automatically determined, or included based on user-specified criteria. Each unique identifier517, or event reference, can correspond to a unique event located in the time-series bucket. However, the same event reference can be located in multiple entries. For example, if an event has a sourcetype splunkd, host www1 and token “warning,” then the unique identifier for the event will appear in the field-value pair entries sourcetype::splunkd and host::www1, as well as the token entry “warning.” With reference to the illustrated embodiment ofFIG.5Band the event that corresponds to the event reference 3, the event reference 3 is found in the field-value pair entries513host::hostA, source::sourceB, sourcetype::sourcetypeA, and IP address::91.205.189.15 indicating that the event corresponding to the event reference is from hostA, sourceB, of sourcetypeA, and includes 91.205.189.15 in the event data. For some fields, the unique identifier is located in only one field-value pair entry for a particular field. For example, the inverted index may include four sourcetype field-value pair entries corresponding to four different sourcetypes of the events stored in a bucket (e.g., sourcetypes: sendmail, splunkd, web_access, and web_service). Within those four sourcetype field-value pair entries, an identifier for a particular event may appear in only one of the field-value pair entries. With continued reference to the example illustrated embodiment ofFIG.5B, because the event reference 7 appears in the field-value pair entry sourcetype::sourcetypeA, it does not appear in the other field-value pair entries for the sourcetype field, including sourcetype::sourcetypeB, sourcetype::sourcetypeC, and sourcetype::sourcetypeD. The event references517can be used to locate the events in the corresponding bucket. For example, the inverted index can include, or be associated with, an event reference array515. The event reference array515can include an array entry517for each event reference in the inverted index507B. Each array entry517can include location information519of the event corresponding to the unique identifier (non-limiting example: seek address of the event), a timestamp521associated with the event, or additional information regarding the event associated with the event reference, etc. For each token entry511or field-value pair entry513, the event references or unique identifiers can be listed in chronological order, or the value of the event reference can be assigned based on chronological data, such as a timestamp associated with the event referenced by the event reference. For example, the event reference 1 in the illustrated embodiment ofFIG.5Bcan correspond to the first-in-time event for the bucket, and the event reference 12 can correspond to the last-in-time event for the bucket.
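By way of a non-limiting illustration, the relationship among token entries, field-value pair entries, and the event reference array can be sketched in Python. The class and attribute names and the sample values below are hypothetical conveniences for exposition, not an actual implementation:

    # Minimal, illustrative sketch of one inverted index for a single
    # time-series bucket; all names and sample values are hypothetical.
    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    @dataclass
    class ArrayEntry:
        location: int     # e.g., seek address of the event in the bucket
        timestamp: float  # time associated with the event

    @dataclass
    class InvertedIndex:
        time_range: Tuple[float, float]  # cf. time range523
        index_id: str                    # cf. index identifier525
        token_entries: Dict[str, List[int]] = field(default_factory=dict)
        field_value_entries: Dict[Tuple[str, str], List[int]] = field(default_factory=dict)
        event_reference_array: Dict[int, ArrayEntry] = field(default_factory=dict)

    idx = InvertedIndex(time_range=(0.0, 100.0), index_id="main")
    idx.token_entries["error"] = [3, 5, 6, 8, 11, 12]        # cf. token entries511
    idx.field_value_entries[("sourcetype", "sourcetypeA")] = [3, 7]
    idx.event_reference_array[3] = ArrayEntry(location=1024, timestamp=12.5)

    # A token or field-value pair maps to its event references in a single
    # lookup, without scanning raw event data.
    refs_with_error = idx.token_entries.get("error", [])     # [3, 5, 6, 8, 11, 12]

In such a sketch, the event reference array plays the role of the array entries517described above, mapping each reference to the location and timestamp of its event.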
The event references can, however, be listed in any order, such as reverse chronological order, ascending order, descending order, or some other order, etc. Further, the entries can be sorted. For example, the entries can be sorted alphabetically (collectively or within a particular group), by entry origin (e.g., default, automatically generated, user-specified, etc.), by entry type (e.g., field-value pair entry, token entry, etc.), or chronologically by when added to the inverted index, etc. In the illustrated embodiment ofFIG.5B, the entries are sorted first by entry type and then alphabetically. As a non-limiting example of how the inverted indexes507A . . .507B, and509A . . .509B can be used during a data categorization request command, the indexers can receive filter criteria indicating data that is to be categorized and categorization criteria indicating how the data is to be categorized. Example filter criteria can include, but are not limited to, indexes (or partitions), hosts, sources, sourcetypes, time ranges, field identifiers, keywords, etc. Using the filter criteria, the indexer identifies relevant inverted indexes to be searched. For example, if the filter criteria includes a set of partitions, the indexer can identify the inverted indexes stored in the directory corresponding to the particular partition as relevant inverted indexes. Other means can be used to identify inverted indexes associated with a partition of interest. For example, in some embodiments, the indexer can review an entry in the inverted indexes, such as an index-value pair entry513, to determine if a particular inverted index is relevant. If the filter criteria does not identify any partition, then the indexer can identify all inverted indexes managed by the indexer as relevant inverted indexes. Similarly, if the filter criteria includes a time range, the indexer can identify inverted indexes corresponding to buckets that satisfy at least a portion of the time range as relevant inverted indexes. For example, if the time range is the last hour, then the indexer can identify all inverted indexes that correspond to buckets storing events associated with timestamps within the last hour as relevant inverted indexes. When used in combination, an index filter criterion specifying one or more partitions and a time range filter criterion specifying a particular time range can be used to identify a subset of inverted indexes within a particular directory (or otherwise associated with a particular partition) as relevant inverted indexes. As such, the indexer can focus the processing on only a subset of the total number of inverted indexes that the indexer manages. Once the relevant inverted indexes are identified, the indexer can review them using any additional filter criteria to identify events that satisfy the filter criteria. In some cases, using the known location of the directory in which the relevant inverted indexes are located, the indexer can determine that any events identified using the relevant inverted indexes satisfy an index filter criterion. For example, if the filter criteria includes a partition main, then the indexer can determine that any events identified using inverted indexes within the partition main directory (or otherwise associated with the partition main) satisfy the index filter criterion. Furthermore, based on the time range associated with each inverted index, the indexer can determine that any events identified using a particular inverted index satisfy a time range filter criterion.
For example, if a time range filter criterion is for the last hour and a particular inverted index corresponds to events within a time range of 50 minutes ago to 35 minutes ago, the indexer can determine that any events identified using the particular inverted index satisfy the time range filter criterion. Conversely, if the particular inverted index corresponds to events within a time range of 59 minutes ago to 62 minutes ago, the indexer can determine that some events identified using the particular inverted index may not satisfy the time range filter criterion. Using the inverted indexes, the indexer can identify event references (and therefore events) that satisfy the filter criteria. For example, if the token “error” is a filter criterion, the indexer can track all event references within the token entry “error.” Similarly, the indexer can identify other event references located in other token entries or field-value pair entries that match the filter criteria. The system can identify event references located in all of the entries identified by the filter criteria. For example, if the filter criteria include the token “error” and the field-value pair sourcetype::web_ui, the indexer can track the event references found in both the token entry “error” and the field-value pair entry sourcetype::web_ui. As mentioned previously, in some cases, such as when multiple values are identified for a particular filter criterion (e.g., multiple sources for a source filter criterion), the system can identify event references located in at least one of the entries corresponding to the multiple values and in all other entries identified by the filter criteria. The indexer can determine that the events associated with the identified event references satisfy the filter criteria. In some cases, the indexer can further consult a timestamp associated with the event reference to determine whether an event satisfies the filter criteria. For example, if an inverted index corresponds to a time range that is partially outside of a time range filter criterion, then the indexer can consult a timestamp associated with the event reference to determine whether the corresponding event satisfies the time range criterion. In some embodiments, to identify events that satisfy a time range, the indexer can review an array, such as the event reference array515that identifies the time associated with the events. Furthermore, as mentioned above, using the known location of the directory in which the relevant inverted indexes are located (or other index identifier), the indexer can determine that any events identified using the relevant inverted indexes satisfy the index filter criterion. In some cases, based on the filter criteria, the indexer reviews an extraction rule. In certain embodiments, if the filter criteria includes a field name that does not correspond to a field-value pair entry in an inverted index, the indexer can review an extraction rule, which may be located in a configuration file, to identify a field that corresponds to a field-value pair entry in the inverted index.
For example, if the filter criteria includes a field name “sessionID” and the indexer determines that at least one relevant inverted index does not include a field-value pair entry corresponding to the field name sessionID, the indexer can review an extraction rule that identifies how the sessionID field is to be extracted from a particular host, source, or sourcetype (implicitly identifying the particular host, source, or sourcetype that includes a sessionID field). The indexer can replace the field name “sessionID” in the filter criteria with the identified host, source, or sourcetype. In some cases, the field name “sessionID” may be associated with multiple hosts, sources, or sourcetypes, in which case, all identified hosts, sources, and sourcetypes can be added as filter criteria. In some cases, the identified host, source, or sourcetype can replace or be appended to a filter criterion, or be excluded. For example, if the filter criteria includes a criterion for source S1 and the “sessionID” field is found in source S2, the source S2 can replace S1 in the filter criteria, be appended such that the filter criteria includes source S1 and source S2, or be excluded based on the presence of the filter criterion source S1. If the identified host, source, or sourcetype is included in the filter criteria, the indexer can then identify a field-value pair entry in the inverted index that includes a field value corresponding to the identity of the particular host, source, or sourcetype identified using the extraction rule. Once the events that satisfy the filter criteria are identified, the system, such as the indexer206, can categorize the results based on the categorization criteria. The categorization criteria can include categories for grouping the results, such as any combination of partition, source, sourcetype, or host, or other categories or fields as desired. The indexer can use the categorization criteria to identify categorization criteria-value pairs or categorization criteria values by which to categorize or group the results. The categorization criteria-value pairs can correspond to one or more field-value pair entries stored in a relevant inverted index, one or more index-value pairs based on a directory in which the inverted index is located or an entry in the inverted index (or other means by which an inverted index can be associated with a partition), or other criteria-value pair that identifies a general category and a particular value for that category. The categorization criteria values can correspond to the value portion of the categorization criteria-value pair. As mentioned, in some cases, the categorization criteria-value pairs can correspond to one or more field-value pair entries stored in the relevant inverted indexes. For example, the categorization criteria-value pairs can correspond to field-value pair entries of host, source, and sourcetype (or other field-value pair entry as desired). For instance, if there are ten different hosts, four different sources, and five different sourcetypes for an inverted index, then the inverted index can include ten host field-value pair entries, four source field-value pair entries, and five sourcetype field-value pair entries. The indexer can use the nineteen distinct field-value pair entries as categorization criteria-value pairs to group the results.
Specifically, the indexer can identify the location of the event references associated with the events that satisfy the filter criteria within the field-value pairs, and group the event references based on their location. As such, the indexer can identify the particular field value associated with the event corresponding to the event reference. For example, if the categorization criteria include host and sourcetype, the host field-value pair entries and sourcetype field-value pair entries can be used as categorization criteria-value pairs to identify the specific host and sourcetype associated with the events that satisfy the filter criteria. In addition, as mentioned, categorization criteria-value pairs can correspond to data other than the field-value pair entries in the relevant inverted indexes. For example, if partition or index is used as a categorization criterion, the inverted indexes may not include partition field-value pair entries. Rather, the indexer can identify the categorization criteria-value pair associated with the partition based on the directory in which an inverted index is located, information in the inverted index, or other information that associates the inverted index with the partition, etc. As such, a variety of methods can be used to identify the categorization criteria-value pairs from the categorization criteria. Accordingly, based on the categorization criteria (and categorization criteria-value pairs), the indexer can generate groupings based on the events that satisfy the filter criteria. As a non-limiting example, if the categorization criteria includes a partition and sourcetype, then the groupings can correspond to events that are associated with each unique combination of partition and sourcetype. For instance, if there are three different partitions and two different sourcetypes associated with the identified events, then six different groups can be formed, each with a unique partition value-sourcetype value combination. Similarly, if the categorization criteria includes partition, sourcetype, and host and there are two different partitions, three sourcetypes, and five hosts associated with the identified events, then the indexer can generate up to thirty groups for the results that satisfy the filter criteria. Each group can be associated with a unique combination of categorization criteria-value pairs (e.g., unique combinations of partition value, sourcetype value, and host value). In addition, the indexer can count the number of events associated with each group based on the number of events that meet the unique combination of categorization criteria for a particular group (or match the categorization criteria-value pairs for the particular group). With continued reference to the example above, the indexer can count the number of events that meet the unique combination of partition, sourcetype, and host for a particular group. Each indexer communicates the groupings to the search head. The search head can aggregate the groupings from the indexers and provide the groupings for display. In some cases, the groups are displayed based on at least one of the host, source, sourcetype, or partition associated with the groupings. In some embodiments, the search head can further display the groups based on display criteria, such as a display order or a sort order as described in greater detail above.
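As a non-limiting sketch of the grouping and counting step, and assuming each satisfying event reference has already been resolved to its categorization criteria values (the mapping below is hypothetical), the step amounts to counting unique value combinations:

    from collections import Counter

    # Hypothetical mapping from event reference to its categorization
    # criteria values (here, partition and sourcetype) for the events
    # that satisfied the filter criteria.
    criteria_values = {
        1: ("main", "sourcetypeA"),
        2: ("main", "sourcetypeB"),
        3: ("test", "sourcetypeA"),
        4: ("main", "sourcetypeA"),
    }

    # Each unique combination of categorization criteria-value pairs forms
    # one group; the count is the number of events in that group.
    groups = Counter(criteria_values.values())
    # Counter({("main", "sourcetypeA"): 2,
    #          ("main", "sourcetypeB"): 1,
    #          ("test", "sourcetypeA"): 1})

Each key of the resulting counter corresponds to one grouping that an indexer could communicate to the search head, together with its count.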
As a non-limiting example and with reference toFIG.5B, consider a request received by an indexer206that includes the following filter criteria: keyword=error, partition=main, time range=3/1/17 16:22:00.000-16:28:00.000, sourcetype=sourcetypeC, host=hostB, and the following categorization criteria: source. Based on the above criteria, the indexer206identifies main directory503and can ignore test directory505and any other partition-specific directories. The indexer determines that inverted index507B is a relevant inverted index based on its location within the main directory503and the time range associated with it. For the sake of simplicity in this example, the indexer206determines that no other inverted indexes in the main directory503, such as inverted index507A, satisfy the time range criterion. Having identified the relevant inverted index507B, the indexer reviews the token entries511and the field-value pair entries513to identify event references, or events, that satisfy all of the filter criteria. With respect to the token entries511, the indexer can review the error token entry and identify event references 3, 5, 6, 8, 11, 12, indicating that the term “error” is found in the corresponding events. Similarly, the indexer can identify event references 4, 5, 6, 8, 9, 10, 11 in the field-value pair entry sourcetype::sourcetypeC and event references 2, 5, 6, 8, 10, 11 in the field-value pair entry host::hostB. As the filter criteria did not include a source or an IP address field-value pair, the indexer can ignore those field-value pair entries. In addition to identifying event references found in at least one token entry or field-value pair entry (e.g., event references 3, 4, 5, 6, 8, 9, 10, 11, 12), the indexer can identify events (and corresponding event references) that satisfy the time range criterion using the event reference array515(e.g., event references 2, 3, 4, 5, 6, 7, 8, 9, 10). Using the information obtained from the inverted index507B (including the event reference array515), the indexer206can identify the event references that satisfy all of the filter criteria (e.g., event references 5, 6, 8). Having identified the events (and event references) that satisfy all of the filter criteria, the indexer206can group the event references using the received categorization criteria (source). In doing so, the indexer can determine that event references 5 and 6 are located in the field-value pair entry source::sourceD (or have matching categorization criteria-value pairs) and event reference 8 is located in the field-value pair entry source::sourceC. Accordingly, the indexer can generate a sourceC group having a count of one corresponding to reference 8 and a sourceD group having a count of two corresponding to references 5 and 6. This information can be communicated to the search head. In turn, the search head can aggregate the results from the various indexers and display the groupings. As mentioned above, in some embodiments, the groupings can be displayed based at least in part on the categorization criteria, including at least one of host, source, sourcetype, or partition. It will be understood that a change to any of the filter criteria or categorization criteria can result in different groupings.
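The filtering arithmetic of this example can be restated as set intersections. The following sketch simply re-encodes the event references quoted above and is illustrative only:

    # Event references taken from the example above (inverted index507B).
    error_refs      = {3, 5, 6, 8, 11, 12}          # token entry "error"
    sourcetype_refs = {4, 5, 6, 8, 9, 10, 11}       # sourcetype::sourcetypeC
    host_refs       = {2, 5, 6, 8, 10, 11}          # host::hostB
    in_time_range   = {2, 3, 4, 5, 6, 7, 8, 9, 10}  # from event reference array515

    # An event must appear in every entry identified by the filter criteria.
    matching = error_refs & sourcetype_refs & host_refs & in_time_range
    print(sorted(matching))  # [5, 6, 8]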
As one non-limiting example, a request received by an indexer206that includes the following filter criteria: partition=main, time range=3/1/17 16:21:20.000-16:28:17.000, and the following categorization criteria: host, source, sourcetype would result in the indexer identifying event references 1-12 as satisfying the filter criteria. The indexer would then generate up to 24 groupings corresponding to the 24 different combinations of the categorization criteria-value pairs, including host (hostA, hostB), source (sourceA, sourceB, sourceC, sourceD), and sourcetype (sourcetypeA, sourcetypeB, sourcetypeC). However, as there are only twelve event identifiers in the illustrated embodiment and some fall into the same grouping, the indexer generates eight groups and counts as follows:
Group 1 (hostA, sourceA, sourcetypeA): 1 (event reference 7)
Group 2 (hostA, sourceA, sourcetypeB): 2 (event references 1, 12)
Group 3 (hostA, sourceA, sourcetypeC): 1 (event reference 4)
Group 4 (hostA, sourceB, sourcetypeA): 1 (event reference 3)
Group 5 (hostA, sourceB, sourcetypeC): 1 (event reference 9)
Group 6 (hostB, sourceC, sourcetypeA): 1 (event reference 2)
Group 7 (hostB, sourceC, sourcetypeC): 2 (event references 8, 11)
Group 8 (hostB, sourceD, sourcetypeC): 3 (event references 5, 6, 10)
As noted, each group has a unique combination of categorization criteria-value pairs or categorization criteria values. The indexer communicates the groups to the search head for aggregation with results received from other indexers. In communicating the groups to the search head, the indexer can include the categorization criteria-value pairs for each group and the count. In some embodiments, the indexer can include more or less information. For example, the indexer can include the event references associated with each group and other identifying information, such as the indexer or inverted index used to identify the groups. As another non-limiting example, a request received by an indexer206that includes the following filter criteria: partition=main, time range=3/1/17 16:21:20.000-16:28:17.000, source=sourceA, sourceD, and keyword=itemID and the following categorization criteria: host, source, sourcetype would result in the indexer identifying event references 4, 7, and 10 as satisfying the filter criteria, and generate the following groups:
Group 1 (hostA, sourceA, sourcetypeC): 1 (event reference 4)
Group 2 (hostA, sourceA, sourcetypeA): 1 (event reference 7)
Group 3 (hostB, sourceD, sourcetypeC): 1 (event reference 10)
The indexer communicates the groups to the search head for aggregation with results received from other indexers. As will be understood, there are myriad ways for filtering and categorizing the events and event references. For example, the indexer can review multiple inverted indexes associated with a partition or review the inverted indexes of multiple partitions, and categorize the data using any one or any combination of partition, host, source, sourcetype, or other category, as desired. Further, if a user interacts with a particular group, the indexer can provide additional information regarding the group. For example, the indexer can perform a targeted search or sampling of the events that satisfy the filter criteria and the categorization criteria for the selected group, also referred to as the filter criteria corresponding to the group or filter criteria associated with the group. In some cases, to provide the additional information, the indexer relies on the inverted index.
For example, the indexer can identify the event references associated with the events that satisfy the filter criteria and the categorization criteria for the selected group and then use the event reference array515to access some or all of the identified events. In some cases, the categorization criteria values or categorization criteria-value pairs associated with the group become part of the filter criteria for the review. With reference toFIG.5Bfor instance, suppose a group is displayed with a count of six corresponding to event references 4, 5, 6, 8, 10, 11 (i.e., event references 4, 5, 6, 8, 10, 11 satisfy the filter criteria and are associated with matching categorization criteria values or categorization criteria-value pairs) and a user interacts with the group (e.g., selecting the group, clicking on the group, etc.). In response, the search head communicates with the indexer to provide additional information regarding the group. In some embodiments, the indexer identifies the event references associated with the group using the filter criteria and the categorization criteria for the group (e.g., categorization criteria values or categorization criteria-value pairs unique to the group). Together, the filter criteria and the categorization criteria for the group can be referred to as the filter criteria associated with the group. Using the filter criteria associated with the group, the indexer identifies event references 4, 5, 6, 8, 10, 11. Based on sampling criteria, discussed in greater detail above, the indexer can determine that it will analyze a sample of the events associated with the event references 4, 5, 6, 8, 10, 11. For example, the sample can include analyzing event data associated with the event references 5, 8, 10. In some embodiments, the indexer can use the event reference array515to access the event data associated with the event references 5, 8, 10. Once accessed, the indexer can compile the relevant information and provide it to the search head for aggregation with results from other indexers. By identifying events and sampling event data using the inverted indexes, the indexer can reduce the amount of actual data that is analyzed and the number of events that are accessed in order to generate the summary of the group and provide a response in less time. FIG.6Ais a flow diagram of an example method that illustrates how a search head and indexers perform a search query, in accordance with example embodiments. At block602, a search head receives a search query from a client. At block604, the search head analyzes the search query to determine what portion(s) of the query can be delegated to indexers and what portions of the query can be executed locally by the search head. At block606, the search head distributes the determined portions of the query to the appropriate indexers. In some embodiments, a search head cluster may take the place of an independent search head where each search head in the search head cluster coordinates with peer search heads in the search head cluster to schedule jobs, replicate search results, update configurations, fulfill search requests, etc. In some embodiments, the search head (or each search head) communicates with a master node (also known as a cluster master, not shown inFIG.2) that provides the search head with a list of indexers to which the search head can distribute the determined portions of the query.
The master node maintains a list of active indexers and can also designate which indexers may have responsibility for responding to queries over certain sets of events. A search head may communicate with the master node before the search head distributes queries to indexers to discover the addresses of active indexers. At block608, the indexers to which the query was distributed search data stores associated with them for events that are responsive to the query. To determine which events are responsive to the query, the indexer searches for events that match the criteria specified in the query. These criteria can include matching keywords or specific values for certain fields. The searching operations at block608may use the late-binding schema to extract values for specified fields from events at the time the query is processed. In some embodiments, one or more rules for extracting field values may be specified as part of a source type definition in a configuration file. The indexers may then either send the relevant events back to the search head, or use the events to determine a partial result, and send the partial result back to the search head. At block610, the search head combines the partial results and/or events received from the indexers to produce a final result for the query. In some examples, the results of the query are indicative of performance or security of the IT environment and may help improve the performance of components in the IT environment. This final result may comprise different types of data depending on what the query requested. For example, the results can include a listing of matching events returned by the query, or some type of visualization of the data from the returned events. In another example, the final result can include one or more calculated values derived from the matching events. The results generated by the system108can be returned to a client using different techniques. For example, one technique streams results or relevant events back to a client in real-time as they are identified. Another technique waits to report the results to the client until a complete set of results (which may include a set of relevant events or a result based on relevant events) is ready to return to the client. Yet another technique streams interim results or relevant events back to the client in real-time until a complete set of results is ready, and then returns the complete set of results to the client. In another technique, certain results are stored as “search jobs” and the client may retrieve the results by referring to the search jobs. The search head can also perform various operations to make the search more efficient. For example, before the search head begins execution of a query, the search head can determine a time range for the query and a set of common keywords that all matching events include. The search head may then use these parameters to query the indexers to obtain a superset of the eventual results. Then, during a filtering stage, the search head can perform field-extraction operations on the superset to produce a reduced set of search results. This speeds up queries, which may be particularly helpful for queries that are performed on a periodic basis. Various embodiments of the present disclosure can be implemented using, or in conjunction with, a pipelined command language.
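Before turning to pipelined command languages, the division of labor inFIG.6Acan be summarized with a short, non-limiting Python sketch; the function names, the per-indexer data, and the reduction step are assumptions for illustration only:

    from concurrent.futures import ThreadPoolExecutor

    def search_indexer(indexer, query_portion):
        # Hypothetical stand-in for block608: each indexer searches its own
        # data store and returns a partial result (here, a simple count).
        return sum(1 for event in indexer["events"] if query_portion in event)

    indexers = [
        {"name": "indexer1", "events": ["error on hostA", "ok on hostA"]},
        {"name": "indexer2", "events": ["error on hostB"]},
    ]

    # Blocks 606 and 608: the search head distributes the determined portion
    # of the query, and the indexers search their data stores in parallel.
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(lambda ix: search_indexer(ix, "error"), indexers))

    # Block610: the search head combines the partial results into a final result.
    final_result = sum(partials)  # 2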
A pipelined command language is a language in which a set of inputs or data is operated on by a first command in a sequence of commands, and then subsequent commands in the order they are arranged in the sequence. Such commands can include any type of functionality for operating on data, such as retrieving, searching, filtering, aggregating, processing, transmitting, and the like. As described herein, a query can thus be formulated in a pipelined command language and include any number of ordered or unordered commands for operating on data. Splunk Processing Language (SPL) is an example of a pipelined command language in which a set of inputs or data is operated on by any number of commands in a particular sequence. A sequence of commands, or command sequence, can be formulated such that the order in which the commands are arranged defines the order in which the commands are applied to a set of data or the results of an earlier executed command. For example, a first command in a command sequence can operate to search or filter for specific data in a particular set of data. The results of the first command can then be passed to another command listed later in the command sequence for further processing. In various embodiments, a query can be formulated as a command sequence defined in a command line of a search UI. In some embodiments, a query can be formulated as a sequence of SPL commands. Some or all of the SPL commands in the sequence of SPL commands can be separated from one another by a pipe symbol “|”. In such embodiments, a set of data, such as a set of events, can be operated on by a first SPL command in the sequence, and then a subsequent SPL command following a pipe symbol “|” after the first SPL command operates on the results produced by the first SPL command or other set of data, and so on for any additional SPL commands in the sequence. As such, a query formulated using SPL comprises a series of consecutive commands that are delimited by pipe “|” characters. The pipe character indicates to the system that the output or result of one command (to the left of the pipe) should be used as the input for one of the subsequent commands (to the right of the pipe). This enables formulation of queries defined by a pipeline of sequenced commands that refines or enhances the data at each step along the pipeline until the desired results are attained. Accordingly, various embodiments described herein can be implemented with Splunk Processing Language (SPL) used in conjunction with the SPLUNK® ENTERPRISE system. While a query can be formulated in many ways, a query can start with a search command and one or more corresponding search terms at the beginning of the pipeline. Such search terms can include any combination of keywords, phrases, times, dates, Boolean expressions, fieldname-field value pairs, etc. that specify which results should be obtained from an index. The results can then be passed as inputs into subsequent commands in a sequence of commands by using, for example, a pipe character. The subsequent commands in a sequence can include directives for additional processing of the results once they have been obtained from one or more indexes. For example, commands may be used to filter unwanted information out of the results, extract more information, evaluate field values, calculate statistics, reorder the results, create an alert, create a summary of the results, or perform some type of aggregation function.
In some embodiments, the summary can include a graph, chart, metric, or other visualization of the data. An aggregation function can include analysis or calculations to return an aggregate value, such as an average value, a sum, a maximum value, a root mean square, statistical values, and the like. Due to its flexible nature, use of a pipelined command language in various embodiments is advantageous because it can perform “filtering” as well as “processing” functions. In other words, a single query can include a search command and search term expressions, as well as data-analysis expressions. For example, a command at the beginning of a query can perform a “filtering” step by retrieving a set of data based on a condition (e.g., records associated with server response times of less than 1 microsecond). The results of the filtering step can then be passed to a subsequent command in the pipeline that performs a “processing” step (e.g., calculation of an aggregate value related to the filtered events such as the average response time of servers with response times of less than 1 microsecond). Furthermore, the search command can allow events to be filtered by keyword as well as field value criteria. For example, a search command can filter out all events containing the word “warning” or filter out all events where a field value associated with a field “clientip” is “10.0.1.2.” The results obtained or generated in response to a command in a query can be considered a set of results data. The set of results data can be passed from one command to another in any data format. In one embodiment, the set of results data can be in the form of a dynamically created table. Each command in a particular query can redefine the shape of the table. In some implementations, an event retrieved from an index in response to a query can be considered a row with a column for each field value. Columns contain basic information about the data and also may contain data that has been dynamically extracted at search time. FIG.6Bprovides a visual representation of the manner in which a pipelined command language or query operates in accordance with the disclosed embodiments. The query630can be input by the user into a search bar. The query comprises a search, the results of which are piped to two commands (namely, command1and command2) that follow the search step. Disk622represents the event data in the raw record data store. When a user query is processed, a search step will precede other queries in the pipeline in order to generate a set of events at block640. For example, the query can comprise search terms “sourcetype=syslog ERROR” at the front of the pipeline as shown inFIG.6B. Intermediate results table624shows fewer rows because it represents the subset of events retrieved from the index that matched the search terms “sourcetype=syslog ERROR” from search command630. By way of further example, instead of a search step, the set of events at the head of the pipeline may be generated by a call to a pre-existing inverted index (as will be explained later). At block642, the set of events generated in the first part of the query may be piped to a query that searches the set of events for field-value pairs or for keywords. For example, the second intermediate results table626shows fewer columns, representing the result of the top command, “top user,” which summarizes the events into a list of the top 10 users and displays the user, count, and percentage.
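As a rough, non-limiting analogy, the full pipeline ofFIG.6B, including the final stage described next, can be emulated with ordinary function chaining. The event dictionaries and helper logic below are assumptions; the snippet emulates the effect of the SPL commands rather than implementing SPL itself:

    from collections import Counter

    events = [  # stand-in for the raw record data store (disk622)
        {"sourcetype": "syslog", "user": "alice", "_raw": "ERROR disk full"},
        {"sourcetype": "syslog", "user": "bob",   "_raw": "ERROR timeout"},
        {"sourcetype": "syslog", "user": "alice", "_raw": "INFO all clear"},
        {"sourcetype": "access", "user": "carol", "_raw": "ERROR 500"},
    ]

    # Stage 1 (block640): search "sourcetype=syslog ERROR".
    matched = [e for e in events
               if e["sourcetype"] == "syslog" and "ERROR" in e["_raw"]]

    # Stage 2 (block642): "top user" summarizes the events into a list of
    # the top users with user, count, and percentage.
    total = len(matched)
    top = [{"user": u, "count": c, "percent": 100.0 * c / total}
           for u, c in Counter(e["user"] for e in matched).most_common(10)]

    # Stage 3 (block644): "fields - percent" drops the percentage column.
    final = [{k: v for k, v in row.items() if k != "percent"} for row in top]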
Finally, at block644, the results of the prior stage can be pipelined to another stage where further filtering or processing of the data can be performed, e.g., preparing the data for display purposes, filtering the data based on a condition, performing a mathematical calculation with the data, etc. As shown inFIG.6B, the “fields - percent” part of command630removes the column that shows the percentage, thereby leaving a final results table628without a percentage column. In different embodiments, other query languages, such as the Structured Query Language (“SQL”), can be used to create a query. The search head210allows users to search and visualize events generated from machine data received from homogenous data sources. The search head210also allows users to search and visualize events generated from machine data received from heterogeneous data sources. The search head210includes various mechanisms, which may additionally reside in an indexer206, for processing a query. A query language may be used to create a query, such as any suitable pipelined query language. For example, Splunk Processing Language (SPL) can be utilized to make a query. SPL is a pipelined search language in which a set of inputs is operated on by a first command in a command line, and then a subsequent command following the pipe symbol “|” operates on the results produced by the first command, and so on for additional commands. Other query languages, such as the Structured Query Language (“SQL”), can be used to create a query. In response to receiving the search query, search head210uses extraction rules to extract values for fields in the events being searched. The search head210obtains extraction rules that specify how to extract a value for fields from an event. Extraction rules can comprise regex rules that specify how to extract values for the fields corresponding to the extraction rules. In addition to specifying how to extract field values, the extraction rules may also include instructions for deriving a field value by performing a function on a character string or value retrieved by the extraction rule. For example, an extraction rule may truncate a character string or convert the character string into a different data format. In some cases, the query itself can specify one or more extraction rules. The search head210can apply the extraction rules to events that it receives from indexers206. Indexers206may apply the extraction rules to events in an associated data store208. Extraction rules can be applied to all the events in a data store or to a subset of the events that have been filtered based on some criteria (e.g., event time stamp values, etc.). Extraction rules can be used to extract one or more values for a field from events by parsing the portions of machine data in the events and examining the data for one or more patterns of characters, numbers, delimiters, etc., that indicate where the field begins and, optionally, ends. FIG.7Ais a diagram of an example scenario where a common customer identifier is found among log data received from three disparate data sources, in accordance with example embodiments. In this example, a user submits an order for merchandise using a vendor's shopping application program701running on the user's system. In this example, the order was not delivered to the vendor's server due to a resource exception at the destination server that is detected by the middleware code702.
The user then sends a message to the customer support server703to complain about the order failing to complete. The three systems701,702, and703are disparate systems that do not have a common logging format. The order application701sends log data704to the data intake and query system in one format, the middleware code702sends error log data705in a second format, and the support server703sends log data706in a third format. Using the log data received at one or more indexers206from the three systems, the vendor can uniquely obtain an insight into user activity, user experience, and system behavior. The search head210allows the vendor's administrator to search the log data from the three systems that one or more indexers206are responsible for searching, thereby obtaining correlated information, such as the order number and corresponding customer ID number of the person placing the order. The system also allows the administrator to see a visualization of related events via a user interface. The administrator can query the search head210for customer ID field value matches across the log data from the three systems that are stored at the one or more indexers206. The customer ID field value exists in the data gathered from the three systems, but the customer ID field value may be located in different areas of the data given differences in the architecture of the systems. There is a semantic relationship between the customer ID field values generated by the three systems. The search head210requests events from the one or more indexers206to gather relevant events from the three systems. The search head210then applies extraction rules to the events in order to extract field values that it can correlate. The search head may apply a different extraction rule to each set of events from each system when the event format differs among systems. In this example, the user interface can display to the administrator the events corresponding to the common customer ID field values707,708, and709, thereby providing the administrator with insight into a customer's experience. Note that query results can be returned to a client, a search head, or any other system component for further processing. In general, query results may include a set of one or more events, a set of one or more values obtained from the events, a subset of the values, statistics calculated based on the values, a report containing the values, a visualization (e.g., a graph or chart) generated from the values, and the like. The search system enables users to run queries against the stored data to retrieve events that meet criteria specified in a query, such as containing certain keywords or having specific values in defined fields.FIG.7Billustrates the manner in which keyword searches and field searches are processed in accordance with disclosed embodiments. If a user inputs a search query into search bar710that includes only keywords (also known as “tokens”), e.g., the keyword “error” or “warning”, the query search engine of the data intake and query system searches for those keywords directly in the event data711stored in the raw record data store. Note that whileFIG.7Bonly illustrates four events712,713,714,715, the raw record data store (corresponding to data store208inFIG.2) may contain records for millions of events. As disclosed above, an indexer can optionally generate a keyword index to facilitate fast keyword searching for event data. 
The indexer includes the identified keywords in an index, which associates each stored keyword with reference pointers to events containing that keyword (or to locations within events where that keyword is located, other location identifiers, etc.). When an indexer subsequently receives a keyword-based query, the indexer can access the keyword index to quickly identify events containing the keyword. For example, if the keyword “HTTP” was indexed by the indexer at index time, and the user searches for the keyword “HTTP”, the events712,713, and714, will be identified based on the results returned from the keyword index. As noted above, the index contains reference pointers to the events containing the keyword, which allows for efficient retrieval of the relevant events from the raw record data store. If a user searches for a keyword that has not been indexed by the indexer, the data intake and query system would nevertheless be able to retrieve the events by searching the event data for the keyword in the raw record data store directly as shown inFIG.7B. For example, if a user searches for the keyword “frank”, and the name “frank” has not been indexed at index time, the data intake and query system will search the event data directly and return the first event712. Note that whether the keyword has been indexed at index time or not, in both cases the raw data of the events712-715is accessed from the raw data record store to service the keyword search. In the case where the keyword has been indexed, the index will contain a reference pointer that will allow for a more efficient retrieval of the event data from the data store. If the keyword has not been indexed, the search engine will need to search through all the records in the data store to service the search. In most cases, however, in addition to keywords, a user's search will also include fields. The term “field” refers to a location in the event data containing one or more values for a specific data item. Often, a field is a value with a fixed, delimited position on a line, or a name and value pair, where there is a single value to each field name. A field can also be multivalued, that is, it can appear more than once in an event and have a different value for each appearance, e.g., email address fields. Fields are searchable by the field name or field name-value pairs. Some examples of fields are “clientip” for IP addresses accessing a web server, or the “From” and “To” fields in email addresses. By way of further example, consider the search, “status=404”. This search query finds events with “status” fields that have a value of “404.” When the search is run, the search engine does not look for events with any other “status” value. It also does not look for events containing other fields that share “404” as a value. As a result, the search returns a set of results that are more focused than if “404” had been used in the search string as part of a keyword search. Note also that fields can appear in events as “key=value” pairs such as “user_name=Bob.” But in most cases, field values appear in fixed, delimited positions without identifying keys. For example, the data store may contain events where the “user_name” value always appears by itself after the timestamp as illustrated by the following string: “Nov 15 09:33:22 johnmedlock.” The data intake and query system advantageously allows for search time field extraction. 
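Before turning to field extraction, the difference between an indexed keyword lookup and a direct scan of the raw record data store can be sketched as follows; the store contents, identifiers, and function below are hypothetical, though they echo the “HTTP” and “frank” examples above:

    raw_store = {  # stand-in for the raw record data store
        712: "91.205.189.15 - frank GET /index.html HTTP 200",
        713: "10.0.1.2 - bob POST /login HTTP 302",
        714: "127.0.0.1 - alice GET /home HTTP 404",
        715: "client=172.16.0.9 status=ok",
    }
    keyword_index = {"HTTP": [712, 713, 714]}  # built at index time

    def keyword_search(keyword):
        # Fast path: the keyword was indexed at index time, so the index
        # yields reference pointers directly.
        if keyword in keyword_index:
            return [(ref, raw_store[ref]) for ref in keyword_index[keyword]]
        # Slow path: scan every record in the raw record data store.
        return [(ref, raw) for ref, raw in raw_store.items() if keyword in raw]

    keyword_search("HTTP")   # served from the keyword index
    keyword_search("frank")  # not indexed; a direct scan finds event 712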
Search time field extraction means that fields can be extracted from the event data at search time using a late-binding schema, as opposed to at data ingestion time, which was a major limitation of prior art systems. In response to receiving the search query, search head210uses extraction rules to extract values for the queried field or fields from the event data being searched. The search head210obtains extraction rules that specify how to extract a value for certain fields from an event. Extraction rules can comprise regex rules that specify how to extract values for the relevant fields. In addition to specifying how to extract field values, the extraction rules may also include instructions for deriving a field value by performing a function on a character string or value retrieved by the extraction rule. For example, an extraction rule may truncate a character string, or convert the character string into a different data format. In some cases, the query itself can specify one or more extraction rules. FIG.7Billustrates the manner in which configuration files may be used to configure custom fields at search time in accordance with the disclosed embodiments. In response to receiving a search query, the data intake and query system determines if the query references a “field.” For example, a query may request a list of events where the “clientip” field equals “127.0.0.1.” If the query itself does not specify an extraction rule and if the field is not a metadata field, e.g., time, host, source, source type, etc., then in order to determine an extraction rule, the search engine may, in one or more embodiments, need to locate configuration file716during the execution of the search as shown inFIG.7B. Configuration file716may contain extraction rules for all the various fields that are not metadata fields, e.g., the “clientip” field. The extraction rules may be inserted into the configuration file in a variety of ways. In some embodiments, the extraction rules can comprise regular expression rules that are manually entered in by the user. Regular expressions match patterns of characters in text and are used for extracting custom fields in text. In one or more embodiments, as noted above, a field extractor may be configured to automatically generate extraction rules for certain field values in the events when the events are being created, indexed, or stored, or possibly at a later time. In one embodiment, a user may be able to dynamically create custom fields by highlighting portions of a sample event that should be extracted as fields using a graphical user interface. The system would then generate a regular expression that extracts those fields from similar events and store the regular expression as an extraction rule for the associated field in the configuration file716. In some embodiments, the indexers may automatically discover certain custom fields at index time and the regular expressions for those fields will be automatically generated at index time and stored as part of extraction rules in configuration file716. For example, fields that appear in the event data as “key=value” pairs may be automatically extracted as part of an automatic field discovery process. Note that there may be several other ways of adding field definitions to configuration files in addition to the methods discussed herein. The search head210can apply the extraction rules derived from configuration file716to event data that it receives from indexers206.
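A search-time extraction rule of the kind kept in a configuration file can be approximated with a regular expression. The rule structure, pattern, and sample event below are hypothetical, and the rule is scoped by source type because, as discussed next, heterogeneous sources format the same field differently:

    import re

    # Hypothetical extraction rule for a "clientip" field, applicable only
    # to events of one source type.
    extraction_rule = {
        "sourcetype": "access_combined",
        "field": "clientip",
        "regex": re.compile(r"^(?P<clientip>\d{1,3}(?:\.\d{1,3}){3})\s"),
    }

    event = {"sourcetype": "access_combined",
             "_raw": "127.0.0.1 - admin GET /index.html HTTP 200"}

    # Apply the rule at search time only if the event is of the type the
    # rule pertains to.
    value = None
    if event["sourcetype"] == extraction_rule["sourcetype"]:
        match = extraction_rule["regex"].search(event["_raw"])
        if match:
            value = match.group("clientip")  # "127.0.0.1"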
Indexers206may apply the extraction rules from the configuration file to events in an associated data store208. Extraction rules can be applied to all the events in a data store, or to a subset of the events that have been filtered based on some criteria (e.g., event time stamp values, etc.). Extraction rules can be used to extract one or more values for a field from events by parsing the event data and examining the event data for one or more patterns of characters, numbers, delimiters, etc., that indicate where the field begins and, optionally, ends. In one or more embodiments, the extraction rule in configuration file716will also need to define the type or set of events that the rule applies to. Because the raw record data store will contain events from multiple heterogeneous sources, multiple events may contain the same fields in different locations because of discrepancies in the format of the data generated by the various sources. Furthermore, certain events may not contain a particular field at all. For example, event715also contains a “clientip” field; however, the “clientip” field is in a different format from that of the events712,713, and714. To address the discrepancies in the format and content of the different types of events, the configuration file will also need to specify the set of events that an extraction rule applies to, e.g., extraction rule717specifies a rule for filtering by the type of event and contains a regular expression for parsing out the field value. Accordingly, each extraction rule will pertain to only a particular type of event. If a particular field, e.g., “clientip,” occurs in multiple types of events, each of those types of events would need its own corresponding extraction rule in the configuration file716and each of the extraction rules would comprise a different regular expression to parse out the associated field value. The most common way to categorize events is by source type because events generated by a particular source can have the same format. The field extraction rules stored in configuration file716perform search-time field extractions. For example, for a query that requests a list of events with source type “access_combined” where the “clientip” field equals “127.0.0.1,” the query search engine would first locate the configuration file716to retrieve extraction rule717that would allow it to extract values associated with the “clientip” field from the event data720where the source type is “access_combined.” After the “clientip” field has been extracted from all the events comprising the “clientip” field where the source type is “access_combined,” the query search engine can then execute the field criteria by performing the compare operation to filter out the events where the “clientip” field equals “127.0.0.1.” In the example shown inFIG.7B, the events712,713, and714would be returned in response to the user query. In this manner, the search engine can service queries containing field criteria in addition to queries containing keyword criteria (as explained above). The configuration file can be created during indexing. It may either be manually created by the user or automatically generated with certain predetermined field extraction rules. As discussed above, the events may be distributed across several indexers, wherein each indexer may be responsible for storing and searching a subset of the events contained in a corresponding data store.
In a distributed indexer system, each indexer would need to maintain a local copy of the configuration file that is synchronized periodically across the various indexers. The ability to add schema to the configuration file at search time results in increased efficiency. A user can create new fields at search time and simply add field definitions to the configuration file. As a user learns more about the data in the events, the user can continue to refine the late-binding schema by adding new fields, deleting fields, or modifying the field extraction rules in the configuration file for use the next time the schema is used by the system. Because the data intake and query system maintains the underlying raw data and uses late-binding schema for searching the raw data, it enables a user to continue investigating and learn valuable insights about the raw data long after data ingestion time. The ability to add multiple field definitions to the configuration file at search time also results in increased flexibility. For example, multiple field definitions can be added to the configuration file to capture the same field across events generated by different source types. This allows the data intake and query system to search and correlate data across heterogeneous sources flexibly and efficiently. Further, by providing the field definitions for the queried fields at search time, the configuration file716allows the record data store to be field searchable. In other words, the raw record data store can be searched using keywords as well as fields, wherein the fields are searchable name/value pairings that distinguish one event from another and can be defined in configuration file716using extraction rules. In comparison to a search containing field names, a keyword search does not need the configuration file and can search the event data directly as shown inFIG.7B. It should also be noted that any events filtered by performing a search-time field extraction using a configuration file can be further processed by directing the results of the filtering step to a processing step using a pipelined search language. Using the prior example, a user could pipeline the results of the compare step to an aggregate function by asking the query search engine to count the number of events where the “clientip” field equals “127.0.0.1.” FIG.8Ais an interface diagram of an example user interface for a search screen800, in accordance with example embodiments. Search screen800includes a search bar802that accepts user input in the form of a search string. It also includes a time range picker812that enables the user to specify a time range for the search. For historical searches (e.g., searches based on a particular historical time range), the user can select a specific time range, or alternatively a relative time range, such as “today,” “yesterday” or “last week.” For real-time searches (e.g., searches whose results are based on data received in real-time), the user can select the size of a preceding time window to search for real-time events. Search screen800also initially displays a “data summary” dialog as is illustrated inFIG.8Bthat enables the user to select different sources for the events, such as by selecting specific hosts and log files.
After the search is executed, the search screen800inFIG.8Acan display the results through search results tabs804, wherein search results tabs804includes: an “events tab” that displays various information about events returned by the search; a “statistics tab” that displays statistics about the search results; and a “visualization tab” that displays various visualizations of the search results. The events tab illustrated inFIG.8Adisplays a timeline graph805that graphically illustrates the number of events that occurred in one-hour intervals over the selected time range. The events tab also displays an events list808that enables a user to view the machine data in each of the returned events. The events tab additionally displays a sidebar that is an interactive field picker806. The field picker806may be displayed to a user in response to the search being executed and allows the user to further analyze the search results based on the fields in the events of the search results. The field picker806includes field names that reference fields present in the events in the search results. The field picker may display any Selected Fields820that a user has pre-selected for display (e.g., host, source, sourcetype) and may also display any Interesting Fields822that the system determines may be interesting to the user based on pre-specified criteria (e.g., action, bytes, categoryid, clientip, date_hour, date_mday, date_minute, etc.). The field picker also provides an option to display field names for all the fields present in the events of the search results using the All Fields control824. Each field name in the field picker806has a value type identifier to the left of the field name, such as value type identifier826. A value type identifier identifies the type of value for the respective field, such as an “a” for fields that include literal values or a “#” for fields that include numerical values. Each field name in the field picker also has a unique value count to the right of the field name, such as unique value count828. The unique value count indicates the number of unique values for the respective field in the events of the search results. Each field name is selectable to view the events in the search results that have the field referenced by that field name. For example, a user can select the “host” field name, and the events shown in the events list808will be updated with events in the search results that have the field that is referenced by the field name “host.” A data model is a hierarchically structured search-time mapping of semantic knowledge about one or more datasets. It encodes the domain knowledge used to build a variety of specialized searches of those datasets. Those searches, in turn, can be used to generate reports. A data model is composed of one or more “objects” (or “data model objects”) that define or otherwise correspond to a specific set of data. An object is defined by constraints and attributes. An object's constraints are search criteria that define the set of events to be operated on by running a search having that search criteria at the time the data model is selected. An object's attributes are the set of fields to be exposed for operating on that set of events generated by the search criteria. Objects in data models can be arranged hierarchically in parent/child relationships. Each child object represents a subset of the dataset covered by its parent object.
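As a rough illustration of the object structure just described, the following Python sketch models an object as constraints plus attributes, with a child inheriting both from its parent. The object names and constraint strings are hypothetical, chosen only to echo the e-mail example discussed below; this is a sketch of the concept, not the system's representation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DataModelObject:
    """Illustrative only: constraints select events, attributes expose fields."""
    name: str
    constraints: List[str]   # search criteria defining the event set
    attributes: List[str]    # fields exposed for operating on that event set
    parent: Optional["DataModelObject"] = None

    def effective_constraints(self) -> List[str]:
        # A child inherits its parent's constraints and may add its own, so it
        # always selects a subset of the dataset its parent represents.
        inherited = self.parent.effective_constraints() if self.parent else []
        return inherited + self.constraints

    def effective_attributes(self) -> List[str]:
        inherited = self.parent.effective_attributes() if self.parent else []
        return inherited + self.attributes

email = DataModelObject("e-mail activity", ['sourcetype="mail"'], ["sender"])
sent = DataModelObject("e-mails sent", ['action="sent"'], ["recipient"], parent=email)
print(sent.effective_constraints())  # ['sourcetype="mail"', 'action="sent"']
print(sent.effective_attributes())   # ['sender', 'recipient']
```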
The top-level objects in data models are collectively referred to as “root objects.” Child objects have inheritance. Child objects inherit constraints and attributes from their parent objects and may have additional constraints and attributes of their own. Child objects provide a way of filtering events from parent objects. Because a child object may provide an additional constraint in addition to the constraints it has inherited from its parent object, the dataset it represents may be a subset of the dataset that its parent represents. For example, a first data model object may define a broad set of data pertaining to e-mail activity generally, and another data model object may define specific datasets within the broad dataset, such as a subset of the e-mail data pertaining specifically to e-mails sent. For example, a user can simply select an “e-mail activity” data model object to access a dataset relating to e-mails generally (e.g., sent or received), or select an “e-mails sent” data model object (or data sub-model object) to access a dataset relating to e-mails sent. Because a data model object is defined by its constraints (e.g., a set of search criteria) and attributes (e.g., a set of fields), a data model object can be used to quickly search data to identify a set of events and to identify a set of fields to be associated with the set of events. For example, an “e-mails sent” data model object may specify a search for events relating to e-mails that have been sent, and specify a set of fields that are associated with the events. Thus, a user can retrieve and use the “e-mails sent” data model object to quickly search source data for events relating to sent e-mails, and may be provided with a listing of the set of fields relevant to the events in a user interface screen. Examples of data models can include electronic mail, authentication, databases, intrusion detection, malware, application state, alerts, compute inventory, network sessions, network traffic, performance, audits, updates, vulnerabilities, etc. Data models and their objects can be designed by knowledge managers in an organization, and they can enable downstream users to quickly focus on a specific set of data. A user iteratively applies a model development tool (not shown inFIG.8A) to prepare a query that defines a subset of events and assigns an object name to that subset. A child subset is created by further limiting a query that generated a parent subset. Data definitions in associated schemas can be taken from the common information model (CIM) or can be devised for a particular schema and optionally added to the CIM. Child objects inherit fields from parents and can include fields not present in parents. A model developer can select fewer extraction rules than are available for the sources returned by the query that defines events belonging to a model. Selecting a limited set of extraction rules can be a tool for simplifying and focusing the data model, while allowing a user flexibility to explore the data subset. Development of a data model is further explained in U.S. Pat. Nos. 8,788,525 and 8,788,526, both entitled “DATA MODEL FOR MACHINE DATA FOR SEMANTIC SEARCH”, both issued on 22 Jul. 2014, U.S. Pat. No. 8,983,994, entitled “GENERATION OF A DATA MODEL FOR SEARCHING MACHINE DATA”, issued on 17 Mar. 2015, U.S. Pat. No. 9,128,980, entitled “GENERATION OF A DATA MODEL APPLIED TO QUERIES”, issued on 8 Sep. 2015, and U.S. Pat. No. 
9,589,012, entitled “GENERATION OF A DATA MODEL APPLIED TO OBJECT QUERIES”, issued on 7 Mar. 2017, each of which is hereby incorporated by reference in its entirety for all purposes. A data model can also include reports. One or more report formats can be associated with a particular data model and be made available to run against the data model. A user can use child objects to design reports with object datasets that already have extraneous data pre-filtered out. In some embodiments, the data intake and query system108provides the user with the ability to produce reports (e.g., a table, chart, visualization, etc.) without having to enter SPL, SQL, or other query language terms into a search screen. Data models are used as the basis for the search feature. Data models may be selected in a report generation interface. The report generator supports drag-and-drop organization of fields to be summarized in a report. When a model is selected, the fields with available extraction rules are made available for use in the report. The user may refine and/or filter search results to produce more precise reports. The user may select some fields for organizing the report and select other fields for providing detail according to the report organization. For example, “region” and “salesperson” are fields used for organizing the report and sales data can be summarized (subtotaled and totaled) within this organization. The report generator allows the user to specify one or more fields within events and apply statistical analysis on values extracted from the specified one or more fields. The report generator may aggregate search results across sets of events and generate statistics based on aggregated search results. Building reports using the report generation interface is further explained in U.S. patent application Ser. No. 14/503,335, entitled “GENERATING REPORTS FROM UNSTRUCTURED DATA”, filed on 30 Sep. 2014, and which is hereby incorporated by reference in its entirety for all purposes. Data visualizations also can be generated in a variety of formats, by reference to the data model. Reports, data visualizations, and data model objects can be saved and associated with the data model for future use. The data model object may be used to perform searches of other data. FIGS.9-15are interface diagrams of example report generation user interfaces, in accordance with example embodiments. The report generation process may be driven by a predefined data model object, such as a data model object defined and/or saved via a reporting application or a data model object obtained from another source. A user can load a saved data model object using a report editor. For example, the initial search query and fields used to drive the report editor may be obtained from a data model object. The data model object that is used to drive a report generation process may define a search and a set of fields. Upon loading of the data model object, the report generation process may enable a user to use the fields (e.g., the fields defined by the data model object) to define criteria for a report (e.g., filters, split rows/columns, aggregates, etc.) and the search may be used to identify events (e.g., to identify events responsive to the search) used to generate the report. 
That is, for example, if a data model object is selected to drive a report editor, the graphical user interface of the report editor may enable a user to define reporting criteria for the report using the fields associated with the selected data model object, and the events used to generate the report may be constrained to the events that match, or otherwise satisfy, the search constraints of the selected data model object. The selection of a data model object for use in driving a report generation may be facilitated by a data model object selection interface.FIG.9illustrates an example interactive data model selection graphical user interface900of a report editor that displays a listing of available data models901. The user may select one of the data models902. FIG.10illustrates an example data model object selection graphical user interface1000that displays available data objects1001for the selected data model902. The user may select one of the displayed data model objects1002for use in driving the report generation process. Once a data model object is selected by the user, a user interface screen1100shown inFIG.11Amay display an interactive listing of automatic field identification options1101based on the selected data model object. For example, a user may select one of the three illustrated options (e.g., the “All Fields” option1102, the “Selected Fields” option1103, or the “Coverage” option (e.g., fields with at least a specified % of coverage)1104). If the user selects the “All Fields” option1102, all of the fields identified from the events that were returned in response to an initial search query may be selected. That is, for example, all of the fields of the identified data model object may be selected. If the user selects the “Selected Fields” option1103, only those fields of the identified data model object that are selected by the user may be used. If the user selects the “Coverage” option1104, only the fields of the identified data model object meeting a specified coverage criterion may be selected. A percent coverage may refer to the percentage of events returned by the initial search query that a given field appears in. Thus, for example, if an object dataset includes 10,000 events returned in response to an initial search query, and the “avg_age” field appears in 854 of those 10,000 events, then the “avg_age” field would have a coverage of 8.54% for that object dataset. If, for example, the user selects the “Coverage” option and specifies a coverage value of 2%, only fields having a coverage value equal to or greater than 2% may be selected. The number of fields corresponding to each selectable option may be displayed in association with each option. For example, “97” displayed next to the “All Fields” option1102indicates that 97 fields will be selected if the “All Fields” option is selected. The “3” displayed next to the “Selected Fields” option1103indicates that 3 of the 97 fields will be selected if the “Selected Fields” option is selected. The “49” displayed next to the “Coverage” option1104indicates that 49 of the 97 fields (e.g., the 49 fields having a coverage of 2% or greater) will be selected if the “Coverage” option is selected. The number of fields corresponding to the “Coverage” option may be dynamically updated based on the specified percent of coverage. FIG.11Billustrates an example graphical user interface screen1105displaying the reporting application's “Report Editor” page.
The screen may display interactive elements for defining various elements of a report. For example, the page includes a “Filters” element1106, a “Split Rows” element1107, a “Split Columns” element1108, and a “Column Values” element1109. The page may include a list of search results1111. In this example, the Split Rows element1107is expanded, revealing a listing of fields1110that can be used to define additional criteria (e.g., reporting criteria). The listing of fields1110may correspond to the selected fields. That is, the listing of fields1110may list only the fields previously selected, either automatically and/or manually by a user.FIG.11Cillustrates a formatting dialogue1112that may be displayed upon selecting a field from the listing of fields1110. The dialogue can be used to format the display of the results of the selection (e.g., label the column for the selected field to be displayed as “component”). FIG.11Dillustrates an example graphical user interface screen1105including a table of results1113based on the selected criteria, including splitting the rows by the “component” field. A column1114having an associated count for each component listed in the table may be displayed that indicates an aggregate count of the number of times that the particular field-value pair (e.g., the value in a row for a particular field, such as the value “BucketMover” for the field “component”) occurs in the set of events responsive to the initial search query. FIG.12illustrates an example graphical user interface screen1200that allows the user to filter search results and to perform statistical analysis on values extracted from specific fields in the set of events. In this example, the top ten product names ranked by price are selected as a filter1201that causes the display of the ten most popular products sorted by price. Each row is displayed by product name and price1202. This results in each product displayed in a column labeled “product name” along with an associated price in a column labeled “price”1206. Statistical analysis of other fields in the events associated with the ten most popular products has been specified as column values1203. A count of the number of successful purchases for each product is displayed in column1204. These statistics may be produced by filtering the search results by the product name, finding all occurrences of a successful purchase in a field within the events, and generating a total of the number of occurrences. A sum of the total sales is displayed in column1205, which is a result of the multiplication of the price and the number of successful purchases for each product. The reporting application allows the user to create graphical visualizations of the statistics generated for a report. For example,FIG.13illustrates an example graphical user interface1300that displays a set of components and associated statistics1301. The reporting application allows the user to select a visualization of the statistics in a graph (e.g., bar chart, scatter plot, area chart, line chart, pie chart, radial gauge, marker gauge, filler gauge, etc.), where the format of the graph may be selected using the user interface controls1302along the left panel of the user interface1300.FIG.14illustrates an example of a bar chart visualization1400of an aspect of the statistical data1301.FIG.15illustrates a scatter plot visualization1500of an aspect of the statistical data1301.
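The count-and-total arithmetic behind such a report can be approximated in a few lines of Python. The purchase records, field names, and prices below are hypothetical, and the sketch only mirrors the "count of successful purchases" and "price multiplied by purchases" columns described above:

```python
from collections import Counter

# Hypothetical purchase events; in the system these values would be extracted
# from raw event data at search time using the data model's fields.
purchases = [
    {"productname": "widget", "price": 24.99, "status": "success"},
    {"productname": "widget", "price": 24.99, "status": "success"},
    {"productname": "gadget", "price": 9.99, "status": "success"},
    {"productname": "gadget", "price": 9.99, "status": "failure"},
]

# Split rows by product name; count only the successful purchases.
counts = Counter(p["productname"] for p in purchases if p["status"] == "success")
prices = {p["productname"]: p["price"] for p in purchases}

# Column values: purchase count and total sales (price multiplied by count).
for product, count in counts.most_common():
    print(product, count, round(prices[product] * count, 2))
```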
The above-described system provides significant flexibility by enabling a user to analyze massive quantities of minimally-processed data “on the fly” at search time using a late-binding schema, instead of storing pre-specified portions of the data in a database at ingestion time. This flexibility enables a user to see valuable insights, correlate data, and perform subsequent queries to examine interesting aspects of the data that may not have been apparent at ingestion time. However, performing extraction and analysis operations at search time can involve a large amount of data and require a large number of computational operations, which can cause delays in processing the queries. Advantageously, the data intake and query system also employs a number of unique acceleration techniques that have been developed to speed up analysis operations performed at search time. These techniques include: (1) performing search operations in parallel across multiple indexers; (2) using a keyword index; (3) using a high performance analytics store; and (4) accelerating the process of generating reports. These novel techniques are described in more detail below. To facilitate faster query processing, a query can be structured such that multiple indexers perform the query in parallel, while aggregation of search results from the multiple indexers is performed locally at the search head. For example,FIG.16is an example search query received from a client and executed by search peers, in accordance with example embodiments.FIG.16illustrates how a search query1602received from a client at a search head210can split into two phases, including: (1) subtasks1604(e.g., data retrieval or simple filtering) that may be performed in parallel by indexers206for execution, and (2) a search results aggregation operation1606to be executed by the search head when the results are ultimately collected from the indexers. During operation, upon receiving search query1602, a search head210determines that a portion of the operations involved with the search query may be performed locally by the search head. The search head modifies search query1602by substituting “stats” (create aggregate statistics over results sets received from the indexers at the search head) with “prestats” (create statistics by the indexer from local results set) to produce search query1604, and then distributes search query1604to distributed indexers, which are also referred to as “search peers” or “peer indexers.” Note that search queries may generally specify search criteria or operations to be performed on events that meet the search criteria. Search queries may also specify field names, as well as search criteria for the values in the fields or operations to be performed on the values in the fields. Moreover, the search head may distribute the full search query to the search peers as illustrated inFIG.6A, or may alternatively distribute a modified version (e.g., a more restricted version) of the search query to the search peers. In this example, the indexers are responsible for producing the results and sending them to the search head. After the indexers return the results to the search head, the search head aggregates the received results1606to form a single search result set. By executing the query in this manner, the system effectively distributes the computational operations across the indexers while minimizing data transfers. 
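A minimal sketch of this distribute-then-aggregate pattern follows, assuming hypothetical event shards and using local counts as the "prestats"-style partial results that are merged at the search head. It illustrates the shape of the computation only; the real system distributes actual query pipelines, not Python functions.

```python
from collections import Counter

# Events held by each search peer (indexer); here just a status field value.
indexer_shards = [
    ["error", "ok", "error"],   # events local to indexer 1
    ["ok", "ok", "error"],      # events local to indexer 2
]

def prestats(local_events):
    """Runs on each peer: a partial count over local events only."""
    return Counter(local_events)

def stats(partials):
    """Runs on the search head: merge partial result sets into one."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

# Each peer computes its partial independently (in parallel in practice),
# and only the small partial results -- not raw events -- are transferred.
partials = [prestats(shard) for shard in indexer_shards]
print(stats(partials))  # Counter({'ok': 3, 'error': 3})
```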
As described above with reference to the flow charts inFIG.5AandFIG.6A, data intake and query system108can construct and maintain one or more keyword indices to quickly identify events containing specific keywords. This technique can greatly speed up the processing of queries involving specific keywords. As mentioned above, to build a keyword index, an indexer first identifies a set of keywords. Then, the indexer includes the identified keywords in an index, which associates each stored keyword with references to events containing that keyword, or to locations within events where that keyword is located. When an indexer subsequently receives a keyword-based query, the indexer can access the keyword index to quickly identify events containing the keyword. To speed up certain types of queries, some embodiments of system108create a high performance analytics store, which is referred to as a “summarization table,” that contains entries for specific field-value pairs. Each of these entries keeps track of instances of a specific value in a specific field in the events and includes references to events containing the specific value in the specific field. For example, an example entry in a summarization table can keep track of occurrences of the value “94107” in a “ZIP code” field of a set of events and the entry includes references to all of the events that contain the value “94107” in the ZIP code field. This optimization technique enables the system to quickly process queries that seek to determine how many events have a particular value for a particular field. To this end, the system can examine the entry in the summarization table to count instances of the specific value in the field without having to go through the individual events or perform data extractions at search time. Also, if the system needs to process all events that have a specific field-value combination, the system can use the references in the summarization table entry to directly access the events to extract further information without having to search all of the events to find the specific field-value combination at search time. In some embodiments, the system maintains a separate summarization table for each of the above-described time-specific buckets that stores events for a specific time range. A bucket-specific summarization table includes entries for specific field-value combinations that occur in events in the specific bucket. Alternatively, the system can maintain a separate summarization table for each indexer. The indexer-specific summarization table includes entries for the events in a data store that are managed by the specific indexer. Indexer-specific summarization tables may also be bucket-specific. The summarization table can be populated by running a periodic query that scans a set of events to find instances of a specific field-value combination, or alternatively instances of all field-value combinations for a specific field. A periodic query can be initiated by a user, or can be scheduled to occur automatically at specific time intervals. A periodic query can also be automatically launched in response to a query that asks for a specific field-value combination. 
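A compact sketch of such a summarization table is shown below, under the simplifying assumption that a reference value is just the event's index into a toy event store; the field names and values are illustrative only.

```python
# Toy event store; a reference value here is simply the event's list index.
events = [
    {"zipcode": "94107"},
    {"zipcode": "10001"},
    {"zipcode": "94107"},
    {"zipcode": "94107"},
]

# The "periodic query" pass: map each field-value pair to event references.
summarization = {}
for ref, event in enumerate(events):
    for field_name, value in event.items():
        summarization.setdefault((field_name, value), []).append(ref)

# "How many events have zipcode=94107?" is answered from the table alone.
print(len(summarization[("zipcode", "94107")]))  # -> 3, no event scan needed
# The stored references also give direct access to the matching events.
print([events[ref] for ref in summarization[("zipcode", "94107")]])
```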
In some cases, when the summarization tables may not cover all of the events that are relevant to a query, the system can use the summarization tables to obtain partial results for the events that are covered by summarization tables, but may also have to search through other events that are not covered by the summarization tables to produce additional results. These additional results can then be combined with the partial results to produce a final set of results for the query. The summarization table and associated techniques are described in more detail in U.S. Pat. No. 8,682,925, entitled “DISTRIBUTED HIGH PERFORMANCE ANALYTICS STORE”, issued on 25 Mar. 2014, U.S. Pat. No. 9,128,985, entitled “SUPPLEMENTING A HIGH PERFORMANCE ANALYTICS STORE WITH EVALUATION OF INDIVIDUAL EVENTS TO RESPOND TO AN EVENT QUERY”, issued on 8 Sep. 2015, and U.S. patent application Ser. No. 14/815,973, entitled “GENERATING AND STORING SUMMARIZATION TABLES FOR SETS OF SEARCHABLE EVENTS”, filed on 1 Aug. 2015, each of which is hereby incorporated by reference in its entirety for all purposes. To speed up certain types of queries, e.g., frequently encountered queries or computationally intensive queries, some embodiments of system108create a high performance analytics store, which is referred to as a “summarization table” (also referred to as a “lexicon” or “inverted index”), that contains entries for specific field-value pairs. Each of these entries keeps track of instances of a specific value in a specific field in the event data and includes references to events containing the specific value in the specific field. For example, an example entry in an inverted index can keep track of occurrences of the value “94107” in a “ZIP code” field of a set of events and the entry includes references to all of the events that contain the value “94107” in the ZIP code field. Creating the inverted index data structure avoids needing to incur the computational overhead each time a statistical query needs to be run on a frequently encountered field-value pair. In order to expedite queries, in most embodiments, the search engine will employ the inverted index separate from the raw record data store to generate responses to the received queries. Note that the term “summarization table” or “inverted index” as used herein is a data structure that may be generated by an indexer that includes at least field names and field values that have been extracted and/or indexed from event records. An inverted index may also include reference values that point to the location(s) in the field searchable data store where the event records that include the field may be found. Also, an inverted index may be stored using well-known compression techniques to reduce its storage size. Further, note that the term “reference value” (also referred to as a “posting value”) as used herein is a value that references the location of a source record in the field searchable data store. In some embodiments, the reference value may include additional information about each record, such as timestamps, record size, meta-data, or the like. Each reference value may be a unique identifier which may be used to access the event data directly in the field searchable data store. In some embodiments, the reference values may be ordered based on each event record's timestamp.
For example, if numbers are used as identifiers, they may be sorted so event records having a later timestamp always have a lower valued identifier than event records with an earlier timestamp, or vice-versa. Reference values are often included in inverted indexes for retrieving and/or identifying event records. In one or more embodiments, an inverted index is generated in response to a user-initiated collection query. The term “collection query” as used herein refers to queries that include commands that generate summarization information and inverted indexes (or summarization tables) from event records stored in the field searchable data store. Note that a collection query is a special type of query that can be user-generated and is used to create an inverted index. A collection query is not the same as a query that is used to call up or invoke a pre-existing inverted index. In one or more embodiments, a query can comprise an initial step that calls up a pre-generated inverted index on which further filtering and processing can be performed. For example, referring back toFIG.6B, a set of events can be generated at block640by either using a “collection” query to create a new inverted index or by calling up a pre-generated inverted index. A query with several pipelined steps will start with a pre-generated index to accelerate the query. FIG.7Cillustrates the manner in which an inverted index is created and used in accordance with the disclosed embodiments. As shown inFIG.7C, an inverted index722can be created in response to a user-initiated collection query using the event data723stored in the raw record data store. A non-limiting example of a collection query may include “collect clientip=127.0.0.1”, which may result in an inverted index722being generated from the event data723as shown inFIG.7C. Each entry in inverted index722includes an event reference value that references the location of a source record in the field searchable data store. The reference value may be used to access the original event record directly from the field searchable data store. In one or more embodiments, if one or more of the queries is a collection query, the responsive indexers may generate summarization information based on the fields of the event records located in the field searchable data store. In at least one of the various embodiments, one or more of the fields used in the summarization information may be listed in the collection query and/or they may be determined based on terms included in the collection query. For example, a collection query may include an explicit list of fields to summarize. Or, in at least one of the various embodiments, a collection query may include terms or expressions that explicitly define the fields, e.g., using regex rules. InFIG.7C, prior to running the collection query that generates the inverted index722, the field name “clientip” may need to be defined in a configuration file by specifying the “access_combined” source type and a regular expression rule to parse out the client IP address. Alternatively, the collection query may contain an explicit definition for the field name “clientip” which may obviate the need to reference the configuration file at search time. In one or more embodiments, collection queries may be saved and scheduled to run periodically. These scheduled collection queries may periodically update the summarization information corresponding to the query.
For example, if the collection query that generates inverted index722is scheduled to run periodically, one or more indexers would periodically search through the relevant buckets to update inverted index722with event data for any new events with the “clientip” value of “127.0.0.1.” In some embodiments, the inverted indexes that include fields, values, and reference values (e.g., inverted index722) for event records may be included in the summarization information provided to the user. In other embodiments, a user may not be interested in specific fields and values contained in the inverted index, but may need to perform a statistical query on the data in the inverted index. For example, referencing the example ofFIG.7C, rather than viewing the fields within summarization table722, a user may want to generate a count of all client requests from IP address “127.0.0.1.” In this case, the search engine would simply return a result of “4” rather than including details about the inverted index722in the information provided to the user. The pipelined search language, e.g., SPL of the SPLUNK® ENTERPRISE system, can be used to pipe the contents of an inverted index to a statistical query using the “stats” command, for example. A “stats” query refers to queries that generate result sets that may produce aggregate and statistical results from event records, e.g., average, mean, max, min, rms, etc. Where sufficient information is available in an inverted index, a “stats” query may generate its result set rapidly from the summarization information available in the inverted index rather than directly scanning event records. For example, the contents of inverted index722can be pipelined to a stats query, e.g., a “count” function that counts the number of entries in the inverted index and returns a value of “4.” In this way, inverted indexes may enable various stats queries to be performed without scanning or searching the event records. Accordingly, this optimization technique enables the system to quickly process queries that seek to determine how many events have a particular value for a particular field. To this end, the system can examine the entry in the inverted index to count instances of the specific value in the field without having to go through the individual events or perform data extractions at search time. In some embodiments, the system maintains a separate inverted index for each of the above-described time-specific buckets that stores events for a specific time range. A bucket-specific inverted index includes entries for specific field-value combinations that occur in events in the specific bucket. Alternatively, the system can maintain a separate inverted index for each indexer. The indexer-specific inverted index includes entries for the events in a data store that are managed by the specific indexer. Indexer-specific inverted indexes may also be bucket-specific. In at least one or more embodiments, if one or more of the queries is a stats query, each indexer may generate a partial result set from previously generated summarization information. The partial result sets may be returned to the search head that received the query and combined into a single result set for the query. As mentioned above, the inverted index can be populated by running a periodic query that scans a set of events to find instances of a specific field-value combination, or alternatively instances of all field-value combinations for a specific field.
A periodic query can be initiated by a user, or can be scheduled to occur automatically at specific time intervals. A periodic query can also be automatically launched in response to a query that asks for a specific field-value combination. In some embodiments, if summarization information is absent from an indexer that includes responsive event records, further actions may be taken, such as: the summarization information may be generated on the fly, warnings may be provided to the user, the collection query operation may be halted, the absence of summarization information may be ignored, or the like, or a combination thereof. In one or more embodiments, an inverted index may be set up to update continually. For example, the query may ask for the inverted index to update its result periodically, e.g., every hour. In such instances, the inverted index may be a dynamic data structure that is regularly updated to include information regarding incoming events. In some cases, e.g., where a query is executed before an inverted index updates, when the inverted index may not cover all of the events that are relevant to a query, the system can use the inverted index to obtain partial results for the events that are covered by the inverted index, but may also have to search through other events that are not covered by the inverted index to produce additional results on the fly. In other words, an indexer would need to search through event data on the data store to supplement the partial results. These additional results can then be combined with the partial results to produce a final set of results for the query. Note that in typical instances where an inverted index is not completely up to date, the number of events that an indexer would need to search through to supplement the results from the inverted index would be relatively small. In other words, the search to get the most recent results can be quick and efficient because only a small number of event records will be searched through to supplement the information from the inverted index. The inverted index and associated techniques are described in more detail in U.S. Pat. No. 8,682,925, entitled “DISTRIBUTED HIGH PERFORMANCE ANALYTICS STORE”, issued on 25 Mar. 2014, U.S. Pat. No. 9,128,985, entitled “SUPPLEMENTING A HIGH PERFORMANCE ANALYTICS STORE WITH EVALUATION OF INDIVIDUAL EVENTS TO RESPOND TO AN EVENT QUERY”, issued on 8 Sep. 2015, and U.S. patent application Ser. No. 14/815,973, entitled “GENERATING AND STORING SUMMARIZATION TABLES FOR SETS OF SEARCHABLE EVENTS”, filed on 1 Aug. 2015, each of which is hereby incorporated by reference in its entirety for all purposes. In one or more embodiments, if the system needs to process all events that have a specific field-value combination, the system can use the references in the inverted index entry to directly access the events to extract further information without having to search all of the events to find the specific field-value combination at search time. In other words, the system can use the reference values to locate the associated event data in the field searchable data store and extract further information from those events, e.g., extract further field values from the events for purposes of filtering or processing or both. The information extracted from the event data using the reference values can be directed for further filtering or processing in a query using the pipeline search language. The pipelined search language will, in one embodiment, include syntax that can direct the initial filtering step in a query to an inverted index.
In one embodiment, a user would include syntax in the query that explicitly directs the initial searching or filtering step to the inverted index. Referencing the example inFIG.7C, if the user determines that she needs the user id fields associated with the client requests from IP address “127.0.0.1,” instead of incurring the computational overhead of performing a brand new search or re-generating the inverted index with an additional field, the user can generate a query that explicitly directs or pipes the contents of the already generated inverted index722to another filtering step requesting the user ids for the entries in inverted index722where the server response time is greater than “0.0900” microseconds. The search engine would use the reference values stored in inverted index722to retrieve the event data from the field searchable data store, filter the results based on the “response time” field values and, further, extract the user id field from the resulting event data to return to the user. In the present instance, the user ids “frank” and “carlos” would be returned to the user from the generated results table722. In one embodiment, the same methodology can be used to pipe the contents of the inverted index to a processing step. In other words, the user is able to use the inverted index to efficiently and quickly perform aggregate functions on field values that were not part of the initially generated inverted index. For example, a user may want to determine an average object size (size of the requested gif) requested by clients from IP address “127.0.0.1.” In this case, the search engine would again use the reference values stored in inverted index722to retrieve the event data from the field searchable data store and, further, extract the object size field values from the associated events731,732,733and734. Once the corresponding object sizes have been extracted (i.e., 2326, 2900, 2920, and 5000), the average can be computed and returned to the user. In one embodiment, instead of explicitly invoking the inverted index in a user-generated query, e.g., by the use of special commands or syntax, the SPLUNK® ENTERPRISE system can be configured to automatically determine if any prior-generated inverted index can be used to expedite a user query. For example, the user's query may request the average object size (size of the requested gif) requested by clients from IP address “127.0.0.1” without any reference to or use of inverted index722. The search engine, in this case, would automatically determine that an inverted index722already exists in the system that could expedite this query. In one embodiment, prior to running any search comprising a field-value pair, for example, a search engine may search through all the existing inverted indexes to determine if a pre-generated inverted index could be used to expedite the search comprising the field-value pair. Accordingly, the search engine would automatically use the pre-generated inverted index, e.g., index722to generate the results without any user-involvement that directs the use of the index. Using the reference values in an inverted index to be able to directly access the event data in the field searchable data store and extract further information from the associated event data for further filtering and processing is highly advantageous because it avoids incurring the computational overhead of regenerating the inverted index with additional fields or performing a new search.
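The following sketch illustrates this reference-value technique using the object-size example above. The raw store contents, the reference values (731-734), and the field names are illustrative stand-ins for the field searchable data store and inverted index722, assembled from the figures discussed in the text; the sketch shows the access pattern, not the system's storage format.

```python
# Reference value -> full event record (hypothetical data mirroring FIG. 7C).
raw_store = {
    731: {"clientip": "127.0.0.1", "user": "frank", "object_size": 2326},
    732: {"clientip": "127.0.0.1", "user": "carlos", "object_size": 2900},
    733: {"clientip": "127.0.0.1", "user": "frank", "object_size": 2920},
    734: {"clientip": "127.0.0.1", "user": "carlos", "object_size": 5000},
}

# An inverted index entry holds only the field, the value, and references.
inverted_index = {("clientip", "127.0.0.1"): [731, 732, 733, 734]}

# Pipe the index entry to a processing step: fetch the referenced events and
# aggregate a field ("object_size") that was never extracted into the index.
refs = inverted_index[("clientip", "127.0.0.1")]
sizes = [raw_store[ref]["object_size"] for ref in refs]
print(sum(sizes) / len(sizes))  # average object size -> 3286.5
```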
The data intake and query system includes one or more forwarders that receive raw machine data from a variety of input data sources, and one or more indexers that process and store the data in one or more data stores. By distributing events among the indexers and data stores, the indexers can analyze events for a query in parallel. In one or more embodiments, a multiple indexer implementation of the search system would maintain a separate and respective inverted index for each of the above-described time-specific buckets that stores events for a specific time range. A bucket-specific inverted index includes entries for specific field-value combinations that occur in events in the specific bucket. As explained above, a search head would be able to correlate and synthesize data from across the various buckets and indexers. This feature advantageously expedites searches because instead of performing a computationally intensive search in a centrally located inverted index that catalogues all the relevant events, an indexer is able to directly search an inverted index stored in a bucket associated with the time-range specified in the query. This allows the search to be performed in parallel across the various indexers. Further, if the query requests further filtering or processing to be conducted on the event data referenced by the locally stored bucket-specific inverted index, the indexer is able to simply access the event records stored in the associated bucket for further filtering and processing instead of needing to access a central repository of event records, which would dramatically add to the computational overhead. In one embodiment, there may be multiple buckets associated with the time-range specified in a query. If the query is directed to an inverted index, or if the search engine automatically determines that using an inverted index would expedite the processing of the query, the indexers will search through each of the inverted indexes associated with the buckets for the specified time-range. This feature allows the High Performance Analytics Store to be scaled easily. In certain instances, where a query is executed before a bucket-specific inverted index updates, when the bucket-specific inverted index may not cover all of the events that are relevant to a query, the system can use the bucket-specific inverted index to obtain partial results for the events that are covered by the bucket-specific inverted index, but may also have to search through the event data in the bucket associated with the bucket-specific inverted index to produce additional results on the fly. In other words, an indexer would need to search through event data stored in the bucket (that was not yet processed by the indexer for the corresponding inverted index) to supplement the partial results from the bucket-specific inverted index. FIG.7Dpresents a flowchart illustrating how an inverted index in a pipelined search query can be used to determine a set of event data that can be further limited by filtering or processing in accordance with the disclosed embodiments. At block742, a query is received by a data intake and query system. In some embodiments, the query can be received as a user generated query entered into a search bar of a graphical user search interface. The search interface also includes a time range control element that enables specification of a time range for the query. At block744, an inverted index is retrieved.
Note that the inverted index can be retrieved in response to an explicit user search command inputted as part of the user generated query. Alternatively, the search engine can be configured to automatically use an inverted index if it determines that using the inverted index would expedite the servicing of the user generated query. Each of the entries in an inverted index keeps track of instances of a specific value in a specific field in the event data and includes references to events containing the specific value in the specific field. In order to expedite queries, in most embodiments, the search engine will employ the inverted index separate from the raw record data store to generate responses to the received queries. At block746, the query engine determines if the query contains further filtering and processing steps. If the query contains no further commands, then, in one embodiment, summarization information can be provided to the user at block754. If, however, the query does contain further filtering and processing commands, then at block750, the query engine determines if the commands relate to further filtering or processing of the data extracted as part of the inverted index or whether the commands are directed to using the inverted index as an initial filtering step to further filter and process event data referenced by the entries in the inverted index. If the query can be completed using data already in the generated inverted index, then the further filtering or processing steps, e.g., a “count” number of records function, an “average” number of records per hour function, etc., are performed and the results are provided to the user at block752. If, however, the query references fields that are not extracted in the inverted index, then the indexers will access event data pointed to by the reference values in the inverted index to retrieve any further information required at block756. Subsequently, any further filtering or processing steps are performed on the fields extracted directly from the event data and the results are provided to the user at block758. In some embodiments, a data server system such as the data intake and query system can accelerate the process of periodically generating updated reports based on query results. To accelerate this process, a summarization engine automatically examines the query to determine whether generation of updated reports can be accelerated by creating intermediate summaries. If reports can be accelerated, the summarization engine periodically generates a summary covering data obtained during a latest non-overlapping time period. For example, where the query seeks events meeting specified criteria, a summary for the time period includes only events within the time period that meet the specified criteria. Similarly, if the query seeks statistics calculated from the events, such as the number of events that match the specified criteria, then the summary for the time period includes the number of events in the period that match the specified criteria. In addition to the creation of the summaries, the summarization engine schedules the periodic updating of the report associated with the query. During each scheduled report update, the query engine determines whether intermediate summaries have been generated covering portions of the time period covered by the report update. If so, then the report is generated based on the information contained in the summaries.
Also, if additional event data has been received and has not yet been summarized, and is required to generate the complete report, the query can be run on these additional events. Then, the results returned by this query on the additional events, along with the partial results obtained from the intermediate summaries, can be combined to generate the updated report. This process is repeated each time the report is updated. Alternatively, if the system stores events in buckets covering specific time ranges, then the summaries can be generated on a bucket-by-bucket basis. Note that producing intermediate summaries can save the work involved in re-running the query for previous time periods, so advantageously only the newer events need to be processed while generating an updated report. These report acceleration techniques are described in more detail in U.S. Pat. No. 8,589,403, entitled “COMPRESSED JOURNALING IN EVENT TRACKING FILES FOR METADATA RECOVERY AND REPLICATION”, issued on 19 Nov. 2013, U.S. Pat. No. 8,412,696, entitled “REAL TIME SEARCHING AND REPORTING”, issued on 2 Apr. 2013, and U.S. Pat. Nos. 8,589,375 and 8,589,432, both also entitled “REAL TIME SEARCHING AND REPORTING”, both issued on 19 Nov. 2013, each of which is hereby incorporated by reference in its entirety for all purposes. The data intake and query system provides various schemas, dashboards, and visualizations that simplify developers' tasks to create applications with additional capabilities. One such application is an enterprise security application, such as SPLUNK® ENTERPRISE SECURITY, which performs monitoring and alerting operations and includes analytics to facilitate identifying both known and unknown security threats based on large volumes of data stored by the data intake and query system. The enterprise security application provides the security practitioner with visibility into security-relevant threats found in the enterprise infrastructure by capturing, monitoring, and reporting on data from enterprise security devices, systems, and applications. Through the use of the data intake and query system searching and reporting capabilities, the enterprise security application provides a top-down and bottom-up view of an organization's security posture. The enterprise security application leverages the data intake and query system search-time normalization techniques, saved searches, and correlation searches to provide visibility into security-relevant threats and activity and generate notable events for tracking. The enterprise security application enables the security practitioner to investigate and explore the data to find new or unknown threats that do not follow signature-based patterns. Conventional Security Information and Event Management (SIEM) systems lack the infrastructure to effectively store and analyze large volumes of security-related data. Traditional SIEM systems typically use fixed schemas to extract data from pre-defined security-related fields at data ingestion time and store the extracted data in a relational database. This traditional data extraction process (and associated reduction in data size) that occurs at data ingestion time inevitably hampers future incident investigations that may need original data to determine the root cause of a security issue, or to detect the onset of an impending security threat.
In contrast, the enterprise security application system stores large volumes of minimally-processed security-related data at ingestion time for later retrieval and analysis at search time when a live security threat is being investigated. To facilitate this data retrieval process, the enterprise security application provides pre-specified schemas for extracting relevant values from the different types of security-related events and enables a user to define such schemas. The enterprise security application can process many types of security-related information. In general, this security-related information can include any information that can be used to identify security threats. For example, the security-related information can include network-related information, such as IP addresses, domain names, asset identifiers, network traffic volume, uniform resource locator strings, and source addresses. The process of detecting security threats for network-related information is further described in U.S. Pat. No. 8,826,434, entitled “SECURITY THREAT DETECTION BASED ON INDICATIONS IN BIG DATA OF ACCESS TO NEWLY REGISTERED DOMAINS”, issued on 2 Sep. 2014, U.S. Pat. No. 9,215,240, entitled “INVESTIGATIVE AND DYNAMIC DETECTION OF POTENTIAL SECURITY-THREAT INDICATORS FROM EVENTS IN BIG DATA”, issued on 15 Dec. 2015, U.S. Pat. No. 9,173,801, entitled “GRAPHIC DISPLAY OF SECURITY THREATS BASED ON INDICATIONS OF ACCESS TO NEWLY REGISTERED DOMAINS”, issued on 3 Nov. 2015, U.S. Pat. No. 9,248,068, entitled “SECURITY THREAT DETECTION OF NEWLY REGISTERED DOMAINS”, issued on 2 Feb. 2016, U.S. Pat. No. 9,426,172, entitled “SECURITY THREAT DETECTION USING DOMAIN NAME ACCESSES”, issued on 23 Aug. 2016, and U.S. Pat. No. 9,432,396, entitled “SECURITY THREAT DETECTION USING DOMAIN NAME REGISTRATIONS”, issued on 30 Aug. 2016, each of which is hereby incorporated by reference in its entirety for all purposes. Security-related information can also include malware infection data and system configuration information, as well as access control information, such as login/logout information and access failure notifications. The security-related information can originate from various sources within a data center, such as hosts, virtual machines, storage devices and sensors. The security-related information can also originate from various sources in a network, such as routers, switches, email servers, proxy servers, gateways, firewalls and intrusion-detection systems. During operation, the enterprise security application facilitates detecting “notable events” that are likely to indicate a security threat. A notable event represents one or more anomalous incidents, the occurrence of which can be identified based on one or more events (e.g., time stamped portions of raw machine data) fulfilling pre-specified and/or dynamically-determined (e.g., based on machine-learning) criteria defined for that notable event. Examples of notable events include the repeated occurrence of an abnormal spike in network usage over a period of time, a single occurrence of unauthorized access to a system, a host communicating with a server on a known threat list, and the like.
These notable events can be detected in a number of ways, such as: (1) a user can notice a correlation in events and can manually identify that a corresponding group of one or more events amounts to a notable event; or (2) a user can define a “correlation search” specifying criteria for a notable event, and every time one or more events satisfy the criteria, the application can indicate that the one or more events correspond to a notable event; and the like. A user can alternatively select a pre-defined correlation search provided by the application. Note that correlation searches can be run continuously or at regular intervals (e.g., every hour) to search for notable events. Upon detection, notable events can be stored in a dedicated “notable events index,” which can be subsequently accessed to generate various visualizations containing security-related information. Also, alerts can be generated to notify system operators when important notable events are discovered. The enterprise security application provides various visualizations to aid in discovering security threats, such as a “key indicators view” that enables a user to view security metrics, such as counts of different types of notable events. For example,FIG.17Aillustrates an example key indicators view1700that comprises a dashboard, which can display a value1701for various security-related metrics, such as malware infections1702. It can also display a change in a metric value1703, which indicates that the number of malware infections increased by 63 during the preceding interval. Key indicators view1700additionally displays a histogram panel1704that displays a histogram of notable events organized by urgency values, and a histogram of notable events organized by time intervals. This key indicators view is described in further detail in pending U.S. patent application Ser. No. 13/956,338, entitled “KEY INDICATORS VIEW”, filed on 31 Jul. 2013, and which is hereby incorporated by reference in its entirety for all purposes. These visualizations can also include an “incident review dashboard” that enables a user to view and act on “notable events.” These notable events can include: (1) a single event of high importance, such as any activity from a known web attacker; or (2) multiple events that collectively warrant review, such as a large number of authentication failures on a host followed by a successful authentication. For example,FIG.17Billustrates an example incident review dashboard1710that includes a set of incident attribute fields1711that, for example, enables a user to specify a time range field1712for the displayed events. It also includes a timeline1713that graphically illustrates the number of incidents that occurred in time intervals over the selected time range. It additionally displays an events list1714that enables a user to view a list of all of the notable events that match the criteria in the incident attribute fields1711. To facilitate identifying patterns among the notable events, each notable event can be associated with an urgency value (e.g., low, medium, high, critical), which is indicated in the incident review dashboard. The urgency value for a detected event can be determined based on the severity of the event and the priority of the system component associated with the event. As mentioned above, the data intake and query platform provides various features that simplify the developer's task to create various applications.
As mentioned above, the data intake and query platform provides various features that simplify the developer's task of creating various applications. One such application is a virtual machine monitoring application, such as SPLUNK® APP FOR VMWARE® that provides operational visibility into granular performance metrics, logs, tasks and events, and topology from hosts, virtual machines and virtual centers. It empowers administrators with an accurate real-time picture of the health of the environment, proactively identifying performance and capacity bottlenecks. Conventional data-center-monitoring systems lack the infrastructure to effectively store and analyze large volumes of machine-generated data, such as performance information and log data obtained from the data center. In conventional data-center-monitoring systems, machine-generated data is typically pre-processed prior to being stored, for example, by extracting pre-specified data items and storing them in a database to facilitate subsequent retrieval and analysis at search time. However, the rest of the data is discarded rather than saved during pre-processing. In contrast, the virtual machine monitoring application stores large volumes of minimally processed machine data, such as performance information and log data, at ingestion time for later retrieval and analysis at search time when a live performance issue is being investigated. In addition to data obtained from various log files, this performance-related information can include values for performance metrics obtained through an application programming interface (API) provided as part of the vSphere Hypervisor™ system distributed by VMware, Inc. of Palo Alto, California. For example, these performance metrics can include: (1) CPU-related performance metrics; (2) disk-related performance metrics; (3) memory-related performance metrics; (4) network-related performance metrics; (5) energy-usage statistics; (6) data-traffic-related performance metrics; (7) overall system availability performance metrics; (8) cluster-related performance metrics; and (9) virtual machine performance statistics. Such performance metrics are described in U.S. patent application Ser. No. 14/167,316, entitled “CORRELATION FOR USER-SELECTED TIME RANGES OF VALUES FOR PERFORMANCE METRICS OF COMPONENTS IN AN INFORMATION-TECHNOLOGY ENVIRONMENT WITH LOG DATA FROM THAT INFORMATION-TECHNOLOGY ENVIRONMENT”, filed on 29 Jan. 2014, and which is hereby incorporated by reference in its entirety for all purposes. To facilitate retrieving information of interest from performance data and log files, the virtual machine monitoring application provides pre-specified schemas for extracting relevant values from different types of performance-related events, and also enables a user to define such schemas. The virtual machine monitoring application additionally provides various visualizations to facilitate detecting and diagnosing the root cause of performance problems. For example, one such visualization is a “proactive monitoring tree” that enables a user to easily view and understand relationships among various factors that affect the performance of a hierarchically structured computing system. This proactive monitoring tree enables a user to easily navigate the hierarchy by selectively expanding nodes representing various entities (e.g., virtual centers or computing clusters) to view performance information for lower-level nodes associated with lower-level entities (e.g., virtual machines or host systems). Example node-expansion operations are illustrated inFIG.17C, wherein nodes1733and1734are selectively expanded.
Note that nodes1731-1739can be displayed using different patterns or colors to represent different performance states, such as a critical state, a warning state, a normal state or an unknown/offline state. The ease of navigation provided by selective expansion in combination with the associated performance-state information enables a user to quickly diagnose the root cause of a performance problem. The proactive monitoring tree is described in further detail in U.S. Pat. No. 9,185,007, entitled “PROACTIVE MONITORING TREE WITH SEVERITY STATE SORTING”, issued on 10 Nov. 2015, and U.S. Pat. No. 9,426,045, also entitled “PROACTIVE MONITORING TREE WITH SEVERITY STATE SORTING”, issued on 23 Aug. 2016, each of which is hereby incorporated by reference in its entirety for all purposes. The virtual machine monitoring application also provides a user interface that enables a user to select a specific time range and then view heterogeneous data comprising events, log data, and associated performance metrics for the selected time range. For example, the screen illustrated inFIG.17Ddisplays a listing of recent “tasks and events” and a listing of recent “log entries” for a selected time range above a performance-metric graph for “average CPU core utilization” for the selected time range. Note that a user is able to operate pull-down menus1742to selectively display different performance metric graphs for the selected time range. This enables the user to correlate trends in the performance-metric graph with corresponding event and log data to quickly determine the root cause of a performance problem. This user interface is described in more detail in U.S. patent application Ser. No. 14/167,316, entitled “CORRELATION FOR USER-SELECTED TIME RANGES OF VALUES FOR PERFORMANCE METRICS OF COMPONENTS IN AN INFORMATION-TECHNOLOGY ENVIRONMENT WITH LOG DATA FROM THAT INFORMATION-TECHNOLOGY ENVIRONMENT”, filed on 29 Jan. 2014, and which is hereby incorporated by reference in its entirety for all purposes. As previously mentioned, the data intake and query platform provides various schemas, dashboards and visualizations that make it easy for developers to create applications to provide additional capabilities. One such application is an IT monitoring application, such as SPLUNK® IT SERVICE INTELLIGENCE™, which performs monitoring and alerting operations. The IT monitoring application also includes analytics to help an analyst diagnose the root cause of performance problems based on large volumes of data stored by the data intake and query system as correlated to the various services an IT organization provides (a service-centric view). This differs significantly from conventional IT monitoring systems that lack the infrastructure to effectively store and analyze large volumes of service-related events. Traditional service monitoring systems typically use fixed schemas to extract data from pre-defined fields at data ingestion time, wherein the extracted data is typically stored in a relational database. This data extraction process and associated reduction in data content that occurs at data ingestion time inevitably hampers future investigations, when all of the original data may be needed to determine the root cause of or contributing factors to a service issue. In contrast, an IT monitoring application system stores large volumes of minimally-processed service-related data at ingestion time for later retrieval and analysis at search time, to perform regular monitoring, or to investigate a service issue. 
To facilitate this data retrieval process, the IT monitoring application enables a user to define an IT operations infrastructure from the perspective of the services it provides. In this service-centric approach, a service such as corporate e-mail may be defined in terms of the entities employed to provide the service, such as host machines and network devices. Each entity is defined to include information for identifying all of the events that pertain to the entity, whether produced by the entity itself or by another machine, and considering the many various ways the entity may be identified in machine data (such as by a URL, an IP address, or machine name). The service and entity definitions can organize events around a service so that all of the events pertaining to that service can be easily identified. This capability provides a foundation for the implementation of Key Performance Indicators. One or more Key Performance Indicators (KPI's) are defined for a service within the IT monitoring application. Each KPI measures an aspect of service performance at a point in time or over a period of time (aspect KPI's). Each KPI is defined by a search query that derives a KPI value from the machine data of events associated with the entities that provide the service. Information in the entity definitions may be used to identify the appropriate events at the time a KPI is defined or whenever a KPI value is being determined. The KPI values derived over time may be stored to build a valuable repository of current and historical performance information for the service, and the repository, itself, may be subject to search query processing. Aggregate KPIs may be defined to provide a measure of service performance calculated from a set of service aspect KPI values; this aggregate may even be taken across defined timeframes and/or across multiple services. A particular service may have an aggregate KPI derived from substantially all of the aspect KPI's of the service to indicate an overall health score for the service. The IT monitoring application facilitates the production of meaningful aggregate KPI's through a system of KPI thresholds and state values. Different KPI definitions may produce values in different ranges, and so the same value may mean something very different from one KPI definition to another. To address this, the IT monitoring application implements a translation of individual KPI values to a common domain of “state” values. For example, a KPI range of values may be 1-100, or 50-275, while values in the state domain may be ‘critical,’ ‘warning,’ ‘normal,’ and ‘informational’. Thresholds associated with a particular KPI definition determine ranges of values for that KPI that correspond to the various state values. In one case, KPI values 95-100 may be set to correspond to ‘critical’ in the state domain. KPI values from disparate KPI's can be processed uniformly once they are translated into the common state values using the thresholds. For example, “normal 80% of the time” can be applied across various KPI's. To provide meaningful aggregate KPI's, a weighting value can be assigned to each KPI so that its influence on the calculated aggregate KPI value is increased or decreased relative to the other KPI's.
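For illustration only, the following minimal Python sketch shows the threshold-to-state translation and weighted aggregation just described. The specific thresholds, weights, and normalization are assumptions for illustration, not the application's actual configuration.

```python
# Minimal sketch: translate a KPI value to a common "state" domain using
# per-KPI thresholds, then combine differently scaled KPIs into a weighted
# aggregate. All thresholds, ranges, and weights are illustrative.
def kpi_state(value, thresholds):
    """thresholds: list of (lower_bound, state), checked from highest bound down."""
    for bound, state in sorted(thresholds, reverse=True):
        if value >= bound:
            return state
    return "informational"

cpu_thresholds = [(0, "normal"), (75, "warning"), (95, "critical")]
print(kpi_state(97, cpu_thresholds))  # -> "critical"

def aggregate_kpi(kpis):
    """kpis: list of (value, lo, hi, weight); each KPI is first normalized to
    its own 0-1 range so values from disparate ranges can be combined."""
    total_weight = sum(w for _, _, _, w in kpis)
    return sum(w * (v - lo) / (hi - lo) for v, lo, hi, w in kpis) / total_weight

# One KPI on a 1-100 range, another on a 50-275 range, with unequal weights:
print(aggregate_kpi([(97, 0, 100, 2.0), (120, 50, 275, 1.0)]))
```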
One service in an IT environment often impacts, or is impacted by, another service. The IT monitoring application can reflect these dependencies. For example, a dependency relationship between a corporate e-mail service and a centralized authentication service can be reflected by recording an association between their respective service definitions. The recorded associations establish a service dependency topology that informs the data or selection options presented in a GUI, for example. (The service dependency topology is, in effect, a “map” showing how services are connected based on their dependencies.) The service topology may itself be depicted in a GUI and may be interactive to allow navigation among related services. Entity definitions in the IT monitoring application can include informational fields that can serve as metadata, implied data fields, or attributed data fields for the events identified by other aspects of the entity definition. Entity definitions in the IT monitoring application can also be created and updated by an import of tabular data (as represented in a CSV, another delimited file, or a search query result set). The import may be GUI-mediated or processed using import parameters from a GUI-based import definition process. Entity definitions in the IT monitoring application can also be associated with a service by means of a service definition rule. Processing the rule results in the matching entity definitions being associated with the service definition. The rule can be processed at creation time, and thereafter on a scheduled or on-demand basis. This allows dynamic, rule-based updates to the service definition. During operation, the IT monitoring application can recognize notable events that may indicate a service performance problem or other situation of interest. These notable events can be recognized by a “correlation search” specifying trigger criteria for a notable event: every time KPI values satisfy the criteria, the application indicates a notable event. A severity level for the notable event may also be specified. Furthermore, when trigger criteria are satisfied, the correlation search may additionally or alternatively cause a service ticket to be created in an IT service management (ITSM) system, such as systems available from ServiceNow, Inc., of Santa Clara, California. SPLUNK® IT SERVICE INTELLIGENCE™ provides various visualizations built on its service-centric organization of events and the KPI values generated and collected. Visualizations can be particularly useful for monitoring or investigating service performance. The IT monitoring application provides a service monitoring interface suitable as the home page for ongoing IT service monitoring. The interface is appropriate for settings such as desktop use or for a wall-mounted display in a network operations center (NOC). The interface may prominently display a services health section with tiles for the aggregate KPI's indicating overall health for defined services and a general KPI section with tiles for KPI's related to individual service aspects. These tiles may display KPI information in a variety of ways, such as by being colored and ordered according to factors like the KPI state value. They also can be interactive and navigate to visualizations of more detailed KPI information. The IT monitoring application provides a service-monitoring dashboard visualization based on a user-defined template. The template can include user-selectable widgets of varying types and styles to display KPI information. The content and the appearance of widgets can respond dynamically to changing KPI information.
The KPI widgets can appear in conjunction with a background image, user drawing objects, or other visual elements that depict the IT operations environment, for example. The KPI widgets or other GUI elements can be interactive so as to provide navigation to visualizations of more detailed KPI information. The IT monitoring application provides a visualization showing detailed time-series information for multiple KPI's in parallel graph lanes. The length of each lane can correspond to a uniform time range, while the width of each lane may be automatically adjusted to fit the displayed KPI data. Data within each lane may be displayed in a user-selectable style, such as a line, area, or bar chart. During operation, a user may select a position in the time range of the graph lanes to activate lane inspection at that point in time. Lane inspection may display an indicator for the selected time across the graph lanes and display the KPI value associated with that point in time for each of the graph lanes. The visualization may also provide navigation to an interface for defining a correlation search, using information from the visualization to pre-populate the definition. The IT monitoring application provides a visualization for incident review showing detailed information for notable events. The incident review visualization may also show summary information for the notable events over a time frame, such as an indication of the number of notable events at each of a number of severity levels. The severity level display may be presented as a rainbow chart with the warmest color associated with the highest severity classification. The incident review visualization may also show summary information for the notable events over a time frame, such as the number of notable events occurring within segments of the time frame. The incident review visualization may display a list of notable events within the time frame ordered by any number of factors, such as time or severity. The selection of a particular notable event from the list may display detailed information about that notable event, including an identification of the correlation search that generated the notable event. The IT monitoring application provides pre-specified schemas for extracting relevant values from the different types of service-related events. It also enables a user to define such schemas. FIG.18is a block diagram of an embodiment of the data processing environment200described previously with reference toFIG.2that includes a distributed ledger system1802as a data source202of the data intake and query system108, a distributed ledger system monitor1804(also referred to herein as monitor1804), and a client device204to interact with data associated with the data intake and query system108. Non-limiting examples of a distributed ledger system1802include Ethereum, Hyperledger Fabric, Quorum, Guardtime, KSI, etc. The distributed ledger system monitor1804can be used to monitor or obtain data associated with the distributed ledger system1802. The monitor1804can be implemented using one or more computing devices, virtual machines, containers, pods, another virtualization technology, or the like, in communication with one or more nodes1806of the distributed ledger system1802.
For example, in some embodiments, the monitor1804can be implemented on the same or across different computing devices as distinct container instances, with each container having access to a subset of the resources of a host computing device (e.g., a subset of the memory or processing time of the processors of the host computing device), but sharing a similar operating system. For example, the monitor1804can be implemented as one or more Docker containers, which are managed by an orchestration platform of an isolated execution environment system, such as Kubernetes. Although illustrated as being distinct from the data intake and query system108and distributed ledger system1802, it will be understood that in some embodiments, the monitor1804can be implemented as part of the data intake and query system108and/or distributed ledger system1802. For example, the monitor1804can be implemented using or on one or more nodes1806of the distributed ledger system1802and/or be implemented using one or more components of the data intake and query system108. In certain embodiments, such as when the distributed ledger system1802is implemented using an isolated execution environment system, such as, but not limited to, Kubernetes, Docker, etc., the monitor1804can be implemented as an isolated execution environment of the isolated execution environment system and/or using an isolated execution environment system that is separate from the isolated execution environment system used to implement the distributed ledger system1802. In some embodiments, the monitor1804interfaces with the distributed ledger system1802to collect data from one or more components of the distributed ledger system1802, such as the nodes1806. In certain embodiments, the monitor1804can collect different types of data from the distributed ledger system1802. In some embodiments, the monitor1804receives, from one or more nodes1806, distributed ledger transactions, blocks, metrics data, and/or log data. Although only one monitor1804is shown inFIG.18, it will be understood that multiple monitors can be used to collect data from the distributed ledger system1802. In some embodiments, one or more monitors can collect data from each node1806(e.g., from each peer node1806and/or ordering node1806) or a subset of the nodes1806(e.g., one or more peer nodes1806). In some embodiments, the log data can be generated in response to one or more activities on a node1806, such as an error, receipt of a request from another node1806or client computing device, or the node1806processing a transaction of the distributed ledger system1802. The log data can include information about the activity, such as an identification of the error, a transaction identifier corresponding to the transaction being processed and the nature of the processing task, etc. In some embodiments, the log data can correspond to or identify different transactions that are being processed by the nodes1806. For example, the log data generated by a peer node1806(as will be described herein) can indicate the processing task being applied to a particular proposed transaction (e.g., receive transaction, endorse transaction, validate/invalidate transaction, commit block with transaction to blockchain, read/write the proposed changes of the transaction to the ledger state1904, etc.).
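For illustration only, the lines below sketch the kind of log data a peer node might emit while processing a proposed transaction. The exact format is implementation-specific; the timestamps, level names, and "txid=" token shown here are assumptions for illustration.

```python
# Hypothetical peer node log lines tracing one transaction through the
# processing tasks named above. The format is illustrative only.
peer_log_lines = [
    "2023-04-01T12:00:01Z INFO [endorser]  received transaction txid=ab12cd34",
    "2023-04-01T12:00:01Z INFO [endorser]  endorsed transaction txid=ab12cd34",
    "2023-04-01T12:00:03Z INFO [committer] validated transaction txid=ab12cd34",
    "2023-04-01T12:00:03Z INFO [committer] committed block 42 to blockchain",
]
print("\n".join(peer_log_lines))
```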
Similarly, an ordering node1806(as will be described herein) can generate log data indicative of activities it is executing relative to a transaction (e.g., receive endorsed transaction, order transaction, add transaction to a block, communicate transaction to peer nodes1806as part of the block, committing transaction to blockchain as part of a block, etc.). Depending on the implementation of the nodes1806, the log data can be stored in a data store of the nodes, and/or converted and stored as part of log data of an isolated execution environment system, etc. For example, if the nodes1806are implemented using one or more isolated execution environments, the log data may undergo processing by the isolated execution environment system and be stored as part of a log file of the isolated execution environment system. For example, the log data may be wrapped in a JSON wrapper and stored as part of a Docker or Kubernetes log file, etc. As described herein, the generated metrics can include information about the performance metrics of the node1806and/or the distributed ledger system1802, such as, but not limited to, (1) CPU-related performance metrics; (2) disk-related performance metrics; (3) memory-related performance metrics; (4) network-related performance metrics; (5) energy-usage statistics; (6) data-traffic-related performance metrics; (7) overall system availability performance metrics; (8) cluster-related performance metrics; and (9) virtual machine performance statistics, etc. In some cases, the metrics are stored in a data store associated with a node1806. In some cases, the metrics can include a timestamp corresponding to when the metric was measured/obtained. The transaction notifications can include information about a block (including its transactions) that is to be committed to a blockchain. In some cases, the transaction notifications can correspond to individual transactions of a block, the entire block, or parts of a transaction, such as the bytecode used as part of a transaction, etc. In some cases, the transaction notifications can include the entire content of a block (e.g., the header portion, body portion, transactions, metadata, etc.), or a summary of information, such as an indication of which transactions of a block were validated/invalidated and/or committed to a blockchain. In certain embodiments, the transaction notifications can be stored in a data store, a publication-subscription (pub-sub) messaging system, or buffer. The transaction notifications can differ from the log data. For example, the log data can be generated asynchronously as various activities occur on different nodes1806(e.g., errors, specific processing tasks, etc.), whereas the transaction notifications can be generated as a result of a block being committed to a blockchain. For example, in some cases, peer nodes1806and/or ordering nodes1806can generate log data but only peer nodes1806can generate transaction notifications. Further, the transaction notifications can differ from log data in that the log data can include unstructured raw machine data, whereas the transaction notifications can include structured data that identifies the block (or portions thereof) that is to be committed to a blockchain or a summary related to transactions of the block that is to be committed (e.g., identification of validated/invalidated transactions).
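For illustration only, the following Python sketch contrasts an unstructured log line wrapped in a JSON wrapper (the record shape loosely follows Docker's json-file logging convention, but the field contents here are illustrative) with a structured transaction notification summarizing a committed block. All names and values are assumptions for illustration.

```python
# Sketch contrasting unstructured log data with a structured transaction
# notification. Field names and contents are illustrative.
import json

# A raw log line, wrapped by the isolated execution environment system:
wrapped_log = json.dumps({
    "log": "INFO [committer] validated transaction txid=ab12cd34\n",
    "stream": "stdout",
    "time": "2023-04-01T12:00:03.000000000Z",
})

# A structured transaction notification summarizing a block to be committed:
transaction_notification = {
    "block_number": 42,
    "channel": "channel-5",
    "transactions": [
        {"txid": "ab12cd34", "valid": True},
        {"txid": "ef56gh78", "valid": False},  # invalidated but still reported
    ],
}
print(wrapped_log)
print(transaction_notification)
```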
In addition, the transaction notifications can include information about multiple transactions and/or multiple transaction identifiers, whereas the log data may include information about only one transaction and/or only one transaction identifier. As mentioned, the monitor1804can collect any one or any combination of the data generated by the nodes1806. In some embodiments, the monitor1804is configured to obtain one type of data, such as the transaction notifications. In some such embodiments, the monitor1804can interact with a respective node1806to obtain the transaction notifications. As described herein, in some cases, the transaction notifications are posted to a pub-sub. As such, the monitor1804can subscribe to the pub-sub to obtain the relevant transaction notifications. In some cases, a node1806is associated with multiple channels and the transaction notifications for the different channels are found on different topics of a pub-sub or on different pub-subs. In these cases, the monitor1804can be configured to subscribe to the different topics and/or pub-subs. In this way, the monitor1804can collect the relevant transaction notifications from a node1806. In some cases, the monitor1804processes the transaction notifications. For example, in some cases, portions of the transaction notification, such as the details of the individual transactions, may be encrypted or encoded. In these examples, the monitor1804can decode byte strings to readable UTF8 strings or hex. Further, the transaction notifications may include information about multiple transactions. In some such embodiments, the monitor1804may parse information about individual transactions and separately communicate the information about individual transactions to the data intake and query system108(as well as the entire transaction notification). In certain cases, each communication can include a transaction identifier that identifies the corresponding transaction. The data intake and query system108can store the separate communications as individual events. Accordingly, the monitor1804can be used to generate multiple events from one transaction notification. In some embodiments, the data intake and query system108can store the individual events generated from the transaction notifications in an index that is separate from an index that stores metrics data and/or log data. Furthermore, the monitor1804and/or data intake and query system108can extract the transaction identifiers from the communications received from the monitor1804using one or more regex rules. In some such embodiments, the data intake and query system108can store the transaction identifiers in one or more inverted indexes that associate the transaction identifier with the event that includes it. In some cases, the monitor1804can extract additional information from the transaction notifications, such as, but not limited to, channel information (e.g., the channel associated with the transaction and/or blockchain), node information (e.g., identification of the nodes that endorsed, ordered, and/or validated the transaction), etc. The data intake and query system108can store any one or any combination of the extracted information in one or more inverted indexes.
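For illustration only, the following minimal Python sketch shows the kind of regex-rule extraction described above: a transaction identifier is pulled from an event and associated with that event in an inverted-index-style mapping. The "txid=" token format and the index layout are assumptions for illustration.

```python
# Minimal sketch of extracting a transaction identifier with a regex rule
# and associating it with the event in an inverted-index-style mapping.
import re

TXID_RULE = re.compile(r"txid=([0-9a-f]+)")  # illustrative regex rule

event = "INFO [committer] validated transaction txid=ab12cd34"
match = TXID_RULE.search(event)
if match:
    txid = match.group(1)
    # Associate the extracted identifier with the event that includes it:
    inverted_index = {txid: ["event_001"]}
    print(inverted_index)  # -> {'ab12cd34': ['event_001']}
```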
FIG.19Ais a block diagram illustrating an example of a distributed ledger system1802that provides one or more distributed ledgers1808A-1808F (generically referred to as ledger(s)1808) or blockchains across one or more nodes1806A-1806F (generically referred to as node(s)1806). The nodes1806can communicate via a network1902. The network1902can be the same as network104or a different public or private network. Each node1806can be implemented using individual computing devices, distributed processing systems, servers, isolated execution environments (e.g., containers, virtual machines, etc.), shared computing resources, and so on. In some embodiments, the nodes1806can be implemented on the same or as part of different isolated execution environment systems (e.g., as different containers or pods of the same or different Kubernetes cluster or Docker swarm). In the illustrated embodiment ofFIG.19A, each node1806is shown to include a ledger1808(which may include more than one ledger), which can be stored across one or more data stores, etc. In some embodiments, the ledger1808of each node1806can include one or more blockchains, etc. In some cases, the ledgers1808of the different nodes1806correspond to each other, include the same or matching data entries, or include the same data. The distributed nodes1806can store, maintain and/or update their respective ledger1808. Each node1806can be configured for storing a version of the distributed ledger1808(or a portion thereof), and the distributed ledger1808may be updated from time to time with modifications to the ledger1808and/or ledger entries, such as insertion of a ledger entry (also referred to herein as a block) or an update of a ledger entry. The distributed ledger system1802may be adapted such that, where issues arise with the distributed ledger1808(e.g., hash collisions, insertions at the same time, corrupted ledgers/ledger entries), the issues are resolved based at least on issue resolution logic. For example, such logic may be distributed among each of the nodes1806and/or their computing systems and can be used to improve or ensure consistency between copies of the ledgers at the different nodes. In some embodiments, issues may arise that can cause a distributed ledger1808to “fork” and/or spawn another instance, for example, where a collision cannot be automatically resolved between the nodes. In such cases, the resolution logic can be used to determine when to “fork” or spawn another instance, etc. It will be understood that each node1806can include fewer or more components. For example, each node1806can include processors, buffers, applications, databases, etc. In some cases, the nodes1806can include executable instructions or code that, when executed by the node1806, cause the node1806to modify a corresponding ledger1808or generate a transaction that is to be stored in a block of a blockchain. In some cases, the executable instructions can be bytecode and can be used to implement or execute a smart contract relative to the ledger1808. As described herein, the nodes1806can include at least a decentralized set of computing devices and may even include personal computing devices for individuals, and so on. For example, a ledger1808may be stored on a large number of publicly available devices, each acting as a “node” for storing a copy of the ledger1808(e.g., being collaboratively maintained by anonymous peers on a network). In some embodiments, the ledger1808is only stored and maintained on a set of trusted “nodes”, such as on a private network or on the computing systems of authorized users.
In some embodiments, a combination and/or a “mix” of both trusted nodes and public nodes may be utilized, with the same and/or different rules being applied to activities performed at each (e.g., a different validation process may be used for untrusted nodes, or untrusted nodes may simply be unable to perform certain activities). In some embodiments, there may be different levels of nodes with differing characteristics and applied logic. The ledgers1808, ledger entries, and/or information stored on the ledger entries may be used to store information received from one or more computing devices. For example, the information may include banking information, other commercial information, smart contracts, etc. Further, the ledger1808and ledger entries may utilize encryption technology to facilitate and/or validate digital signatures or the data received from the computing devices. In some embodiments, the ledger1808is publicly accessible. In some embodiments, the ledger1808is only accessible to select, authorized nodes having the appropriate permissions. In some embodiments, portions of the ledger1808are public and portions of the ledger1808are private. When the ledger1808is publicly accessible, the ledger1808may be adapted to only store information incidental to a transaction or a document relating to a transaction, and may be adapted such that identifiable information is removed but validation information is maintained (e.g., storing a hash value computed from the underlying information). Further, the information stored on the ledger1808may be encrypted (non-limiting example: using a public key of a key pair associated with the data intake and query system108), redacted, compressed, transformed (e.g., through a one-way transformation or a reversible transformation), and so on. Each of the one or more nodes1806may have, at various times, versions of the ledger1808, and the ledger1808may be maintained through the propagation of entries and/or updates that may be copied across ledgers1808. Ledger entries may contain elements of information (e.g., header information and/or other data). There may be various rules and/or logic involved in activities relating to the ledger entries (e.g., creating, updating, validating, deleting); for example, a majority, supermajority, or unanimous consent between nodes may be enforced as a condition to an activity relating to an entry. In some embodiments, distributed ledgers1808are utilized and the ledger entries are adapted to have various linkages to one another such that the integrity of the ledger entries can be reinforced and/or validated. For example, the linkages may include hashes computed based on prior entries in the ledger1808, which may be utilized to determine whether a ledger entry is a fraudulent entry by recomputing the hash from the information stored on prior entries and checking its correctness. The ledger1808may be maintained through, for example, a “distributed network system”, the distributed network system providing decentralized control and storage of the ledger1808at the one or more nodes (which may be considered “nodes” of the system). The number of “nodes” may be fixed or vary with time, and increasing or decreasing the number of “nodes” may impact the performance and/or security of the system.
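For illustration only, the following minimal Python sketch shows the hash-linkage check described above: each entry stores a hash computed over the prior entry, so altering an earlier entry breaks every later linkage. The entry layout and field names are assumptions for illustration.

```python
# Minimal sketch of verifying ledger-entry linkages by recomputing the hash
# of each prior entry and checking it against the stored linkage.
import hashlib
import json

def entry_hash(entry):
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def verify_chain(ledger):
    """ledger: ordered list of entries, each with a 'prev_hash' linkage field."""
    for prev, entry in zip(ledger, ledger[1:]):
        if entry["prev_hash"] != entry_hash(prev):
            return False  # a prior entry was altered or the linkage is forged
    return True

e0 = {"data": "genesis", "prev_hash": ""}
e1 = {"data": "transfer A->B", "prev_hash": entry_hash(e0)}
print(verify_chain([e0, e1]))  # -> True
e0["data"] = "tampered"
print(verify_chain([e0, e1]))  # -> False
```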
The ledger1808copies stored and maintained at each “node” provide cross-validation with one another in the event of conflicts between ledgers1808, and various cryptographic and/or hashing algorithms may be utilized during the generation, updating, deletion, linking, and so on, of ledger entries such that ledger entries have increased resiliency to unauthorized tampering or modification. For example, a blockchain ledger1808may be distributed across nodes1806and used to track information received from one or more computing devices. The blockchain ledger1808may have entries linked to one another using cryptographic records, and entries in the blockchain may be ordered, time stamped, and/or associated with metadata. These and other methods can be used for protection against “double” transfers and unauthorized modification of ledger entries. FIG.19Bis a block diagram illustrating another example of a distributed ledger system1802that includes different types of nodes1806. Specifically, the illustrated example ofFIG.19Bincludes four peer nodes1806A,1806C,1806D,1806F (generically referred to as peer node(s)1806) and two ordering nodes1806B,1806E (generically referred to as ordering node(s)1806). It will be understood that fewer or more nodes can be included as desired. For example, the distributed ledger system1802can include only one ordering node1806or two or more ordering nodes1806. Similarly, the distributed ledger system1802can include fewer or more peer nodes1806as desired. As described herein, the peer nodes1806and ordering nodes1806can be implemented using one or more computing devices, isolated execution environments, etc. In some embodiments, each peer node1806and/or ordering node1806can be associated with the same or different organization, entity, or user. For example, one company may be associated with or control peer nodes1806A,1806C and ordering node1806B, a second company may be associated with or control peer node1806D and ordering node1806E, and a third company may be associated with or control peer node1806F. A non-limiting example of a distributed ledger system1802that includes peer nodes1806and ordering nodes1806is the Hyperledger Fabric. For simplicity in describingFIG.19B, the peer nodes1806and ordering nodes1806are described with reference to a common channel that enables private communications/transactions between the illustrated nodes1806A-1806F. However, it will be understood that the peer nodes1806and ordering nodes1806can be associated with multiple channels that each enable private communications/transactions between nodes associated with the channel and/or be associated with multiple consortiums made up of organizations that control the individual nodes1806. Further, it will be understood that each peer node1806can include one or more peer node ledgers1808and/or ledger states1904and perform the functions described herein for each channel with which the peer node1806is associated. Similarly, each ordering node1806can include an ordering node ledger1808and perform the functions described herein for each channel with which the ordering node1806is associated. In some cases, each channel can include at least one ordering node1806and multiple peer nodes1806. In certain embodiments, a channel is associated with multiple peer nodes1806and only one ordering node1806. In certain cases, multiple ordering nodes1806can be associated with the same channel.
In the illustrated embodiment ofFIG.19B, each of the peer nodes1806A,1806C,1806D,1806F includes a respective peer node ledger1808A,1808C,1808D,1808F (generically referred to as peer node ledger(s)1808) and a respective ledger state1904A,1904C,1904D,1904F (generically referred to as ledger state(s)1904), and can be used to receive proposed transactions from a client computing device (not shown), endorse transactions, communicate endorsed transactions to a client computing device or ordering node1806, validate transactions of a block, commit blocks to a respective peer node ledger1808, and/or update a respective ledger state1904. Similar to the description of ledgers1808with reference toFIG.19A, the peer node ledgers1808can include one or more ledgers or blockchains. Further, the peer node ledgers1808of the different peer nodes1806can correspond to each other, include the same or matching entries, transactions, blocks, blockchains, etc. In some cases, the peer node ledger1808can include blocks formed from validated transactions, but may exclude invalidated transactions. In certain embodiments, the peer node ledgers1808can include blocks formed from validated and invalidated (or failed) transactions. In certain embodiments, such as embodiments in which an ordering node1806maintains an ordering node ledger1808, the peer node ledgers1808can correspond to or match the ordering node ledgers1808of the ordering nodes1806and/or be different. For example, in some cases, the ordering node ledgers1808can include all endorsed transactions, regardless of whether they are validated, and the peer node ledgers1808can include endorsed and validated transactions but not endorsed and invalidated or failed transactions. In certain embodiments, the peer node ledgers1808can include one ledger or blockchain that matches the ordering node ledger1808and another ledger that does not match the ordering node ledger1808. In some cases, the peer node ledger1808is generated based on blocks received from an ordering node1806. For example, the peer node1806can review the transactions of a received block and, if a transaction is validated, can include the transaction as part of a block for the peer node ledger1808. Accordingly, in certain embodiments, a block of a peer node1806may have fewer transactions (or none) compared to a corresponding block received from the ordering node1806and/or found in the ordering node ledger1808. As described herein at least with reference toFIG.20, when a peer node ledger1808is implemented as a blockchain, each block of the blockchain can include a header portion (including metadata) and a body portion. The header portion and/or metadata can include a block number (e.g., the block's position in the blockchain), one or more content identifiers for the current block, a content identifier for a previous block, one or more timestamps (e.g., when the block was created, added to the blockchain, etc.), a digital certificate, a public key (of a public-private key pair), a digital signature of the peer node1806that added the block to the blockchain, and/or indicators as to whether a transaction of the block is valid/invalid, etc. In addition, in some cases, the header portion can include hashes or content identifiers for individual transactions of a block, and the body portion of a block in the blockchain can include one or more transactions or transaction data associated with a transaction.
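For illustration only, the following minimal Python sketch shows one way the header-portion/body-portion block layout just described could be represented. The field names are assumptions for illustration (the content identifiers reuse the example hashes discussed with reference toFIG.20); the transaction fields themselves are elaborated in the next paragraph.

```python
# Illustrative sketch of a block with a header portion (content identifiers
# and metadata) and a body portion (transactions). Field names are assumed.
from dataclasses import dataclass, field

@dataclass
class BlockHeader:
    block_number: int            # the block's position in the blockchain
    prev_block_hash: str         # content identifier for the previous block
    body_hash: str               # content identifier for the current block
    timestamp: str               # when the block was created/added
    tx_valid_flags: list = field(default_factory=list)  # per-transaction valid/invalid

@dataclass
class Block:
    header: BlockHeader
    body: list  # one entry per transaction

block = Block(
    header=BlockHeader(12, "49vvszj39fjpa", "69yu8qo4prb5",
                       "2023-04-01T12:00:03Z", [True]),
    body=[{"txid": "ab12cd34", "channel": "channel-5"}],
)
print(block.header.block_number)  # -> 12
```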
In certain embodiments, each transaction can include header information (e.g., bytecode used to generate the transaction, software version, etc.), a digital signature of the client computing device that initiated the transaction, a signature or identifier of the endorsing peer nodes1806(peer nodes1806that signed and/or endorsed the transaction), channel information (which channel the transaction is associated with), a signature or identifier of the ordering node1806that ordered the transaction in the block, a proposed change to the peer node ledger1808, an expected input/output of the transaction (e.g., the content of the ledger state1904before and after the transaction is executed, etc.), etc. The ledger state1904can include one or more key-value pairs reflecting the value or state of the key (of the key-value pair), and can be implemented as a database in one or more data stores of a peer node1806. In some embodiments, the ledger state1904reflects a current state or value of the keys based on the transactions in the corresponding peer node ledger1808or blockchain. As a non-limiting example, if the peer node ledger1808reflects transactions (e.g., debits and credits) associated with a particular bank account or other intangible object, the ledger state1904can reflect the current value of money in the bank account based on all previous transactions. As another non-limiting example, the ledger state1904can reflect the current ownership of a car or other physical object based on previous (validated) transactions associated with the car found in the peer node ledger1808. Accordingly, as a peer node1806adds a block with one or more transactions to a peer node ledger1808or blockchain, the peer node1806can update the ledger state1904for keys that were altered based on any one or any combination of the (validated) transactions of the block. Similar to the peer node ledgers1808, the ledger states1904of the different peer nodes1806can correspond to each other, include the same or matching key-value pairs, etc. Although not illustrated, it will be understood that each peer node1806can include fewer or more components. For example, as mentioned, each peer node1806can include multiple peer node ledgers1808, as well as bytecodes, permissions, etc. This information can be stored on one or more data stores associated with the peer node1806. The permissions can indicate which channels, organizations, or other components, the peer node1806is associated with and/or what information the peer node1806is allowed to access or edit, etc. The bytecodes can include executable instructions that the peer node1806is to execute and which can generate or be used to endorse or validate transactions for a block of a blockchain. For example, a bytecode can indicate that a peer node1806is to read/write information to a ledger state1904. A client computing device (not shown) can cause the peer node1806to execute the bytecode by providing the peer node1806with one or more inputs. For example, if the bytecode is used to reflect the change in ownership of a car, the client computing device can identify the subject car and the identity of the parties involved in the transaction (e.g., buyer and seller). The peer node1806can use the bytecode to verify whether the ledger state1904includes the identified car and whether the parties are valid (e.g., the identified owner owns the car and the buyer is able to purchase it), etc.
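For illustration only, the following minimal Python sketch models the ledger state as key-value pairs and applies a validated transaction's proposed change, using the car-ownership example above. The key format and field names are assumptions for illustration.

```python
# Minimal sketch of a ledger state (key-value pairs) updated by a validated
# transaction whose expected input must match the current state.
ledger_state = {"car:VIN123": {"owner": "seller"}}

def apply_transaction(state, tx):
    """Apply a validated transaction's proposed change to the ledger state."""
    key, expected, new_value = tx["key"], tx["expected"], tx["new_value"]
    if state.get(key) != expected:
        raise ValueError("ledger state no longer matches the transaction's inputs")
    state[key] = new_value

apply_transaction(ledger_state,
                  {"key": "car:VIN123",
                   "expected": {"owner": "seller"},
                   "new_value": {"owner": "buyer"}})
print(ledger_state)  # -> {'car:VIN123': {'owner': 'buyer'}}
```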
Based on the bytecode, the relevant peer nodes1806can endorse or validate a transaction that is to be included as part of a block in a blockchain. FIG.20is a block diagram illustrating an embodiment of a blockchain2000or distributed ledger that includes blocks that are linked together. The blockchain2000can correspond to a peer node blockchain (non-limiting example: include only validated transactions or an indication of valid/invalid transactions) and/or an ordering node blockchain (non-limiting example: include transactions regardless of validation). In the illustrated embodiment, four blocks2002A,2002B,2002C,2002D (generically referred to as block(s)2002) of the blockchain2000are shown, with each block2002including a header portion2004A,2004B,2004C,2004D (generically referred to as header portion(s)2004) and a body portion2006A,2006B,2006C,2006D (generically referred to as body portion2006). However, it will be understood that each block2002can include fewer or more sections, etc. For example, in some embodiments, each block2002can include only a body portion2006or only a header portion2004(e.g., if a peer node1806determines that no transactions of a block received from an ordering node1806can be validated, the peer node1806can generate a block with no transactions). In addition, for simplicity, some details of the blocks2002are not shown. For example, additional information can be included in the header portions and/or body portions, etc. The distributed ledger system1802can generate blocks based on various criteria, such as, but not limited to, the passage of a predetermined time interval, the size or amount of data/transactions received, the determination of a solution to a computational puzzle that is determined by a difficulty parameter, the number of block entries (or transactions) or generated content identifiers received, etc. In some embodiments, the distributed ledger system1802can generate a block based on a predetermined period of time. For example, the distributed ledger system1802can generate a block for the blockchain2000once a second, every 10 seconds, once a minute, every 10 minutes, every hour, etc. In certain embodiments, the distributed ledger system1802can generate a block based on the size or amount of data corresponding to one or more transactions. For example, the distributed ledger system1802can generate a block for each group of transactions that forms a megabyte or gigabyte of data. In some embodiments, the distributed ledger system1802can generate a block based on a node or computing system determining a solution to a computational puzzle that is based on a difficulty parameter. In certain cases, the difficulty parameter changes over time to ensure that blocks are likely to be produced at a regular time interval. In some cases, the distributed ledger system1802can generate a block based on a number of block entries, transactions, or content identifiers. For example, the distributed ledger system1802can generate a block for each transaction or each set of 100, 1000, or 1,000,000 transactions, etc. The distributed ledger system1802can use any one or any combination of the aforementioned techniques to generate a block. In the illustrated embodiment, the header portions2004include a content identifier (in this example a hash) associated with the previous block (e.g., a hash of the body portion of the previous block) and a content identifier for the current block (e.g., for the body portion of the current block).
For example, the header portion2004B includes the hash “49vvszj39fjpa,” which corresponds to the hash of the body portion2006A, and the hash “69yu8qo4prb5,” which corresponds to the hash of the body portion2006B. It will be understood that less, different, or more information can be included in the header portions2004, as desired. For example, the header portions2004can include a nonce, root hash of a Merkle tree, timestamp, difficulty level, software version number, block number indicating the number of blocks in the blockchain that precede the block, etc. The nonce can correspond to the number used to obtain a hash that is smaller than a target hash. For example, in some embodiments, before a group of transactions can be added as a block to the blockchain2000, the distributed ledger system1802may require that the hash of the content of the block (e.g., the hash of the body portion2006) be lower than a threshold number. To meet that criterion, a node1806can add a nonce value and hash the combination of the nonce value and the content of the block. If the resulting hash does not meet the size criteria, the node can repeatedly increment the nonce value and recompute the hash until the threshold is satisfied. The final nonce value can be included in the block. As another example, the header portions2004can include hashes of the entire previous block (header and/or body portion), one or more timestamps (or time range) reflecting the time when the block was started, completed, and/or added to the blockchain2000, and/or a difficulty level. In certain cases, the timestamp can correspond to the current day and time and/or the amount of time elapsed from a particular time. The difficulty level can, in certain cases, correspond to the size of the hash. In certain cases, a smaller hash can correspond to a higher difficulty level. The root hash can correspond to the hash created based on hashes of multiple transactions, including any hashes of hashes generated by hashing transactions, and so on. The header portions2004can include a content identifier for each transaction included in the body portion2006. For example, the header portion2004can include a hash of each transaction in the body portion2006. In certain embodiments, the header portion2004can include a digital certificate, public key, and/or digital signature associated with the peer node1806or ordering node1806that created it. In some cases, the header portion2004(or other metadata) can include an indicator for each transaction indicating whether the transaction was validated by a peer node1806. In some embodiments, where the ordering node ledger1808and the peer node ledger1808are different, the header portion of a block in a peer node ledger1808can include an indication of the block in the ordering node ledger to which it relates. For example, if OrderBlock_12 in an ordering node blockchain includes Transaction_A that is later invalidated and excluded from a corresponding PeerBlock_12 in a peer node blockchain, the header portion of the PeerBlock_12 can include an identifier that identifies OrderBlock_12 in the ordering node blockchain as including Transaction_A.
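For illustration only, the following minimal Python sketch shows the nonce search described earlier in this passage: a nonce is incremented and the block content rehashed until the hash falls below a target threshold set by the difficulty parameter. The hash function choice and target value are assumptions for illustration.

```python
# Minimal sketch of the nonce search: increment and rehash until the block
# hash is lower than a target threshold (smaller target -> higher difficulty).
import hashlib

def find_nonce(block_body: bytes, target: int) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(nonce.to_bytes(8, "big") + block_body).hexdigest()
        if int(digest, 16) < target:
            return nonce  # the final nonce value is included in the block
        nonce += 1

# A fairly easy target so the search completes quickly in this example:
target = 1 << 248
print(find_nonce(b"example transactions", target))
```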
With continued reference toFIG.20, the body portions2006can include one or more block entries for each transaction of the block. In some embodiments, the block entries can be compressed and/or the content of one or more block entries (or all block entries) can be encoded or encrypted using a public key of a key pair associated with the computing device that provided the information for the block entry. In this way, the distributed ledger system1802can limit the accessibility of the block entries. In some embodiments, the block entries can include transaction data, such as, but not limited to, a transaction identifier, node signatures (e.g., endorsing/validating peer nodes1806, ordering nodes1806, etc.), client computing device signatures, proposed ledger changes, expected input/output of the transaction, bytecode identification, inputs into the bytecode, channel information, timestamp of creation, etc. In some cases, each proposed transaction received by a peer node1806can be assigned (or come with) a transaction identifier or transaction ID. The transaction identifier can follow the transaction throughout the validation process and/or be included as part of a transaction in a block entry of a block. The digital signatures can include any one or any combination of a digital signature from the client computing device that initiated the proposed transaction, a digital signature corresponding to the peer nodes1806that endorsed the transaction, the digital signature of the ordering node(s)1806that ordered the transactions in and/or created the block, and/or the digital signature of the peer node1806that validated the transaction as part of the block and/or committed the block to the blockchain. In certain cases, the transaction data of a block entry can include the proposed change to the ledger state1904, including the proposed key-value pairs before and after the transaction is executed. In certain cases, the transaction data can include an identification of the bytecode that generated or corresponds to the transaction. In the illustrated embodiment, each block entry for the transactions of body portion2006A includes a transaction identifier that uniquely identifies the transaction, an indication of ledger changes, the identification of the channel with which the blockchain is related (channel 5), the signatures of the endorsing peer nodes (peer node 1 and peer node 2 for the first transaction and peer node 3 for the second transaction), the signature of the ordering node that ordered the transactions (ordering node 2), and the signature of the validating peer node (peer node 6). As shown, given that the transactions are included in the same blockchain, the channel and validating peer node for the transactions in the body portion2006A are the same. However, the endorsing peer nodes are different. As described herein, this can be due to the peer nodes involved in a transaction as determined by the bytecode and/or request made by a client computing device. As described herein, the information in the block2002A can be used to generate one or more transaction notifications. For example, one transaction notification can include the entirety of the block2002A. As another example, a transaction notification can include information about the validation of the transactions in the block. For example, the transaction notification can identify the transactions of a block that are validated and/or invalidated, etc. FIGS.21A-21Dare data flow diagrams illustrating an embodiment of a distributed ledger system1802processing a transaction and generating and committing a block that includes the transaction to a blockchain.
In some cases, the validation process described herein with reference toFIGS.21A-21Dcan correspond to the validation of one or more transactions on a particular channel within the distributed ledger system1802. In the illustrated embodiment ofFIG.21A, (1) a client computing device2102proposes a transaction to peer nodes1806A and1806F and receives an endorsed transaction in return. As mentioned, the peer nodes1806A and1806F can be associated with different parties or organizations. Further, the proposed transaction may relate to a proposed physical transaction between the different organizations. The peer nodes1806A and1806F process the proposed transaction and determine whether to endorse it. In certain embodiments, upon receipt of the proposed transaction, the peer nodes1806A,1806F can assign a transaction identifier to the proposed transaction. In certain embodiments, the client computing device2102can generate a transaction identifier for the proposed transaction and communicate the transaction identifier to the peer nodes1806A and1806F with, or as part of, the proposed transaction. The peer nodes1806and ordering node1806can use the transaction identifier to uniquely identify the transaction throughout the validation process. In some cases, processing the proposed transaction can include executing bytecode related to the proposed transaction using one or more blocks of a respective peer node ledger1808or by referencing the ledger state1904. In certain cases, the execution of the bytecode does not modify any blocks or the ledger state1904, but merely verifies whether the proposed transaction could be performed based on the information in the blocks and ledger state1904. In response to the proposed transaction, the peer nodes1806A,1806F can endorse the proposed transaction. For example, if the proposed transaction includes the proper credentials, references the correct values of the ledger state1904, and identifies the proper values as part of the transaction, the peer nodes1806A,1806F can endorse the proposed transaction. As yet another example, the peer nodes1806can endorse a transaction based on a user determining that an entity associated with the peer node1806desires to proceed with the transaction. For example, if the transaction corresponds to the change in ownership, then the entities associated with the change in ownership can endorse the proposed transaction via the peer nodes1806. In some cases, the peer nodes1806A,1806F can endorse the proposed transaction by digitally signing the proposed transaction using a private key of a public-private key pair. In certain cases, if the peer nodes1806A,1806F do not endorse the proposed transaction (within a particular amount of time), the transaction can fail or the client computing device2102can resubmit the proposed transaction at a later time. In the illustrated embodiment ofFIG.21A, the client computing device2102communicates with peer nodes1806A and1806F. However, it will be understood that the interactions can vary depending on the type of transaction, permissions, etc. In some cases, based on the transaction, the client computing device2102interacts with only one peer node1806. In certain embodiments, the client computing device2102can interact with multiple peer nodes1806. Further, in some embodiments, as part of the validation, one peer node1806can interact with another peer node1806.
For example, if the transaction is a transfer of ownership between an entity associated with peer node1806A and a different entity associated with peer node1806F, and the transaction is initiated with peer node1806A, the peer node1806A can communicate the proposed transaction to peer node1806F for endorsement. In certain embodiments, an application executing on the client computing device2102identifies the peer nodes1806that are associated with a particular proposed transaction and communicates the proposed transaction to the different peer nodes1806for endorsement. In some cases, the peer nodes can endorse the proposed transaction in a round-robin fashion. For example, after one peer node1806endorses the proposed transaction, it can forward the proposed transaction to another peer node for endorsement until a threshold number (e.g., all or a particular subset) of the peer nodes1806have endorsed the proposed transaction. In some embodiments, the ordering nodes1806are not involved with the endorsement of the proposed transaction.

With reference toFIG.21B, the client computing device2102can (2) request the ordering nodes1806B,1806E to order the transaction. As part of requesting the ordering nodes1806to order the transaction, the client computing device2102can provide the ordering nodes1806B,1806E with the endorsed transaction. Although illustrated as providing the endorsed transactions to two ordering nodes, it will be understood that the client computing device2102can provide the endorsed transactions to fewer or more ordering nodes1806as desired. In addition, in certain embodiments, one or more of the endorsing peer nodes1806A,1806F can provide the endorsed transaction to the ordering nodes1806B,1806E for ordering.

The ordering nodes1806B,1806E can (3) process the endorsed transaction received from the client computing device2102. In some cases, processing the endorsed transaction can include ordering the endorsed transaction relative to other endorsed transactions of the distributed ledger system1802. For example, multiple client computing devices2102can be interacting with any one or any combination of the peer nodes1806to generate endorsed transactions. The ordering nodes1806can receive the endorsed transactions and order them. In certain embodiments, the ordering nodes1806can order the endorsed transactions based on a timestamp, such as the first, last, or an average of the timestamps of one or more of the endorsements (e.g., the timestamp associated with the peer node1806A and/or the peer node1806F), the timestamp of the proposed transaction submission or creation, etc.

In addition, as part of processing the endorsed transactions, the ordering nodes1806B,1806E can generate a block for a blockchain using the endorsed transactions, including generating a header, body, and/or other parts of the block, as discussed above. In some cases, the ordering nodes1806B,1806E can append the generated blocks to a local blockchain or ordering node ledger1808. In some cases, when appending the generated blocks to the local blockchain, the ordering nodes1806do not validate the transactions of the block. In certain embodiments, the peer nodes1806are not involved with the ordering of the transactions and/or the creation of the blocks from the ordered transactions.

With reference toFIG.21C, the ordering nodes1806B,1806E (4) communicate the generated blocks to the peer nodes1806A,1806C,1806D,1806F for validation and commitment to a blockchain.
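The ordering step can be illustrated with a minimal Python sketch: endorsed transactions are sorted by a timestamp and assembled into a block whose header chains to the previous block. The field names and the hashing scheme are assumptions for illustration, not the actual block format.

import hashlib
import json

def order_into_block(endorsed_txs, prev_block_hash):
    # Order the endorsed transactions by timestamp, as described above.
    ordered = sorted(endorsed_txs, key=lambda tx: tx["timestamp"])
    body = json.dumps(ordered, sort_keys=True).encode()
    header = {
        "prev_hash": prev_block_hash,                    # links to previous block
        "body_hash": hashlib.sha256(body).hexdigest(),   # hash of the body portion
        "tx_count": len(ordered),
    }
    return {"header": header, "body": ordered}

block = order_into_block(
    [{"transaction_id": "tx-2", "timestamp": 11.0},
     {"transaction_id": "tx-1", "timestamp": 10.5}],
    prev_block_hash="0" * 64,
)
# block["body"] now lists tx-1 before tx-2.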
As described herein, each generated block can include one or more endorsed transactions in a body portion, a header portion, and/or metadata, etc. As described herein, at least with reference toFIG.20, the header portion can include a hash of each transaction in the block, a hash of the hashes of each transaction, a hash of all transactions of the block or the content of a body portion of the block, a hash of a previous block of the blockchain, etc. Although both ordering nodes1806B,1806E are illustrated as providing the generated blocks to all peer nodes1806A,1806C,1806D,1806F, it will be understood that in some cases, each ordering node1806provides the generated blocks to a subset of the peer nodes1806A,1806C,1806D,1806F (e.g., ordering node1806B can send the generated blocks to peer nodes1806A,1806C and ordering node1806E can send the generated blocks to peer nodes1806D,1806F) or only one ordering node1806can provide the generated blocks to all peer nodes1806.

The body portion can include one or more transactions or transaction data. As described herein, in some embodiments, the transaction data can include any one or any combination of: a timestamp corresponding to the transaction's submission/creation, an identifier of the code (or bytecode) associated with the transaction, a signature or identification of the client computing device (or corresponding application) that initiated the transaction, a signature or identifier of the endorsing peer nodes (peer nodes1806A,1806F that signed and/or endorsed the transaction), a signature or identifier of the ordering node1806B that ordered the transaction and/or generated the block, a proposed change to the ledger, a channel identifier that identifies the channel associated with the blockchain, an expected input/output of the transaction, such as the content of a database of the ledger that stores the key-values associated with different transactions before and after the change is implemented, etc. Further, in some cases, the transaction data can include an identification of log data generated during bytecode execution, a bytecode response, etc.

As illustrated atFIG.21D, the peer nodes1806A,1806C,1806D,1806F can (5) validate the transactions in the block and append or commit the block to a peer node ledger1808and/or a peer node blockchain. In certain embodiments, the peer nodes1806A,1806C,1806D,1806F can validate the transactions by comparing the expected inputs with the actual ledger state (e.g., the value indicated in the transaction for a particular key of the ledger state1904compared to the actual value of the key in the ledger state1904). In some cases, if the value or state of the key in the ledger state1904matches the value or state identified by the transaction, the peer node1806can validate the transaction. In certain cases, the peer nodes can validate the transactions based on permissions or other information associated with the endorsing peer nodes1806A,1806F, etc.

In addition, in some cases, the peer nodes1806A,1806C,1806D,1806F can update the ledger state1904based on the transactions. For example, as described herein, the ledger state1904can store key-values corresponding to the subject of one or more transactions. When a transaction affects a particular key-value pair, the peer nodes1806can update the key-value pair in the respective ledger state1904and append the corresponding block to the blockchain of the respective peer node ledger1808.
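The validation step described above, in which a peer checks that the values a transaction expects still match the current ledger state before applying the proposed changes, can be sketched as follows. The read set/write set structure is an assumption used for illustration.

def validate_and_commit(tx, ledger_state):
    # Reject the transaction if any expected input value has since changed.
    for key, expected in tx["read_set"].items():
        if ledger_state.get(key) != expected:
            return False
    ledger_state.update(tx["write_set"])  # apply the proposed key-value changes
    return True

ledger_state = {"asset-42/owner": "org1"}
tx = {"read_set": {"asset-42/owner": "org1"},
      "write_set": {"asset-42/owner": "org2"}}
assert validate_and_commit(tx, ledger_state)
assert ledger_state["asset-42/owner"] == "org2"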
As described herein, the ledger state1904can reflect the current state or value of a key based on the combination of valid transactions in a blockchain.

Throughout the validation process, the nodes1806can generate different types of data, such as, but not limited to, transaction notifications, log data, and/or metrics data. In some cases, the peer nodes1806can generate one or more transaction notifications. The transaction notifications can correspond to individual transactions of a block, the entire block, or parts of a transaction, such as the bytecode used as part of a transaction, etc. In some cases, the transaction notifications can include the entire content of a block (e.g., the header portion, body portion, transactions, metadata, etc.), or a summary of information, such as an indication of which transactions were validated and/or posted to a peer node blockchain. In certain embodiments, the notifications can be stored in a pub-sub or buffer and/or the peer nodes1806can notify the client computing device2102based on the generated transaction notifications, and provide the client computing device2102with information about the transaction as part of the block of a blockchain. In some cases, the peer node1806can indicate to the client computing device2102whether the transaction was validated or invalidated, etc.

In addition to generating notifications, the nodes1806can generate log data. The log data can correspond to or identify different transactions that are being processed by the nodes1806or other activities related to the node, such as errors, etc. For example, the log data generated by a peer node1806can indicate what the peer node1806is doing for a particular proposed transaction (e.g., receive transaction, assign transaction identifier, endorse transaction, validate/invalidate transaction, post block with transaction to blockchain, read/write proposed changes of the transaction to the ledger state1904, etc.). Similarly, the ordering nodes1806can generate log data indicative of the activities they are executing relative to the transactions (e.g., receive endorsed transaction, order transaction, generate block, add transaction to a block, communicate transaction to peer nodes as part of the block, post transaction to blockchain as part of a block, etc.). Though log data can capture the activity of a node as the node processes transactions, the log data for the node can, in some cases, only capture the activity of that one node. Depending on the implementation of the nodes1806, the log data can be stored in a data store of the nodes, and/or converted and stored as part of log data of an isolated execution environment system, etc.

Moreover, as the nodes1806process data, they can generate certain metrics, such as CPU usage, disk space usage, and other metrics. Though the metrics for a node result from processing performed by the node, metrics data may not capture any information about transactions that were processed. In some cases, the metrics are stored in a data store of the nodes1806.

The data intake and query system108can ingest and correlate the data generated by a distributed ledger system1802. In some cases, the data intake and query system108can ingest the data using different components. For example, the data intake and query system108can use a monitor to ingest one type of data and use a forwarder, connector, and/or data adapter for other types of data.
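The per-activity log data described above might be emitted as in the following minimal Python sketch. The message format is an assumption; what matters for the correlation described later is that every line carries the transaction identifier.

import logging

logging.basicConfig(format="%(asctime)s %(name)s %(levelname)s %(message)s",
                    level=logging.INFO)
log = logging.getLogger("peer-node-1")

def process(tx_id):
    # One log entry per processing activity, each tagged with the txid.
    log.info("txid=%s action=receive", tx_id)
    log.info("txid=%s action=endorse", tx_id)
    log.info("txid=%s action=validate result=valid", tx_id)

process("tx-0001")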
Based on the collected data, the data intake and query system108can identify correlations between transactions that are included in a blockchain and corresponding log data and metrics data. This information can provide insight into the inner workings of the distributed ledger system1802, identify performance issues, security issues, errors, etc. By identifying faults, errors, and issues with the different components of the distributed ledger system1802, the data intake and query system108can improve the distributed ledger system1802as a whole. For example, based on the identified issues, system configurations can be adjusted, components can be fixed or reconfigured, etc. In this way, the data intake and query system108can improve the speed, efficiency, throughput, and processing power of the distributed ledger system1802. In addition, by correlating the different data types or associating data from different nodes of the distributed ledger system1802, the data intake and query system108can track the throughput of the system, identify bottlenecks, and be used to make adjustments to the distributed ledger system1802.

FIG.22is a data flow diagram illustrating data ingestion from the distributed ledger system1802by the data intake and query system108operating in accordance with aspects of the present disclosure. As noted herein above, the data intake and query system108may implement a Getting-Data-In (GDI) component (such as a data adapter, monitor, forwarder, connector, or the like) in order to ingest the distributed ledger transaction data, e.g., by reading log files maintained by one or more nodes of the distributed ledger, listening to the blocks, transactions, and events that are broadcast to all participating nodes of a distributed ledger, and/or performing other actions. The ingested raw data may be aggregated, decoded, visualized, and/or further processed by the data intake and query system.

In the illustrated embodiment ofFIG.22, the node1806is shown as a peer node1806with a peer node ledger1808, a ledger state1904, and a buffer2204. The node1806can generate different types of data, including transactions, blocks, metrics data, and/or log data. In an illustrative example, a transaction can include the transaction identifier, a timestamp, the source account identifier, the destination account identifier, and a bytecode invoking a smart contract by specifying a function name and parameter values.

The log data can include information generated by the node as it processes requests, transactions, etc. The log data can, for example, include information about errors or other issues. In some cases, the log data can include a transaction identifier indicating a particular transaction associated with the generated log data. For example, the log data can indicate that a particular transaction associated with a particular transaction identifier was received, rejected, forwarded, processed, endorsed, ordered, included in a block, validated, invalidated, pruned, caused an error, used to edit the ledger state1904, etc. The log data can also include information about other occurrences within the node1806, such as, but not limited to, interactions with other nodes1806, setup, administrative communications, configuration settings/changes, etc. As described herein, in some embodiments, the log data may be unstructured raw machine data, whereas the transaction notifications may be structured.
As described herein, the metrics can include information about the performance metrics of the node1806and/or the distributed ledger system1802, such as, but not limited to, CPU-related performance metrics; disk-related performance metrics; memory-related performance metrics; network-related performance metrics; energy-usage statistics; data-traffic-related performance metrics; overall system availability performance metrics; cluster-related performance metrics; and virtual machine performance statistics, etc.

The different types of data generated by the node1806and/or distributed ledger system1802can be accessible via different paths or stored in different locations of the node1806. For example, the data can be located in a data store, pub/sub, buffer, or real-time data stream. In some embodiments, such as when the distributed ledger system1802is implemented in an isolated execution environment system, the data can be wrapped or converted to another format, such as JSON, and stored as a JSON (or other type) file. In some such cases, a data adapter2202, connector, or monitor can be used to extract the data generated by the node from the wrapper. In some such embodiments where the distributed ledger system1802is implemented using Kubernetes or Docker, log data generated by the node1806can be wrapped in a JSON wrapper and stored as a Docker or Kubernetes log file, which may, for example, be accessible through an API. In some such cases, a data adapter or monitor can use the API or another mechanism to extract the log data generated by the node from the JSON wrapper and/or Docker or Kubernetes log file.

As described herein, in some cases, the data obtained from the node1806can be available via a messaging buffer2204. In certain embodiments, the buffer2204operates according to a publish-subscribe (“pub-sub”) messaging system. For example, a channel may be represented as one or more “topics” within a pub-sub system, and new transaction notifications may be represented as a “message” within the pub-sub system. The distributed ledger system monitor1804may subscribe to a topic representing desired information (e.g., a particular channel, all transaction notifications, etc.) to receive messages within the topic. Thus, the distributed ledger system monitor1804can be notified of new data categorized under the topic within the buffer2204. A variety of implementations of the pub-sub messaging system may be usable within the buffer2204. As will be appreciated, use of a pub-sub messaging system can provide many benefits, including the ability to retrieve data quickly from the node1806while maintaining or increasing data resiliency.

In some embodiments, the distributed ledger system monitor1804can provide the data to the data intake and query system108through a module that provides an intake API to the data intake and query system108. In certain embodiments, the data can be collected from the node1806by installing one or more forwarders1904on the node1806and/or using an HTTP event collector (indicated by arrow2206). For example, the metrics or log data can be obtained from the node1806using a forwarder1904or HTTP event collector. As described herein, the data obtained from the node1806can be stored in one or more buckets of the data intake and query system108.
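Unwrapping node log data from a Docker-style JSON log file, as described above, might look like the following sketch. The wrapper keys follow Docker's json-file driver convention; the file path shown in the usage comment is hypothetical.

import json

def unwrap_docker_log(path):
    # Each line of the file wraps one raw log message in a JSON envelope.
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            yield record.get("time"), record.get("log", "").rstrip("\n")

# Hypothetical usage; the container log path varies by deployment:
# for ts, message in unwrap_docker_log("/var/lib/docker/containers/<id>/<id>-json.log"):
#     forward_to_indexer(ts, message)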
In some cases, the log data can be stored in one set of buckets associated with one index, the metrics data can be stored in a second set of buckets associated with a second index, and the transaction notifications can be stored in a third set of buckets associated with a third index. However, it will be understood that the data obtained from the node1806can be stored in a variety of ways and formats. Moreover, the data intake and query system108can populate one or more inverted indexes based on the received data.

In some embodiments, as the data intake and query system108ingests the data from the node1806, it can extract a transaction identifier using one or more regex rules. For example, the data intake and query system108can use one or more regex rules to extract a transaction identifier from log data and/or transaction notifications. As the log data and transaction notifications are different data types or have a different sourcetype, the data intake and query system108can use different regex rules to extract the corresponding transaction identifier (e.g., use one regex rule to extract transaction identifiers from log data and a different regex rule to extract transaction identifiers from transaction notifications). In some cases, the distributed ledger system monitor1804can extract the transaction identifiers from the transaction notifications. The data intake and query system108can include the extracted transaction identifiers as keywords or field-value pairs in one or more inverted indexes, such as the inverted index described herein with reference toFIG.5B. In certain embodiments, the data intake and query system108can extract a node identifier for each node1806of the distributed ledger system1802. The extracted node identifier can also be stored in one or more inverted indexes. Similarly, the data intake and query system108can extract other data from the transaction notifications and/or log data (non-limiting examples: endorsing, ordering, and validating node identifiers, channel identifiers, ledger state1904edit times, etc.) and store the extracted data in one or more inverted indexes.

As described herein, the data intake and query system108can correlate different types of data of a particular node or across different nodes and/or associate the same type of data of a particular node or across different nodes. For example, the data intake and query system108can correlate log data with transaction notifications and/or metrics data from the same node or from different nodes. Similarly, the data intake and query system108can associate log data from different nodes or transaction notifications from different nodes, etc.

In some cases, a node1806can generate multiple log data entries for each transaction notification. For example, in some embodiments, a transaction notification can correspond to the commitment of a block to a blockchain, whereas the log data can correspond to one or more processing tasks or other activities performed by the node1806. As a node1806can perform multiple processing tasks or activities before committing a block to a blockchain, there can be multiple entries in log data or multiple log data events for each transaction notification or transaction notification event. In addition, as each peer node1806can maintain its own blockchain, each peer node1806can generate a transaction notification that identifies the same transaction (or includes the same transaction identifier).
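The sourcetype-specific regex extraction described above can be sketched as follows. Both patterns are assumptions about the data layout, one for unstructured log lines and one for structured notifications.

import re

# Hypothetical patterns; real rules depend on the node's actual formats.
LOG_TXID = re.compile(r"txid=([\w-]+)")
NOTIFICATION_TXID = re.compile(r'"transaction_id"\s*:\s*"([\w-]+)"')

def extract_txid(raw, sourcetype):
    # Pick the regex rule that matches the data's sourcetype.
    pattern = LOG_TXID if sourcetype == "ledger:log" else NOTIFICATION_TXID
    match = pattern.search(raw)
    return match.group(1) if match else None

assert extract_txid("txid=tx-0001 action=endorse", "ledger:log") == "tx-0001"
assert extract_txid('{"transaction_id": "tx-0001"}', "ledger:notification") == "tx-0001"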
Accordingly, as each node reports the same transaction identifier, log data or log data events of one node can be correlated with multiple transaction notifications from different nodes. In some embodiments, when correlating log data and transaction notifications of a particular node, multiple entries of log data can be correlated with one (or more, depending on the embodiment) transaction notification, and when correlating log data and transaction notifications across multiple nodes, one entry of log data (or one log data event) can be correlated with multiple transaction notifications (or transaction notification events) from different nodes. In some such embodiments, the data intake and query system108can correlate log data and transaction notifications of each node before correlating log data and transaction notifications across nodes. In a similar manner, multiple sets of metrics data of a particular node can be associated with a particular entry in log data (or log data event) or transaction notification of the particular node.

As described herein, correlating the data obtained from the nodes1806can provide significant insights and improvements for the distributed ledger system1802. In some cases, the data obtained from a single node can be correlated to provide node diagnostics, identify the structure or architecture of the node1806and/or parts of the distributed ledger system1802, identify node failures or bottlenecks, recreate or rebuild the blockchain or ledger state1904, determine the full or partial history of a transaction with reference to the node1806, etc.

In some cases, to correlate transaction notifications with log data, the data intake and query system108can identify events that are associated with the same transaction identifier. As described herein, some of the events can correspond to log data (also referred to herein as log data events) from a node1806and other events can correspond to transaction notifications of a node1806(also referred to herein as transaction notification events). Based on a determination that the log data events and the transaction notification events include the same transaction identifier, the data intake and query system108can correlate the different events.

In some cases, the data intake and query system108can correlate the metrics with the log data and/or transaction identifiers based on one or more timestamps. For example, as described herein, events can include data associated with a timestamp and metrics can be stored in association with a timestamp. Accordingly, the data intake and query system108can correlate the metrics with the log data and/or transaction notifications using the corresponding timestamps. In this way, the data intake and query system108can determine the relevant metrics of the node1806at the time the particular log data and/or transaction notification was generated. This correlation can provide insights into the state of the node1806at the time the log data and/or transaction notification was generated.

Further, the correlation of different data can provide different insights into the state of the distributed ledger system1802, a transaction, and/or a node1806. For example, by correlating the log data events and transaction notification events of a node, the data intake and query system108can identify node failures in relation to particular transactions, measure node throughput, determine the number of validated versus invalidated transactions, and determine the timing/frequency of the generation of a block or the commitment of the block to a blockchain.
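A minimal sketch of this two-part correlation follows: events are grouped by transaction identifier, and each event is then paired with the metrics sample whose timestamp is closest to its own. The event and metrics shapes are illustrative assumptions.

from collections import defaultdict

def correlate(events, metrics):
    # Group log data events and transaction notification events by txid.
    by_txid = defaultdict(lambda: {"logs": [], "notifications": []})
    for ev in events:
        kind = "logs" if ev["sourcetype"] == "ledger:log" else "notifications"
        by_txid[ev["txid"]][kind].append(ev)
    # Attach the nearest-in-time metrics sample to every event
    # (metrics is assumed non-empty here).
    for bucket in by_txid.values():
        for ev in bucket["logs"] + bucket["notifications"]:
            ev["metrics"] = min(metrics, key=lambda m: abs(m["time"] - ev["time"]))
    return dict(by_txid)

correlated = correlate(
    [{"txid": "tx-1", "sourcetype": "ledger:log", "time": 10.0},
     {"txid": "tx-1", "sourcetype": "ledger:notification", "time": 12.0}],
    [{"time": 10.1, "cpu": 0.42}, {"time": 11.9, "cpu": 0.80}],
)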
In certain embodiments, correlating transaction notifications and log data can provide insights into the content of a block of a blockchain. For example, the content of the blockchain may be encoded, encrypted, or otherwise obfuscated for privacy or security purposes. By correlating a transaction of a block with corresponding log data, the data intake and query system108can determine some or all of the content of the transaction of the block. For example, the log data may include information about the transaction that was encrypted or otherwise obfuscated in the block. Similarly, the correlation or association of data across nodes can provide insights into the state of the distributed ledger system1802, a transaction, a transaction history, etc. In some cases, the data intake and query system108can associate the same type of data across multiple nodes1806of the distributed ledger system1802. For example, the data intake and query system108can associate log data events from a first peer node1806with log data events from a second peer node1806. In some cases, the association of the same type of data can be used to identify the history of a transaction as it is received, endorsed, ordered, validated, included in a block, and/or committed to a blockchain. Associating transaction notification events across different nodes can provide insights into the functioning of the distributed ledger system1802. In some cases, the association can be used to identify errors in a particular node1806. For example, if all but one peer node1806of a distributed ledger system1802have committed a particular block to a respective blockchain, the data intake and query system108can identify a potential fault or error with the particular peer node. Similarly, the correlation or association of data across nodes1806can enable the data intake and query system108to compare the throughput and processing of each node1806to identify slower/faster nodes, bottlenecks, etc. Accordingly, by obtaining the different types of data from the nodes of a distributed ledger system1802and correlating the data, the distributed ledger system1802can be improved. Specifically, the correlation can identify vulnerabilities, faults, errors, etc., of the distributed ledger system1802. Correlating the data across the nodes1806can also enable the identification of potential security issues, such as, but not limited to, validated transactions that were not endorsed, digital certificate or signature abnormalities, an abnormal volume of transactions, significant transactions or interactions with computers from a particular geographic area or block of IP addresses, etc. In addition, by correlating the data across the nodes1806, the data intake and query system108can determine the architecture of the distributed ledger system1802. For example, the data intake and query system108can identify the different peer nodes1806and ordering nodes1806of the distributed ledger system1802, etc. For example, the data intake and query system108can identify log data events and transaction notification events that are associated with the same channel (e.g., based on parsing relevant events and/or using one or more inverted indexes). The data intake and query system108can then identify different peer nodes1806and ordering nodes1806associated with the identified log data events and transaction notification events. Based on this information, the data intake and query system108can identify the different nodes1806of a channel. 
Further, by doing this for some or all channels, the data intake and query system108can identify the nodes1806of the distributed ledger system1802. In addition, the data intake and query system108can determine with which channels each node1806is associated, which nodes1806share bytecodes, frequent transactions between nodes, the size and/or number of nodes1806involved in the individual transactions, etc.

By associating and correlating the events, the data intake and query system108can identify the components of the distributed ledger system1802(and/or the status of the components). For example, by associating and correlating events, the data intake and query system108can determine that a particular distributed ledger system1802includes three ordering nodes1806, ten peer nodes1806, four channels, etc. Similarly, the data intake and query system108can determine the status for the different components. For example, the data intake and query system108can determine the number of errors, warnings, responsiveness, etc. of one or more of the three ordering nodes1806and ten peer nodes1806.

In certain embodiments, the data intake and query system108can obtain and correlate additional types of data. For example, as described herein, blocks of a blockchain can include one or more digital signatures of a peer node1806and/or an ordering node1806. The data intake and query system108can use the digital signature to identify a Certificate Authority associated with the digital signature (e.g., a Certificate Authority that issued a digital certificate to the peer node1806and/or ordering node1806to which the digital signature corresponds). One or more components of the data intake and query system108(e.g., distributed ledger system monitor1804, forwarder, and/or indexer) can query the Certificate Authority to obtain additional identifying attributes of the signer, such as name, address, email, company name, phone number, title, etc. This information can be stored by the data intake and query system108and/or correlated with the other data obtained from the distributed ledger system1802to diagnose issues with specific transactions and/or answer business analytics questions.

In some cases, the data intake and query system108can correlate the data to identify relationships between the components of the distributed ledger system1802, and generate a visualization based on the relationships. For example, the data intake and query system108can determine which nodes are associated with which channels and can therefore communicate with each other with respect to a particular blockchain. In some cases, the data intake and query system108can store the determined relationships and/or other attributes of the different components in a table. In certain cases, the table can be used to generate a visualization. In certain embodiments, the visualization can indicate the relationships of the components of the distributed ledger system1802. For example, the visualization can indicate peer nodes1806and ordering nodes1806that are part of a shared channel or consortium. In addition, the various processing steps of a transaction can be tracked across the different nodes of the distributed ledger system1802and visualized. In this way, the data intake and query system108can identify issues and errors with the distributed ledger system1802, etc. In some embodiments, the visualization can indicate the overall health of individual components of the distributed ledger system1802.
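The channel-to-node mapping that underlies this architecture discovery can be sketched as follows; the event fields are assumptions, but the idea is simply to collect, per channel, every node identifier that appears in that channel's events.

from collections import defaultdict

def channels_to_nodes(events):
    topology = defaultdict(set)
    for ev in events:
        # Each correlated event is assumed to carry a channel and a node id.
        topology[ev["channel"]].add(ev["node_id"])
    return {channel: sorted(nodes) for channel, nodes in topology.items()}

topology = channels_to_nodes([
    {"channel": "channel-5", "node_id": "peer-1"},
    {"channel": "channel-5", "node_id": "ordering-2"},
    {"channel": "channel-7", "node_id": "peer-1"},
])
# {'channel-5': ['ordering-2', 'peer-1'], 'channel-7': ['peer-1']}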
As one example of the health indication mentioned above, the visual representation of the components can be colored green to indicate “healthy” (e.g., fewer than a threshold number of errors/warnings or no errors/warnings) and red to indicate “unhealthy” (e.g., greater than a threshold number of errors, warnings, etc.). The visualization can also enable the user to drill down on visual representations of the components of the distributed ledger system1802to display information, metrics, and log data to determine the performance of the component and/or troubleshoot problem areas. In certain embodiments, such as when a user “drills down” to a particular component, the visualization can display log data, metrics data, trends of log data or metrics data, etc.

As noted herein above, the distributed ledger system1802may support a special account type, which is referred to as a “contract account.” A message to a contract account activates its executable code implementing a “smart contract,” which may evaluate specified conditions and perform various actions (e.g., transfer cryptocurrency tokens between accounts, write data to internal storage, mint new cryptocurrency tokens, perform calculations, create new smart contracts, etc.). The nodes of the distributed ledger system1802may collectively implement a distributed virtual machine (e.g., the EVM) for executing the code implementing smart contracts. A smart contract can be created in a high-level programming language (such as Solidity) and then compiled into the EVM bytecode.

FIG.23is a data flow diagram illustrating transaction decoding by a distributed ledger connector of the data intake and query system108operating in accordance with aspects of the present disclosure. In an illustrative example, the distributed ledger connector2302may receive, from a distributed ledger node2304, a transaction2310. In another illustrative example, the transaction2310may be retrieved from a log file (not shown) maintained by the distributed ledger node2304. The transaction2310may include the transaction nonce2312, the transaction processing fee2314, the destination account identifier2316, the transaction value2318, the transaction data2320, and the cryptographic signature2322of the transaction originating node. The transaction nonce2312may specify a sequence number of transactions sent from the given source address. The transaction processing fee2314may specify the maximum amount of cryptocurrency which the transaction originator is willing to pay for processing the transaction. The destination account identifier2316may identify an external account or a contract account. The transaction value2318may specify an amount of cryptocurrency to be transferred to the destination account. The transaction data2320may be empty for payment transactions; for smart contract transactions, the transaction data2320contains a message invoking a smart contract.

In order to decode a smart contract transaction, the distributed ledger connector2302may retrieve the bytecode2324implementing the smart contract associated with the distributed ledger account2316identified by the transaction as the destination account. In an illustrative example, the bytecode may be retrieved by performing a JSON-RPC call to the destination node specified by the transaction2310. In another illustrative example, the bytecode may be retrieved from a log file maintained by the destination node or another node of the distributed ledger system. Upon retrieving the bytecode, the distributed ledger connector2302may compute its digital fingerprint2326.
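The JSON-RPC bytecode retrieval mentioned above might look like the following sketch, which uses Ethereum's standard eth_getCode method as one concrete possibility. The node URL is hypothetical, and the third-party requests package is assumed to be available.

import requests

def get_bytecode(node_url, contract_address):
    payload = {
        "jsonrpc": "2.0",
        "method": "eth_getCode",                 # standard Ethereum JSON-RPC method
        "params": [contract_address, "latest"],
        "id": 1,
    }
    response = requests.post(node_url, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()["result"]             # hex string, e.g. "0x6080..."

# Hypothetical usage:
# bytecode = get_bytecode("http://localhost:8545", "0x<contract address>")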
FIG.24is a data flow diagram of computing a digital fingerprint of a smart contract, in accordance with aspects of the present disclosure. As schematically illustrated byFIG.24, a digital fingerprint of a smart contract can be represented by a cryptographic hash of all distributed ledger function signatures and distributed ledger event signatures contained in the bytecode2400implementing the smart contract. Accordingly, the distributed ledger connector2302may parse the bytecode2400and extract the distributed ledger function signatures2410A-2410P and distributed ledger event signatures2420A-2420Q (collectively referred to as Function and Event Signatures2450).

In an illustrative example, a function signature may be represented by the function name followed by a parenthesized list of parameter types that are split by a predefined delimiter (e.g., a comma): FunctionName(param_1_type, param_2_type, . . . param_N_type), where N is the number of parameters. Similarly, a distributed ledger event signature may be represented by the distributed ledger event topic followed by a parenthesized list of parameter types that are split by a predefined delimiter (e.g., a comma): EventTopic(param_1_type, param_2_type, . . . param_M_type), where M is the number of parameters. Alternatively, various other specifications of distributed ledger function signatures and/or distributed ledger event signatures may be utilized, provided that a signature unambiguously reflects at least the function name (or the distributed ledger event name) and parameter types.

As noted herein above, the digital fingerprint of a smart contract can be represented by a cryptographic hash of all distributed ledger function signatures and distributed ledger event signatures. A cryptographic hash may be represented by an irreversible function mapping its argument represented by a bit string of arbitrary size to a hash value represented by a bit string of a pre-determined size, such that two different arguments are very unlikely to produce the same hash value. In an illustrative example, a Secure Hash Algorithm (SHA) function, such as SHA-3, may be utilized for computing the digital fingerprint. Alternatively, various other cryptographic hash functions may be utilized. Upon extracting, from the bytecode2400, the distributed ledger function signatures and distributed ledger event signatures2430, the distributed ledger connector2302may compute the value of the chosen hash function2440of the character string produced by concatenating all extracted distributed ledger function signatures and distributed ledger event signatures, thus producing the digital fingerprint2326: Fingerprint=H(concatenation(signature_1, signature_2, . . . , signature_K)), where H is the chosen cryptographic hash, and signature_1, signature_2, . . . , signature_K denote the distributed ledger function signatures and distributed ledger event signatures extracted from the contract bytecode.

Referring again toFIG.23, the distributed ledger connector2302may utilize the computed digital fingerprint2326for associating the smart contract with a known application binary interface (ABI) definition. The data intake and query system108may maintain a local database2330of ABI definitions of smart contracts and/or access a remote database (not shown inFIG.23) of ABI definitions of smart contracts.
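The fingerprint formula Fingerprint=H(concatenation(signature_1, . . . , signature_K)) translates directly into the following sketch, here using SHA3-256 as the chosen hash H; the sample signatures are illustrative.

import hashlib

def contract_fingerprint(signatures):
    # Concatenate all extracted function and event signatures, then hash.
    concatenated = "".join(signatures).encode()
    return hashlib.sha3_256(concatenated).hexdigest()

fingerprint = contract_fingerprint([
    "transfer(address,uint256)",            # function signature
    "balanceOf(address)",                   # function signature
    "Transfer(address,address,uint256)",    # event signature
])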
In various illustrative examples, the remote database of ABI definitions may be a publicly accessible database or a private database maintained by a customer utilizing the data intake and query system108or by a third party. The ABI definition of a smart contract may be represented by a human-readable textual representation (e.g., a JSON file) describing the smart contract and its functions. For each function, its name, parameter types and values, and other pertinent information are specified. Accordingly, for every smart contract having its ABI definition2332stored in the database2330, the data intake and query system may compute the digital fingerprint2334following the procedure described herein above with reference toFIG.24. The computed digital fingerprints2334may be stored in the database2330in association with the respective ABI definitions2332, such that the digital fingerprint2334A is associated with the ABI definition2332A, the digital fingerprint2334B is associated with the ABI definition2332B, etc. The database2330may be indexed by the values of the digital fingerprints2334in order to facilitate efficient identification and retrieval of a smart contract ABI definition associated with a specified digital fingerprint.

Thus, the distributed ledger connector2302may search the database2330for a digital fingerprint that matches the computed digital fingerprint2326of the bytecode2324implementing the smart contract associated with the distributed ledger account2316. Upon identifying the matching digital fingerprint2334R, the distributed ledger connector2302may retrieve the associated ABI contract definition2332R. The distributed ledger connector2302may extract, from the ABI contract definition2332R, function signatures specifying, for each function exposed by the ABI definition, its name as well as the names and types of its parameters. The extracted information may be utilized for decoding the transaction data2320, thus producing decoded transaction data2350.

As noted herein above, for smart contract transactions, the transaction data2320contains a message invoking a smart contract. Accordingly, the distributed ledger connector2302may, upon identifying a smart contract invocation within the transaction data2320, extract the function signature and parameter values associated with the identified transaction data. In an illustrative example, a predefined number of bytes (e.g., four bytes) of the function call data referenced by the transaction data specify the hash of the signature of the function to be called. Following the function signature (e.g., starting from the fifth byte), the function call contains encoded parameter values. The distributed ledger connector2302may identify, within the ABI definition2332R of the identified matching smart contract, a function signature matching the function signature extracted from the transaction data2320. The definition of the identified matching function (e.g., parameter names and types) may be utilized by the distributed ledger connector2302for decoding the transaction data2320. In particular, the distributed ledger connector2302may decode the parameter values extracted from the transaction data2320in accordance with the parameter types specified by the ABI definition2332R for the identified matching function. The distributed ledger connector2302may further associate each parameter value with a corresponding parameter name specified by the ABI definition2332R for the identified matching function.
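Matching the leading four-byte signature hash against the fingerprint-matched ABI definition, as described above, can be sketched as follows. One caveat: the EVM actually derives function selectors with Keccak-256, which differs from the standardized SHA3-256 used here; hashlib's sha3_256 stands in only to keep the sketch dependency-free.

import hashlib

def selector(signature):
    # First four bytes of the hash of the function signature (see caveat above).
    return hashlib.sha3_256(signature.encode()).digest()[:4]

def match_function(call_data, abi_signatures):
    # call_data: raw invocation bytes; abi_signatures: from the ABI definition.
    observed = call_data[:4]
    for sig in abi_signatures:
        if selector(sig) == observed:
            return sig, call_data[4:]   # matched signature + encoded parameters
    return None, None

abi_signatures = ["transfer(address,uint256)", "balanceOf(address)"]
call_data = selector("transfer(address,uint256)") + b"\x00" * 64
sig, params = match_function(call_data, abi_signatures)
assert sig == "transfer(address,uint256)"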
The decoded transaction data, including the function name and the parameter names, types, and values, may be fed to a data intake and query system108for visualization and/or further processing.

FIG.25is a flow diagram of an embodiment of a method2500of decoding distributed ledger transactions, in accordance with aspects of the present disclosure. Method2500and/or each of its individual functions, routines, subroutines, or operations may be performed by one or more processors of a computing device implementing the distributed ledger connector2302of the data intake and query system108. In certain implementations, method2500may be performed by a single processing thread. Alternatively, method2500may be performed by two or more processing threads, each thread executing one or more individual functions, routines, subroutines, or operations of the method. In an illustrative example, the processing threads implementing method2500may be synchronized (e.g., using semaphores, critical sections, and/or other thread synchronization mechanisms). Alternatively, the processing threads implementing method2500may be executed asynchronously with respect to each other. Therefore, whileFIG.25and the associated description list the operations of method2500in a certain order, various implementations of the method may perform at least some of the described operations in parallel and/or in arbitrarily selected orders. Although described as being implemented by the distributed ledger connector2302of the data intake and query system108, it will be understood that one or more elements outlined for method2500can be implemented by one or more computing devices/components that are associated with a data intake and query system108, such as the search head210, indexer206, etc. Thus, the following illustrative embodiment should not be construed as limiting.

At block2510, the computing device implementing the distributed ledger connector2302receives a distributed ledger transaction. The transaction may include the transaction nonce, the transaction processing fee, the destination account identifier, the transaction value, the transaction data, and the cryptographic signature of the transaction originating node. The transaction nonce may specify a sequence number of transactions sent from the given source address. The transaction processing fee may specify the maximum amount of cryptocurrency which the transaction originator is willing to pay for processing the transaction. The destination account identifier may identify an external account or a contract account. The transaction value may specify an amount of cryptocurrency to be transferred to the destination account. The transaction data may be empty for payment transactions; for smart contract transactions, the transaction data contains a message invoking a smart contract.

At block2520, the computing device retrieves the bytecode module associated with the distributed ledger account identified by the transaction. In an illustrative example, the bytecode may be retrieved by performing a JSON-RPC call to the destination node specified by the distributed ledger transaction. In another illustrative example, the bytecode may be retrieved from a log file maintained by the destination node or another node of the distributed ledger system.

At block2530, the computing device computes the digital fingerprint of the retrieved bytecode.
The digital fingerprint may be represented by a cryptographic hash of all distributed ledger function signatures and distributed ledger event signatures contained in the bytecode. Thus, upon extracting the distributed ledger function signatures and distributed ledger event signatures from the bytecode, the computing device may compute the value of a chosen hash function of the character string produced by concatenating all extracted distributed ledger function signatures and distributed ledger event signatures, thus producing the digital fingerprint, as described in more detail herein above.

At block2540, the computing device identifies, among a plurality of ABI definitions stored in the ABI definition database, an ABI definition having an ABI digital fingerprint that matches the computed bytecode digital fingerprint, as described in more detail herein above.

At block2550, the computing device produces decoded transaction data by decoding the transaction data using the identified ABI definition. In an illustrative example, the computing device may retrieve, from the ABI definition database, the ABI contract definition associated with the identified digital fingerprint matching the computed bytecode fingerprint. The computing device may then extract, from the retrieved ABI contract definition, the function signatures specifying, for each function exposed by the ABI definition, its name as well as the names and types of its parameters. The extracted information may be utilized for decoding the transaction data, as explained in more detail herein below with reference toFIG.26.

FIG.26is a flow diagram of an embodiment of a method2600of decoding transaction data, in accordance with aspects of the present disclosure. Method2600and/or each of its individual functions, routines, subroutines, or operations may be performed by one or more processors of a computing device implementing the distributed ledger connector2302of the data intake and query system108. In certain implementations, method2600may be performed by a single processing thread. Alternatively, method2600may be performed by two or more processing threads, each thread executing one or more individual functions, routines, subroutines, or operations of the method. In an illustrative example, the processing threads implementing method2600may be synchronized (e.g., using semaphores, critical sections, and/or other thread synchronization mechanisms). Alternatively, the processing threads implementing method2600may be executed asynchronously with respect to each other. Therefore, whileFIG.26and the associated description list the operations of method2600in a certain order, various implementations of the method may perform at least some of the described operations in parallel and/or in arbitrarily selected orders. Although described as being implemented by the distributed ledger connector2302of the data intake and query system108, it will be understood that one or more elements outlined for method2600can be implemented by one or more computing devices/components that are associated with a data intake and query system108, such as the search head210, indexer206, etc. Thus, the following illustrative embodiment should not be construed as limiting.

At block2610, the computing device implementing the distributed ledger connector parses the transaction data to identify a smart contract function invocation (encoded signature hash and parameters).
At block2620, the computing device extracts the signature and parameter values of a function invoked by the transaction data. In an illustrative example, a predefined number of bytes (e.g., four bytes) of the identified function call specify the hash of the signature of the function to be called. Following the function signature (e.g., starting from the fifth byte), the function call contains encoded parameter values.

At block2630, the computing device identifies, in the ABI definition having the ABI digital fingerprint that matches the bytecode digital fingerprint, a function signature matching the signature extracted from the transaction data.

At block2640, the computing device utilizes the definition of the identified matching function (e.g., parameter names and types) for decoding the transaction data. In particular, the computing device may decode the parameter values extracted from the transaction data in accordance with the parameter types specified by the ABI definition for the identified matching function. The computing device may further associate each parameter value with a corresponding parameter name specified by the ABI definition for the identified matching function. The decoded transaction data, including the function name and the parameter names, types, and values, may be fed to a data intake and query system for visualization and/or further processing.

In some cases, the data intake and query system108can generate one or more visualizations of the results. In certain cases, the visualizations can indicate the path or history of the transaction through the distributed ledger system1802, the architecture of the distributed ledger system1802, and/or the status of individual nodes1806and/or the distributed ledger system1802as a whole. In certain embodiments, the visualization can include a display object for each node of the distributed ledger system1802with one or more indicators that indicate a status of the node, such as the number of errors or faults at the node, the number of transactions processed or being processed, the number of channels associated with each node, etc.

The data intake and query system108can use the events to identify errors, bottlenecks, or other issues in the distributed ledger system1802. For example, the data intake and query system108can identify nodes with smaller or greater throughput, nodes with the most errors, etc. In some cases, the data intake and query system108can use the events to track the lifecycle of a transaction, including the initial submission of the transaction to a node1806, endorsement of the transaction, ordering of the transaction, validation of the transaction, and inclusion of the transaction into the ledger1808. If the transaction stops or slows down unacceptably at any point in the journey, the data intake and query system108can diagnose the reason for the slowdown and generate an alert. Potential issues may include errors in the bytecode execution, latency with querying the ledger state1904, network latency, resource contention in the underlying distributed ledger system1802, authentication/authorization issues, etc. In addition, the results can include an identification of errors associated with the processing of the transaction, errors with or created by the bytecode, errors in the bytecode execution, throughput of the node, time taken to process the transaction at different times, latency with querying the ledger state1904, etc.
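As an illustration of the decoding performed at blocks2620-2640 above, the following minimal sketch decodes statically-encoded parameters that follow the four-byte selector: each value occupies one 32-byte word, a uint256 is the big-endian integer, and an address is the last 20 bytes of its word. Dynamic types are omitted, and the parameter names come from the hypothetical ABI definition.

def decode_params(param_bytes, types):
    values = []
    for i, typ in enumerate(types):
        word = param_bytes[32 * i: 32 * (i + 1)]   # one 32-byte word per value
        if typ == "uint256":
            values.append(int.from_bytes(word, "big"))
        elif typ == "address":
            values.append("0x" + word[-20:].hex()) # address is the low 20 bytes
        else:
            raise NotImplementedError(typ)
    return values

# Two encoded words: an address followed by a uint256.
params = bytes(12) + bytes.fromhex("ab" * 20) + (1000).to_bytes(32, "big")
to_addr, amount = decode_params(params, ["address", "uint256"])
# Pair each decoded value with its parameter name from the ABI definition:
decoded = dict(zip(["to", "amount"], [to_addr, amount]))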
In some cases, based on the association, the data intake and query system108can determine the architecture (or a portion thereof) of the distributed ledger system1802. For example, the information from the various events can indicate which peer nodes endorsed the transaction and are therefore part of the distributed ledger system1802, and which ordering node ordered the transaction and is therefore part of the distributed ledger system1802. In addition, the data intake and query system108can identify the number and identity of the various channels with which the node1806is associated. In certain embodiments, the data intake and query system108can use the events to determine a type of node processing a particular transaction (e.g., peer node and/or ordering node). For example, if a node does not have any transaction notification events associated with it, the data intake and query system108can determine that it is an ordering node, or if it does have transaction notification events associated with it, the data intake and query system108can determine that it is a peer node, etc. In some cases, by associating and/or correlating the events, the data intake and query system108can recreate the blockchain.

Computer programs typically comprise one or more instructions set at various times in various memory devices of a computing device, which, when read and executed by at least one processor, will cause a computing device to execute functions involving the disclosed techniques. In some embodiments, a carrier containing the aforementioned computer program product is provided. The carrier is one of an electronic signal, an optical signal, a radio signal, or a non-transitory computer-readable storage medium.

Any or all of the features and functions described above can be combined with each other, except to the extent it may be otherwise stated above or to the extent that any such embodiments may be incompatible by virtue of their function or structure, as will be apparent to persons of ordinary skill in the art. Unless contrary to physical possibility, it is envisioned that (i) the methods/steps described herein may be performed in any sequence and/or in any combination, and (ii) the components of respective embodiments may be combined in any manner.

Although the subject matter has been described in language specific to structural features and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as examples of implementing the claims, and other equivalent features and acts are intended to be within the scope of the claims.

Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense, i.e., in the sense of “including, but not limited to.” As used herein, the terms “connected,” “coupled,” or any variant thereof means any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof. Additionally, the words “herein,” “above,” “below,” and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. Where the context permits, words using the singular or plural number may also include the plural or singular number respectively. The word “or,” in reference to a list of two or more items, covers all of the following interpretations of the word: any one of the items in the list, all of the items in the list, and any combination of the items in the list. Likewise, the term “and/or,” in reference to a list of two or more items, covers all of the following interpretations of the term: any one of the items in the list, all of the items in the list, and any combination of the items in the list.

Conjunctive language such as the phrase “at least one of X, Y and Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to convey that an item, term, etc. may be either X, Y or Z, or any combination thereof. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y and at least one of Z to each be present. Further, use of the phrase “at least one of X, Y or Z” as used in general is to convey that an item, term, etc. may be either X, Y or Z, or any combination thereof.

In some embodiments, certain operations, acts, events, or functions of any of the algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (e.g., not all are necessary for the practice of the algorithms). In certain embodiments, operations, acts, functions, or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially.

Systems and modules described herein may comprise software, firmware, hardware, or any combination(s) of software, firmware, or hardware suitable for the purposes described. Software and other modules may reside and execute on servers, workstations, personal computers, computerized tablets, PDAs, and other computing devices suitable for the purposes described herein. Software and other modules may be accessible via local computer memory, via a network, via a browser, or via other means suitable for the purposes described herein. Data structures described herein may comprise computer files, variables, programming arrays, programming structures, or any electronic information storage schemes or methods, or any combinations thereof, suitable for the purposes described herein. User interface elements described herein may comprise elements from graphical user interfaces, interactive voice response, command line interfaces, and other suitable interfaces.
Further, processing of the various components of the illustrated systems can be distributed across multiple machines, networks, and other computing resources. In certain embodiments, one or more of the components of the data intake and query system108can be implemented in a remote distributed computing system. In this context, a remote distributed computing system or cloud-based service can refer to a service hosted by one or more computing resources that are accessible to end users over a network, for example, by using a web browser or other application on a client device to interface with the remote computing resources. For example, a service provider may provide a data intake and query system108by managing computing resources configured to implement various aspects of the system (e.g., search head210, indexers206, etc.) and by providing access to the system to end users via a network. When implemented as a cloud-based service, various components of the system108can be implemented using containerization or operating-system-level virtualization, or other virtualization technique. For example, one or more components of the system108(e.g., search head210, indexers206, etc.) can be implemented as separate software containers or container instances. Each container instance can have certain resources (e.g., memory, processor, etc.) of the underlying host computing system assigned to it, but may share the same operating system and may use the operating system's system call interface. Each container may provide an isolated execution environment on the host system, such as by providing a memory space of the host system that is logically isolated from memory space of other containers. Further, each container may run the same or different computer applications concurrently or separately, and may interact with each other. Although reference is made herein to containerization and container instances, it will be understood that other virtualization techniques can be used. For example, the components can be implemented using virtual machines using full virtualization or paravirtualization, etc. Thus, where reference is made to “containerized” components, it should be understood that such components may additionally or alternatively be implemented in other isolated execution environments, such as a virtual machine environment. Likewise, the data repositories shown can represent physical and/or logical data storage, including, e.g., storage area networks or other distributed storage systems. Moreover, in some embodiments the connections between the components shown represent possible paths of data flow, rather than actual connections between hardware. While some examples of possible connections are shown, any subset of the components shown can communicate with any other subset of components in various implementations. Embodiments are also described above with reference to flow chart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products. Each block of the flow chart illustrations and/or block diagrams, and combinations of blocks in the flow chart illustrations and/or block diagrams, may be implemented by computer program instructions. Such instructions may be provided to a processor of a general purpose computer, special purpose computer, specially-equipped computer (e.g., comprising a high-performance database server, a graphics subsystem, etc.)
or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor(s) of the computer or other programmable data processing apparatus, create means for implementing the acts specified in the flow chart and/or block diagram block or blocks. These computer program instructions may also be stored in a non-transitory computer-readable memory that can direct a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the acts specified in the flow chart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computing device or other programmable data processing apparatus to cause operations to be performed on the computing device or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computing device or other programmable apparatus provide steps for implementing the acts specified in the flow chart and/or block diagram block or blocks. Any patents and applications and other references noted above, including any that may be listed in accompanying filing papers, are incorporated herein by reference. Aspects of the invention can be modified, if necessary, to employ the systems, functions, and concepts of the various references described above to provide yet further implementations of the invention. These and other changes can be made to the invention in light of the above Detailed Description. While the above description describes certain examples of the invention, and describes the best mode contemplated, no matter how detailed the above appears in text, the invention can be practiced in many ways. Details of the system may vary considerably in its specific implementation, while still being encompassed by the invention disclosed herein. As noted above, particular terminology used when describing certain features or aspects of the invention should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the invention with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the invention to the specific examples disclosed in the specification, unless the above Detailed Description section explicitly defines such terms. Accordingly, the actual scope of the invention encompasses not only the disclosed examples, but also all equivalent ways of practicing or implementing the invention under the claims. To reduce the number of claims, certain aspects of the invention are presented below in certain claim forms, but the applicant contemplates other aspects of the invention in any number of claim forms. For example, while only one aspect of the invention is recited as a means-plus-function claim under 35 U.S.C. sec. 112(f) (AIA), other aspects may likewise be embodied as a means-plus-function claim, or in other forms, such as being embodied in a computer-readable medium. Any claims intended to be treated under 35 U.S.C. § 112(f) will begin with the words “means for,” but use of the term “for” in any other context is not intended to invoke treatment under 35 U.S.C. § 112(f).
Accordingly, the applicant reserves the right to pursue additional claims after filing this application, in either this application or in a continuing application.
The figures depict embodiments for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein. DETAILED DESCRIPTION I. Introduction In one embodiment, a method to more efficiently communicate content and information to clients in a publish-subscribe system is described. The publish-subscribe system includes a topic tree which is organized in a topic hierarchy. The topic tree is comprised of a plurality of topics to which clients can subscribe and to which publishers can publish. Publishers publish topic updates to the topics and clients subscribed to those topics receive the updates. Publishers can publish to various numbers of topics, including a single topic, dozens, hundreds, or even thousands of topics. Clients can subscribe to similar numbers of topics. Additionally, publishers can publish topic updates to topics very rapidly (e.g., on the order of milliseconds). Interpreting the structured data at high speeds within the topic tree can be very challenging for users because the structural hierarchy of topics may be very complex, publishers publish to large numbers of topics, and clients subscribe to large numbers of topics. To illustrate, consider an example where a publish-subscribe system includes a set of hierarchically structured topics relating to the current exchange rates between every currency in the world. The publish-subscribe system, for example, includes a topic describing the exchange rate of the United States Dollar (USD) against every other world currency, a topic listing the exchange rate of the British Pound Sterling against every other world currency, etc. To receive updated information regarding the exchange rates for the USD, the client may subscribe to the appropriate topic. In doing so, the client would receive real-time information regarding the exchange rate of the USD against over 150 other currencies. If the client only cares about a handful of those currencies, relaying such a large amount of irrelevant data to the client is computationally inefficient. For example, publishing conversion updates several times per second for hundreds of currencies would require more bandwidth and processing power than necessary for the handful of currencies the client cares about. The publish-subscribe system described herein allows a user to create a topic view that restructures complex structured content and information such that it is more efficiently transmitted to a client. That is, considering the previous example, a client may be able to create a topic view that restructures the USD conversion information to present only the handful of currencies the client cares about. In doing so, the publish-subscribe system can more efficiently convey pertinent information (e.g., requiring less bandwidth). II. Data Distribution Environment FIG.1is a block diagram of a distribution server environment, according to one example embodiment. The distribution server environment (“environment”)100includes a data distribution system server110, external systems120, client devices130, and a network140. The data distribution system server (“distribution server”)110hosts a topic tree, manages connections and sessions from clients112and external systems120, and pushes data to the clients112through message queues.
That is, the distribution server110pushes (streams) and receives data and events, in real-time, both to and from clients112. Within this framework, the distribution server also maintains topic views that enable clients112to view structured information from the topic tree in a restructured manner. The distribution server110is described in greater detail in regard toFIG.2. One or more external systems120interact with the distribution server110to distribute data to multiple client applications over a network140. An external system120may be a server associated with a data source for distribution via the distribution server110. Example data sources include entities such as a stock exchange, an online game provider, a media outlet, or other sources that distribute topical data to users over the network140, such as the Internet. The external system120communicates with the distribution server110, which enables the external system120to create and maintain topics on the distribution server110for distribution to multiple clients112. Alternatively, topics may be maintained by a separate client process external to the distribution server110. Such clients112are referred to as control clients and sometimes they are maintained by an external system120(e.g., client112D). Systems and clients that maintain topics on the distribution server110are “publishers” (e.g., client device130D includes a publisher114). In various configurations of the environment100, publishers114may directly connect to the distribution server110, may connect to the distribution server110via the network140, or both. Client devices130communicate with the distribution server110through a network140. Each client device130also includes a client112. A client112can be an application that communicates with the distribution server110using one or more specified client protocols. Example client protocols include WebSocket (WS) and Hypertext Transfer Protocol (HTTP). Some clients112connect to the distribution server110to subscribe to topics and receive message data on those topics. Other clients112, which have different permissions, perform control actions such as creating and updating topics or handling events. In an embodiment, one permission grants a client112administrator status. Administrators are allowed to create and manage topic views within the distribution system. That is, an administrator client112has permissions to create and send a topic view creation request to the distribution server, such that the distribution server implements the requested topic view. The clients112can include web clients112A, mobile clients112B, and enterprise clients112C. Web clients112A include browser applications that use JavaScript. Mobile clients112B may be mobile applications that interact with the distribution server110using iOS or Android APIs. Enterprise clients112C may be any application connecting to the distribution server110over a distribution server protocol for Transmission Control Protocol (TCP) over the network140using Java, .Net, Python, or C APIs. Generally, clients112interact with the distribution server110using an API116. The API116may include the libraries appropriate to the platform executing the client application. The category of client112depends on the language of the API and libraries used to implement it. Clients112may be implemented in one of a number of languages and use a variety of protocols to communicate with the distribution server110.
Clients may perform different types of actions depending on their permissions and the capabilities of the API they use. Example APIs include JavaScript API, Java API, .NET API, C API, iOS API, Android API, and Python API. Clients112used by data consumers typically subscribe to topics and receive from the distribution server110the updates that are published to these topics. Clients112used by data providers typically create, manage, and update topics. These clients112also take responsibility for control functions, for example authenticating and managing other client sessions. III. Distribution Server The distribution server110manages topic views for presenting restructured content and information (“restructured data”) to clients112. To expand, the distribution server110receives structured content and information (“structured data”), in real-time, from both clients112and external systems120. The distribution server110continuously pushes (i.e., streams) the structured data to clients112via the network140. In some cases, the distribution server110pushes the structured data to client devices as restructured data according to a topic view, as described below. To enable this functionality, the distribution server110includes a management console210, a security module220, a topic tree230, a data management module240, a network layer250, a client sessions module260, and a topic view module270. The network layer250handles a high number of concurrent connections without the need for a separate thread per connection. For example, the network layer250may be able to handle hundreds, thousands, or even hundreds of thousands of simultaneous connections from clients112. Further, connectors handle connections from many different types of clients112and for various protocols. Connectors may be configured to listen on different ports and multiple clients112may connect to a single port. With this approach, tens of thousands of concurrent connections can be serviced by a small pool of threads. The security module220authenticates all connections from clients112, publishers114, and external systems120. The security module220also manages authorizations and permissions for those systems. The authorizations and permissions enable the systems to take various actions associated with their authorizations and permissions when they are connected to the distribution server110. The client sessions module260manages the sessions for the clients112that connect to the distribution server110. A session, in an example, is an interactive information interchange having a session state that can persist over multiple connections. Additionally, the client sessions module260stores information about each client112and their session(s). The information may include subscription information such as which topic a client112is subscribed to, topic selectors received from the client112, etc. If a client112disconnects, it can reconnect to the same session within a specified time period using the information stored in the client sessions module260. The data management module240performs operations on structured and restructured data to more efficiently deliver it to clients112. Example operations include structural conflation, merging, and replacing data to ensure that the latest data is received by the client112. Additional data operations can include creating binary and structural deltas to more efficiently stream structured and restructured data to clients.
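The description does not fix a delta format, but a minimal Python sketch of a structural delta over dictionary-valued topic values, assuming a simple key-level diff in which a None entry marks a removed field, is:

def structural_delta(old, new):
    # Key-level diff between two dict-valued topic values (assumed format).
    delta = {}
    for key in new:
        if key not in old or old[key] != new[key]:
            delta[key] = new[key]  # changed or added field
    for key in old:
        if key not in new:
            delta[key] = None  # None marks a removed field (an assumption of this sketch)
    return delta

def apply_delta(value, delta):
    # Client-side merge of a received delta into the last known topic value.
    merged = dict(value)
    for key, val in delta.items():
        if val is None:
            merged.pop(key, None)
        else:
            merged[key] = val
    return merged

old = {"amount": 12.57, "currency": "USD"}
new = {"amount": 13.01, "currency": "USD"}
print(structural_delta(old, new))                          # {'amount': 13.01}
print(apply_delta(old, structural_delta(old, new)) == new) # True

Streaming only the changed fields in this way is one plausible reading of how deltas reduce bandwidth relative to re-sending whole topic values.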
The management console210can manage (i.e., create, manipulate and publish data to) topic views within a topic tree. The management console210can also create topics and send data to those topics as a test environment. Within the environment100, publishers114publish data to a topic tree230. More generally, publishers114are systems (e.g., an appropriately configured client device130or external system120) within the environment100that manage the data for one or more topics and publish messages to any clients112that subscribe to the topics that the publisher manages. In one example, publishers114are written using the client API functionality. A publisher114creates, owns, and manages topics within a topic tree. A publisher114is a client112configured with publishing permissions. The publisher114initializes data in its topics and updates it as a result of external events. For example, a publisher114can initialize a data item representing the betting odds of a sporting event and publish updates to the betting odds based on external information regarding the participating teams. When a client112first subscribes to a topic, the distribution server110provides the client112with a snapshot of the current state of the data relating to that topic. This is referred to as a “current value.” For example, if a client112subscribes to the aforementioned sporting event, the distribution server110would provide a current value to the client112reflecting the current state of the odds. As time progresses, a client112can also request the current state of a topic, even if not subscribed to it, using the “fetch” command. For example, a client112not subscribed to the sporting event could still fetch the odds for the sporting event from the publisher114. The distribution server110maintains any changes to topic data states and publishes those changes to the topic as delta messages when a publisher114publishes a change to the topic. This results in a topic value being sent to every client112that is subscribed to the topic. For example, if a publisher114changes the odds of the sporting event, the distribution server110publishes the change in odds to the subscribing clients112. The distribution server110can send messages to individual clients112, or to groups of clients112, and can receive messages from clients112. In most configurations, the publisher114does not need to know or keep track of the clients112subscribed to its topics, rather the distribution server110maintains the subscription information. In this manner, a publisher114may publish updates to the distribution server110, which then publishes the updates to the subscribing clients112. Further, publishers114manage (see below) the topics they create. The topic tree230is a model representing the organizational structure of topics within the distribution server110. Publishers114can publish structured data to the topic tree230and clients can subscribe to, and receive information and data regarding, topics in the topic tree. The topic tree230may be maintained by the publisher114, client sessions module260, other software within the distribution server110, or another appropriately configured system within the environment. More concretely, the topic tree230is maintained by clients with sufficient permissions to create and update topics within it (i.e., a publisher114). The topic tree230is structured hierarchically.
That is, in an example, the topic tree230comprises top-level topics, each of which may include subordinate topics underneath those top-level topics. Subordinate topics themselves can have subordinate topics. A topic of any type can be bound to any node of the topic tree. For example, a top-level topic may be bound to a node and each of its subordinate topics bound to their own nodes. FIG.3is a diagram illustrating a topic tree, and the logical connections between publishers and clients based on topics, according to one example embodiment. The topic tree300is an example representation of the topic tree230inFIG.2. Additionally, the publishers310and clients320are example representations of the publishers114and clients112, respectively, inFIG.1. Topics in the topic tree300are created in the distribution server110by publishers310. A publisher310is a client with the appropriate publishing permissions. Here, the topics are structured hierarchically within the topic tree300according to the structure maintained by the distribution server110. A topic in the topic tree300is described by its topic path. A topic path is a description of the location of the topic within the topic tree300. In an example, the topic path describes the location of the topic within the hierarchy of the topic tree using slash characters (/). Using the illustrated example, the topic tree includes the following topic paths describing their topics: (A), (A/B), (A/B/C), (A/B/C/D), (A/E), (F), (G), (H), and (H/I). Topics at topic paths (A), (F), (G), and (H) are at the first level (i.e., highest level) of the tree structure. Topics at topic paths (A/B) and (A/E) are subordinate to (A). Because (A/B) and (A/E) are subordinate to a first level topic, they are in the second level of the topic tree300. Additionally, the topic at topic path (H/I) is subordinate to the topic at topic path (H) and is in the second level of the topic tree300. The topic at topic path (A/B/C) is subordinate to the topic at topic path (A/B) and, due to its subordination, is in the third level of the topic tree. The topic at topic path (A/B/C/D) is subordinate to the topic at topic path (A/B/C) and, due to its subordination, is in the fourth level of the topic tree. A topic path may also be used by a client320to send messages to the publisher310that receives messages on that topic path. The client320is not aware of the publisher, only of the topic path. Each topic corresponds to a node in the topic tree300. Generally, the node contains a corresponding topic value, but may not if no topic values have been published to the node. A topic is a data object describing a current value (i.e., topic value) of the topic. In other words, a publisher310publishes topic values to a node at its corresponding topic path, and clients320receive the topic values located in the nodes. Topic values can be various formats and represent many different types of information. For example, topic values may be a JSON object and therefore may contain maps and/or arrays of data values. In another example, topic values may be a binary data object, or a binary representation of a different data object. Other data formats are also possible. The example also illustrates how publishers310and clients320interact with the topic tree300. In the illustrated example, clients320and publishers310are loosely coupled through logical links representing the topics. As shown, a publisher310publishes topic values to a topic located at a topic path and a client320subscribes to a topic and receives its topic values.
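One minimal way to model such a tree, assuming topic paths are the slash-delimited strings ofFIG.3and node values are placeholders, is a map from topic path to node value; the level of a topic then falls out of its path:

# Hypothetical in-memory model of the FIG.3 topic tree (values are placeholders).
topic_tree = {
    "A": {"desc": "top-level topic"},
    "A/B": {"desc": "second level"},
    "A/B/C": {"desc": "third level"},
    "A/B/C/D": {"desc": "fourth level"},
    "A/E": {"desc": "second level"},
    "F": {}, "G": {}, "H": {}, "H/I": {},
}

def level(topic_path):
    # Level = number of slash-separated elements in the topic path.
    return len(topic_path.split("/"))

def subordinates(topic_path):
    # All topics strictly beneath the given topic path.
    prefix = topic_path + "/"
    return [p for p in topic_tree if p.startswith(prefix)]

print(level("A/B/C/D"))   # 4
print(subordinates("A"))  # ['A/B', 'A/B/C', 'A/B/C/D', 'A/E']

This flat-map representation is only an illustration; an actual distribution server could equally store the tree as linked nodes.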
InFIG.3, publisher310A is configured with permissions to manage (i.e., publish topic values to) the topic at topic path (A) and all of its subordinate topics. The appropriate permission is denoted by a dotted line. As such, publisher310A can publish information to any of the managed topics. For example, the publisher310A can publish information to the topic at topic path (A/E), or the topic at topic path (A/B/C). Similarly, publisher310B manages the topics at topic paths (F) and (G) while publisher310C manages topics at topic paths (H) and (H/I), and each can publish information to its owned topics. Moreover, each of the clients320is subscribed to at least one topic in the topic tree300. Subscriptions are denoted by dashed lines. Client320A is subscribed to topic paths (A), (A/B), and (G). As such, when a publisher310publishes topic values to one of its subscribed topics, client320A will receive the corresponding topic value updates. Similarly, client320B is subscribed to topic paths (A/B) and (H/I), client320C is subscribed to topic path (A/B/C), and client320D is subscribed to topic paths (H) and (H/I). Clients320B,320C, and320D also receive topic value updates regarding their subscribed topics when a managing publisher310publishes information to the topic. Returning to the distribution server110inFIG.2, the client sessions module260maintains subscription information about the topics each client112is subscribed to. For each client112, the subscription information includes a list of topic selections. A topic selection includes data that identifies a subset of topics of the topic tree230and whether the client112is subscribed to that subset of topics of the topic tree230. A topic selection for a client112includes one or more topic selectors. A topic selector is an expression that identifies a subset of topics in the topic tree230to which a client112wishes to subscribe. Topic selectors generally correspond to one or more topic paths of a topic rather than a topic name for the topic. Each topic selection also includes a subscription operation type value indicating whether the topic selector is subscribing to or unsubscribing from the topic tree. In one configuration, the topic selector may be a hierarchical wild-card expression that enables multi-faceted selection of topics. To illustrate, referring to the topic tree300ofFIG.3, client320C may have subscribed to the topic at topic path (A/B/C) by transmitting a topic selector expression “A/B/C” corresponding to the topic path to the distribution server110. Upon receipt, the client sessions module260adds the topic selector expression to the topic selection for client320C. Thereafter, client320C is subscribed to the topic at topic path (A/B/C). In another example, client320D may have subscribed to topics at topic paths (H) and (H/I) by transmitting a topic selector expression “H*” to the distribution server110. The client sessions module260adds the topic selector expression to the topic selection for client320D. Because the topic selector is a hierarchical wild-card expression, client320D is subscribed to all topics and sub-topics on path H, i.e., (H) and (H/I). Other wildcard expressions are also possible. Returning again toFIG.2, the client sessions module260is configured to change topic selections for a client using subscribe and unsubscribe operations. Each operation adds a new topic selector to the client's topic selections.
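The full selector grammar is not reproduced here, but a sketch of evaluating a topic selection, assuming plain-path selectors plus a trailing-asterisk hierarchical wildcard as in the “H*” example, might be:

def selector_matches(selector, topic_path):
    # Assumed semantics: 'H*' matches 'H' itself and every path under 'H'.
    if selector.endswith("*"):
        prefix = selector[:-1]
        return topic_path == prefix or topic_path.startswith(prefix.rstrip("/") + "/")
    return topic_path == selector  # plain selector: exact path match

def subscribed_topics(topic_selection, topic_paths):
    # Evaluate a client's topic selectors against every path in the tree.
    return sorted(
        path for path in topic_paths
        if any(selector_matches(s, path) for s in topic_selection)
    )

paths = ["A", "A/B", "A/B/C", "H", "H/I"]
print(subscribed_topics(["A/B/C"], paths))  # ['A/B/C']
print(subscribed_topics(["H*"], paths))     # ['H', 'H/I']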
When a new topic is added by the publisher114, the topic's path is evaluated against every client session's unique topic selections. To reduce the cost of evaluation, topic selections for a client are conflated to remove redundant selectors. The topic view module270manages topic views within the distribution server110. A topic view presents structured data from the topic tree230as restructured data. In an example, restructured data is a subset of the structured data from the topic tree230placed in a different area of the topic tree230that allows for any of (1) publishing the subset of the structured data to a client112more efficiently than the totality of the structured data, (2) publishing the subset of structured data in a manner according to publication or presentation preferences of a client112, and/or (3) publishing the subset of structured data in a more granular fashion. Technical details of topic views are described in greater detail below in the section titled “Topic View Overview” but many of their principles are introduced here. To introduce topic views, using a simple example, a topic view takes topic values from structured nodes in the topic tree230and presents the topic values in a restructured manner at different nodes in the topic tree230. Here, as described in greater detail below, the restructured data more efficiently publishes a small subset of topic values in the topic tree230according to administrator preferences prescribed by the syntax of the topic view creation request. That is, for example, an administrator creates a topic view that restructures data in a manner that more efficiently publishes subsets of information to clients112. In another illustration, consider a distribution server110that publishes structured data regarding a set of topics in the topic tree230to a client112. An administrator of the distribution system100determines that distributing the entirety of the information for the set of topics is too computationally and bandwidth expensive. The administrator employs a client112to create a topic view defining how to present a subset of the topics as restructured data in a more efficient manner. Now, rather than inefficiently receiving information regarding all of the topics, clients112more efficiently receive information regarding only the subset of topics. Returning to an in-depth explanation, the topic view module270maintains topic views as a persistent data object that is defined in terms of a domain specific language (i.e., a specific syntax). The domain specific language is configured to allow clients112to create topic views. The syntax of the data object defines how to restructure the structured data for more efficient publication. Within its syntax, a topic view includes three portions that define how to restructure data: a set of topic selectors for the topic view, a topic view mapping (“mapping”), and a set of topic view options (“options”). Topic view selectors, mappings, and options all may have syntax indicating their prescribed functionality within a created topic view. Now defining each syntax section of the topic view in turn, the set of topic selectors select a set of topics (“selected topics”) from the topic tree230that the client112wishes to include in the topic view. That is, each topic selector included in the topic view corresponds to one of the selected topics.
The selected topics are a subset of the topics in the topic tree230that, when included in the topic view, allow for a more efficient publication of structured data as restructured data. The mapping maps the set of selected topics to reference topics. More specifically, the mapping maps the topic path of a selected topic to a reference path of a reference topic. A reference topic is a new topic in the topic tree230that maintains restructured data. A reference path is the location of the reference topic in the topic tree230that a client may subscribe to. Reference paths are functionally similar to topic paths within the topic tree230and distribution server110. In some cases, a reference topic at a reference path may be similar to selected topics at their topic paths, but just in a different location of the topic tree. Further, reference topic paths can be generated from a mixture of constants, parts of the selected topic's topic path, or field values within the selected topic value based on the syntax of the mapping, as described in greater detail below. To provide a brief example, a user (e.g., a system administrator) can define a mapping that maps selected topics at one or more nodes in a topic tree230to one or more reference topics at one or more new nodes in the topic tree230. In other words, the mapping redefines the structure of the topic tree230by creating new branches within the topic tree230that can more efficiently publish information to clients than the original structure. The set of topic view options defines options for the topic view. The options are maintained by the distribution server and allow an administrator to tailor how a topic view restructures information (i.e., how restructured topic view updates are published). Some options include inserting topic values into the topic view from other topics in the topic tree, or expanding topics having expandable data structures as a topic value to multiple reference topics. Additional options can include throttling published topic values, delaying published topic values, or changing the specification of the topic in some manner. Other examples also exist. Generally, options defined for a topic view apply to all reference topics created by the topic view, but higher levels of granularity are also possible. Once received from a client112, the topic view module270utilizes the syntax of the topic view to create reference topics at reference paths in the topic tree230. This process is described in greater detail below, but, generally, the topic view module270maps selected topics to reference topics according to the mapping and options in the topic view. Reference topics and reference paths (i.e., those created by a topic view) share many similarities to topics and topic paths in the topic tree230. For example, clients112can subscribe and unsubscribe to the reference topic at the reference path. Additionally, content and information are published to the reference topics such that subscribed clients112can receive (or fetch) that information (which is restructured according to the mapping). However, how content and information are published to a reference topic at a reference path differs substantially from how they are published to a topic at a topic path. As described above, publishers114manage topics in a topic tree230. As such, they can publish structured data to topics in the topic tree230and the distribution server110distributes the published information to clients112subscribed to those topics.
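Before turning to how reference topics are updated, the following sketch illustrates the three portions of a topic view described above; the class layout and the path logic are assumptions for illustration, not the patent's domain specific language:

from dataclasses import dataclass, field

@dataclass
class TopicView:
    # Assumed shape of a topic view: selectors, a mapping target, and options.
    selectors: list                              # which source topics the view covers
    target_prefix: str                           # where reference topics are rooted
    options: dict = field(default_factory=dict)  # e.g. {"throttle_ms": 500} (hypothetical)

    def reference_path(self, topic_path):
        # Assumed mapping: re-root the selected topic's path, keeping the path
        # elements after the first one (akin to a <path(1)> directive, see below).
        tail = topic_path.split("/")[1:]
        return "/".join([self.target_prefix] + tail)

view = TopicView(selectors=["a/*"], target_prefix="b")
print(view.reference_path("a/x/y/z"))  # b/x/y/z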
For reference topics, however, the distribution server110employs the topic view module270to restructure and publish topic updates according to a topic view. To do so, the topic view module270evaluates a topic view each time a publisher publishes structured data to a selected topic referenced by the topic view. More explicitly, if a publisher114publishes a topic update to a selected topic from the topic view, the distribution server110restructures the topic update according to the topic view (i.e., evaluates the topic view) and publishes it to the reference topics as restructured data. To improve transmission and processing efficiency, the topic view module270only evaluates a topic view when selected topics for that topic view receive an updated topic value. To accomplish this, when a topic view is received by the data distribution server110, the topic view module270“tags” all selected topics in the topic view. Tagging a topic indicates that its corresponding topic view will be evaluated when the tagged topic receives a topic update. Thus, when a publisher114publishes a topic update to a tagged topic in the topic tree230, the topic view module270evaluates the topic view (i.e., restructures data according to the topic view) and publishes the restructured data to a reference topic in the topic tree230. Any clients112subscribed to the reference topic will receive the restructured data. The topic view module270can evaluate a topic view to restructure data in several ways. In a simple example, discussed briefly above, when a publisher114publishes a topic update to a topic, the topic view module270compares the previous topic value to the new topic value to determine whether to update a reference topic. If the topic view module270determines there is a difference between the previous topic value and the new topic value, it restructures the topic update according to the topic view and publishes the restructured data to the reference topic. In a more complex example, evaluating a topic view can further restructure the data. That is, when a publisher114publishes a topic update to a topic, the topic view module270may create additional reference topics, delete reference topics, or change the structure of a reference topic (i.e., change the reference topic path). For instance, consider a topic view that maps a topic with an n field array topic value to n corresponding reference topics. If a publisher114publishes a topic update to the topic with an n+1 field array topic value, the topic view module270compares the new topic value (e.g., n+1 field array) to the previous topic value (e.g., n field array). Because there is a difference between topic values, the topic view module270will evaluate the topic view. In this instance, because the mapping maps each element of the array to a corresponding reference topic, the topic view module270evaluates the topic view and creates a new reference topic such that there are n+1 reference topics. The topic view module270may also publish any topic updates to the reference topics as restructured data according to the topic view. Similar functions occur for deleting topics and deleting topic views. These more complex cases are described in greater detail below. FIG.4is a diagram illustrating a topic tree including a topic view, and logical connections between publishers, clients, and the topic view module, according to one example embodiment. The topic tree400is substantially similar to the topic tree300inFIG.3.
That is, the topic tree400includes an array of topics, each located at a topic path. Further, publishers410can publish structured data to the topics in the topic tree400, and clients420can subscribe to the topics such that they can receive the structured data. However, the topic tree400ofFIG.4is different from the topic tree300ofFIG.3because it includes reference topics at reference paths created and managed by the topic view module270. Each reference path describes the location of the reference topic in the structural hierarchy of the topic tree400maintained by the distribution server110. As described herein, each reference topic path reflects a mapping from an original topic (e.g., a selected topic) to a reference topic included in a topic view maintained by the topic view module270. The example also illustrates how publishers410and clients420interact with the topic tree400, and how the topic view module270manages topic views based on those interactions. In the illustrated example, a publisher410manages topics at topic path (A) and all of its sub-topics. A client420A subscribes to a topic at (A/B/C), which is a sub-topic of the topic at topic path (A). The topic at topic path (A/B/C) includes a topic value with three different elements: Element 1, Element 2, and Element 3. In order to more efficiently transmit the information in the topic at topic path (A/B/C) to other clients (i.e., rather than all of the elements), an administrator creates and submits a topic view to the topic view module270. Once received, the topic view module270creates reference topics based on the topic view. Here, the topic view maps the elements of the topic at topic path (A/B/C) to individual reference topics at distinct reference paths. Accordingly, the topic view module270creates reference topic 1 at reference path (B/B/C/1), reference topic 2 at reference path (B/B/C/2), and reference topic 3 at reference path (B/B/C/3), and each reference topic corresponds to an element of the topic at topic path (A/B/C). The reference topics at the reference topic paths are in the fourth level of the structural hierarchy because they are subordinate to three other topics in the topic tree400. As such, there are nodes in the topic tree400at the first, second, and third levels of the topic tree, but those nodes do not contain a reference topic value. Nodes in the topic tree that have a topic path but do not include a topic having a topic value are also possible for the topic tree300inFIG.3, although they are not illustrated. Because the topic at topic path (A/B/C) is included in a topic view, the topic view module270tags that topic. As such, whenever the publisher changes the topic value of the topic at topic path (A/B/C), the topic view module270will evaluate the topic view (i.e., restructure the data according to the topic view). In this case, any change in the topic value of the topic at topic path (A/B/C) will cause the topic view module270to restructure the new data according to the topic view, as needed, and publish the restructured data to Reference Topic 1, Reference Topic 2, and Reference Topic 3, if necessary. Further, the topic view module may add or remove reference topics if the number of elements in the topic at topic path (A/B/C) changes. Furthermore, as described above, the topic view module270manages the reference topics within the topic tree400ofFIG.4. As such, the topic view module270, rather than a publisher410, will publish topic updates to a reference topic.
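A sketch of theFIG.4expansion, assuming the topic value is a simple list of elements and that each element maps to a numbered reference path as in the figure, could be:

def expand_to_reference_topics(topic_value, reference_prefix):
    # Map each element of a multi-element topic value to its own reference
    # topic, numbered from 1 as in FIG.4 (the numbering scheme is an assumption).
    return {
        f"{reference_prefix}/{i}": element
        for i, element in enumerate(topic_value, start=1)
    }

# Hypothetical value of the topic at topic path (A/B/C).
value = ["Element 1", "Element 2", "Element 3"]
print(expand_to_reference_topics(value, "B/B/C"))
# {'B/B/C/1': 'Element 1', 'B/B/C/2': 'Element 2', 'B/B/C/3': 'Element 3'}

If the list grows or shrinks on a later update, re-running this mapping yields the added or removed reference topics described above.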
The published topic values are restructured according to the topic view maintained by the topic view module270. To illustrate, consider a publisher that publishes430a topic update to the topic at topic path (A/B/C). The topic update changes Element 2 and Element 3. Because the topic at topic path (A/B/C) is tagged as being included in a topic view, the topic view module270evaluates440any topic view including the topic at topic path (A/B/C). Here, evaluating the topic view causes the topic view module270to determine if there is a difference between the previous topic value of the topic at topic path (A/B/C) and the current topic value of the topic at topic path (A/B/C). Because there is a difference, the topic view module270restructures440the topic update according to the topic view and publishes450the restructured topic updates to Reference Topic 2 and Reference Topic 3. Additionally, because the client420B is subscribed to Reference Topic 2, it will see the change to the topic value created by the publisher410, but according to the restructuring of the topic view. Notably, client420A will see all of the changes to the topic at topic path (A/B/C) (without restructuring), while client420B will more efficiently see only the changes to Element 2 of the topic at topic path (A/B/C). The topic view module270can also manage many topic views. That is, the topic view module270can manage topic views, each having its own topic selectors, topic view mapping, and options. In this case, whenever any of the topics selected by the topic views are updated by a publisher410, the distribution server110evaluates all of the pertinent topic views. Additionally, topic views can reference other topic views. Thus, if the reference topics created by a first topic view are selected by a second topic view, the second topic view will be updated when the first topic view changes, which updates when one of its selected topics updates, etc. Finally, the topic view module270can publish restructured information as a delta stream. That is, the topic view module270can determine a delta between previously published restructured data and the most recently published restructured data and publish solely that delta. Clients112may be configured to interpret the delta such that the displayed data accurately reflects the recently published restructured data. In other words, delta interpretation occurs within the client such that it is presented with a new topic value, even though only the delta is transmitted through the network140. Moreover, the topic view module270continuously manages topic views once they are created. That is, once a topic view and its corresponding reference topics at reference topic paths are created, the topic view module270dynamically updates topic values for the reference topics, removes reference topics when selected topics change, and adds reference topics when selected topics change. Management occurs in perpetuity until an administrator removes the topic view from the topic tree230. Thus, an administrator is able to create a topic view that robustly restructures data in a manner that provides it more efficiently (e.g., less bandwidth, less processing power) to clients112within the environment100. IV. Example Topic View Workflow FIG.5illustrates a first workflow diagram for operating the distribution server to restructure content and information, according to one example embodiment.
The workflow shown inFIG.5can be performed by the distribution server110within the environment100but could be performed by other systems within other environments. Some or all of the steps may be performed by other entities or systems. In addition, the workflow may include different, additional, or fewer steps, and the steps may be performed in different orders. A distribution server (e.g., distribution server110) continuously publishes510structured data to a topic tree (e.g., topic tree230) as publishers (e.g., publisher114) publish topic updates to their managed topics. The topic tree is a hierarchically structured model storing content and information regarding the topics. Each topic corresponds to a topic path describing its location in the topic tree, and each topic corresponds to a node containing a topic value for the topic. Publishers publish structured data by providing updated topic values for their topics to the distribution server, and clients (e.g., client112) can receive the content and information via the distribution server. Within this example, an administrator of the topic tree within the distribution server desires to publish structured data to clients more efficiently as restructured data. To accomplish this, the administrator creates and sends a topic view creation request to the distribution server and, in response, the distribution server receives520the request. The topic view creation request includes syntax to create a topic view. The topic view restructures information received at a topic in the topic tree for more efficient transmission at one or more reference topics in the topic tree. The syntax of the topic view creation request defines a set of selected topics indicated by a set of topic selectors. Each topic selector corresponds to topic path(s) of the selected topics. The syntax of the topic view creation request also defines a topic view mapping. The topic view mapping maps the selected topics to a set of reference topics according to the syntax of the topic view mapping. In some cases, the syntax of the topic view creation request also defines a set of options that impart preferences to how topic updates are restructured. The distribution server creates530the set of reference topics in the topic tree according to the syntax of the topic view. Each of the reference topics has a reference path defined using the syntax of the topic view mapping, topic view selectors, and options in the topic view. Clients may subscribe to a reference topic by subscribing to its corresponding reference path. When creating the reference topics in the topic tree, the distribution server tags all of its selected topics. Because the selected topics are tagged, the distribution server will evaluate the topic view when the selected topics receive a topic update from a publisher. To provide additional context to the evaluation, recall that the publishers are continuously publishing structured data to the topic tree. After the topic view is created, a publisher publishes new structured data (e.g., a topic update) to a selected topic, and the distribution server receives540the topic update. The distribution server restructures550the topic update according to the topic view because the selected topic is tagged. The distribution server publishes550the restructured topic update to the reference topic according to the topic view. Any client subscribed to the reference topic will receive the restructured topic update. V.
Topic View Overview Topic View Mappings As described above, a topic view allows a client to view structured data as restructured data within a topic tree. In doing so, the distribution server110employs the topic view module270to create and manage reference topics and reference paths. Reference paths can be derived from one or more topic paths, topic values, and/or constant values within the structured data. For clarity, originally structured data will be referred to as “source” topic paths, topic values, and constant values. As additionally described above, a topic view includes a topic view mapping that maps source topic paths to reference topic paths. How a topic view mapping maps a source topic path to a reference topic path can be dictated by the domain specific language for the distribution server110. For example, a topic view that maps a source topic path in one branch of the topic tree to a reference path in another branch of the topic tree is: map?a/ to b/<path(1)>  (1) This specification of a topic view maps all source topics under the source topic node a to matching topics under the reference node b. For example, a/x/y/z will be mapped to b/x/y/z, and so on. The directive path(1) indicates that one reads “all path elements from index 1 and beyond.” Furthermore, how a topic view maps source topic paths to reference topic paths can depend on the data structures used within the topic tree. For example, in one implementation, the topic tree uses JSON topics and topic values. When using JSON topics, it is possible to extract a source topic value from the source topic and embed it in the path of the reference topic. It is also possible to map a subset of the source topic value to the reference value of the reference topic. The following example shows both of these features in use: map?accounts/ to balances/<scalar(/account)> as <value(/balance)>  (2) In this case, a source topic with a source topic path of accounts/account1234 has a source topic value indicating the account's remaining monetary value. The path and value may then be represented as follows: {"account" : "1234", "balance" : { "amount" : 12.57, "currency" : "USD" }}  (3) Now, consider an example where the source topic path and source topic value of equation (3) are mapped to reference paths and reference topic values using the topic view mapping in equation (2). That is, the source topic value is mapped to a reference topic with a reference path of balances/1234 and the corresponding mapped reference value is as follows: {"amount" : 12.57, "currency" : "USD"}  (4) Other examples of a topic view mapping source topics at source topic paths to reference topics at reference paths are also possible. Topic View Expansion The topic view module270also allows for topic view expansion. That is, rather than every source topic selected by a topic view mapping being mapped to exactly one reference topic, the topic view mapping can map a source topic that is a JSON array or JSON object to multiple, separate reference topics. As an example, the syntax of a topic view including a topic view expansion is: <expand(sourcePointer,pathPointer)>  (5) The sourcePointer parameter is a JSON pointer to an array or object, and may, or may not, be used depending on the implementation of the system. If the source pointer is not specified, then the system points to the root of the JSON value by default.
The optional pathPointer parameter is a JSON pointer indicating the reference topic path element, and may, or may not, be used depending on the implementation of the system. If the path pointer is not specified, then the index (in the case of an array element), or the key (in the case of an object) is used instead. Example Expansion—Single Array To illustrate, using a single JSON array, assume a topic tree includes a JSON source topic called allCars containing an array including the plate number, make, and model of all the cars included in the array. As an example, allCars may be in the following format: {"cars": [{ "reg":"HY58XPA", "type":"Ford", "model":"Sierra" },{ "reg":"PY59GCA", "type":"Fiat", "model":"Panda"},{ "reg":"VA63ABC", "type":"Ford", "model":"Ka"}]}  (6) The topic view module270then receives and implements the following topic view mapping: map allCars to cars/<expand(/cars,/reg)>  (7) Now consider an example where the topic view mapping in equation (7) is applied to the single array in equation (6). The topic view, when evaluated and applied, creates a set of reference topics. The reference topics have the following reference paths: cars/HY58XPA cars/PY59GCA cars/VA63ABC  (8) The distribution server is also enabled with source pointers. The source pointer determines the reference value of the reference topic. For example, the reference value for the reference topic at the reference path cars/HY58XPA is: {"reg":"HY58XPA","type":"Ford","model":"Sierra"}  (9) As described above, a topic view can include options. As an example, the second parameter in the topic view expansion of equation (7) is optional and can be omitted: map allCars to cars/<expand(/cars)>  (10) In this case, the reference topic path is taken from the index of the source topic array element, giving us these reference topic paths: cars/0 cars/1 cars/2  (11) In another example, the second, optional parameter can allow for expansion by the type of cars in the array in equation (6). To illustrate, the distribution server may receive and evaluate the following topic view expansion: map allCars to cars/<expand(/cars,/type)>/<scalar(/reg)>  (12) In this case, the mapping will result in reference topics named as follows: cars/Ford/HY58XPA cars/Fiat/PY59GCA cars/Ford/VA63ABC  (13) Note how the scalar directive is relative to the root of the expanded element, not the source topic value. Example Expansion—Objects In another example, using a JSON object, assume a topic tree includes a JSON source topic containing an object including the owner of a car and the car's registration, make, and model. To illustrate, the topic tree may include a source topic people/jsmith. In this case, the source topic is an object with the source topic value: {"name" : "John Smith", "car" : {"reg":"HY58XPA", "type":"Ford", "model":"Sierra"}}  (14) The distribution server then receives, evaluates and implements the following topic view mapping: map?people/ to <scalar(/name)>/car/<expand(/car)>  (15) Now consider an example where the topic view mapping in equation (15) is applied to the source topic object in equation (14).
In this case, evaluation of the topic view applies the mapping of equation (15) to the object in equation (14) and creates reference topics with the corresponding reference paths and reference values:

Path                    Value
John Smith/car/reg      "HY58XPA"
John Smith/car/type     "Ford"
John Smith/car/model    "Sierra"    (16)

Example Expansion—Nested Expands The distribution server can implement the expand directive more than once, or even many times, to unpack complex source topic values that contain nested arrays or objects. Each expand directive focuses the evaluation context to successively smaller parts of the source value. Accordingly, the JSON pointers in each directive (e.g., expansion) are evaluated relative to the value produced by the previous directive (e.g., expansion). For example, a topic tree includes an array for all cars, but also includes the drivers of those cars. The array may have the following values: {"cars": [{ "reg": "HY58XPA", "drivers": [{"name" : "Bill"}, {"name" : "Fred"}]},{ "reg": "PY59GCA", "drivers": [{"name" : "Jane"}, {"name" : "Fred"}]},{ "reg": "VA63ABC", "drivers": [{"name" : "Tom"}, {"name" : "John"}]}]}  (17) The distribution server can expand both levels of the array hierarchy. For example, the system may receive and evaluate a topic value expansion: map allCars to cars/<expand(/cars,/reg)>/drivers/<expand(/drivers,/name)>  (18) In this case, applying the topic view mapping in equation (18) to the source topic including the nested array in equation (17) generates the following reference topic values at reference paths:

Path                         Value
cars/HY58XPA/drivers/Bill    {"name":"Bill"}
cars/HY58XPA/drivers/Fred    {"name":"Fred"}
cars/PY59GCA/drivers/Jane    {"name":"Jane"}
cars/PY59GCA/drivers/Fred    {"name":"Fred"}
cars/VA63ABC/drivers/Tom     {"name":"Tom"}
cars/VA63ABC/drivers/John    {"name":"John"}    (19)

Topic view expansion is a powerful feature allowing real-time manipulation of how a user chooses to view topic values. That is, it enables a client to take structured data that is hard to interpret within the topic tree and generate restructured data within the topic tree that is more readily accessible. Topic View Inserts The topic view module270also allows for topic view insertion. That is, rather than only the source topics from selected topics in a topic view being included in reference topics, other source topics can be inserted into a reference topic. As an example, the syntax of a topic view including a topic view insert is: map?Some_Source_Topics/ to Mapped_Topics/<path(1)> insert Some_Other_Topic at /Some_JSON_Pointer This topic view maps all source topics beneath the source path Some_Source_Topics to similarly named reference topics under the reference path Mapped_Topics. The topic view also inserts the complete value of a source topic Some_Other_Topic into the current data with the key named Some_JSON_Pointer. More broadly, the insert function adheres to the following form: insert path_specification key source_key at insertion_key default constant  (21) The insert keyword introduces the clause and specifies the source path of the source topic from which data is to be obtained and merged with the current data value. The meaning of “current data value” can depend upon the other clauses in the specification and is defined in more detail below. The path_specification defines the source path of the source topic to insert data from and it is similar to the current target path mapping in that it can contain constants, <path( )> directives, and <scalar( )> directives.
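To make the directive mechanics concrete, here is a small Python sketch of resolving <path(...)> and <scalar(...)> directives against a source topic, under assumed semantics: path(i) takes the source-path elements from index i onward, and scalar(ptr) reads a top-level key from the current input data (nested JSON pointers are not handled in this sketch):

import re

def resolve(template, source_path, current_value):
    # Substitute <path(i)> and <scalar(/key)> directives in a path template.
    elements = source_path.split("/")

    def substitute(match):
        kind, arg = match.group(1), match.group(2)
        if kind == "path":
            return "/".join(elements[int(arg):])    # elements from index arg onward
        return str(current_value[arg.lstrip("/")])  # scalar: top-level key lookup

    return re.sub(r"<(path|scalar)\(([^)]*)\)>", substitute, template)

account = {"account": "1234", "balance": {"amount": 12.57, "currency": "USD"}}
print(resolve("b/<path(1)>", "a/x/y/z", {}))  # b/x/y/z  (cf. equation (1))
print(resolve("balances/<scalar(/account)>", "accounts/account1234", account))
# balances/1234  (cf. equation (2))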
Topic View Inserts

The topic view module 270 also allows for topic view insertion. That is, rather than a reference topic including only values from the selected source topics, values from other source topics can be inserted into a reference topic. As an example, the syntax of a topic view including a topic view insert is:

map ?Some_Source_Topics/ to Mapped_Topics/<path(1)> insert Some_Other_Topic at /Some_JSON_Pointer

This topic view maps all source topics beneath the source path Some_Source_Topics to similarly named reference topics under the reference path Mapped_Topics. The topic view also inserts the complete value of a source topic Some_Other_Topic into the current data with the key named Some_JSON_Pointer. More broadly, the insert function adheres to the following form:

insert path_specification key source_key at insertion_key default constant  (21)

The insert keyword introduces the clause and specifies the source path of the source topic from which data is to be obtained and merged with the current data value. The meaning of "current data value" can depend upon the other clauses in the specification and is defined in more detail below. The path_specification defines the source path of the source topic to insert data from, and it is similar to the current target path mapping in that it can contain constants, <path( )> directives, and <scalar( )> directives. The path directives operate on the source path of the selected topic, and the scalar directives operate on the current input data as defined above. To illustrate, consider the following path_specification:

Topic/<path(1,2)>/<scalar(/foo)>  (22)

In this example, the path_specification specifies insertion from a source topic whose source topic path is Topic/ followed by elements 1 to 3 of the source topic path, followed by / and the scalar value at the key /foo in the current input data.

The key source_key clause in equation (21) optionally specifies the key (a JSON pointer) of an item within the topic indicated by path_specification. If not specified, then the topic view module 270 assumes that the whole of the data value of the selected topic will be inserted.

The at insertion_key clause in equation (21) specifies a JSON pointer indicating the location of the insertion in the current data value. Typically, this would be an object key indicating the key of the value in the data. If the data already had an item with the same key, it would be overwritten; otherwise, a new item would be added to the parent indicated by the specified key. The parent would have to exist; otherwise, the insertion would not occur and a warning would be logged. The insertion key can also indicate an entry in an array. If an index key is provided, the existing entry at the specified index would be replaced. An index of one greater than the current number of entries could be used to append to an array, but it is much easier to use the special '-' character instead. For example, to append to the end of an array at MyArray, an insertion key of /MyArray/- can be used. Therefore, if the key was specified as at /Address/Street, it would indicate that the value from the selected topic to insert from would be inserted within the current data value within an object called Address at a key called Street. If the object currently had a key called Street, it would be overwritten; otherwise, it would be inserted into the object. If the resolved key indicates a scalar item, then no insertion will take place.

The default constant clause in equation (21) provides a fallback when a source topic to insert from cannot be found, or when the specified key within it does not exist such that an insertion cannot occur. That is, if a default is specified and the source topic to insert from or the key within it is not found, the constant value will be inserted as a scalar value at the reference topic.

Current Data Values

The insert clause specifies the source path of the source topic from which structured data is to be obtained and merged with the current data value. The current data value is typically the value from the selected source topic; however, there are some situations where this is not the case. In a first example, if the topic view mapping includes one or more expand directives, the current data value will be the expanded data value. In a second example, if the insert is preceded by an as <value(key)> directive, the current data will be the data indicated by the key. In a third example, if the insert is preceded by another insert clause, the current data will be the output from that clause.
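As a rough illustration of how an insert clause could resolve its insertion key against the current data value, consider the following sketch. The insert_at helper, its handling of missing parents, and the '-' append key are modeled from the description above and are assumptions, not an actual implementation:

```python
# Sketch of insert-clause resolution: merge a value into the current
# data value at a JSON-pointer-like insertion key. Hypothetical helper;
# the real topic view module is not shown in the source document.

def insert_at(current, insertion_key, value, default=None):
    parts = insertion_key.strip("/").split("/")
    parent = current
    for part in parts[:-1]:
        if not (isinstance(parent, dict) and part in parent):
            return current  # parent must exist, else no insertion occurs
        parent = parent[part]
    key = parts[-1]
    if value is None:          # source topic or key not found
        if default is None:
            return current     # no default: nothing is inserted
        value = default        # default constant inserted as a scalar
    if isinstance(parent, dict):
        parent[key] = value            # overwrite or add the object key
    elif isinstance(parent, list):
        if key == "-":
            parent.append(value)       # '-' appends to the array
        else:
            parent[int(key)] = value   # replace the entry at the index
    return current

data = {"Address": {}}
insert_at(data, "/Address/Street", "High Street")
print(data)  # {'Address': {'Street': 'High Street'}}
```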
Example Inserts

In an example, consider the following topic mapping including an insert directive:

map Topic1 to Topic2 insert AnotherTopic at /other  (23)

Here, source Topic1 is mapped to reference Topic2, and the data within source AnotherTopic is inserted into it at the key named other. If AnotherTopic does not exist (or is not JSON or scalar), Topic2 will be created with the same value as Topic1, but with nothing inserted. Further, in this case, the value of Topic1 must be an object, because if it were an array, then no insertion would occur.

In another example, consider the following topic mapping including an insert directive:

map Topic1 to Topic2 insert AnotherTopic at /other default "unknown"  (24)

This example is largely similar to the previous example. However, in this example, if source topic AnotherTopic does not exist, then reference Topic2 will be created with the key other inserted with a scalar value of unknown.

In another example, consider the following topic mapping including an insert directive:

map ?Topics/ to Mapped/<path(1)> insert AnotherTopic at /other  (25)

This example is largely similar to the previous example. However, in this case, all of the topics under the source path Topics will be selected and mapped to topics with the same name under the reference path Mapped. Every selected topic will have the value of AnotherTopic inserted into it (assuming they are JSON objects). If AnotherTopic does not exist, no insertions will take place.

In another example, consider the following topic mapping including an insert directive:

map ?Topics/ to Mapped/<path(1)> insert Others/<path(1)> at /other  (26)

This example is more complex than those demonstrated previously. In this case, each selected topic has an insertion from a source topic with the same path under the source path Others. For example, source Topics/A/B would generate a reference topic at reference path Mapped/A/B, which has the value of Others/A/B inserted at the key other.

In another example, consider the following topic mapping including an insert directive:

map ?Topics/ to Mapped/<path(1)> insert Others/<scalar(/foo)> at /other  (27)

This example is largely similar to the previous example. However, in this case, the source path of the insertion topic will be derived from a topic value within the selected source topic. Thus, if source topic Topics/A/B has a value of "bar" at key "foo", then the topic selected to insert from would be Others/bar.

In another example, consider the following topic mapping including an insert directive:

map ?Topics/ to Mapped/<path(1)> insert Others/<path(1)> key /foo at /other  (28)

All previous examples have shown the insertion of the whole value of another topic. Here, the key keyword is used to select a specific item foo within the insertion topic value. If the insertion topic does not have a value with the key foo, then a reference topic will be created but no insertion will occur, as no default has been specified.

When expand directives are used, the insert will occur for every output from the expansion. Within this context, consider the following topic mapping including an insert directive with an expand directive:

map Topic1 to Expanded/<expand( )> insert AnotherTopic at /other  (29)

Assuming that the content of source topic Topic1 is an array of objects, each array element will be expanded to produce a new topic at reference path Expanded/0, Expanded/1, and so on, and each resulting reference topic will have the value from AnotherTopic inserted at the key /other.

Furthermore, insert clauses can be chained. Within this context, consider the following topic mapping including chained insert clauses:

map Topic1 to Topic2 insert AnotherTopic at /other insert YetAnotherTopic at /yetAnother  (30)

In the above example, values from two different topics are inserted into the data to produce the reference topic.

Finally, the insert clause can be used along with as <value( )> clauses. Within this context, consider the following topic mapping including an insert directive with an as <value( )> directive:

map Topic1 to Topic2 insert AnotherTopic at /foo/bar as <value(/foo)>  (31)

Here, the data from AnotherTopic is inserted at the key /foo/bar, and then the full value of foo is projected.
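Chained inserts can be illustrated by applying the same insertion logic twice in sequence, since the output of the first insert clause becomes the current data value for the second. The topic values below are invented for illustration, mirroring equation (30):

```python
# Chained inserts sketch, mirroring equation (30): the output of the
# first insert clause becomes the current data value for the second.
# Topic values are invented for illustration.

topic1 = {"base": True}
another_topic = {"a": 1}
yet_another_topic = {"b": 2}

current = dict(topic1)                      # start from the selected topic
current["other"] = another_topic            # insert AnotherTopic at /other
current["yetAnother"] = yet_another_topic   # insert YetAnotherTopic at /yetAnother
print(current)
# {'base': True, 'other': {'a': 1}, 'yetAnother': {'b': 2}}
```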
Adding Topics to a Topic Tree

In some examples, adding a topic to a topic tree can trigger an evaluation of a topic view and create new reference topics and topic paths. For example, consider a topic view including a topic selector for A* that publishes structured data as restructured data when a publisher updates the topic values at A. If the publisher creates a sub-topic of A, then the topic view mapping may create new reference paths, depending on how the source topic paths are mapped to reference topic paths. In this case, the distribution server will reevaluate the topic view and create any new reference paths and reference topics based on the topic view mapping. In some examples, this reevaluation may not add new reference topics, while in others it may add new reference topics.

Deleting Topics and Topic Views

Deleting a selected topic from a topic tree also triggers an evaluation of the topic view to remove any pertinent reference topics and reference topic paths. In this case, the distribution server determines whether the removed topic is tagged as being a selected topic in any of the topic views. If so, the distribution server performs an evaluation to remove any necessary reference paths. In this evaluation, the distribution server identifies any reference paths for removal by determining what (if any) reference topic path(s) would be created for the deleted topic based on the topic view mapping and removing those reference topic paths.

Deleting a topic view is similar to the above example. That is, the topic view is reevaluated to determine what reference paths would be created by the topic view mapping, and those reference topic paths are removed.

VI. Additional Considerations

The foregoing description has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the description to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure.

Some portions of this description describe the embodiments in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof.
Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a non-transitory computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described. Embodiments may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability. Embodiments may also relate to a product that is produced by a computing process described herein. Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any embodiment of a computer program product or other data combination described herein. Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the embodiments be limited not by this detailed description, but rather by any claims that issue on an application based herein. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
While the present disclosure is subject to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will herein be described in detail. The present disclosure should be understood to not be limited to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure.

DETAILED DESCRIPTION

Methods and systems for execution of non-blocking transactions at a database are disclosed. It will be appreciated that for simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the example embodiments described herein. However, it will be understood by those of ordinary skill in the art that the example embodiments described herein may be practiced without these specific details.

Motivation for Some Embodiments

As described above, it has proven difficult to implement solutions for serving consistent, present-time reads from a database, while also minimizing the effects of contending transactions (e.g., between read transactions and write transactions) at particular key-values and/or ranges of the database. Existing database topology techniques require a single node (e.g., a leaseholder node) of a cluster to serve reads to client devices for a particular subset (e.g., range) of data, rather than allowing additional nodes (e.g., follower nodes) on which the data is replicated to serve reads. Adding an ability to the existing database topology to serve reads from follower nodes can be beneficial to the database, both because it can reduce read latencies by avoiding network hops (e.g., in geo-partitioned data table configurations) and because it can serve as a form of load-balancing for concentrated read traffic at the leaseholder node, thereby reducing tail latencies. Further, adding an ability to the existing database topology to serve consistent, present-time reads from any node storing a replica of the data can make the data accessible to more read transactions and accessible to read-write transactions.

Existing database topology techniques result in conflicting transactions when transactions overlap in time. Conflict between transactions is especially problematic for read-heavy data, where ongoing write transactions on the read-heavy data can cause subsequent read transactions to be blocked from the read-heavy data, thereby increasing read latencies. Adding an ability to perform writes on read-heavy data without causing conflicting read transactions to block would be beneficial for providing predictable read latencies. Such predictability would be especially important in reference (e.g., global) data table configurations, where read/write contention can significantly delay read transactions (e.g., for up to hundreds of milliseconds) as the read transactions are routed to navigate network latencies (e.g., wide area network latencies) in order to resolve conflicts. Thus, there is a pressing need for improved techniques for a database to serve consistent, low latency reads at a present time (e.g., non-stale data), while minimizing the disruption (e.g., blocking) from contending transactions.

Terms

“Cluster” generally refers to a deployment of computing devices that comprise a database.
A cluster may be located in one or more geographic locations (e.g., data centers). The one or more geographic locations may be located within a single geographic region (e.g., eastern United States, central United States, etc.) or within more than one geographic region. For example, a cluster may be located in both the eastern United States and the western United States, with 2 data centers in the eastern United States and 4 data centers in the western United States.

“Node” generally refers to an individual computing device that is a part of a cluster. A node may join with one or more other nodes to form a cluster. One or more nodes that comprise a cluster may store data (e.g., tables, indexes, etc.) in a map of key-value pairs. A node may store a “range”, which can be a subset of the key-value pairs (or all of the key-value pairs, depending on the size of the range) stored by the cluster. A table and its secondary indexes can be mapped to one or more ranges, where each key-value pair in a range may represent a single row in the table (which can also be known as the primary index because the table is sorted by the primary key) or a single row in a secondary index. Based on the range reaching or exceeding a threshold storage size, the range may split into two ranges. For example, based on reaching 512 mebibytes (MiB) in size, the range may split into two ranges. Successive ranges may split into one or more ranges based on reaching or exceeding a threshold storage size.

“Replica” generally refers to a copy of a range. A range may be replicated a threshold number of times. For example, a range may be replicated 3 times into 3 distinct replicas. Each replica of a range may be stored on a distinct node of a cluster. For example, 3 replicas of a range may each be stored on a different node of a cluster. In some cases, a range may be required to be replicated a minimum of 3 times.

“Leaseholder” or “leaseholder replica” generally refers to a replica of a range that is configured to hold the lease for the replicas of the range. The leaseholder may receive and/or coordinate read transactions and write transactions directed to one or more key-value pairs stored by the range. “Leaseholder node” may generally refer to the node of the cluster that stores the leaseholder replica. The leaseholder may receive read transactions and serve reads to client devices indicated by the read transactions. Other replicas of the range that are not the leaseholder may receive read transactions and route the read transactions to the leaseholder, such that the leaseholder can serve the read based on the read transaction.

“Raft leader” or “leader” generally refers to a replica of the range that is a leader for managing write transactions for a range. In some cases, the leader and the leaseholder are the same replica for a range. In other cases, the leader and the leaseholder are not the same replica for a range. “Raft leader node” or “leader node” generally refers to a node of the cluster that stores the leader. The leader may determine that a threshold number of the replicas of a range agree to commit a write transaction prior to committing the write transaction. In some cases, the threshold number of the replicas of the range may be a majority of the replicas of the range.

“Follower” generally refers to a replica of the range that is not the leader. “Follower node” may generally refer to a node of the cluster that stores the follower replica. Follower replicas may receive write transactions from the leader replica.
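The relationship among ranges, replicas, and size-based splitting described in these terms can be sketched briefly. In the following Python fragment, only the 512 MiB split threshold and the example replication factor of 3 come from the description; the class shape and helpers are illustrative assumptions:

```python
# Simplified model of ranges and replicas. The structure is illustrative;
# only the 512 MiB split threshold and 3x replication come from the text.

SPLIT_THRESHOLD_BYTES = 512 * 1024 * 1024  # 512 MiB
REPLICATION_FACTOR = 3                     # e.g., 3 replicas per range

class Range:
    def __init__(self, keys):
        self.keys = keys  # sorted (key, value) pairs in this range

    def size_bytes(self):
        return sum(len(k) + len(v) for k, v in self.keys)

    def maybe_split(self):
        """Split into two ranges once the threshold is reached."""
        if self.size_bytes() < SPLIT_THRESHOLD_BYTES:
            return [self]
        mid = len(self.keys) // 2
        return [Range(self.keys[:mid]), Range(self.keys[mid:])]

def place_replicas(rng, nodes):
    """Each replica of a range is stored on a distinct node."""
    return {node: rng for node in nodes[:REPLICATION_FACTOR]}

r = Range([(b"a", b"1"), (b"b", b"2")])
print(len(r.maybe_split()), place_replicas(r, ["n1", "n2", "n3", "n4"]).keys())
```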
“Raft log” generally refers to a time-ordered log of write transactions to a range, where the log of write transactions includes write transactions agreed to by a threshold number of the replicas of the range. Each replica of a range may include a raft log stored on the node that stores the replica. A raft log may be a source of truth for replication among nodes for a range.

“Consistency” generally refers to causality and the ordering of transactions within a distributed system. Consistency defines rules for operations within the distributed system, such that data stored by the system will remain consistent with respect to read and write operations originating from different sources.

“Consensus” generally refers to a threshold number of replicas for a range acknowledging a write transaction based on receiving the write transaction. In some cases, the threshold number of replicas may be a majority of replicas for a range. Consensus may be achieved even if one or more nodes storing replicas of a range are offline, provided that the threshold number of replicas for the range can acknowledge the write transaction. Based on achieving consensus, data modified by the write transaction may be stored within the ranges targeted by the write transaction.

“Replication” generally refers to creating and distributing copies (e.g., replicas) of the data stored by the cluster. In some cases, replication can ensure that replicas of a range remain consistent among the nodes that each comprise a replica of the range. In some cases, replication may be synchronous, such that write transactions are acknowledged and/or otherwise propagated to a threshold number of replicas of a range before being considered committed to the range.

Database Overview

A database stored by a cluster of nodes may operate based on one or more remote procedure calls (RPCs). The database may be comprised of a key-value store distributed among the nodes of the cluster. In some cases, the RPCs may be SQL RPCs. In other cases, RPCs based on other programming languages may be used. Nodes of the cluster may receive SQL RPCs from client devices. After receiving SQL RPCs, nodes may convert the SQL RPCs into operations that may operate on the distributed key-value store.

In some embodiments, as described herein, the key-value store of the database may be comprised of one or more ranges. A range may have a configured storage size. For example, a range may be 512 MiB. Each range may be replicated to more than one node to maintain data survivability. For example, each range may be replicated to at least 3 nodes. By replicating each range to more than one node, if a node fails, replica(s) of the range would still exist on other nodes, such that the range can still be accessed by client devices and replicated to other nodes of the cluster.

In some embodiments, a node may receive a read transaction from a client device. A node may receive a write transaction from a client device. In some cases, a node can receive a read transaction or a write transaction from another node of the cluster. For example, a leaseholder node may receive a read transaction from a node that originally received the read transaction from a client device. In some cases, a node can send a read transaction to another node of the cluster. For example, a node that received a read transaction, but cannot serve the read transaction, may send the read transaction to the leaseholder node. In some cases, if a node receives a read or write transaction that it cannot directly serve, the node may send and/or otherwise route the transaction to the node that can serve the transaction.
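The routing behavior described above, in which a node that cannot directly serve a transaction forwards it to the node that can, may be sketched as follows. The leaseholder map and toy key partitioning are assumptions for illustration:

```python
# Hypothetical routing sketch: a node that cannot serve a transaction
# forwards it to the leaseholder node for the range containing the key.

leaseholders = {"range-1": "node-A", "range-2": "node-B"}  # assumed map

def range_for_key(key):
    # Toy partitioning: keys < "m" fall in range-1, the rest in range-2.
    return "range-1" if key < "m" else "range-2"

def route_read(local_node, key):
    target = leaseholders[range_for_key(key)]
    if target == local_node:
        return f"{local_node} serves read for {key!r}"
    return f"{local_node} routes read for {key!r} to {target}"

print(route_read("node-A", "apple"))   # served locally by node-A
print(route_read("node-A", "zebra"))   # routed to node-B
```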
In some embodiments, modifications to the data of a range may rely on a consensus algorithm to ensure that a threshold number of replicas of the range agree to commit the change. The threshold may be a majority of the replicas of the range. The consensus algorithm may enable consistent reads of data stored by a range.

In some embodiments, data may be written to and/or read from a storage device of a node using a storage engine that tracks the timestamp associated with the data. By tracking the timestamp associated with the data, client devices may query for historical data from a specific period of time.

Database Layers

In some embodiments, the database architecture for the cluster of nodes may be comprised of one or more layers. The one or more layers may process received SQL RPCs into actionable processes to access, modify, store, and return data to client devices, while providing for data replication and consistency among nodes of a cluster. The layers may comprise one or more of: a SQL layer, a transactional layer, a distribution layer, a replication layer, and a storage layer.

SQL Layer

In some embodiments, the database architecture for the cluster may include a SQL layer. In some cases, the database may operate using at least some American National Standards Institute (ANSI) defined SQL standards. The SQL layer may operate as an intermediary between client devices and nodes of the cluster. Client devices may interact with and/or otherwise access a database using SQL statements. Client devices may include a SQL application programming interface (API) to communicate with the cluster. SQL statements may reach a node of the cluster via a wire protocol. For example, SQL statements may be sent to a node of the cluster via a PostgreSQL wire protocol. The SQL layer may convert the SQL statements (received from the client devices) to a plan of key-value (KV) operations. The SQL layer may send the converted KV operations to another layer of the database.

Based on receiving a SQL request from a client device at a node of the cluster, the SQL layer may parse the SQL request in view of the supported syntax of the database. Based on parsing the SQL request, the SQL layer may convert a query of the SQL request into an abstract syntax tree (AST) to create a query plan associated with the SQL request. The AST may be used to generate a query plan based on three phases. In phase 1, the AST may be transformed into a logical query plan, where the SQL layer may perform semantic analysis. In some cases, as a part of semantic analysis, the SQL layer may determine whether the query of the SQL request is valid, resolve names within the query, remove intermediate computations that are determined to be unnecessary, and/or determine data types for intermediate results of the query. In phase 2, the SQL layer may simplify the logical query plan using one or more transformation optimizations. In phase 3, the SQL layer may optimize the logical query plan using a search algorithm, wherein the search algorithm evaluates one or more methods of executing the query and selects the method having the lowest cost. In some cases, the cost may be measured in time. Cost may be determined based on estimating the time each node in the query plan will use to process all results of the query and modeling data flow through the query plan. The result of phase 3 may be an optimized logical query plan.
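Phase 3's cost-based search can be illustrated minimally. The candidate plans and cost numbers below are invented; only the rule of selecting the method with the lowest estimated cost comes from the description:

```python
# Minimal sketch of phase 3: evaluate candidate execution methods and
# select the one with the lowest estimated cost (measured in time).
# Candidate plans and their cost estimates are invented for illustration.

candidate_plans = [
    {"method": "full table scan", "estimated_cost_ms": 120.0},
    {"method": "secondary index lookup", "estimated_cost_ms": 8.5},
    {"method": "index join", "estimated_cost_ms": 14.2},
]

best = min(candidate_plans, key=lambda plan: plan["estimated_cost_ms"])
print(best["method"])  # secondary index lookup
```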
In some embodiments, based on determining an optimized logical query plan for the SQL request, the SQL layer may determine which nodes of the cluster may be included in execution of the query to generate a physical plan. The SQL layer may determine the nodes to be included in the execution of the query based on locality (e.g., location) information for the range. For example, the SQL layer may distribute the query to nodes located close to the geographic location of the stored data. Based on generating the physical plan, the SQL layer may send the physical plan to one or more nodes for execution. On each node that received the physical plan, the SQL layer may determine a part of the query. One or more logical processors located at each node may communicate with each other over a logical flow of data to determine one or more results for the query. The results of the query may be combined and sent back to the node where the SQL request was received. Based on receiving the combined results of the query at the node where the SQL request was received, the SQL layer may send the combined results to the client device.

To execute the query, each processor of a node may require encoded data for the scalar values manipulated by the query. The encoded data may be binary data that is different from the string data used in the SQL layer. Based on requiring binary data, the contents of the SQL query may be encoded to binary form, such that the binary data may be communicated between logical processors and/or read from a storage device of the node.

In some embodiments, the SQL layer may encode data for use by the lower layers of the database during query execution. The SQL layer may encode data by converting row data (e.g., from a SQL representation as strings) into bytes for use by lower layers of the database. Based on receiving data as bytes (e.g., returned from lower layers after query execution), the SQL layer may convert the bytes into string data, such that the string data may be sent to the client device. In some cases, such byte encoding may preserve the order of the received string data. By storing bytes in the same order as the string data as it was received, the database may efficiently scan for KV data stored in ranges. In some embodiments, for non-indexed columns of a range, the SQL layer may instead use an encoding method (e.g., value encoding) that requires less storage capacity. Value encoding may not preserve the ordering of the received string data of the SQL query.
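Order-preserving byte encoding can be demonstrated with a small example. UTF-8 bytes of strings compare in the same order as the strings themselves, which is the property the description relies on for efficient range scans; the encoding below is a stand-in, as the database's actual encoding is not specified here:

```python
# Order-preserving encoding sketch: if encode(a) < encode(b) exactly
# when a < b, sorted KV data can be range-scanned efficiently.
# UTF-8 is used as a stand-in for the database's actual key encoding.

def encode_key(s: str) -> bytes:
    return s.encode("utf-8")

rows = ["apple", "banana", "cherry"]
encoded = [encode_key(r) for r in rows]

# Byte order matches string order, so scans over encoded keys return
# rows in the same order as their SQL string representation.
assert sorted(encoded) == [encode_key(r) for r in sorted(rows)]
print([e.hex() for e in encoded])
```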
Transaction Layer

In some embodiments, the database architecture for the cluster may include a transaction layer. The transaction layer may enable atomicity, consistency, isolation, and durability (ACID) semantics for transactions within the database. The transaction layer may receive binary KV operations from the SQL layer and control KV operations sent to a distribution layer.

In some embodiments, for write transactions, the transaction layer may generate one or more locks. A lock may represent a provisional, uncommitted state. The lock may be written as part of the write transaction. The database architecture may include multiple lock types. In some cases, the transaction layer may generate unreplicated locks, which may be stored in an in-memory lock table that is specific to the node on which the write transaction executes. An unreplicated lock may not be replicated based on the consensus algorithm as described herein. In other cases, the transaction layer may generate one or more replicated locks (or write intents). A replicated lock may operate as a provisional value and an exclusive lock on the node on which the write transaction executed. A replicated lock may be replicated to other nodes of the cluster comprising the range based on the consensus algorithm as described herein. In some cases, a replicated lock may be known as a "write intent".

In some embodiments, a transaction record may be stored in a replica of a range where a first write transaction occurs. A transaction record may include a state of the transaction. States for a transaction may include the following: pending, staging, committed, or aborted. A pending state may indicate that a write intent's transaction is in progress. A staging state may be used to enable parallel commits, as to be described herein. A write transaction may or may not be in a committed state during a staging state. A committed state may indicate that the write transaction has committed. An aborted state may indicate that the write transaction has been aborted, and the values (e.g., values written to the range) associated with the write transaction may be discarded and/or otherwise dropped from the range.

As write intents are generated by the transaction layer as a part of a write transaction, the transaction layer may check for newer (e.g., more recent) committed values at the KVs of the range on which the write transaction is operating. If newer committed values exist at the KVs of the range, the write transaction may be restarted. Alternately, if the write transaction identifies write intents at the KVs of the range, the write transaction may be resolved as a transaction conflict, as to be described herein.

In some embodiments, for read transactions, the transaction layer may execute a read transaction at KVs of a range indicated by the read transaction. The transaction layer may execute the read transaction if the read transaction is not aborted. The read transaction may read multi-version concurrency control (MVCC) values at the KVs of the range, as to be described herein in "Storage Layer". Alternately, the read transaction may read write intents at the KVs, such that the read transaction may be resolved as a transaction conflict, as to be described herein.

In some embodiments, to commit a write transaction, the transaction layer may determine the transaction record of the write transaction as it executes. The transaction layer may restart the write transaction based on determining that the state of the write transaction indicated by the transaction record is aborted. Alternately, the transaction layer may determine the transaction record to indicate the state as pending or staging. Based on the transaction record indicating the write transaction is in a pending state, the transaction layer may set the transaction record to staging and determine whether the write intents of the write transaction have succeeded (i.e., been replicated to the other nodes of the cluster storing the range). If the write intents have succeeded, the transaction layer may report the commit of the transaction to the client device that initiated the write transaction.
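The commit sequence just described, in which the record moves from pending to staging and the commit is reported once every write intent has replicated, can be sketched as follows. The record structure and helper are assumptions:

```python
# Sketch of the commit check: a pending transaction record moves to
# staging, and the commit is reported once every write intent has
# replicated. The record structure is an assumption for illustration.

def try_commit(record, intents_replicated):
    if record["state"] == "aborted":
        return "restart transaction"
    if record["state"] == "pending":
        record["state"] = "staging"
    if record["state"] == "staging" and all(intents_replicated):
        return "report commit to client"
    return "wait for write intents to replicate"

record = {"state": "pending"}
print(try_commit(record, intents_replicated=[True, True, True]))
# -> report commit to client; the record is now in the staging state
```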
In some embodiments, based on committing a write transaction, the transaction layer may clean up the committed write transaction. A coordinating node of the cluster of nodes to which the write transaction was directed may clean up the committed write transaction via the transaction layer. A coordinating node may be a node that comprises the range that is the subject of the transaction. The coordinating node may track a record of the KVs that were the subject of the write transaction. To clean up the transaction, the coordinating node may modify the state of the transaction record for the write transaction from staging to committed. In some cases, the coordinating node may resolve the write intents of the write transaction to MVCC (i.e., committed) values by removing the pointer to the transaction record. Based on removing the pointer to the transaction record for the write transaction, the coordinating node may delete the write intents of the transaction.

In some embodiments, the transaction layer may track the timing of transactions (e.g., to maintain serializability). The transaction layer may implement hybrid-logical clocks (HLCs) to track time within the cluster. An HLC may be composed of a physical component (e.g., which may be close to local wall time) and a logical component (e.g., which is used to distinguish between events with the same physical component). HLC time may always be greater than or equal to the wall time. Each node may include a local HLC. For a transaction, the gateway node (e.g., the node that initially receives a transaction) may determine a timestamp for the transaction based on the HLC time for the node. The transaction layer may enable transaction timestamps based on HLC time. A timestamp within the cluster may be used to track versions of KVs (e.g., through MVCC, as to be described herein) and provide guaranteed transactional isolation.

For a transaction, based on a node sending a transaction to another node, the node may include the timestamp generated by the local HLC (i.e., the HLC of the node) with the transaction. Based on receiving a request from another node (i.e., the sender node), a node (i.e., the receiver node) may inform the local HLC of the timestamp supplied with the transaction by the sender node. In some cases, the receiver node may update the local HLC of the receiver node with the timestamp included in the received transaction. Such a process may ensure that all data read and/or written to a node has a timestamp less than the HLC time at the node. Accordingly, the leaseholder for a range may serve reads for data stored by the leaseholder, where the read transaction that reads the data includes an HLC time greater than the HLC timestamp of the MVCC value read by the read transaction (i.e., the read occurs "after" the write).

In some embodiments, to maintain data consistency, the transaction layer may cause a node to crash. A node may crash if the node detects that its local HLC is out of sync with at least half of the other nodes in the cluster. In some cases, out of sync may be defined as 80% of the maximum allowed offset. A maximum allowed offset may be the maximum allowed timestamp difference between nodes of the cluster. In an example, the maximum allowed offset may be 500 ms.
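A minimal hybrid-logical clock can be sketched directly from this description: a physical component tracks wall time, and a logical component distinguishes events that share a physical component. The class below is an illustrative assumption, not the database's actual clock implementation:

```python
import time

# Minimal hybrid-logical clock (HLC) sketch: a physical wall-time
# component plus a logical counter to distinguish events that share
# the same physical component. Illustrative, not the actual clock.

class HLC:
    def __init__(self):
        self.physical = 0
        self.logical = 0

    def now(self):
        wall = time.time_ns()
        if wall > self.physical:
            self.physical, self.logical = wall, 0
        else:
            self.logical += 1  # same physical time: tick the logical part
        return (self.physical, self.logical)

    def update(self, remote):
        """Advance the local HLC on receipt of a remote timestamp."""
        observed = max(self.now(), remote)
        self.physical, self.logical = observed[0], observed[1] + 1
        return (self.physical, self.logical)

clock = HLC()
t1 = clock.now()
t2 = clock.update((t1[0] + 1_000_000, 0))  # adopt a sender's faster clock
assert t2 > t1                             # HLC time never moves backwards
```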
To provide serializability within the cluster, based on a transaction reading a value, the transaction layer may store the operation's timestamp in a timestamp cache. The timestamp cache may include the latest timestamp (i.e., the furthest ahead in time) for the value(s) read by the transaction. Based on the execution of a write transaction, the transaction layer may compare the timestamp of the write transaction to the timestamp cache. If the timestamp is less than the latest time of the timestamp cache, the transaction layer may attempt to advance the timestamp of the write transaction forward to a later time. In some cases, advancing the timestamp may cause the write transaction to restart in the second phase of the transaction, as to be described herein with respect to read refreshing.

As described herein, the SQL layer may convert SQL statements (e.g., received from client devices) to KV operations. KV operations generated from the SQL layer may use a Client Transaction (CT) transactional interface of the transaction layer to interact with the KVs stored by the cluster. The CT transactional interface may include a Transaction Coordination Sender (TCS). The TCS may perform one or more operations as a part of the transaction layer. Based on the execution of a transaction, the TCS may send (e.g., periodically send) "heartbeat" messages to the transaction record for the transaction. These messages may indicate that the transaction should keep executing (i.e., be kept alive). If the TCS fails to send the "heartbeat" messages, the transaction layer may modify the transaction record to an aborted status. The TCS may track each written KV and/or KV range during the course of a transaction. In some embodiments, the TCS may clean and/or otherwise clear accumulated transaction operations. The TCS may clear an accumulated write intent for a write transaction based on the status of the transaction changing to committed or aborted.

As described herein, to track the status of a transaction during execution, the transaction layer writes a value (known as a transaction record) to the KV store. Write intents of a transaction may route conflicting transactions to the transaction record, such that a conflicting transaction may determine a status for conflicting write intents. The transaction layer may write transaction records to the same range as the first KV indicated in a transaction. The TCS may track the first KV indicated in a transaction. The transaction layer may generate the transaction record when one of the following occurs: the write operation commits; the TCS sends heartbeat messages for the transaction; or an operation forces the transaction to abort.

As described herein, a transaction record may have one of the following states: pending, committed, staging, or aborted. In some cases, the transaction record may not exist. If a transaction encounters a write intent where a transaction record corresponding to the write intent does not exist, the transaction may use the timestamp of the write intent to determine how to proceed. If the timestamp of the write intent is within a transaction liveness threshold, the write intent may be treated as pending. If the timestamp of the write intent is not within the transaction liveness threshold, the write intent may be treated as aborted. A transaction liveness threshold may be a duration based on the period for sending "heartbeat" messages. For example, the transaction liveness threshold may be a duration lasting for 5 "heartbeat" message periods, such that after 5 missed heartbeat messages, a transaction may be aborted. The transaction record for a committed transaction may remain until each of the write intents of the transaction is converted to MVCC values.
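The timestamp cache check described above can be sketched in a few lines: reads record their timestamps per key, and a later write below a cached read timestamp is advanced past it. The dictionary-based cache is an illustrative assumption:

```python
# Timestamp cache sketch: reads record their timestamp per key; a write
# whose timestamp is below a cached read timestamp is advanced past it
# to preserve serializability. The structure is illustrative only.

timestamp_cache = {}  # key -> latest read timestamp

def record_read(key, ts):
    timestamp_cache[key] = max(timestamp_cache.get(key, 0), ts)

def write_timestamp(key, proposed_ts):
    latest_read = timestamp_cache.get(key, 0)
    if proposed_ts < latest_read:
        return latest_read + 1  # attempt to advance the write forward
    return proposed_ts

record_read("k1", ts=10)
print(write_timestamp("k1", proposed_ts=7))   # advanced to 11
print(write_timestamp("k1", proposed_ts=15))  # unchanged
```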
As described herein, in the transaction layer, values may not be written directly to the storage layer during a write transaction. Values may be written in a provisional (i.e., uncommitted) state known as a write intent. Write intents may be MVCC values with a pointer to the transaction record to which the MVCC value belongs. Based on interacting with a write intent (instead of an MVCC value), an operation may determine the status of the transaction record, such that the operation may determine how to interpret the write intent. As described herein, if a transaction record is not found for a write intent, the operation may determine the timestamp of the write intent to evaluate whether or not the write intent may be considered to be expired.

In some embodiments, based on encountering and/or otherwise interacting with a write intent, an operation may attempt to resolve the write intent. The operation may resolve the write intent based on the state of the write intent identified in the transaction record. For a committed state, the operation may read the write intent and convert the write intent to an MVCC value. The operation may convert the write intent to an MVCC value by removing the write intent's pointer to the transaction record. For an aborted state, the operation may ignore the write intent (e.g., the operation may not read the write intent), and the operation may delete the write intent. For a pending state, a transaction conflict may exist, and the transaction conflict may be resolved as described herein. For a staging state, the operation may determine whether the staging transaction is still in progress. The operation may determine whether the transaction is still in progress by verifying that the TCS is still sending "heartbeat" messages to the transaction record. If the operation verifies that the TCS is sending "heartbeat" messages to the record, the operation should wait. For a record that does not exist, the operation may determine the transaction state to be pending if the write intent was created within the transaction liveness threshold as described herein. If the write intent was not created within the transaction liveness threshold, the operation may determine the write intent to be aborted.

In some embodiments, the transaction layer may include a concurrency manager for concurrency control. The concurrency manager may sequence incoming requests (e.g., from transactions) and may provide isolation between the transactions that issued those requests and that intend to perform conflicting operations. This activity may be known as concurrency control. The concurrency manager may combine the operations of a latch manager and a lock table to accomplish this work. The latch manager may sequence the incoming requests and may provide isolation between those requests. The lock table may provide locking and sequencing of requests (in combination with the latch manager). The lock table may be a per-node, in-memory data structure. The lock table may hold a collection of locks acquired by transactions that are in progress, as to be described herein.

As described herein, the concurrency manager may be a structure that sequences incoming requests and provides isolation between the transactions that issued those requests, where the requests intend to perform conflicting operations. During sequencing, the concurrency manager may identify conflicts. The concurrency manager may resolve conflicts based on passive queuing and/or active pushing. Once a request has been sequenced by the concurrency manager, the request may execute (e.g., without other conflicting requests/operations) based on the isolation provided by the concurrency manager. This isolation may last for the duration of the request. The isolation may terminate based on (e.g., after) completion of the request.
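The per-state resolution rules above translate naturally into a dispatch function. This sketch assumes simple dictionaries for intents and records; the actual storage format is not given in the description:

```python
# Write intent resolution sketch, following the per-state rules above.
# Intent and record shapes are illustrative assumptions.

def resolve_intent(intent, record, within_liveness_threshold):
    if record is None:
        # No record: fall back to the age of the write intent.
        return "treat as pending" if within_liveness_threshold else "treat as aborted"
    state = record["state"]
    if state == "committed":
        return "convert intent to MVCC value"   # drop pointer to the record
    if state == "aborted":
        return "ignore and delete intent"
    if state == "pending":
        return "resolve as transaction conflict"
    if state == "staging":
        return "wait if heartbeats are still arriving"
    raise ValueError(f"unknown state: {state}")

print(resolve_intent({"key": "k1"}, {"state": "committed"}, True))
```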
Each request in a transaction may be isolated from other requests. Each request may be isolated during the duration of the request, after the request has completed (e.g., based on the request acquiring locks), and/or within the duration of the transaction comprising the request. The concurrency manager may allow transactional requests (i.e., requests originating from transactions) to acquire locks, where the locks may exist for durations longer than the durations of the requests themselves. The locks may extend the duration of the isolation provided over specific keys stored by the cluster to the duration of the transaction. The locks may be released when the transaction commits or aborts. Other requests that encounter and/or otherwise interact with the locks (e.g., while being sequenced) may wait in a queue for the locks to be released. Based on the locks being released, the other requests may proceed. The concurrency manager may also include information for external locks (e.g., the write intents).

In some embodiments, one or more locks may not be controlled by the concurrency manager, such that one or more locks may not be discovered during sequencing. As an example, write intents (i.e., replicated, exclusive locks) may be stored such that they may not be detected until request evaluation time. In most embodiments, fairness may be ensured between requests, such that if any two requests conflict, the request that arrived first will be sequenced first. Sequencing may guarantee first-in, first-out (FIFO) semantics. An exception to FIFO semantics is that a request that is part of a transaction which has already acquired a lock may not need to wait on that lock during sequencing. The request may disregard any queue that has formed on the lock. The lock table, as described herein, may include one or more other exceptions to the FIFO semantics described here.

In some embodiments, as described herein, a lock table may be a per-node, in-memory data structure. The lock table may store a collection of locks acquired by in-progress transactions. Each lock in the lock table may have an associated lock wait-queue. Conflicting transactions can queue in the associated lock wait-queue while waiting for the lock to be released. Items in the locally stored lock wait-queue may be propagated as necessary (e.g., via RPC) to an existing Transaction Wait Queue (TWQ). The TWQ may be stored on the leader replica of the range, where the leader replica may contain the transaction record.

As described herein, databases stored by the cluster may be read and written using one or more "requests". A transaction may be composed of one or more requests. Isolation may be needed to separate requests. Additionally, isolation may be needed to separate transactions. Isolation for requests and/or transactions may be accomplished by maintaining multiple versions and/or by allowing requests to acquire locks. Isolation based on multiple versions may require a form of mutual exclusion, such that a read and a conflicting lock acquisition do not occur concurrently. The lock table may provide locking and/or sequencing of requests (in combination with the use of latches).

In some embodiments, locks may last for a longer duration than the requests associated with the locks. Locks may extend the duration of the isolation provided over specific KVs to the duration of the transaction associated with the lock. As described herein, locks may be released when the transaction commits or aborts. Other requests that encounter and/or otherwise interact with the locks (e.g., while being sequenced) may wait in a queue for the locks to be released. Based on the locks being released, the other requests may proceed.
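A lock table with per-lock wait queues can be sketched as follows. The deque-based FIFO queue and the release/grant flow are illustrative assumptions drawn from the description of lock wait-queues:

```python
from collections import deque

# Lock table sketch: each key has a holder and a FIFO wait queue of
# transactions blocked on the lock. The structure is an assumption
# drawn from the description; it is not the actual lock table.

class LockTable:
    def __init__(self):
        self.locks = {}  # key -> {"holder": txn, "waiters": deque}

    def acquire(self, key, txn):
        lock = self.locks.setdefault(key, {"holder": None, "waiters": deque()})
        if lock["holder"] in (None, txn):
            lock["holder"] = txn       # granted (re-entrant for same txn)
            return True
        lock["waiters"].append(txn)    # conflicting txn queues FIFO
        return False

    def release(self, key):
        lock = self.locks[key]
        # Hand the lock to the request that arrived first, if any.
        lock["holder"] = lock["waiters"].popleft() if lock["waiters"] else None
        return lock["holder"]

table = LockTable()
table.acquire("k1", "txn-1")   # granted
table.acquire("k1", "txn-2")   # queued behind txn-1
print(table.release("k1"))     # txn-2 now holds the lock
```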
In some embodiments, the lock table may enable fairness between requests, such that if two requests conflict, then the request that arrived first may be sequenced first. In some cases, there may be exceptions to the FIFO semantics as described herein. A request that is part of a transaction that has already acquired a lock may not need to wait on that lock during sequencing, such that the request may ignore a queue that has formed on the lock. In some embodiments, contending requests that encounter different levels of contention may be sequenced in a non-FIFO order. Such sequencing in a non-FIFO order may enable greater concurrency. As an example, if requests R1 and R2 contend on key K2, but R1 is also waiting at key K1, R2 may be determined to have priority over R1, such that R2 may be executed on K2.

In some embodiments, as described herein, a latch manager may sequence incoming requests and may provide isolation between those requests. The latch manager may sequence and provide isolation to requests under the supervision of the concurrency manager. A latch manager may operate as follows. As write requests occur for a range, the leaseholder of the range may serialize the write requests for the range. Serializing the requests may group the requests into a consistent order. To enforce the serialization, the leaseholder may create a "latch" for the keys in the write value, such that a write request may be given uncontested access to the keys. If other requests access the leaseholder for the same set of keys as a previous write request, the other requests may wait for the latch to be released before proceeding. In some cases, read requests may also generate latches. Multiple read latches over the same keys may be held concurrently, but a read latch and a write latch over the same keys may not be held concurrently.

In some embodiments, the transaction layer may execute transactions at a serializable transaction isolation level. A serializable isolation level may prevent anomalies in data stored by the cluster. A serializable isolation level may be enforced by requiring the client device to retry transactions if serializability violations are possible.

In some embodiments, the transaction layer may allow for one or more conflict types, where a conflict type may result from a transaction encountering and/or otherwise interacting with a write intent at a key. A write/write conflict may occur when two pending transactions create write intents for the same key. A write/read conflict may occur when a read transaction encounters an existing write intent with a timestamp less than the timestamp of the read transaction. To resolve these conflicts, the transaction layer may proceed through one or more operations. Based on a transaction within the conflicting transactions having a defined transaction priority (e.g., high priority, low priority, etc.), the transaction layer may abort the transaction with the lower priority (in a write/write conflict) or advance the timestamp of the transaction having the lower priority. Based on a transaction within the conflicting transactions being expired, the expired transaction may be aborted. A transaction may be considered to be expired if the transaction does not have a transaction record and the timestamp for the transaction is outside of the transaction liveness threshold. A transaction may also be considered to be expired if the transaction record corresponding to the transaction has not received a "heartbeat" message from the TCS within the transaction liveness threshold. A transaction (e.g., a low-priority transaction) that is required to wait on a conflicting transaction may enter the TWQ as described herein.
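The conflict-handling rules above can be condensed into a small decision sketch. The transaction structures and the priority encoding are illustrative assumptions:

```python
# Conflict resolution sketch following the rules above: an expired
# transaction is aborted; otherwise, in a write/write conflict the
# lower-priority transaction is aborted, and in other conflicts its
# timestamp is advanced. Structures are illustrative assumptions.

def resolve_conflict(kind, txn_a, txn_b):
    for txn in (txn_a, txn_b):
        if txn["expired"]:
            return f"abort expired {txn['name']}"
    lower = txn_a if txn_a["priority"] < txn_b["priority"] else txn_b
    if kind == "write/write":
        return f"abort lower-priority {lower['name']}"
    return f"advance timestamp of {lower['name']}"

t1 = {"name": "txn-1", "priority": 1, "expired": False}
t2 = {"name": "txn-2", "priority": 2, "expired": False}
print(resolve_conflict("write/write", t1, t2))  # abort lower-priority txn-1
```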
In some embodiments, the transaction layer may allow for one or more additional conflict types that do not involve write intents. A write-after-read conflict may occur when a write transaction having a lower timestamp conflicts with a read transaction having a higher timestamp. In this case, the timestamp of the write transaction may be advanced past the timestamp of the read transaction, such that the write transaction may execute. A read within an uncertainty window may occur when a read transaction encounters a KV with a higher timestamp and there exists ambiguity as to whether the KV should be considered to be in the future or in the past of the read transaction. An uncertainty window may be configured based on the maximum allowed offset between the clocks (e.g., HLCs) of any two nodes within the cluster. In an example, the uncertainty window may be equivalent to the maximum allowed offset. A read within an uncertainty window may occur based on clock skew. The transaction layer may advance the timestamp of the read transaction past the timestamp of the KV according to read refreshing, as to be described herein. If the read transaction associated with a read within an uncertainty window has to be restarted, the read transaction may never encounter an uncertainty window on any node which was previously visited by the read transaction. In some cases, there may not exist an uncertainty window for KVs read from the gateway node of the read transaction.

In some embodiments, as described herein, the Transaction Wait Queue (TWQ) may track a transaction that could not advance another transaction whose write intents were encountered by the transaction. The transaction may wait for the blocking transaction to complete before it can execute. The structure of the TWQ may map a blocking transaction to the one or more other transactions blocked by that transaction. The TWQ may operate on the leader replica of a range, where the leader replica includes the transaction record. Based on a blocking transaction (i.e., a transaction that blocks one or more other transactions) resolving (e.g., by committing or aborting), an indication may be sent to the TWQ indicating that the transactions blocked by the blocking transaction may begin to execute. A blocked transaction (i.e., a transaction blocked by a blocking transaction) may examine its own transaction status to determine whether it is active. If the transaction status for the blocked transaction indicates that the blocked transaction is aborted, the blocked transaction may be removed by the transaction layer. In some cases, deadlock may occur between transactions, where a first transaction is blocked by write intents of a second transaction and the second transaction is blocked by write intents of the first transaction. If transactions are deadlocked, one transaction of the deadlocked transactions may randomly abort, such that the active (i.e., alive) transaction may execute and the deadlock may be removed.
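The uncertainty-window test can be sketched numerically. The 500 ms maximum allowed offset is taken from the example above; the rest of the fragment is an illustrative assumption:

```python
# Uncertainty window sketch: a value with a timestamp above the read's
# timestamp but within the maximum clock offset is ambiguous, so the
# read must be refreshed past it. Numbers are illustrative; 500 ms is
# the example maximum allowed offset from the description.

MAX_OFFSET_MS = 500

def classify_value(read_ts_ms, value_ts_ms):
    if value_ts_ms <= read_ts_ms:
        return "in the past: visible to the read"
    if value_ts_ms <= read_ts_ms + MAX_OFFSET_MS:
        return "uncertain: advance the read timestamp past the value"
    return "in the future: not visible to the read"

print(classify_value(read_ts_ms=10_000, value_ts_ms=10_200))  # uncertain
print(classify_value(read_ts_ms=10_000, value_ts_ms=11_000))  # future
```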
In some embodiments, the transaction layer may enable read refreshing. When the timestamp of a transaction has been advanced, additional considerations may be required before the transaction may commit at the advanced timestamp. The considerations may include checking KVs previously read by the transaction to verify that other write transactions have not occurred at the KVs between the original transaction timestamp and the advanced transaction timestamp. This consideration may prevent serializability violations. The check may be executed by tracking each read using a Refresh Request (RR). If the check succeeds (e.g., write transactions have not occurred between the original transaction timestamp and the advanced transaction timestamp), the transaction may be allowed to commit. A transaction may perform the check at commit time if the transaction was advanced by a different transaction or by the timestamp cache. A transaction may also perform the check based on encountering a read within an uncertainty interval. If the check is unsuccessful, then the transaction may be retried at the advanced timestamp.

In some embodiments, the transaction layer may enable transaction pipelining. Write transactions may be pipelined when being replicated to follower replicas and when being written to storage. Transaction pipelining may reduce the latency of transactions that perform multiple writes. In transaction pipelining, write intents may be replicated from leaseholders to follower replicas in parallel, such that waiting for a commit occurs at transaction commit time. Transaction pipelining may include one or more operations. In transaction pipelining, for each statement, the gateway node corresponding to the transaction may communicate with the leaseholders (L1, L2, L3, . . . , Li) for the ranges indicated by the transaction. Each leaseholder Li may receive the communication from the gateway node and may perform one or more operations in parallel. Each leaseholder Li may create write intents and may send the write intents to the corresponding follower nodes for the leaseholder Li. Each Li may respond to the gateway node that the write intents have been sent. Note that replication of the intents is still in flight at this stage. Before committing the transaction, the gateway node may wait for the write intents to be replicated in parallel to each of the follower nodes of the leaseholders. Based on receiving responses from the leaseholders that the write intents have propagated to the follower nodes, the gateway node may commit the transaction.
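Transaction pipelining amounts to overlapping intent replication with statement execution and deferring the wait until commit time. A minimal concurrency sketch follows, in which a thread pool and sleeps stand in for network replication and all names are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Transaction pipelining sketch: intents for each statement are sent to
# leaseholders in parallel, and the gateway only waits for replication
# at commit time. Sleeps stand in for network/replication latency.

def replicate_intent(leaseholder, key):
    time.sleep(0.05)  # simulated replication to follower nodes
    return f"{leaseholder}: intent for {key!r} replicated"

statements = [("L1", "a"), ("L2", "b"), ("L3", "c")]

with ThreadPoolExecutor() as pool:
    # Fire off all replications without waiting between statements.
    in_flight = [pool.submit(replicate_intent, lh, key)
                 for lh, key in statements]
    # The gateway waits only once, at commit time.
    for future in in_flight:
        print(future.result())
print("gateway commits the transaction")
```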
In some embodiments, the transaction layer may enable parallel commits. Parallel commits may be an atomic commit protocol that reduces the commit latency of a transaction (e.g., halving it from two rounds of consensus to one). In some cases, the latency incurred by transactions may be substantially close to the sum of all read latencies plus one round of consensus latency. For parallel commits, a transaction coordinator may return a commit acknowledgment to a client device based on determining the writes in the transaction have succeeded. Based on determining the writes in the transaction have succeeded, the transaction coordinator may set the state of the transaction record to committed and resolve the write intents of the transaction (e.g., asynchronously).

In some embodiments, a parallel commits protocol may occur based on one or more operations. A client device may initiate a write transaction. A transaction coordinator may be created by the transaction layer to manage the state of the write transaction. The client device may issue a write to a key "Alpha" of a range. The transaction coordinator may generate a write intent on the "Alpha" key where the data from the write will be written. The write intent may include a timestamp and a pointer to a currently nonexistent transaction record for the write. Each write intent in the write transaction may be assigned a unique sequence number. The unique sequence number may uniquely identify the write intent. The client device may issue a write to a key "Beta" of the range as a part of the same write transaction as the write to the "Alpha" key. The transaction coordinator may generate a write intent on the "Beta" key where the data from the write transaction will be written. The write intent may include a timestamp and a pointer to the same nonexistent transaction record as for the "Alpha" key, based on each write intent being a part of the same transaction. The client device may issue a request to commit the writes for the write transaction. The transaction coordinator may create the transaction record and may set the state of the transaction record to staging. The transaction coordinator may record the keys of each write being executed by replicas among the range. Based on receiving the commit request from the client device, the transaction coordinator may wait for the pending writes to be replicated across the cluster. Based on the pending writes being replicated, the transaction coordinator may return an indication to the client device that the transaction was committed successfully.

In some embodiments, the write transaction may be considered atomically committed while the state of the corresponding transaction record is staging. A transaction may be considered to be committed (e.g., atomically committed) based on one or more logically equivalent states. A first logically equivalent state may include the state of the transaction record being staging and successful replication of writes across the cluster (e.g., according to consensus). Transactions in such a state may be considered implicitly committed. A second logically equivalent state may include the state of the transaction record being committed. Transactions in such a state may be considered explicitly committed. For an implicitly committed state, the transaction coordinator may modify the state of the transaction record from staging to committed, such that other transactions do not encounter the transaction in the staging state (e.g., because determining the status of a staging transaction is time intensive for other transactions).
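The following Go sketch illustrates the staging/committed distinction described above: a transaction record in the staging state counts as implicitly committed once every pending write has been replicated. The TxnRecord type and its field names are assumptions made for this sketch.

```go
package main

import "fmt"

// TxnState mirrors the transaction record states discussed above.
type TxnState int

const (
	Staging TxnState = iota
	Committed
)

// TxnRecord is an illustrative stand-in for a transaction record.
type TxnRecord struct {
	State         TxnState
	PendingWrites []string        // keys such as "Alpha" and "Beta"
	ReplicatedOK  map[string]bool // per-key replication success
}

// implicitlyCommitted reports whether a staging transaction has had every
// pending write successfully replicated (consensus reached), which the text
// above treats as logically equivalent to a committed transaction.
func implicitlyCommitted(r *TxnRecord) bool {
	if r.State != Staging {
		return false
	}
	for _, k := range r.PendingWrites {
		if !r.ReplicatedOK[k] {
			return false
		}
	}
	return true
}

func main() {
	rec := &TxnRecord{
		State:         Staging,
		PendingWrites: []string{"Alpha", "Beta"},
		ReplicatedOK:  map[string]bool{"Alpha": true, "Beta": true},
	}
	if implicitlyCommitted(rec) {
		// The coordinator can acknowledge the client now and flip the
		// record to Committed (explicitly committed) asynchronously.
		rec.State = Committed
		fmt.Println("acknowledged; record moved from staging to committed")
	}
}
```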
Distribution Layer

In some embodiments, the database architecture for the cluster may include a distribution layer. The distribution layer may provide a unified view of the data stored by the cluster. To enable the ability to access data stored by the cluster from any single node of the cluster, the distribution layer may enable storage of data in a monolithic sorted map of KV pairs. As described herein, the key-space comprising the sorted map of KV pairs may be divided into one or more contiguous chunks, known as ranges, such that every key may be located in a single range of the sorted map. The sorted map may enable simple lookups and efficient scans for data stored by the cluster. Simple lookups may be enabled based on the ability to identify the nodes responsible for certain portions (i.e. ranges) of data. Efficient scans may be enabled based on defining the order of data within ranges. The distribution layer may receive requests (e.g., transactions) from the transaction layer on the same node. The distribution layer may identify which node should receive the request (from the transaction layer) and send the request to the replication layer of the node corresponding to the request.

In some embodiments, the monolithic sorted map structure of the distribution layer may be comprised of two fundamental elements. A first fundamental element may be system data, where system data includes meta ranges that describe the location of user data (i.e. client data) within the cluster. A second fundamental element may be user data, where user data is the client data stored by the cluster for access via one or more client devices.

In some embodiments, the location of each range stored by the cluster may be stored in one or more meta ranges. A meta range may be a two-level index at the beginning of the key-space, where the first level (known hereinafter as "meta1") may address the second level, and the second level (known as "meta2") may address user data stored by the cluster. Each node of the cluster may include information indicative of the location of the meta1 range (known as a range descriptor for the cluster). In some cases, a meta range may not be split when it exceeds a threshold storage size (e.g., in contrast to other ranges stored by the cluster). Otherwise, in most embodiments, meta ranges may be configured as ranges as described herein and may be replicated and/or otherwise accessed in the same manner as KV data (i.e. user data) stored by the cluster. In some embodiments, to optimize data access, each node of the cluster may cache values of the meta2 range that were previously accessed by the node. Based on determining that a meta2 cache is invalid for a KV, the node may update the meta2 cache by performing a read transaction on the corresponding meta2 range.

In some embodiments, user data may be stored after and/or otherwise below the meta ranges (e.g., the meta1 range and meta2 range) in each node of the cluster. User data may also be known as "table data". Each table and its secondary indexes (of user data) may initially be mapped to a single range. The single range may be the initial mapping for the user data based on the user data being below a threshold storage size for a range. In some cases, the threshold storage size may be 512 MiB as described herein. Each key in a range may represent a single row of a table or a single row of a secondary index. Each key in a range representing a single row of a table may be known as a "primary index" based on the table being sorted by a primary key. Based on exceeding a threshold storage size, a range may split into two ranges. Ranges as described herein may be replicated (by a replication layer as to be described herein), with the addresses of each replicated range stored in a meta2 range.

In some embodiments, based on receiving a request (e.g., a read transaction, a write transaction, etc.), a node may determine where the request should be routed (e.g., which node of the cluster the request should be routed to). The node may compare the key(s) indicated by the request to the keys stored by the meta2 range to determine the node to which to route the request. The node may route the request to the node that stores the keys indicated by the request. If the node has cached a subset of the meta2 range corresponding to the key(s) indicated by the request, the node may compare the key(s) indicated by the request to the cached meta2 range. Alternatively, if the node has not cached a subset of the meta2 range corresponding to the key(s) indicated by the request, the node may send an RPC to the node including the meta2 range. Based on determining the node storing the key(s) indicated by the request, the node may send the KV operations of the request to the node storing the key(s) indicated by the request.
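A minimal Go sketch of the cached meta2 lookup described above follows. Representing meta2 as an in-process map and the authoritative lookup as a map read (standing in for an RPC to the node holding the meta2 range) are simplifying assumptions of this sketch.

```go
package main

import "fmt"

// metaIndex is an illustrative two-level index: meta2 maps key prefixes to
// the node holding the corresponding range, and meta2Cache is the per-node
// cache of previously accessed meta2 entries.
type metaIndex struct {
	meta2      map[string]string // key prefix -> node address (authoritative)
	meta2Cache map[string]string // locally cached meta2 entries
}

// route returns the node for a key, consulting the cache first and falling
// back to the authoritative meta2 range on a miss.
func (m *metaIndex) route(prefix string) string {
	if node, ok := m.meta2Cache[prefix]; ok {
		return node // served from the cached meta2 subset
	}
	node := m.meta2[prefix] // stands in for an RPC to the meta2 range
	m.meta2Cache[prefix] = node
	return node
}

func main() {
	idx := &metaIndex{
		meta2:      map[string]string{"a": "node1", "m": "node2"},
		meta2Cache: map[string]string{},
	}
	fmt.Println(idx.route("a")) // miss: consult meta2, then cache the entry
	fmt.Println(idx.route("a")) // hit: served from the cache
}
```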
In some embodiments, the distribution layer may include communication software (e.g., gRPC) that enables communication between one or more nodes of the cluster. The communication software may require inputs and outputs to be formatted as protocol buffers. KV operation requests may be included and/or otherwise incorporated into protocol buffers, where a KV operation request included in a protocol buffer may be known as a Batch Request. The destination of the Batch Request may be identified in a header of the Batch Request and/or in a pointer to the transaction record corresponding to the request(s) included in the Batch Request. A Batch Request may be used to send requests between nodes of the cluster. A response to a Batch Request may be included in a protocol buffer known as a Batch Response.

In some embodiments, the distribution layer may include a Distribution Sender (DistSender). A DistSender of a gateway and/or coordinating node may receive Batch Requests from a TCS of the same node. The DistSender may separate a Batch Request into one or more separated Batch Requests. The one or more separated Batch Requests may be routed by the DistSender to the nodes that contain the keys indicated by the separated Batch Requests. The DistSender may determine the nodes based on the meta2 ranges stored on the gateway node. The DistSender may send the Batch Requests to the leaseholder(s) for the keys indicated by the Batch Requests based on the cached meta2 ranges. In some cases, the DistSender may send the Batch Requests to other replicas of ranges for the keys indicated by the Batch Requests based on the proximity of the replicas to the gateway node. A non-leaseholder replica that receives a Batch Request may reply with an error including an indication of the last-known leaseholder for the range known to the replica. Based on the Batch Responses received for the Batch Requests, the DistSender may aggregate the responses (e.g., to prepare the responses for a return to the client).

In some embodiments, as described herein, the meta ranges may be structured as KV pairs. The meta1 range and the meta2 range may be structurally similar. The meta1 range may include the addresses of the nodes within the cluster that include replicas of the meta2 range. The meta2 range may include the addresses for the nodes that include replicas of each range stored by the cluster. KV data stored by ranges may include a table identifier, an index identifier, and an indexed column value. Each range stored by a cluster may include metadata. The metadata for a particular range may be known as a range descriptor. Each range descriptor may include a sequential range identifier, the key space (i.e. the set of keys) included in the range, and the addresses of the nodes that store replicas of the range. The key space included in the range as described herein may determine the keys of the meta2 range. The addresses of the nodes that store the replicas of the range as described herein may determine the values for the keys of the meta2 range. A range descriptor may be updated based on one or more instances. The one or more instances may include a membership change to a consensus group for a range, a range merge, and/or a range split. Updates to a range descriptor may occur locally at a node and may propagate to the meta2 range.
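A Go sketch of the range descriptor described above follows. The struct layout, the field names, and the derivation of a meta2 entry from the descriptor's key span and replica addresses are illustrative readings of the preceding paragraph, not definitions from this description.

```go
package main

import "fmt"

// RangeDescriptor is an illustrative model of per-range metadata: a
// sequential identifier, the key span the range covers, and the addresses
// of the nodes storing its replicas.
type RangeDescriptor struct {
	RangeID  int64
	StartKey string // inclusive start of the range's key space
	EndKey   string // exclusive end of the range's key space
	Replicas []string
}

// meta2Entry derives a meta2 KV pair from a descriptor: the range's key
// space determines the key, and the replica addresses form the value.
func meta2Entry(d RangeDescriptor) (key string, value []string) {
	return d.EndKey, d.Replicas
}

func main() {
	d := RangeDescriptor{
		RangeID:  42,
		StartKey: "a",
		EndKey:   "m",
		Replicas: []string{"node1", "node2", "node3"},
	}
	k, v := meta2Entry(d)
	fmt.Printf("meta2[%q] = %v\n", k, v)
}
```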
As described herein, a range split may occur when a range reaches and/or exceeds a threshold size. In an example, the threshold size for a range may be 512 MiB. Based on reaching or exceeding the threshold size, a range may be split into two ranges. The node that includes the split ranges may create a new consensus (i.e. Raft) group that includes the nodes that were included in the previous consensus group before the range was split into two ranges. The distribution layer may generate a transaction for the meta2 range, where the transaction may be configured to update the meta2 range with the updated key space boundaries and the addresses of the nodes using the range descriptor.

Replication Layer

In some embodiments, the database architecture for the cluster may include a replication layer. The replication layer may copy data (e.g., ranges) between nodes of the cluster and enable consistency between the copied data based on a consensus algorithm as described herein. The replication layer may allow the cluster to tolerate a subset of nodes going offline and/or otherwise being unavailable, such that the range data stored by the cluster remains available to client devices. The replication layer may receive requests from the distribution layer (e.g., from the DistSender as described herein). The replication layer may send responses (e.g., Batch Responses) to the distribution layer (e.g., the DistSender). In the replication layer, if the node receiving a request is the leaseholder for the range, the node may accept the request. If the node receiving a request is not the leaseholder for the range, the node may return an error to the source of the request, where the error may include an indication of a pointer to the leaseholder (or the node last known to be the leaseholder). The KV requests may be converted to Raft commands. The replication layer may write accepted requests to a storage layer as to be described herein. Committed Raft commands may be written to the Raft log and stored on a storage medium of a node via the storage layer. The leaseholder may serve reads from the storage layer.

In some embodiments, the replication layer may apply a consensus algorithm. The consensus algorithm may require a threshold number (e.g., a quorum or a majority) of replicas of a range to confirm a modification (e.g., a write transaction) to the range prior to committing the modification. Based on the consensus algorithm, the replication layer may require at least 3 nodes to include replicas of a range, such that a threshold number of replicas may agree to a modification to the range. In some cases, if the threshold number of replicas required to confirm a modification is a majority of the replicas, the replication layer may enable the database to tolerate a number of node failures as described by Equation 1:

Tolerable Node Failures = (Replication Factor − 1) / 2   (Equation 1)

As described in Equation 1, a "Replication Factor" may be a number of replicas of a range stored by the cluster. For example, based on a "Replication Factor" equal to 5, the replication layer may tolerate node failure for two nodes of a cluster, where the failed nodes each store a replica of a range and three other nodes that are online store replicas of the range. In some cases, the "Replication Factor" may be configured at the cluster, database, and/or table level, where a cluster may comprise one or more databases and a database may comprise one or more ranges distributed among the nodes of the cluster.
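A short Go sketch of Equation 1 follows; it simply evaluates the formula above for a few replication factors (integer division gives the majority-quorum failure tolerance).

```go
package main

import "fmt"

// tolerableFailures computes Equation 1: with a majority quorum, a range
// with the given replication factor survives (replicationFactor-1)/2
// node failures.
func tolerableFailures(replicationFactor int) int {
	return (replicationFactor - 1) / 2
}

func main() {
	for _, rf := range []int{3, 5, 7} {
		fmt.Printf("replication factor %d tolerates %d failure(s)\n",
			rf, tolerableFailures(rf))
	}
}
```

For the example given above, a replication factor of 5 yields (5 − 1) / 2 = 2 tolerable node failures, leaving three live replicas to form a majority.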
In some embodiments, as described herein, the replication layer may include a consensus protocol (known as Raft). Raft may be an algorithm that replicates data among one or more nodes of the cluster, such that the nodes may agree on the state of the data based on reaching consensus. Raft may organize the nodes storing a replica of a range in a group known as a Raft group as described herein. Each replica of a Raft group may be classified as a leader replica or a follower replica as described herein. The leader replica may coordinate writes to the follower replicas of the Raft group. The leader replica may send "heartbeat" messages to the follower replicas (e.g., periodically). The leader replica may be elected by follower replicas as to be described herein. Based on the absence of "heartbeat" messages from the leader replica, follower replicas may become candidates to be the leader replica. Based on receiving a Batch Request for a range, a node may convert the KV operations indicated by the Batch Request into one or more Raft commands. The node may send the Raft commands to the Raft leader (e.g., if the node that received the Batch Request is not the leader replica). Based on receiving the Raft commands, the leader node may write the Raft commands to the Raft log as to be described herein.

In some embodiments, based on a threshold (e.g., a majority) of nodes writing a transaction and the leader replica committing the writes, the writes may be appended to the Raft log as described herein. The Raft log may be an ordered set of commands agreed on by a threshold number of replicas of the range. The Raft log may be a source of truth for consistent replication among nodes of the cluster. In some cases, each replica can be "snapshotted", such that a copy of the data stored by the replica may be generated for a specific applied log index. This copy of the data (i.e. a snapshot) may be sent to other nodes during a rebalance event to enable and/or expedite replication. A rebalance event may update data stored by a node to a specific log index based on the snapshot. Based on loading the snapshot, a node may be updated by executing the operations (e.g., indicated by the Raft log) that have occurred since the snapshot was taken.

In some embodiments, as described herein, a single node in the Raft group may be configured as the leaseholder. The leaseholder may be the only node that can serve reads to a client device or propose writes to the Raft group leader (e.g., both actions may be received as Batch Requests from the DistSender as described herein with respect to "Distribution Layer"). When serving reads, the leaseholder may bypass the Raft protocol. The leaseholder may bypass the Raft protocol based on the consensus previously achieved for the values stored by the range. In most embodiments, the leaseholder and the leader replica may be the same replica stored on a node of the range, such that write requests may be proposed directly to the leaseholder/leader replica. The replication layer may attempt to collocate the leaseholder and leader replica during each lease renewal or transfer. If a leaseholder is not configured for a range, any node receiving a request may send a request to become the leaseholder for the range. The request may be sent to each replica to reach consensus. A node that sends a request to become the leaseholder may include a copy of the last valid lease stored by the node. If the last valid lease is equivalent to the currently configured leaseholder, the request may be granted by a replica in response to receiving the request. Alternatively, if the last valid lease is not equivalent to the currently configured leaseholder, the request may be ignored and/or otherwise denied by a replica.
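The lease-acquisition rule just described can be sketched in a few lines of Go. The Lease type and grantLease helper are assumptions of this sketch; the logic is only the equality check described above.

```go
package main

import "fmt"

// Lease is an illustrative lease record naming the current leaseholder.
type Lease struct{ Holder string }

// grantLease models a replica voting on a lease request: the request carries
// the requester's copy of the last valid lease, and the replica grants the
// request only if that copy matches the lease the replica considers current.
func grantLease(current, lastSeenByRequester Lease) bool {
	return current == lastSeenByRequester
}

func main() {
	current := Lease{Holder: "node2"}
	fmt.Println(grantLease(current, Lease{Holder: "node2"})) // true: granted
	fmt.Println(grantLease(current, Lease{Holder: "node1"})) // false: denied
}
```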
In some embodiments, to manage leases for table data, the replication layer may use "epochs". An epoch may be a period between a node joining a cluster and a node disconnecting from a cluster. To extend a lease (e.g., to remain leaseholder for a range), each node must periodically update a liveness record corresponding to the node. The liveness record may be stored on a system range key. Based on disconnecting from the cluster, a node may fail to update the liveness record. An epoch may be considered to be changed based on a node disconnecting from the cluster and/or failing to update the liveness record. The replication layer may cause a leaseholder node to lose the lease for a range based on the leaseholder node disconnecting from the cluster. In some cases, a leaseholder may not be required to renew a lease for a range. The leaseholder may lose the lease for a range based on disconnecting from the cluster.

In some embodiments, as described herein, meta ranges and/or system ranges may be stored as KV data. System ranges may be restricted from epoch-based leases. System ranges may use expiration-based leases. An expiration-based lease may expire at (or substantially close to) a timestamp. In some cases, a leaseholder for a system range may retain the expiration-based lease after the timestamp at which the expiration-based lease was configured to expire. The leaseholder for the system range may retain the expiration-based lease based on the leaseholder continuing to generate and/or otherwise propose Raft commands to a Raft group.

In some embodiments, the replication layer may enable leaseholder rebalancing. Each leaseholder for a cluster may consider (e.g., periodically consider) whether to transfer the lease to another replica of the range. In an example, a leaseholder may determine whether to transfer the lease to another replica of the range every 10 minutes. Each leaseholder may be configured to transfer the lease for a range based on the number of requests from each locality (i.e. region) for the range, the number of leases on each node comprising the range, and/or the latency between localities. If replicas for a range are distributed among different localities, the replication layer may determine which replica of the cluster is optimized to be the leaseholder. In some cases, a replica may be suited to be the leaseholder based on providing the lowest latency to requests from client devices.

For leaseholder rebalancing, a leaseholder may track the number of requests received by the leaseholder from each locality of the cluster. The number of requests received by the leaseholder from each locality of the cluster may be tracked as an average (e.g., an exponentially weighted moving average). The average may determine the localities that most frequently send requests to the range. In some cases, for an exponentially weighted moving average, the locality that has recently requested the range most often may be assigned the greatest weight. If another locality subsequently requests the range more frequently, the moving average may shift the greatest weight to that locality.
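A Go sketch of the per-locality exponentially weighted moving average described above follows. The smoothing factor, the decay of unseen localities, and the localityEWMA type are illustrative assumptions; the sketch only demonstrates that the locality requesting most recently accumulates the greatest weight.

```go
package main

import "fmt"

// localityEWMA tracks, per locality, an exponentially weighted moving
// average of request counts, so recent requesters dominate the weights.
type localityEWMA struct {
	alpha   float64 // smoothing factor in (0, 1]; higher favors recent data
	weights map[string]float64
}

// observe folds one interval's request counts into the moving averages.
func (e *localityEWMA) observe(counts map[string]float64) {
	for loc := range e.weights {
		if _, seen := counts[loc]; !seen {
			e.weights[loc] *= 1 - e.alpha // decay localities with no requests
		}
	}
	for loc, n := range counts {
		e.weights[loc] = (1-e.alpha)*e.weights[loc] + e.alpha*n
	}
}

func main() {
	e := &localityEWMA{alpha: 0.5, weights: map[string]float64{}}
	e.observe(map[string]float64{"us-central": 100, "us-east": 20})
	e.observe(map[string]float64{"us-east": 120}) // us-east surges
	// The greatest weight has shifted to the recently dominant locality.
	fmt.Printf("us-central=%.1f us-east=%.1f\n",
		e.weights["us-central"], e.weights["us-east"])
}
```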
For leaseholder rebalancing, the leaseholder may correlate each requesting locality's weight (i.e., the proportion of recent requests) to the locality of each replica by determining a similarity (e.g., similarity between country and/or region) between localities. For example, if the leaseholder received requests from gateway nodes in a region defined as the Central United States (e.g., Country=United States, Region=Central), the replication layer (or leaseholder) may assign the following weights to replicas as described in Table 1:

TABLE 1

Replica #   Replica Locality                             Replica Leaseholder Rebalancing Weight
1           Country = United States, Region = Central    100%
2           Country = United States, Region = East       50%
3           Country = Australia, Region = Central        0%

As shown in Table 1, the "Replica #" 1, with a "Replica Locality" of the Central United States, may be configured as 100% for "Replica Leaseholder Rebalancing Weight" based on having a match (e.g., a complete match) to the Country and the Region of the "Replica Locality". The "Replica #" 2, with a "Replica Locality" of the East United States, may be configured as 50% for "Replica Leaseholder Rebalancing Weight" based on having a match (e.g., a partial match) to the Country of the "Replica Locality". The "Replica #" 3, with a "Replica Locality" of Central Australia, may be configured as 0% for "Replica Leaseholder Rebalancing Weight" based on lacking a match with the Country and the Region of the "Replica Locality".

Based on the assignment of rebalancing weights to the replicas of the range, the leaseholder may determine a rebalancing weight and latency corresponding to the leaseholder. The rebalancing weight and latency may be compared to the rebalancing weight and latency corresponding to the other replicas (e.g., as shown in Table 1) to determine an adjustment factor for each replica. In an example, the greater the disparity between weights and the larger the latency between localities, the more the replication layer may favor the node including the replica from the locality with the larger weight. For leaseholder rebalancing, the leaseholder may evaluate each replica's rebalancing weight and adjustment factor for the localities with the largest weights. The leaseholder may transfer the lease to another replica (e.g., of the node having the largest weight and/or adjustment factor). The leaseholder may transfer the lease to the replica if transferring the lease is beneficial and/or viable.
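The locality-matching weight assignment of Table 1 can be expressed directly in Go. The Locality type and the three-tier weighting below restate the table's full-match/country-match/no-match cases; everything else in the sketch is an assumption.

```go
package main

import "fmt"

// Locality is an illustrative locality descriptor matching Table 1.
type Locality struct{ Country, Region string }

// rebalanceWeight assigns a replica's leaseholder-rebalancing weight from
// the similarity between the requesting locality and the replica locality:
// full match of country and region -> 100%, country only -> 50%, none -> 0%.
func rebalanceWeight(requests, replica Locality) int {
	switch {
	case requests == replica:
		return 100
	case requests.Country == replica.Country:
		return 50
	default:
		return 0
	}
}

func main() {
	reqs := Locality{"United States", "Central"}
	for _, rep := range []Locality{
		{"United States", "Central"}, // Table 1, Replica #1
		{"United States", "East"},    // Table 1, Replica #2
		{"Australia", "Central"},     // Table 1, Replica #3
	} {
		fmt.Printf("%v -> %d%%\n", rep, rebalanceWeight(reqs, rep))
	}
}
```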
In some embodiments, based on a change to the number of nodes of a cluster, replicas for a range may require rebalancing. The replicas may require rebalancing based on a change to the members of a Raft group (e.g., due to the change to the number of nodes of the cluster). Rebalancing may enable optimal survivability and performance. Rebalancing may vary based on whether nodes are added to the cluster or removed from the cluster. Based on nodes being added to the cluster, the added node(s) may communicate identifying information to the existing nodes of the cluster. The identifying information may include an indication that the added node(s) have available storage capacity. The cluster may rebalance replicas stored by the existing nodes to the added node(s). A node may be removed from a Raft group of a cluster based on a lack of a response to the Raft group after a period of time. In an example, the period of time may be 5 minutes. Based on nodes being removed from the cluster (e.g., due to a lack of a response to the Raft group), nodes of the cluster may rebalance data stored by the removed node(s) to the remaining nodes of the cluster. Rebalancing may be enabled based on using a snapshot of a replica from the leaseholder. The snapshot may be sent to another node (e.g., over gRPC as described herein). Based on receiving and/or replicating the snapshot, the node with the replica (e.g., a replica replicated from the snapshot) may join the Raft group of the range corresponding to the replica. The node may determine that the index of the added replica lags one or more entries (e.g., the most recent entries) in the Raft log. The node may execute the actions indicated in the Raft log to update the replica to the state indicated by the most recent index of the Raft log. In some cases, replicas may be rebalanced based on the relative load stored by the nodes within a cluster.

Storage Layer

In some embodiments, the database architecture for the cluster may include a storage layer. The storage layer may enable the cluster to read and write data to storage device(s) of each node. As described herein, data may be stored as KV pairs on the storage device(s) using a storage engine. In some cases, the storage engine may be a Pebble storage engine. The storage layer may serve successful read transactions and write transactions from the replication layer.

In some embodiments, each node of the cluster may include at least one store, which may be specified when a node is activated and/or otherwise added to a cluster. Read transactions and write transactions may be processed from the store. Each store may contain two instances of the storage engine as described herein. A first instance of the storage engine may store temporary distributed SQL data. A second instance of the storage engine may store data other than the temporary distributed SQL data, including system data (e.g., meta ranges) and user data (i.e. table data, client data, etc.). For each node, a block cache may be shared between the stores of the node. The store(s) of a node may store a collection of replicas of a range as described herein, where a particular replica may not be duplicated among stores of the same node, such that a replica may exist only once at a node.

In some embodiments, as described herein, the storage layer may use an embedded KV data store (i.e. Pebble). The KV data store may be used with an application programming interface (API) to read and write data to storage devices (e.g., a disk) of nodes of the cluster. The KV data store may enable atomic write batches and snapshots.

In some embodiments, the storage layer may use MVCC to enable concurrent requests. In some cases, the use of MVCC by the storage layer may guarantee consistency for the cluster. As described herein, HLC timestamps may be used to differentiate between versions of data by tracking commit timestamps for data. HLC timestamps may be used to identify a garbage collection expiration for a value as to be described herein. In some cases, the storage layer may support time travel queries. Time travel queries may be enabled by MVCC.
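The MVCC read-at-timestamp behavior that underlies time travel queries can be sketched in Go as follows. The version slice and integer timestamps are simplifying assumptions standing in for MVCC values keyed by HLC commit timestamps.

```go
package main

import "fmt"

// version is one MVCC version of a value, tagged with a commit timestamp.
type version struct {
	ts    int64 // commit timestamp (stands in for an HLC timestamp)
	value string
}

// readAt returns the newest version at or below the read timestamp, which
// is also how a time travel query can view historical data.
func readAt(versions []version, readTS int64) (string, bool) {
	for i := len(versions) - 1; i >= 0; i-- { // versions sorted by ts ascending
		if versions[i].ts <= readTS {
			return versions[i].value, true
		}
	}
	return "", false
}

func main() {
	history := []version{{ts: 10, value: "v1"}, {ts: 20, value: "v2"}}
	v, _ := readAt(history, 15) // time travel read between the two versions
	fmt.Println(v)              // prints "v1"
}
```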
In some embodiments, the storage layer may aggregate MVCC values (i.e. garbage collect MVCC values) to reduce the storage size of the data stored by the storage (e.g., the disk) of nodes. The storage layer may compact MVCC values (e.g., old MVCC values) based on the existence of a newer MVCC value with a timestamp that is older than the garbage collection period. A garbage collection period may be configured for the cluster, database, and/or table. Garbage collection may be executed for MVCC values that are not configured with a protected timestamp. A protected timestamp subsystem may ensure safety for operations that rely on historical data. Operations that may rely on historical data may include imports, backups, streaming data using change feeds, and/or online schema changes. Protected timestamps may operate based on the generation of protection records by the storage layer. Protection records may be stored in an internal system table. In an example, a long-running job (e.g., a backup) may protect data at a certain timestamp from being garbage collected by generating a protection record associated with that data and timestamp. Based on successful creation of a protection record, the MVCC values for the specified data at timestamps less than or equal to the protected timestamp may not be garbage collected. When the job (e.g., the backup) that generated the protection record is complete, the job may remove the protection record from the data. Based on removal of the protection record, the garbage collector may operate on the formerly protected data.
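A Go sketch of the garbage collection eligibility check described above follows. Integer timestamps and a flat slice of protected timestamps are simplifying assumptions; the two conditions mirror the text: a newer version must already be older than the GC period, and no protection record may cover the candidate version.

```go
package main

import "fmt"

// gcEligible reports whether an MVCC version may be garbage collected: a
// newer version must exist whose timestamp is older than the GC period, and
// the version must not be covered by a protection record.
func gcEligible(versionTS, newerTS, nowTS, gcPeriod int64, protectedTS []int64) bool {
	if newerTS > nowTS-gcPeriod {
		return false // the newer version is not yet older than the GC period
	}
	for _, p := range protectedTS {
		if versionTS <= p {
			return false // a protection record covers this version
		}
	}
	return true
}

func main() {
	// A backup protects data at timestamp 90; the version at ts 80 is spared.
	fmt.Println(gcEligible(80, 85, 200, 50, []int64{90})) // false
	// With the protection record removed, the version may be collected.
	fmt.Println(gcEligible(80, 85, 200, 50, nil)) // true
}
```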
Database Architecture

Referring to FIG. 1, an illustrative distributed computing system 100 is presented. The computing system 100 may include a cluster 102. The cluster 102 may include one or more nodes 120 distributed among one or more geographic regions 110. A node 120 may be a computing device, including the computing system as described herein with respect to FIG. 4. As an example, a node 120 may be a server computing device. A region 110 may correspond to a particular building (e.g., a data center), city, state/province, country, and/or a subset of any one of the above. A region 110 may include multiple elements, such as a country and a geographic identifier for the country. For example, a region 110 may be indicated by Country=United States and Region=Central (e.g., as shown in Table 1), which may indicate a region 110 as the Central United States. As shown in FIG. 1, the cluster 102 may include regions 110a, 110b, and 110c. In some cases, the cluster 102 may include one region 110. In an example, the region 110a may be the Eastern United States, the region 110b may be the Central United States, and the region 110c may be the Western United States. Each region 110 of the cluster 102 may include one or more of the nodes 120. The region 110a may include nodes 120a, 120b, and 120c. The region 110b may include the nodes 120d, 120e, and 120f. The region 110c may include nodes 120g, 120h, and 120i.

Each node 120 of the cluster 102 may be communicatively coupled via one or more networks 112 and 114. In some cases, the cluster 102 may include networks 112a, 112b, and 112c, as well as networks 114a, 114b, 114c, and 114d. The networks 112 may include a local area network (LAN) and/or a wide area network (WAN). In some cases, the one or more networks 112 may connect nodes 120 of different regions 110. The nodes 120 of region 110a may be connected to the nodes 120 of region 110b via a network 112a. The nodes 120 of region 110a may be connected to the nodes 120 of region 110c via a network 112b. The nodes 120 of region 110b may be connected to the nodes 120 of region 110c via a network 112c. The networks 114 may include a LAN and/or a WAN. In some cases, the networks 114 may connect nodes 120 within a region 110. The nodes 120a, 120b, and 120c of the region 110a may be interconnected via a network 114a. The nodes 120d, 120e, and 120f of the region 110b may be interconnected via a network 114b. In some cases, the nodes 120 within a region 110 may be connected via one or more different networks 114. The node 120g of the region 110c may be connected to nodes 120h and 120i via a network 114c, while nodes 120h and 120i may be connected via a network 114d. In some cases, the nodes 120 of a region 110 may be located in different geographic locations within the region 110. For example, if region 110a is the Eastern United States, nodes 120a and 120b may be located in New York, while node 120c may be located in Massachusetts.

In some embodiments, the computing system 100 may include one or more client devices 106. The one or more client devices 106 may include one or more computing devices, including the computing system as described herein with respect to FIG. 4. In an example, the one or more client devices 106 may include laptop computing devices, desktop computing devices, mobile computing devices, tablet computing devices, and/or server computing devices. As shown in FIG. 1, the computing system 100 may include client devices 106a, 106b, and one or more client devices 106 up to client device 106N, where N is the number of client devices 106 included in the computing system 100. The client devices 106 may be communicatively coupled to the cluster 102, such that the client devices 106 may access and/or otherwise communicate with the nodes 120. One or more networks 111 may couple the client devices 106 to the nodes 120. The one or more networks 111 may include a LAN or a WAN as described herein.
Transaction Execution

In some embodiments, as described herein, distributed transactional databases stored by the cluster of nodes may enable one or more transactions. Each transaction may include one or more requests and/or queries. A query may traverse one or more nodes of a cluster to execute the request. A request may interact with (e.g., sequentially interact with) one or more of the following: a SQL client, a load balancer, a gateway, a leaseholder, and/or a Raft leader as described herein. A SQL client may send a query to a cluster. A load balancer may route the request from the SQL client to the nodes of the cluster. A gateway may be a node that processes the request and/or responds to the SQL client. A leaseholder may be a node that serves reads and coordinates writes for a range of keys (e.g., keys indicated in the query) as described herein. A Raft leader may be a node that maintains consensus among the replicas for a range.

A SQL client (e.g., operating at a client device 106a) may send a request (e.g., a SQL request) to a cluster (e.g., the cluster 102). The request may be sent over a network (e.g., the network 111). A load balancer may determine a node of the cluster to which to send the request. The node may be a node of the cluster having the lowest latency and/or the closest geographic location to the computing device on which the SQL client is operating. A gateway node (e.g., node 120a) may receive the request from the load balancer. The gateway node may parse the request to determine whether the request is valid. The request may be valid based on conforming to the SQL syntax of the database(s) stored by the cluster. The gateway node may generate a logical SQL plan based on the request. The logical plan may be converted to a physical plan to traverse the nodes indicated by the request.

Based on the completion of request parsing, a SQL executor may execute the logical SQL plan and/or physical plan using the TCS as described herein. The TCS may perform KV operations on a database stored by the cluster. The TCS may account for keys indicated and/or otherwise involved in a transaction. The TCS may package KV operations into a Batch Request as described herein, where the Batch Request may be forwarded on to the DistSender of the gateway node.

The DistSender of the gateway node may receive the Batch Request from the TCS. The DistSender may determine the operations indicated by the Batch Request and may determine the node(s) (i.e. the leaseholder node(s)) that should receive requests corresponding to the operations for the range. The DistSender may generate one or more Batch Requests based on determining the operations and the node(s) as described herein. The DistSender may send a first Batch Request for each range in parallel. Based on receiving a provisional acknowledgment from a leaseholder node's evaluator (as to be described herein), the DistSender may send the next Batch Request for the range corresponding to the provisional acknowledgment. The DistSender may wait to receive acknowledgments for write operations and values for read operations corresponding to the sent Batch Requests.

As described herein, the DistSender of the gateway node may send Batch Requests to leaseholders (or other replicas) for data indicated by the Batch Request. In some cases, the DistSender may send Batch Requests to nodes that are not the leaseholder for the range (e.g., based on out-of-date leaseholder information). Nodes may or may not store the replica indicated by the Batch Request. Nodes may respond to a Batch Request with one or more responses. A response may indicate the node is no longer the leaseholder for the range. The response may indicate the last known address of the leaseholder for the range. A response may indicate the node does not include a replica for the range. A response may indicate the Batch Request was successful if the node that received the Batch Request is the leaseholder. The leaseholder may process the Batch Request. As a part of processing the Batch Request, each write operation in the Batch Request may compare a timestamp of the write operation to the timestamp cache. A timestamp cache may track the highest (i.e., most recent) timestamp for any read operation that a given range has served. The comparison may ensure that the write operation has a higher timestamp than the timestamp cache. If a write operation has a lower timestamp than the timestamp cache, the write operation may be restarted at a timestamp higher than the value of the timestamp cache.
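The timestamp cache rule just stated is small enough to sketch directly in Go. The tsCache type and the restart-at-cache-plus-one policy are illustrative assumptions; the description only requires that a restarted write land above the cached value.

```go
package main

import "fmt"

// tsCache records the highest (most recent) timestamp of any read a range
// has served, as described above.
type tsCache struct{ highRead int64 }

// applyWrite checks a write's timestamp against the cache and, if the write
// is at or below a served read, restarts it above the cached timestamp.
func (c *tsCache) applyWrite(writeTS int64) int64 {
	if writeTS <= c.highRead {
		return c.highRead + 1 // push the write above every served read
	}
	return writeTS
}

func main() {
	c := &tsCache{highRead: 100}
	fmt.Println(c.applyWrite(90))  // 101: restarted above the cache
	fmt.Println(c.applyWrite(120)) // 120: already above the cache
}
```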
In some embodiments, operations indicated in the Batch Request may be serialized by a latch manager of a leaseholder. For serialization, each write operation may be given a latch on a row. Any read and/or write operations that arrive after the latch has been granted on the row may be required to wait for the write to complete. Based on completion of the write, the latch may be released and the subsequent operations may continue. In some cases, a batch evaluator may ensure that write operations are valid. The batch evaluator may determine whether the write is valid based on the leaseholder's data. The leaseholder's data may be evaluated by the batch evaluator based on the leaseholder coordinating writes to the range. If the batch evaluator determines the write to be valid, the leaseholder may send a provisional acknowledgment to the DistSender of the gateway node, such that the DistSender may begin to send subsequent Batch Requests for the range to the leaseholder.

In some embodiments, operations may read from the local instance of the storage engine as described herein to determine whether write intents are present at a key. If write intents are present, an operation may resolve the write intents as described herein. If the operation is a read operation and write intents are not present at the key, the read operation may read the value at the key of the leaseholder's storage engine. Read responses corresponding to a transaction may be aggregated into a Batch Response by the leaseholder. The Batch Response may be sent to the DistSender of the gateway node. If the operation is a write operation and write intents are not present at the key, the KV operations included in the Batch Request that correspond to the write operation may be converted to Raft operations and write intents, such that the write operation may be replicated to the replicas of the range.

The leaseholder may propose the Raft operations to the leader replica of the Raft group (e.g., which is typically the leaseholder). Based on the received Raft operations, the leader replica may send the Raft operations to the follower replicas of the Raft group. If a threshold number of the replicas acknowledge the Raft operations (e.g., the write operations), consensus may be achieved such that the Raft operations may be committed to the Raft log of the leader replica and written to the storage engine. The leader replica may send a command to the follower replicas to write the Raft operations to the Raft log corresponding to each of the follower replicas. Based on the leader replica committing the Raft operations to the Raft log, the Raft operations (e.g., the write transaction) may be considered to be committed (e.g., implicitly committed as described herein). The gateway node may configure the status of the transaction record for the transaction corresponding to the Raft operations as committed (e.g., explicitly committed as described herein).

In some embodiments, based on the leader replica appending the Raft operations to the Raft log, the leader replica may send a commit acknowledgment to the DistSender of the gateway node. The DistSender of the gateway node may aggregate the commit acknowledgments from each write operation included in the Batch Request. In some cases, the DistSender of the gateway node may aggregate read values for each read operation included in the Batch Request. Based on completion of the operations of the Batch Request, the DistSender may record the success of each transaction in a corresponding transaction record. To record the success of a transaction, the DistSender may check the timestamp cache of the range where the first write transaction occurred to determine whether the timestamp for the write transaction was advanced. If the timestamp was advanced, the transaction may perform a read refresh to determine whether values associated with the transaction had changed. If the read refresh is successful (e.g., no values associated with the transaction had changed), the transaction may commit at the advanced timestamp. If the read refresh fails (e.g., at least some value associated with the transaction had changed), the transaction may be restarted. Based on determining the read refresh was successful and/or that the timestamp was not advanced for a write transaction, the DistSender may change the status of the corresponding transaction record to committed as described herein. The DistSender may send values (e.g., read values) to the TCS. The TCS may send the values to the SQL layer. In some cases, the TCS may also send a request to the DistSender, wherein the request includes an indication for the DistSender to convert write intents to committed values (e.g., MVCC values). The SQL layer may send the values as described herein to the SQL client that initiated the query.
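The read refresh check described above can be sketched as follows in Go. Tracking last-write timestamps in a map is an assumption made for the sketch; the check itself mirrors the text: the refresh fails if any read key was written between the original and the advanced timestamp.

```go
package main

import "fmt"

// refreshReads re-validates the keys a transaction read when its commit
// timestamp was advanced: if any read key changed between the original and
// the advanced timestamp, the refresh fails and the transaction restarts.
func refreshReads(readKeys []string, lastWriteTS map[string]int64,
	origTS, advancedTS int64) bool {
	for _, k := range readKeys {
		if w := lastWriteTS[k]; w > origTS && w <= advancedTS {
			return false // another write landed in the refreshed window
		}
	}
	return true
}

func main() {
	writes := map[string]int64{"a": 5, "b": 12}
	// The transaction read "a" and "b" at ts 10 and was advanced to ts 15.
	if refreshReads([]string{"a", "b"}, writes, 10, 15) {
		fmt.Println("refresh succeeded: commit at the advanced timestamp")
	} else {
		fmt.Println("refresh failed: restart the transaction")
	}
}
```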
Read Transaction Execution

Referring to FIG. 2A, an example of execution of a read transaction at the computing system 100 is presented. In some cases, the nodes 120a, 120b, and 120c of region 110a may include one or more replicas of ranges 160. The node 120a may include replicas of ranges 160a, 160b, and 160c, wherein ranges 160a, 160b, and 160c are different ranges. The node 120a may include the leaseholder replica for range 160a (as indicated by "Leaseholder" in FIG. 2A). The node 120b may include replicas of ranges 160a, 160b, and 160c. The node 120b may include the leaseholder replica for range 160b (as indicated by "Leaseholder" in FIG. 2A). The node 120c may include replicas of ranges 160a, 160b, and 160c. The node 120c may include the leaseholder replica for range 160c (as indicated by "Leaseholder" in FIG. 2A).

In some embodiments, a client device 106 may initiate a read transaction at a node 120 of the cluster 102. Based on the KVs indicated by the read transaction, the node 120 that initially receives the read transaction (i.e. the gateway node) from the client device 106 may route the read transaction to a leaseholder of the range 160 comprising the KVs indicated by the read transaction. The leaseholder of the range 160 may serve the read transaction and send the read data to the gateway node. The gateway node may send the read data to the client device 106.

As shown in FIG. 2A, at step 201, the client device 106 may send a read transaction to the cluster 102. The read transaction may be received by node 120b as the gateway node. The read transaction may be directed to data stored by the range 160c. At step 202, the node 120b may route the received read transaction to node 120c. The read transaction may be routed to node 120c based on the node 120c being the leaseholder of the range 160c. The node 120c may receive the read transaction from node 120b and serve the read transaction from the range 160c. At step 203, the node 120c may send the read data to the node 120b. The node 120c may send the read data to node 120b based on the node 120b being the gateway node for the read transaction. The node 120b may receive the read data from node 120c. At step 204, the node 120b may send the read data to the client device 106a to complete the read transaction. If node 120b had been configured to include the leaseholder for the range 160c, the node 120b may have served the read data to the client device directly after step 201, without routing the read transaction to the node 120c.
Write Transaction Execution

Referring to FIG. 2B, an example of execution of a write transaction at the computing system 100 is presented. In some cases, as described herein, the nodes 120a, 120b, and 120c of region 110a may include one or more replicas of ranges 160. The node 120a may include replicas of ranges 160a, 160b, and 160c, wherein ranges 160a, 160b, and 160c are different ranges. The node 120a may include the leaseholder replica and the leader replica for range 160a (as indicated by "Leaseholder" in FIG. 2A and "Leader" in FIG. 2B). The node 120b may include replicas of ranges 160a, 160b, and 160c. The node 120b may include the leader replica for range 160b (as indicated by "Leader" in FIG. 2B). The node 120c may include replicas of ranges 160a, 160b, and 160c. The node 120c may include the leader replica for range 160c (as indicated by "Leader" in FIG. 2B).

In some embodiments, a client device 106 may initiate a write transaction at a node 120 of the cluster 102. Based on the KVs indicated by the write transaction, the node 120 that initially receives the write transaction (i.e. the gateway node) from the client device 106 may route the write transaction to a leaseholder of the range 160 comprising the KVs indicated by the write transaction. The leaseholder of the range 160 may route the write request to the leader replica of the range 160. In most cases, the leaseholder of the range 160 and the leader replica of the range 160 are the same. The leader replica may append the write transaction to a Raft log of the leader replica and may send the write transaction to the corresponding follower replicas of the range 160 for replication. Follower replicas of the range may append the write transaction to their corresponding Raft logs and send an indication to the leader replica that the write transaction was appended. Based on a threshold number (e.g., a majority) of the replicas indicating and/or sending an indication to the leader replica that the write transaction was appended, the write transaction may be committed by the leader replica. The leader replica may send an indication to the follower replicas to commit the write transaction. The leader replica may send an acknowledgment of a commit of the write transaction to the gateway node. The gateway node may send the acknowledgment to the client device 106.

As shown in FIG. 2B, at step 211, the client device 106 may send a write transaction to the cluster 102. The write transaction may be received by node 120c as the gateway node. The write transaction may be directed to data stored by the range 160a. At step 212, the node 120c may route the received write transaction to node 120a. The write transaction may be routed to node 120a based on the node 120a being the leaseholder of the range 160a. Based on the node 120a including the leader replica for the range 160a, the leader replica of range 160a may append the write transaction to a Raft log at node 120a. At step 213, the leader replica may simultaneously send the write transaction to the follower replicas of range 160a on the node 120b and the node 120c. The node 120b and the node 120c may append the write transaction to their respective Raft logs. At step 214, the follower replicas of the range 160a (at nodes 120b and 120c) may send an indication to the leader replica of the range 160a that the write transaction was appended to their Raft logs. Based on a threshold number of replicas indicating the write transaction was appended to their Raft logs, the leader replica and follower replicas of the range 160a may commit the write transaction. At step 215, the node 120a may send an acknowledgment of the committed write transaction to the node 120c. At step 216, the node 120c may send the acknowledgment of the committed write transaction to the client device 106a to complete the write transaction.
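A compact Go sketch of the quorum step in this write flow (steps 213-215) follows. Modeling follower acknowledgments as a boolean slice is an assumption of the sketch; it shows only the majority-ack condition under which the leader commits.

```go
package main

import "fmt"

// appendAndCommit models the leader appending a write to its Raft log,
// counting follower acknowledgments, and committing once a majority of
// replicas (leader included) have appended the entry.
func appendAndCommit(write string, followerAcks []bool) bool {
	acks := 1 // the leader's own append counts toward the quorum
	for _, ok := range followerAcks {
		if ok {
			acks++
		}
	}
	quorum := (1+len(followerAcks))/2 + 1
	return acks >= quorum
}

func main() {
	// Two followers, one of which acknowledges: 2 of 3 replicas is a quorum.
	if appendAndCommit("write txn", []bool{true, false}) {
		fmt.Println("committed; acknowledgment returned to the gateway node")
	}
}
```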
Non-Blocking Transactions Overview

In some embodiments, the cluster may include one or more non-blocking ranges. A transaction (e.g., a read transaction, a write transaction, etc.) that encounters and/or otherwise interacts with a non-blocking range may be converted to a non-blocking transaction. A non-blocking range may propagate closed timestamps, where the closed timestamps may lead the present time (e.g., indicated by one or more HLCs of the cluster) by a configured duration (i.e. a non-blocking duration as to be described herein). A closed timestamp may be a timestamp, where prior to the timestamp, follower replicas may serve read transactions for KVs stored prior to the timestamp (e.g., as historical reads). In some cases, a leader replica and non-leader replicas (i.e. follower replicas) of a non-blocking range may serve reads at time(s) before a closed timestamp (e.g., a synthetic timestamp) as to be described herein. A non-blocking range may enable the ability to serve reads from each (or a subset) of the replicas of the non-blocking range, such that reads may not be required to be served from the leaseholder node. For a non-blocking range, each replica (e.g., including follower replicas) may serve reads, such that read requests may not be required to be routed to the leaseholder.

In some embodiments, non-leader replicas (i.e. follower replicas) may be made available to serve historical reads. Historical reads may include transactions with a read timestamp that is sufficiently in the past (e.g., such that write transactions have completed propagating to follower replicas). Accordingly, follower reads may be consistent reads at historical timestamps from follower replicas, which may be enabled by closed timestamp updates. A closed timestamp update may be a data store-wide timestamp, where the timestamp can include per-range information indicative of Raft (i.e. consensus) progress among leader and follower replicas. Based on received closed timestamp updates, a follower replica may determine that it has the necessary information to serve consistent reads for times that are at or below the closed timestamp received from the leader replica. As such, a follower replica may serve reads at any timestamp below the most recent closed timestamp. For a non-blocking range, follower replicas may receive closed timestamp updates with a synthetic timestamp that leads the present time as to be described herein. Accordingly, a follower replica may serve follower reads for timestamps below the synthetic timestamp.

In some embodiments, as described herein, a transaction may select a provisional commit timestamp. The transaction may select a provisional commit timestamp from the HLC of the gateway node from which the transaction originates. The provisional commit timestamp may be a timestamp for when the transaction performs a read operation or when the transaction initially performs a write operation. In some cases, as described herein, a transaction may be required to advance the timestamp (e.g., due to transaction contention). However, the provisional commit timestamp (and the advanced timestamp, if applicable) typically lags the present time. The present time may be defined as the time observed on the node of the cluster with the fastest (e.g., most recent or highest) clock. As described herein with respect to the transaction layer, a maximum allowed offset may be the maximum time offset between nodes within the cluster. Accordingly, the present time may not be more than the maximum time offset ahead of the node having the slowest timestamp.
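The follower-read condition just described reduces to a single comparison, sketched below in Go. The integer timestamps and the example offset are illustrative assumptions; the point is that a closed timestamp leading the present time lets a follower serve a read at "now".

```go
package main

import "fmt"

// canServeFollowerRead reports whether a follower replica may serve a read:
// the read timestamp must be at or below the most recent closed timestamp
// received from the leader, as described above.
func canServeFollowerRead(readTS, closedTS int64) bool {
	return readTS <= closedTS
}

func main() {
	// For a non-blocking range the closed timestamp leads the present time,
	// so even a read at "now" can be served by a follower.
	now := int64(1_000)
	closed := now + 300 // closed timestamp leading by a non-blocking duration
	fmt.Println(canServeFollowerRead(now, closed))      // true
	fmt.Println(canServeFollowerRead(closed+1, closed)) // false
}
```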
In some embodiments, a non-blocking transaction may perform locking such that contending read transactions may not be required to wait on the locks (e.g., the write intents) of the non-blocking transaction. In an example, the values written by a non-blocking write transaction may be committed, with write intents resolved, by the time that a read transaction attempts to read the values of the keys written by the non-blocking write transaction. In some cases, as described herein, a read transaction that observes write intents would need to determine the status of the write transaction via the transaction record, which may cause the read transaction to wait for the write intents to be resolved (e.g., committed, aborted, etc.). Such a process may increase transaction latencies within the cluster due to the read transaction's need to wait for the write intents to be resolved (and locks removed), as well as a need to traverse networks (e.g., switch from the node 120a to the node 120d via the network 112a) to access and/or otherwise determine the status of the transaction record. For a non-blocking write transaction, a conflicting read may not observe write intents of the non-blocking transaction, as the write intent of the non-blocking write transaction may be scheduled to commit at a specific timestamp in advance of the present time. As such, a conflicting read transaction that occurs after a non-blocking transaction may read the contents of the KV at which the non-blocking transaction is operating.

In some embodiments, non-blocking transactions and/or non-blocking ranges may use synthetic timestamps. A synthetic timestamp may be a timestamp that is disconnected from the HLC timestamps (i.e. real timestamps) derived from nodes of the cluster. A synthetic timestamp may comprise a 64-bit physical value and a 32-bit logical value. A synthetic timestamp may be differentiated from a timestamp derived from an HLC via a bit difference (e.g., a higher order bit difference). The bit that indicates a timestamp as synthetic or real may be known as the indicator bit. In some cases, a synthetic timestamp and a real timestamp may be merged based on one or more rules. If a synthetic timestamp and a real timestamp are merged, the indicator bit from the timestamp having the larger value may be included in the merged timestamp. If the synthetic timestamp and the real timestamp are equivalent in value, the indicator bit from the real timestamp may be included in the merged timestamp.

In some embodiments, as described herein, a node may update the timestamp of the local HLC based on receiving a transaction from another node, where the transaction includes a timestamp greater than the timestamp of the local HLC. For a synthetic timestamp, the local HLC may not be updated with the synthetic timestamp. The local HLC may not be updated with the synthetic timestamp until the timestamp of the HLC exceeds the synthetic timestamp or the local HLC receives an update from a real timestamp (e.g., a real timestamp derived from a transaction received at the node).
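The merge rules for synthetic and real timestamps can be sketched as follows in Go. Representing the indicator bit as a boolean field on a simplified timestamp type is an assumption of the sketch; the two rules (larger value wins, and a tie takes the real timestamp's indicator bit) come from the description above.

```go
package main

import "fmt"

// hlcTS is an illustrative timestamp carrying an indicator bit that marks
// it as synthetic (disconnected from a node's HLC) or real.
type hlcTS struct {
	wall      int64
	synthetic bool // the indicator bit described above
}

// merge combines two timestamps per the rules above: the merged timestamp
// takes the indicator bit of the larger value, and on a tie the real
// timestamp's indicator bit (i.e. not synthetic) is used.
func merge(a, b hlcTS) hlcTS {
	switch {
	case a.wall > b.wall:
		return a
	case b.wall > a.wall:
		return b
	default: // equal values: the result is synthetic only if both are
		return hlcTS{wall: a.wall, synthetic: a.synthetic && b.synthetic}
	}
}

func main() {
	real := hlcTS{wall: 100, synthetic: false}
	syn := hlcTS{wall: 100, synthetic: true}
	fmt.Printf("%+v\n", merge(real, syn)) // {wall:100 synthetic:false}
}
```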
In some embodiments, as described herein, the transaction layer may use uncertainty intervals. The use of uncertainty intervals for transactions may enable linearizability, as nodes of the cluster may be required to have timestamps that exceed a commit timestamp for a transaction minus the maximum allowed offset. For non-blocking transactions, a committed transaction may be required to wait for up to a non-blocking duration before acknowledging the commit to the SQL client (e.g., to ensure linearizability). An uncertainty interval may be an interval defined between a timestamp minus the maximum allowed offset and the timestamp plus the maximum allowed offset. In practice, the uncertainty interval may be defined between a timestamp and the timestamp plus the maximum allowed offset.

In some embodiments, for conflicting transactions involving a non-blocking write transaction and a read transaction, the read transaction may be required to wait on an uncertainty interval. Typically, as described herein, a read transaction that encounters a write transaction within the uncertainty interval may have the timestamp for the read transaction advanced past the completion of the write transaction (e.g., using a read refresh operation). However, because of the synthetic timestamp associated with the non-blocking write transaction, the read transaction may be required to wait for the timestamp associated with the read transaction to exceed the synthetic timestamp of the non-blocking transaction. The read transaction may wait for a duration of time. The duration may be the maximum allowed offset or a non-blocking duration as described herein. Based on the timestamp of the read transaction exceeding the synthetic timestamp, the read transaction may execute and read the value at the key(s) written to by the non-blocking write transaction (e.g., without the read refresh operation).

Non-Blocking Duration

In some embodiments, as described herein, one or more ranges stored by the cluster (e.g., the cluster 102) may be configured as non-blocking ranges. A non-blocking range may use a closed timestamp tracker, wherein the closed timestamp tracker may send (i.e. publish) closed timestamp updates from the leader replica to the follower replicas of the non-blocking range. In some cases, the closed timestamp tracker may prevent write transactions at timestamps equal to or prior to a published closed timestamp. A leaseholder or leader for the non-blocking range may send a closed timestamp update to the follower replicas, where the timestamp included in the closed timestamp update leads the present time (e.g., the local HLC time of the leaseholder) by a configured duration. In an example, the timestamp indicated by the closed timestamp update may be a synthetic timestamp. Based on the received closed timestamp update, follower replicas may serve follower reads at times less than or equal to the timestamp included in the closed timestamp updates. In an example, follower replicas may serve follower reads at the present time based on receiving a closed timestamp update with a synthetic timestamp, where the synthetic timestamp leads the timestamp of the leaseholder node by the non-blocking duration. The closed timestamp tracker may be independent of the HLC timestamps for each node that stores a replica of a range. In some cases, the closed timestamp tracker may lead the present time within the cluster (e.g., the HLC timestamps at each node) by a configured non-blocking duration (e.g., derived from or based on a synthetic timestamp). The non-blocking duration may be based on the latency between nodes and/or regions of the cluster, as well as the maximum allowed offset between nodes of the cluster.
For example, the non-blocking duration may be configured based on the round trip time between the region 110a and the region 110b via the network 112a. Additionally, the non-blocking duration may be configured based on the round trip time between the node 120a and the node 120b via the network 114a. In some cases, the non-blocking duration may be defined by Equation 2 as follows:

Non-blocking Duration = Latency/2 + Maximum Offset + Clock Skew    (Equation 2)

As described in Equation 2, the non-blocking duration may be configured as a function of “Latency”, “Maximum Offset”, and “Clock Skew”. “Latency” as described herein with respect to Equation 2 may be a configured round trip time between nodes and/or regions of the cluster. Accordingly, “Latency/2” as described in Equation 2 may be representative of a one-way latency (i.e. round-trip time/2) between nodes and/or regions of the cluster. The “Latency” may vary based on the nodes and/or regions corresponding to the “Latency” configuration. The “Maximum Offset” may be the configured maximum allowed timestamp difference (e.g., HLC timestamp difference) between timestamps of nodes in the cluster as described herein. The “Clock Skew” parameter may be a constant added to the non-blocking duration to account for differences in timestamps observed at nodes. Any suitable configuration for the non-blocking duration may be selected, such that the non-blocking duration may be configured as a constant or a function of one or more parameters. Equation 2 may be one example of a configuration of the non-blocking duration. Based on the closed timestamp tracker, a non-blocking transaction may generate locks on KVs (e.g., for write intents as a part of a write transaction) at a synthetic timestamp that leads the present time by the non-blocking duration. The non-blocking transaction may exhibit non-blocking properties to conflicting transactions based on the non-blocking duration being sufficiently large. The non-blocking duration may be sufficiently large based on an ability for the non-blocking transaction to execute operations corresponding to the transaction, commit the operations, and/or resolve intents corresponding to the committed operations before a commit timestamp for the transaction is exceeded by a combination of the present time and the maximum allowable offset (e.g., the timestamp determined by combining the present time and the maximum allowable offset).
Non-Blocking Transaction Pushing
In some embodiments, a synthetic timestamp of a non-blocking transaction may be pushed and/or otherwise advanced. A synthetic timestamp of a non-blocking transaction may be pushed and/or otherwise advanced based on a combination of the present time and the maximum allowable offset becoming sufficiently close to the synthetic timestamp. In some cases, a non-blocking duration corresponding to the synthetic timestamp may be advanced. A range monitor may monitor intents (e.g., write intents) associated with the non-blocking transaction. If the intents associated with the non-blocking transaction have not been resolved by the time at which the combination of the present time and the maximum allowable offset is sufficiently close to the synthetic timestamp, the range monitor may cause the synthetic timestamp to advance by a configured duration.
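The range monitor behavior just described might be sketched as follows, using for concreteness a 5 ms threshold and a 100 ms advance (the same figures used in the example that follows); all names and parameter values are illustrative assumptions.

```go
package main

import (
	"fmt"
	"time"
)

const (
	maxOffset     = 250 * time.Millisecond // assumed maximum allowable offset
	pushThreshold = 5 * time.Millisecond   // "sufficiently close"
	pushIncrement = 100 * time.Millisecond // how far the monitor advances the timestamp
)

// maybePushSyntheticTS sketches the range monitor: if the write intents
// are unresolved and the present time plus the maximum allowable offset
// has drawn within pushThreshold of the synthetic timestamp, the
// synthetic timestamp is advanced by pushIncrement.
func maybePushSyntheticTS(syntheticTS time.Time, intentsResolved bool) time.Time {
	determined := time.Now().Add(maxOffset) // present time + maximum allowable offset
	if !intentsResolved && determined.After(syntheticTS.Add(-pushThreshold)) {
		return syntheticTS.Add(pushIncrement)
	}
	return syntheticTS
}

func main() {
	// A synthetic timestamp only 2 ms ahead of (present time + offset).
	ts := time.Now().Add(maxOffset + 2*time.Millisecond)
	fmt.Println(maybePushSyntheticTS(ts, false).Sub(ts)) // pushed by 100ms
}
```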
If the intents associated with the non-blocking transaction have not been resolved by the time at which the combination of the present time and the maximum allowable offset is sufficiently close to the synthetic timestamp, the non-blocking duration may advance by the configured duration. As an example, if a timestamp determined from adding the present time and the maximum allowable offset is within 5 ms of a synthetic timestamp associated with a non-blocking write transaction that does not have resolved write intents, the range monitor may advance the synthetic timestamp by 100 ms, such that the determined timestamp does not exceed the synthetic timestamp. In some cases, in place of and/or in addition to the range monitor, a TCS may buffer the writes of the non-blocking transaction. The TCS may buffer (i.e. delay) the non-blocking transaction until the non-blocking transaction may be committed.
Non-Blocking Read Transaction Execution and Interactions
In some embodiments, a client device may initiate a non-blocking read transaction at the cluster. The non-blocking read transaction may be initiated via a SQL client as described herein. A non-blocking read transaction may adhere to one or more of the requirements as described herein with respect to any and/or all of the database layers. The non-blocking read transaction may be a read transaction directed to a non-blocking range. Based on the KVs indicated by the read transaction, the node that initially receives the read transaction (i.e. the gateway node) from the client device may identify the read transaction as directed to a non-blocking range. The gateway node may receive the read transaction from the SQL client. The gateway node may route the read transaction to any one of the replicas of the non-blocking range. In some cases, the gateway node may route the read transaction to the replica having the lowest latency to the gateway node. The read transaction may commit to read the data stored at one or more KVs of the replica. The commit timestamp may be added to a timestamp cache as described herein. The node may send the KV data (i.e. read data) read by the read transaction to the gateway node. The gateway node may wait for a remaining subset of the non-blocking duration before returning the read data to the client device. The gateway node may send the read data to the client device. In some embodiments, one or more transactions may conflict with the non-blocking read transaction. In some cases, a read transaction may conflict with the non-blocking read transaction. The read transaction may conflict with the non-blocking read transaction such that the two transactions do not interact (e.g., the read transaction follows the requirements set forth in each of the database layers as described herein). In some cases, a write transaction may conflict with the non-blocking read transaction. Based on a write transaction conflicting with an existing non-blocking read transaction, the write transaction may be converted to a non-blocking write transaction. A provisional commit timestamp for the non-blocking write transaction may be determined, where the provisional commit timestamp may be a synthetic timestamp. The synthetic timestamp may be the timestamp (i.e. the local HLC timestamp) at the leaseholder for a range corresponding to the non-blocking write transaction advanced by the non-blocking duration.
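As a rough illustration of Equation 2 above, and of deriving a provisional synthetic commit timestamp by advancing a leaseholder's clock reading by the computed duration, consider the following sketch; all parameter values are assumptions.

```go
package main

import (
	"fmt"
	"time"
)

// nonBlockingDuration sketches Equation 2: one-way latency (round-trip
// time divided by two) plus the maximum allowed offset plus a constant
// clock-skew allowance.
func nonBlockingDuration(roundTrip, maxOffset, clockSkew time.Duration) time.Duration {
	return roundTrip/2 + maxOffset + clockSkew
}

func main() {
	// Assumed cluster parameters, for illustration only.
	d := nonBlockingDuration(60*time.Millisecond, 250*time.Millisecond, 10*time.Millisecond)
	fmt.Println("non-blocking duration:", d) // 290ms

	// A provisional synthetic commit timestamp is the leaseholder's
	// local clock reading advanced by the non-blocking duration.
	leaseholderNow := time.Now()
	syntheticTS := leaseholderNow.Add(d)
	fmt.Println("synthetic timestamp leads present time by", syntheticTS.Sub(leaseholderNow))
}
```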
Accordingly, the synthetic timestamp for the non-blocking write transaction may be greater than the commit timestamp for the non-blocking read transaction, causing the non-blocking read transaction to commit prior to the non-blocking write transaction. The non-blocking write transaction may commit at a synthetic timestamp later than the commit timestamp of the non-blocking read transaction.
Non-Blocking Write Transaction Execution and Interactions
In some embodiments, a client device may initiate a non-blocking write transaction at the cluster. The non-blocking write transaction may be initiated via a SQL client as described herein. A non-blocking write transaction may adhere to one or more of the requirements as described herein with respect to any and/or all of the database layers. The non-blocking write transaction may be a write transaction directed to a non-blocking range. Based on the KVs indicated by the write transaction, the node (i.e. the gateway node) that initially receives the write transaction from the client device may identify the write transaction as directed to a non-blocking range. The gateway node may receive the write transaction from the SQL client. The gateway node may route the write transaction to a leaseholder of the non-blocking range. The leaseholder may determine a synthetic timestamp (i.e. provisional commit timestamp) for the write transaction. The synthetic timestamp may be a provisional commit timestamp for the write transaction. The leaseholder may route the write transaction to the leader replica of the non-blocking range. As described herein, in most cases, the leaseholder and the leader replica for the non-blocking range may be the same. Based on determining the synthetic timestamp (e.g., at the leaseholder), the non-blocking duration may begin. The leaseholder of the non-blocking range may track the non-blocking duration. In an example, based on the leaseholder determining the synthetic timestamp, the non-blocking duration may begin (e.g., begin to elapse or commence) at the HLC timestamp included in the synthetic timestamp. Beginning at the synthetic timestamp (e.g., the synthetic timestamp at the leaseholder replica of the non-blocking range), to initiate execution of the write transaction on the non-blocking range, the leader replica may generate write intents corresponding to the write transaction. The leader replica may write the write intents to the one or more KVs indicated by the write transaction. The leader replica may append the write transaction to a Raft log of the leader replica and may send the write transaction to the corresponding follower replicas of the range for replication. The follower replicas of the range may append the write transaction to their corresponding Raft logs and send an indication to the leader replica that the write transaction was appended. Based on a threshold number (e.g., a majority) of the replicas indicating and/or sending an indication to the leader replica that the write transaction was appended, the write transaction may be committed by the leader replica. The leader replica may send an indication to the follower replicas to commit the write transaction. The leader replica may send an acknowledgement of a commit of the write transaction to the gateway node. The acknowledgement may include the synthetic timestamp determined by the leaseholder. The gateway node may wait for the non-blocking duration (e.g., that began at the synthetic timestamp) to expire.
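The gateway node's commit wait might be sketched as follows; the function name and callback are assumptions, and a wall-clock reading again stands in for an HLC.

```go
package main

import (
	"fmt"
	"time"
)

// ackAfterCommitWait sketches the gateway-node behavior described
// above: after the leader acknowledges the commit, the gateway waits
// until its own clock exceeds the synthetic commit timestamp (the
// remainder of the non-blocking duration) before acknowledging the
// SQL client, preserving linearizability.
func ackAfterCommitWait(syntheticTS time.Time, ack func()) {
	if wait := time.Until(syntheticTS); wait > 0 {
		time.Sleep(wait)
	}
	ack()
}

func main() {
	syntheticTS := time.Now().Add(50 * time.Millisecond) // assumed commit timestamp
	ackAfterCommitWait(syntheticTS, func() { fmt.Println("commit acknowledged to client") })
}
```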
Accordingly, the gateway node may wait for a timestamp of the clock (e.g., HLC) of the gateway node to exceed the synthetic timestamp. The non-blocking duration may expire as the timestamp of the gateway node exceeds the synthetic timestamp. The gateway node may send the acknowledgement of the write transaction to the client device 106. The gateway node may send the acknowledgement based on an expiry of the non-blocking duration. The gateway node may send the acknowledgement based on the timestamp at the gateway node exceeding the synthetic timestamp. In some embodiments, one or more transactions may conflict with the non-blocking write transaction. In some cases, a read transaction may conflict with the non-blocking write transaction. The read transaction may conflict with an existing write transaction inside an uncertainty interval or external to the uncertainty interval. If the read transaction conflicts with the non-blocking write transaction inside the uncertainty interval, the read transaction may be required to wait for the uncertainty interval to expire as described herein before reading the KV data corresponding to the conflicting non-blocking write transaction. If the read transaction conflicts with the non-blocking write transaction external to the uncertainty interval, the read transaction may execute as described herein without interacting with the non-blocking write transaction, as the non-blocking write transaction waits to commit until after the non-blocking duration. In some cases, a write transaction may conflict with the non-blocking write transaction. A provisional commit timestamp for the write transaction may be determined. The write transaction that conflicts with an existing non-blocking transaction may be required to wait for the non-blocking transaction to commit. Based on the write transaction conflicting with the existing non-blocking write transaction, the write transaction may be converted to a second non-blocking write transaction. A provisional commit timestamp for the second non-blocking write transaction may be determined. In some cases, the provisional commit timestamp for the second non-blocking write transaction may update the provisional timestamp of the original write transaction. The provisional commit timestamp for the second non-blocking write transaction may be a synthetic timestamp. The synthetic timestamp may be a combination of a timestamp (i.e. the local HLC timestamp) at the leaseholder node corresponding to the non-blocking write transaction and a non-blocking duration, where the synthetic timestamp is approximately equivalent to the timestamp advanced by the non-blocking duration. Accordingly, the synthetic timestamp for the second non-blocking write transaction may be greater than the synthetic timestamp for the non-blocking write transaction. Based on a commit of the non-blocking write transaction, the second non-blocking transaction may execute as described herein with respect to the non-blocking transaction. Referring to FIGS. 3A and 3B, an example flowchart for an execution method 300 of a non-blocking write transaction at the computing system 100 is presented. The method 300 corresponds to a transaction involving a single range, but the method 300 may be executed for any suitable number of ranges corresponding to a write transaction. In an example, a write transaction may be directed to three ranges, where the method 300 may be executed for each of the three ranges.
For ranges having different leaseholders, one or more synthetic timestamps may be determined for the non-blocking write transaction. Operations of the non-blocking write transaction may occur in parallel for each range that is subject to the non-blocking write transaction. Based on receiving acknowledgements committing operations of the write transaction from one or more leader replicas, the gateway node may wait on its clock (e.g., HLC) to exceed the synthetic timestamp having the latest (i.e. maximum) time. Referring to FIG. 3A, a client device 106a may initiate a non-blocking write transaction at the cluster 102. The client device 106a may include a client application (e.g., a SQL client application) to interact with the cluster 102. The client device 106 may send the write transaction to the cluster 102. The non-blocking write transaction may be a write transaction directed to a non-blocking range. At step 302, a gateway node (e.g., node 120c) may receive the write transaction. The gateway node may receive the write transaction via a load balancer as described herein. At step 304, the gateway node may send the write transaction to the leaseholder of the range (e.g., the non-blocking range) indicated by the write transaction. The gateway node may send the write transaction to the leaseholder of the range based on determining the range corresponding to the write transaction. A range may correspond to a write transaction if the range includes one or more KVs that are the subject of the write transaction. At step 306, the leaseholder may receive the write transaction. At step 308, the leaseholder may determine a synthetic timestamp for the write transaction. Based on determining the synthetic timestamp, a time period corresponding to the non-blocking duration may begin. The synthetic timestamp may be a timestamp of the local HLC at the leaseholder node advanced by a non-blocking duration. At step 310, the leaseholder may send the write transaction to the leader replica of the non-blocking range. The write transaction may include the synthetic timestamp determined from the clock (e.g., HLC) of the leaseholder node. As described herein, in most cases, the leaseholder and the leader replica may be the same. In some cases, a closed timestamp update may be sent to the follower replicas, where the closed timestamp update may include the synthetic timestamp. In some cases, the closed timestamp update that includes the synthetic timestamp may be included with the write transaction or sent simultaneously with the write transaction. Accordingly, follower replicas may serve reads for timestamps prior to the synthetic timestamp, such that follower replicas may serve present time reads. In some embodiments, at step 312, the leader replica may receive the write transaction. In some cases, the leader replica may receive the write transaction from the leaseholder if the leaseholder and the leader replica are not the same. In some cases, the leader replica may receive the write transaction from the gateway node if the leaseholder and the leader replica are the same. At step 314, the leader replica may execute the contents of the write transaction at the non-blocking range. To execute the contents of the write transaction, the leader replica may generate write intents corresponding to the write transaction. The leader replica may write the write intents to the one or more KVs indicated by the write transaction.
The leader replica may append the write transaction to a Raft log of the leader replica. At step 316, the leader replica may send the write transaction to the follower replicas of the non-blocking range. At step 318, one or more of the follower replicas may receive the write transaction. Referring to FIG. 3B, at step 320, one or more of the follower replicas of the range may execute operations of the write transaction and send an acknowledgement of the write transaction to the leader replica. To execute the write transaction, the one or more follower replicas may append the write transaction to their corresponding Raft logs, generate write intents, and/or write the write intents to one or more KVs. At step 322, the leader node may determine whether a threshold number of replicas (including the leader replica) have acknowledged the write transaction. A replica may acknowledge the write transaction by sending an indication to the leader replica that the write transaction was appended. At step 324, the leader node may abort the transaction based on determining that a threshold number of replicas did not acknowledge the write transaction. At step 326, the leader replica may commit the transaction. One or more follower replicas may commit the transaction based on receiving an indication from the leader replica that the transaction was committed. At step 328, the leader replica may send an acknowledgement of a commit of operations of the write transaction to the gateway node. The leader replica may send the acknowledgement based on committing the write transaction. At step 330, the gateway node may receive the acknowledgement of the commit of the write transaction from the leader replica. At step 332, the gateway node may wait for a remaining subset of the non-blocking duration to expire. As described herein, the non-blocking duration may have started at the determination of the synthetic timestamp (e.g., at step 308). Accordingly, the gateway node may wait for the clock (e.g., HLC) of the gateway node to exceed the synthetic timestamp. For a non-blocking write transaction directed to more than one range, more than one synthetic timestamp may be determined. Accordingly, the gateway node may wait for the remaining subset of the non-blocking duration to expire, where the non-blocking duration corresponds to the synthetic timestamp having the latest timestamp. At step 334, the gateway node may send the acknowledgement of the commit of the write transaction to the client device 106. The gateway node may send the acknowledgement based on the expiry of the non-blocking duration. The gateway node may send the acknowledgment based on a timestamp at the gateway node exceeding or otherwise surpassing the synthetic timestamp. For a non-blocking write transaction directed to more than one range, the gateway node may send the acknowledgment based on a timestamp at the gateway node exceeding or otherwise surpassing the synthetic timestamp having the latest (i.e. maximum) timestamp. One or more steps of the method 300 as described herein may be combined, removed, and/or rearranged without departing from the scope of the present disclosure.
Further Description of Some Embodiments
FIG. 4 is a block diagram of an example computer system 400 that may be used in implementing the technology described in this document. General-purpose computers, network appliances, mobile devices, or other electronic systems may also include at least portions of the system 400.
The system 400 includes a processor 410, a memory 420, a storage device 430, and an input/output device 440. Each of the components 410, 420, 430, and 440 may be interconnected, for example, using a system bus 450. The processor 410 is capable of processing instructions for execution within the system 400. In some implementations, the processor 410 is a single-threaded processor. In some implementations, the processor 410 is a multi-threaded processor. The processor 410 is capable of processing instructions stored in the memory 420 or on the storage device 430. The memory 420 stores information within the system 400. In some implementations, the memory 420 is a non-transitory computer-readable medium. In some implementations, the memory 420 is a volatile memory unit. In some implementations, the memory 420 is a nonvolatile memory unit. The storage device 430 is capable of providing mass storage for the system 400. In some implementations, the storage device 430 is a non-transitory computer-readable medium. In various different implementations, the storage device 430 may include, for example, a hard disk device, an optical disk device, a solid-state drive, a flash drive, or some other large capacity storage device. For example, the storage device may store long-term data (e.g., database data, file system data, etc.). The input/output device 440 provides input/output operations for the system 400. In some implementations, the input/output device 440 may include one or more of a network interface device, e.g., an Ethernet card; a serial communication device, e.g., an RS-232 port; and/or a wireless interface device, e.g., an 802.11 card, a 3G wireless modem, or a 4G wireless modem. In some implementations, the input/output device may include driver devices configured to receive input data and send output data to other input/output devices, e.g., keyboard, printer, and display devices 460. In some examples, mobile computing devices, mobile communication devices, and other devices may be used. In some implementations, at least a portion of the approaches described above may be realized by instructions that upon execution cause one or more processing devices to carry out the processes and functions described above. Such instructions may include, for example, interpreted instructions such as script instructions, or executable code, or other instructions stored in a non-transitory computer readable medium. The storage device 430 may be implemented in a distributed way over a network, for example as a server farm or a set of widely distributed servers, or may be implemented in a single computing device. Although an example processing system has been described in FIG. 4, embodiments of the subject matter, functional operations and processes described in this specification can be implemented in other types of digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible nonvolatile program carrier for execution by, or to control the operation of, data processing apparatus.
Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. The term “system” may encompass all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. A processing system may include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). A processing system may include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A computer program (which may also be referred to or described as a program, software, a software application, a module, a software module, a script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Computers suitable for the execution of a computer program can include, by way of example, general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. A computer generally includes a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. 
Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few. Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's user device in response to requests received from the web browser. Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet. The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. While this specification contains many specific implementation details, these should not be construed as limitations on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. 
Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous. Other steps or stages may be provided, or steps or stages may be eliminated, from the described processes. Accordingly, other implementations are within the scope of the following claims.
Terminology
The phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The term “approximately”, the phrase “approximately equal to”, and other similar phrases, as used in the specification and the claims (e.g., “X has a value of approximately Y” or “X is approximately equal to Y”), should be understood to mean that one value (X) is within a predetermined range of another value (Y). The predetermined range may be plus or minus 20%, 10%, 5%, 3%, 1%, 0.1%, or less than 0.1%, unless otherwise indicated. The indefinite articles “a” and “an,” as used in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.” The phrase “and/or,” as used in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
As used in the specification and in the claims, “or” should be understood to have the same meaning as “and/or” as defined above. For example, when separating items in a list, “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of,” or, when used in the claims, “consisting of,” will refer to the inclusion of exactly one element of a number or list of elements. In general, the term “or” as used shall only be interpreted as indicating exclusive alternatives (i.e. “one or the other but not both”) when preceded by terms of exclusivity, such as “either,” “one of,” “only one of,” or “exactly one of.” “Consisting essentially of,” when used in the claims, shall have its ordinary meaning as used in the field of patent law. As used in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently “at least one of A and/or B”) can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc. The use of “including,” “comprising,” “having,” “containing,” “involving,” and variations thereof, is meant to encompass the items listed thereafter and additional items. Use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed. Ordinal terms are used merely as labels to distinguish one claim element having a certain name from another element having a same name (but for use of the ordinal term), to distinguish the claim elements. Having thus described several aspects of at least one embodiment of this invention, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be part of this disclosure, and are intended to be within the spirit and scope of the invention. Accordingly, the foregoing description and drawings are by way of example only.
11860861
DETAILED DESCRIPTION
A method and apparatus of a device that grows and/or shrinks a shared table that is shared between a writer and a plurality of readers is described. In the following description, numerous specific details are set forth to provide a thorough explanation of embodiments of the present invention. It will be apparent, however, to one skilled in the art, that embodiments of the present invention may be practiced without these specific details. In other instances, well-known components, structures, and techniques have not been shown in detail in order not to obscure the understanding of this description. Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification do not necessarily all refer to the same embodiment. In the following description and claims, the terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. “Coupled” is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other. “Connected” is used to indicate the establishment of communication between two or more elements that are coupled with each other. The processes depicted in the figures that follow are performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, etc.), software (such as is run on a general-purpose computer system or a dedicated machine), or a combination of both. Although the processes are described below in terms of some sequential operations, it should be appreciated that some of the operations described may be performed in a different order. Moreover, some operations may be performed in parallel rather than sequentially. The terms “server,” “client,” and “device” are intended to refer generally to data processing systems rather than specifically to a particular form factor for the server, client, and/or device. A method and apparatus of a device that grows and/or shrinks a shared table that is shared between a writer and a plurality of readers is described. In one embodiment, the dynamic shared table with notification provides a mechanism for stateful sharing of tabular data between a writer and multiple readers in a network element. In addition, this dynamic shared table can grow and shrink as the amount of data to be stored increases and/or decreases. This shared table is intended to accelerate data collections (e.g., routing tables, address tables, etc.) with high frequency update rates. In addition, the shared memory hash table can provide high availability and fault tolerance. While in one embodiment, the dynamic shared table is stored in memory as a shared memory table, in alternate embodiments, the dynamic shared table is stored in another medium. In one embodiment, the dynamic shared table with notification can accelerate a targeted number of collections that are very large, have high update rates, and have a relatively large number of readers (e.g., a routing table with 1 million entries, a Media Access Control (MAC) address table with 288K entries, and 16 to 32 readers). In one embodiment, the dynamic shared table with notifications operates on the principle of coalescing the notifications.
In this embodiment, the writers and readers operate independently by running at their own speed, within bounded memory and with O(1) complexity. In addition, concurrency is handled via wait-free and lock-free data protocols by using 64-bit atomic load/store operations. In this embodiment, atomic read-modify-write variants are not needed. Furthermore, the dynamic shared table does not utilize shared locks, which allows linear scaling of throughput over multiple CPUs as more readers and writers are added. In addition, the dynamic shared table can grow and shrink as needed, depending on the state of the running network element. In one embodiment, the shared table with notifications does not have a central controlling process. Instead, each writer manages a corresponding shared table, independent of other shared tables. If a writer restarts, the writer state is validated and reconciled from shared memory and the execution of the writer resumes. In one embodiment, the throughput of modified values from a writer to multiple readers scales linearly as readers and writers are added. In this embodiment, there is no blocking synchronization required by the participants, and the threads of execution are lock-free and wait-free. In one embodiment, writes to the shared table are coalesced in-place. In this embodiment, a fast writer does not block or consume unbounded memory because of slow or stuck readers. In addition, writers operate independently of the progress or the state of the readers, and vice versa. In one embodiment, the granularity of change notification is a compound value type consisting of multiple individual attribute values rather than individual attributes. Thus, the maximum number of notifications that can ever be queued at once is bounded by the number of elements in the table. In one embodiment, the dynamic shared table mechanism includes three main components: the shared table, the notification queue, and the reader's local shadow table. The writer modifies an entry in the hash table and puts a notification in the notification queue. Readers pull the notification from the queue and populate their local shadow table. In turn, each reader modifies a corresponding process's value collection. In one embodiment, the hash table notification mechanism is based on the notification of slot identifiers (“slot-ids”), not keys. In one embodiment, a slot is a placeholder for a (key, value) pair. In this embodiment, the (key, value) pairs can come and go in a slot, but the slot-id remains the same. Thus, a notification on a slot indicates to a reader that something in this slot changed, and it is up to the reader to figure out the change. Using this slot analogy for the shared table, each entry in the shared table is assigned a slot. So to deliver a notification that a table entry has changed, the writer that modified the table entry delivers the slot identifier. When a reader receives the slot identifier, the reader uses the slot identifier to index directly into the shared table to see what changed. In one embodiment, the use of slots to index the shared table is space and cycle efficient, because slot identifiers are simple 32-bit data, compared to an arbitrary size for the key. In one embodiment, given that each shared table entry corresponds to a slot identifier, the writer can build a notification queue containing slot identifier notifications. In one embodiment, this is the notification queue as described below.
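For illustration, the slot-id notification flow just described might look like the following Go sketch. It is a simplification under stated assumptions: the shared table and queue here are ordinary in-process Go values rather than shared memory, and a buffered channel stands in for the notification queue.

```go
package main

import "fmt"

// slot is a placeholder for one (key, value) pair in the shared table.
type slot struct {
	key   string
	value int64
}

func main() {
	table := make([]slot, 8)              // the shared table: one entry per slot
	notifications := make(chan uint32, 8) // slot-id queue; slot-ids are 32-bit

	// Writer: modify a slot, then deliver its slot-id, not its key.
	table[3] = slot{key: "route-10.0.0.0/8", value: 42}
	notifications <- 3
	close(notifications)

	// Reader: a slot-id indexes directly into the shared table; the
	// reader inspects the slot to discover what changed and updates
	// its local shadow copy.
	shadow := make(map[string]int64)
	for id := range notifications {
		s := table[id]
		shadow[s.key] = s.value
	}
	fmt.Println(shadow) // map[route-10.0.0.0/8:42]
}
```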
In this embodiment, the reader follows this queue and consumes slot identifier notifications, reading the value from the corresponding slot and updating a reader-local shadow copy of the shared memory hash table. These key notifications can then be delivered to the process. If the shared memory hash table and/or notification queue are fixed-size tables, a network operator would tend to configure these tables for a worst-case scenario (e.g., configure a maximum sized table), so that these tables do not run out of capacity during the running of the network element. Configuring these tables to be a maximum size can waste memory, as a network element running with a small forwarding table or other information will not need a shared table that is this large. Furthermore, if the shared table is full, a statically sized table cannot grow to store more information. This problem is further compounded if the network element is configured to have multiple virtual routing and forwarding (VRF) instances, because a network operator may allocate multiple maximally sized shared memory hash tables for the multiple VRF instances. In one embodiment, the dynamic shared table can grow and shrink as needed, depending on the state of the running network element. In one embodiment, the bucket and slot tables of the dynamic shared table grow in-place. By growing the tables in-place, a reader does not need to release a reference to the table in order to complete the change in size of these tables. In this embodiment, each of these tables is a memory-mapped file that can be remapped as needed to either grow or shrink that table. For example and in one embodiment, the bucket and slot tables can grow if the number of slots is too small for the number of (key, value) pairs being stored in the dynamic shared table. In this example, the writer can grow the bucket and slot tables in-place without having to notify the readers that these table sizes have changed. As another example and embodiment, the bucket and slot tables can be shrunk if the number of active slots being used falls below a threshold (e.g., less than 25%). In this example, these tables are shrunk in place without having to notify the readers that these tables have changed in size. In addition, and in one embodiment, the notification queue can also grow and shrink, in-place, as needed. In this embodiment, the notification queue is split into two different queues, a primary and a secondary. These two queues are used when a queue is compacted to recover notification entries for invalidated entries. For example and in one embodiment, if the notification queue becomes full, the secondary queue is grown in-place. The size of the primary can change at the next compaction or growth (as this primary will now be the secondary and vice versa). Alternatively, if the notification queue is too large (e.g., the number of active entries in the queues is below a certain threshold), the queue is compacted and remapped to be smaller. FIG. 1 is a block diagram of one embodiment of a network element 100 that includes a dynamic shared table with notifications to readers for updates. In FIG. 1, the network element 100 includes a data plane 102 and a control plane 104. In one embodiment, the data plane 102 receives, processes, and forwards network data using various configuration data (e.g., forwarding, security, quality of service (QoS), and other network traffic processing information).
For example, for each received packet of the network traffic, the data plane determines a destination address of that packet, looks up the requisite information for that destination in one or more tables stored in the data plane, and forwards the packet out the proper outgoing interface. The data plane 102 includes multiple switches 106A-C, where each switch 106A-C receives, processes, and/or forwards network traffic. In one embodiment, each switch includes an ASIC that is coupled to one or more ports. For example and in one embodiment, the network element 100 is a single logical switch that includes multiple ASICs, where each ASIC is coupled to multiple ports. In this example, each switch 106A-C includes one ASIC and multiple ports (e.g., 24 ports/ASIC). In one embodiment, each switch 106A-C includes a reader 112A-C, co-processor 114A-C, ports 116A-C, and process(s) 118A-C, respectively. In one embodiment, the reader 112A-C reads the data in the tables 110 and stores the data in a local buffer (not illustrated) of the respective switch 106A-C. In this embodiment, each reader 112A-C is notified of new data modifications, and the corresponding reader 112A-C performs lock-free and wait-free reads of the data so as to not read data that is in the middle of being modified. Performing a read of a table as a result of being notified is further described in FIG. 11A below. In one embodiment, the co-processor 114A-C is a processor for each switch 106A-C that can be used to accelerate various functions of the switch 106A-C. For example and in one embodiment, the co-processor 114A-C can accelerate bulk reads and writes from memory in the control plane 104 to the local buffers. In one embodiment, the ports 116A-C are used to receive and transmit network traffic. The ports 116A-C can be the same or different physical media (e.g., copper, optical, wireless, and/or another physical media). In one embodiment, each of the process(s) 118A-C is a component of software that reads the configuration database, interacts with some resource (hardware or a network protocol or some other software component or process, e.g. the operating system kernel), and produces a status of that resource. In one embodiment, the control plane 104 gathers the configuration data from different sources (e.g., locally stored configuration data, via a command line interface, or other management channel (e.g., SNMP, Simple Object Access Protocol (SOAP), Representational State Transfer type Application Programming Interface (RESTful API), Hypertext Transfer Protocol (HTTP), HTTP over Secure Sockets layer (HTTPs), Network Configuration Protocol (NetConf), Secure Shell (SSH), and/or another management protocol)) and writes this configuration data to one or more tables 110. In one embodiment, the control plane 104 includes a writer 108 that writes configuration data to the table(s) 110 by performing wait-free writes and reader notifications, such that a reader reading the data can read data that is not in the middle of being modified. Performing a wait-free write of a table with reader notification is further described in FIGS. 8-10 below. In one embodiment, each of the one or more tables 110 is a table that is shared between the writer 108 and the readers 112A-C. In this embodiment, the table(s) 110 are stored in memory that is shared between the data plane 102 and the control plane 104. In one embodiment, the tables 110 store configuration data (e.g., forwarding, security, quality of service (QoS), and other network traffic processing information).
In this embodiment, the writer 108 adds, deletes, or updates the data stored in the tables 110 and, in addition, notifies the readers 112A-C that there is new data in the tables 110 to be read. The reader 112A-C receives the notification, determines which data has been modified from the notification, and reads this data from the tables 110. In addition, the reader 112A-C updates the corresponding process 118A-C with the modified data. In one embodiment, the writer 108 notifies the reader using a notification queue. In one embodiment, the writer 108 stores the notification at the head of the notification queue for a particular piece of data (e.g., a routing table entry) and invalidates previous notifications in this queue for this particular piece of data. In one embodiment, the shared tables 110 are each a dynamic shared memory hash table. In another embodiment, the shared tables 110 are a different type of shared table. In one embodiment, the network element 100 can include multiple virtual routing and forwarding (VRF) instances. In this embodiment, each VRF instance has distinct routing and/or forwarding information that is different and/or separate from other VRFs. In addition, this further allows a network operator to segment network paths without using multiple devices. If the table(s) 110 are statically configured, there are two problems. First, the table(s) cannot grow so as to store the forwarding information needed for the network element to run. Second, because the table(s) 110 are static, a network operator may configure the table(s) 110 to be sized for a worst-case scenario (e.g., set the table(s) 110 to be a maximum size), even though much of the time, the network element 100 does not take advantage of maximally sized tables. This leads to an inefficient allocation of resources for the network element 100. Having multiple VRF instances further compounds the problem because a network operator may allocate multiple maximally sized tables for the multiple VRF instances. In order to overcome the inefficiencies of statically-sized forwarding tables, a network element can have dynamically sized table(s) 110, where these table(s) 110 are shared memory hash tables that include notifications for readers. FIG. 2 is a block diagram of one embodiment of a dynamic shared memory hash table system 200 with notifications to one or more readers. In FIG. 2, the shared memory hash table system includes a writer 202, one or more readers 204, and the shared memory hash table 206. In one embodiment, the writer 202 writes values to the shared memory hash table 206 using a wait-free write, where each of the values is a (key, value) pair. The shared memory hash table 206 is a data structure used to implement an associative array of entries, which is a structure that can map the data keys to the data values. A hash table uses a hash function to compute an index into an array of entries, from which the correct value can be stored or retrieved. In one embodiment, the shared memory hash table 206 includes a notification queue 208, a bucket table 214, a slot table 212, and a values table 210. The shared memory hash table 206 is further described in FIGS. 3-7 below. In one embodiment, the readers 204 each read the values stored in the shared memory hash table 206. In one embodiment, the shared memory hash table 206 is dynamic because the shared tables and the notification queue of the shared memory hash table 206 can independently grow and/or shrink as needed.
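The head-of-queue coalescing described above (at most one pending notification per table entry) can be sketched as follows; the type and field names are assumptions, and the queue here is an in-process slice rather than the shared-memory structure of the actual design.

```go
package main

import "fmt"

const invalid = ^uint32(0) // sentinel marking an invalidated queue entry

// notifyQueue sketches the coalescing described above: the writer
// appends a slot-id notification and uses a writer-private positions
// table (slot-id -> queue index) to invalidate any earlier, now-stale
// notification for the same slot, bounding the queue by the number of
// table entries.
type notifyQueue struct {
	entries   []uint32
	positions map[uint32]int
}

func (q *notifyQueue) notify(slotID uint32) {
	if pos, ok := q.positions[slotID]; ok {
		q.entries[pos] = invalid // coalesce: drop the older notification
	}
	q.positions[slotID] = len(q.entries)
	q.entries = append(q.entries, slotID)
}

func main() {
	q := &notifyQueue{positions: make(map[uint32]int)}
	q.notify(3)
	q.notify(7)
	q.notify(3) // the second write to slot 3 invalidates the first notification
	for _, e := range q.entries {
		if e != invalid {
			fmt.Println("slot", e, "changed") // prints slot 7, then slot 3
		}
	}
}
```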
In one embodiment, these tables (e.g., the shared tables and/or notification queue) grow and/or shrink in-place without allocating a new table and copying over the contents from the old table to a new table. An “allocate-and-copy” scheme for a table to grow or shrink creates a new table, copies over the data from the old table, and de-allocates the old table. The “allocate-and-copy” mechanism can consume extra memory, which is further compounded by the “lazy reader” problem. Because each of the tables is written to by a writer and read from multiple readers, de-allocating the old table cannot occur until each of the readers has switched over to the new table. By waiting to de-allocate the table, extra memory for the old table is still being used by the network element. For example, if a table has 100 slots and is doubled, allocating a new 200 slot table before de-allocating the old table will consume 300 slots worth of memory. This extra consumption of memory is further compounded by the “lazy reader” problem. In a notification-based mechanism, a reader does not read the table until notified that an entry in the table is ready to be read by the reader. Certain readers do not read the table very often (e.g., a reader for a command line interface). Thus, a reader may not give up a reference to an old table for quite a while. During this time, the old table may grow one, two, three, or more times, during which the old table would not be de-allocated. This would lead to an inefficient growth of memory consumption for unneeded tables. Instead of growing or shrinking these tables using the “allocate-and-copy” mechanism, in one embodiment, the network element grows these tables in-place without allocating a new table. In this embodiment, if a table needs to be grown, the network element remaps the table to include a new segment for the table. By remapping the table, the network element provides a contiguous range of memory for the writer and/or readers to access the grown table. In addition, the network element updates characteristics of the table, so as to indicate that the table has changed. By indicating that the table has changed, a reader can access these characteristics to determine if the reader needs to update the information the reader uses to access this table. For example and in one embodiment, the network element updates the number of slots that the table currently holds and a version of the table. By dynamically growing the table in-place, the network element makes more efficient use of the table's memory. In addition, growing the table in-place allows a reader to update the reader's information on the table as needed, and the table-growing mechanism is not dependent on a reader action to complete the table growth. Growing a table is further described in FIG. 8A below. In one embodiment, the network element can shrink a table (e.g., the shared tables and/or notification queue). In this embodiment, if the number of active entries in a table falls below a certain threshold, the network element shrinks the table in-place. As with growing a table in-place, shrinking the table in-place makes more efficient use of the memory for the table and also is not dependent on a reader action to complete the table shrinkage. While in one embodiment, the network element shrinks the table if the number of active entries is less than 25%, in alternate embodiments, the network element shrinks the table using a different threshold (greater than or less than 25%).
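A rough sketch of growing a table in-place and publishing its changed characteristics (slot count and version) follows. It is illustrative only: a slice append stands in for remapping a memory-mapped file, and the header field names are assumptions.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// tableHeader holds the characteristics a writer publishes when it
// grows a table in-place: the current slot count and a version.
type tableHeader struct {
	numSlots atomic.Uint64
	version  atomic.Uint64
}

type table struct {
	header tableHeader
	slots  []int64 // in the actual design, a remappable memory-mapped file
}

// grow extends the table in-place to newSlots and publishes the new
// size and version; no reader action is needed to complete the growth.
func (t *table) grow(newSlots int) {
	t.slots = append(t.slots, make([]int64, newSlots-len(t.slots))...)
	t.header.numSlots.Store(uint64(newSlots))
	t.header.version.Add(1)
}

func main() {
	t := &table{slots: make([]int64, 4)}
	t.header.numSlots.Store(4)

	readerVersion := t.header.version.Load() // reader's cached view
	t.grow(8)                                // writer doubles the table

	// On its next access, the reader notices the version change and
	// refreshes its cached slot count.
	if t.header.version.Load() != readerVersion {
		fmt.Println("table grew to", t.header.numSlots.Load(), "slots")
	}
}
```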
In one embodiment, the network element shrinks the table by identifying a segment of the table to shrink and copying active entries from the identified segment to slots in the active segment of the table. Because the location of the active entries has been moved, the network element issues notifications for each entry that was moved. The network element remaps the table so as to shrink the table in-place. In addition, the network element updates characteristics of the table, so as to indicate that the table has changed. By indicating that the table has changed, a reader can access these characteristics to determine if the reader needs to update the information the reader uses to access this table. For example and in one embodiment, the network element updates the number of slots that the table currently holds and a version of the table. Shrinking a table is further described in FIGS. 9A-B below. In one embodiment, the shared tables 224 include the bucket table 214 and the slot table 212. In one embodiment, the bucket table 214 serves as the hash function range: the hashing function will hash a key into a position in the bucket table 214. The bucket table entry contains a versioned offset, linking the bucket to a chain in the slot table 212. The bucket table is further described in FIG. 3 below. In one embodiment, the slot table 212 is an array of slot entries, each entry containing a versioned offset to the key/value data in shared memory, plus a versioned link. The versioned link is used for building hash chains on occupied entries, and for free list management on unoccupied entries. The slot table 212 is further described in FIG. 4 below. In one embodiment, the value table 210 is the region where the value data is stored in shared memory. Each of the versioned offsets in the slot table references an entry in the values table 210. In one embodiment, a writer 202 further includes a positions table 216, which is used to locate a slot's position in the notification queue 208. In this embodiment, the positions table 216 is a slot identifier to position table that is maintained privately by the writer to provide a direct lookup of the slot identifier to notification queue mapping. While in this embodiment the slot table 212 and value table 210 are illustrated as separate tables, in alternative embodiments, the slot table 212 and the value table 210 may be combined into a single “SlotValue” table. In this embodiment, the slot and value are stored in a single table and a lookaside buffer is used to modify the contents of the SlotValue table without allowing readers to see intermediate states of a partially-written value. For example and in one embodiment, the lookaside buffer can be a lookaside buffer as described in U.S. patent application Ser. No. 14/270,122, entitled “System and Method for Reading and Writing Data with a Shared Memory Hash Table”, filed on May 5, 2014. The benefit of this embodiment is a reduction in code complexity and cache footprint, and a commensurate improvement in runtime speed, as there are fewer pointers to maintain, less code to execute, and better cache locality. In one embodiment, the reader(s) 204 read the data stored in the values table 210 and use this data to update the corresponding process. Each reader 204 includes a local values table 218, a shadow table 220, and a shadow bucket table 222. In one embodiment, the local values table 218, shadow table 220, and shadow bucket table 222 are local snapshots of the value table 210, slot table 212, and bucket table 214, respectively.
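For example and in one embodiment, a lookup through the shared tables described above walks from the bucket table through the slot table chain to the values region. The following is a simplified, illustrative sketch that omits the versioned re-reads a real reader would perform for concurrent safety; the hash function, all names, the INVALID sentinel, and the representation of buckets as plain chain heads (without their version fields) are assumptions:

#include <stdint.h>
#include <string.h>

#define INVALID 0xffffffffu

struct SlotEntry {
    uint32_t valueOffset;   /* versioned offset into the values region */
    uint32_t valueVersion;
    uint32_t next;          /* hash chain link (or freelist link) */
    uint32_t nextVersion;
};

/* Toy hash for exposition only. */
static uint32_t hashKey(const char *key, uint32_t len) {
    uint32_t h = 2166136261u;
    for (uint32_t i = 0; i < len; i++)
        h = (h ^ (uint8_t)key[i]) * 16777619u;
    return h;
}

/* Follow the bucket's chain through the slot table; each value entry is
 * assumed to begin with its key. Returns the slot-id or INVALID. */
uint32_t lookupSlot(const uint32_t *buckets, uint32_t numBuckets,
                    const struct SlotEntry *slots, const char *values,
                    const char *key, uint32_t keyLen) {
    uint32_t slot = buckets[hashKey(key, keyLen) % numBuckets];
    while (slot != INVALID) {
        if (memcmp(values + slots[slot].valueOffset, key, keyLen) == 0)
            return slot;
        slot = slots[slot].next;
    }
    return INVALID;
}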
In one embodiment, a snapshot table is a snapshot of the shared table. In this embodiment, whereas a reader may need to take care when accessing a shared table, the snapshot does not change until the reader specifically copies data from the shared table into the “snapshot” table. In this embodiment, the snapshot tables allow software (e.g., the readers) that is unaware or unable to deal with the constraints of shared tables to run unmodified within the reading process. For example and in one embodiment, an unsophisticated bit of software may expect that if it reads key K and retrieves value V, an immediate re-read of key K will return value V again. Due to the concurrent operation of the shared memory hash table, repeated reading of this key may not guarantee a retrieval of the same value. In one embodiment, handling this concurrent operation can require changes to the reader software if, for instance, it was originally written without the shared memory approach in mind. For example and in one embodiment, one approach to sending notifications for a hash table between processes is to send a stream of key-to-value updates (insertions, deletions, or changes) over a network socket. In this embodiment, the local copy within the reader's address space does not change except when the reader intentionally de-queues updates from the socket. In another embodiment, the hash table in shared memory can change asynchronously, requiring either a change in the reader software or some code to produce a snapshot version of the table that does not change asynchronously. In one embodiment, the local values table 218 is the region where the sanitized version of the value data is stored in shared memory. In one embodiment, the shadow table 220 is a reader-local “shadow” of the shared memory slot table 212. It represents the reader's sanitized copy of the constantly changing slot table 212 state, as updated exclusively by the received slot identifier (“slot-id”) notifications. In one embodiment, the shadow table 220 is sized with the same number of entries, N, and has matching slot-id indexes. The shadow table 220 is further described in FIG. 5 below. In one embodiment, the shadow bucket table 222 is similar to the bucket table 214, and the shadow bucket table 222 provides a hash index into the shadow slot table 220, so that the reader(s) 204 can perform lookups on their local sanitized state. The shadow bucket table 222 is further described in FIG. 6 below. In one embodiment, to notify each reader 204 of the changes to the values stored in the values table, the writer 202 stores notifications in the notification queue 208. In one embodiment, the notification queue 208 is a dual shared notification queue for any number of readers, and writers are unaware of any reader state. The notification queue 208 is further described in FIG. 7 below. As described above, the shared memory hash table 206 is dynamic because the shared tables 224 and notification queue 208 can grow and/or shrink as needed, independently, according to the running state of the network element. In one embodiment, the local tables of the readers, such as the local values table 218, shadow table 220, and shadow bucket table 222, can also dynamically grow and/or shrink, as needed, to correspond to the change of the shared tables 224. Growing and/or shrinking of these tables is further described in FIGS. 3-7 and 11 below. In one embodiment, each of the shared tables 224 and the notification queue 208 can grow in-place.
In this embodiment, each of the shared tables 224 and the notification queue 208 includes multiple components. When the shared tables 224 or the notification queue 208 grows in-place, each of those components grows in-place as well. In one embodiment, if the shared tables 224 grow, each of the bucket table 214 and the slots table 212 grows in-place as well. In this embodiment, the values table 210 is dynamic and grows/shrinks using a different mechanism (as described below). Likewise, if the notification queue 208 grows, the primary and secondary queues of the notification queue 208 grow in-place. In one embodiment, each of the shared tables 224 and the notification queue 208 includes table characteristics. In one embodiment, the table characteristics include a numslots and a version. In this embodiment, the numslots is the number of slots available in the table. This can give a measure of the current size of the table. In one embodiment, each of the shared tables 224 and the notification queue 208 can be independently grown in page-size increments (e.g., grown in 4096 byte increments). In this embodiment, each of the shared tables 224 and the notification queue 208 can start out with an initial size of one system page total, and be grown in page increments as needed (e.g., one page→two pages→four pages→etc.). For example and in one embodiment, there is one page of memory that is allocated among the bucket table 214 and the slots table 212. In one embodiment, the amount by which one of the tables is grown can be a fixed size (e.g., double the size, grow in minimum 50% increments, or some other fixed size increment) or can be adjustable (e.g., increasing the number of slots in the table by successively larger powers of 2). For each size, numslots is the number of slots that fit in the current size of the table. In addition, the table characteristics further include a version. In one embodiment, the version is a monotonically increasing integer. In this embodiment, the version is changed upon a growth or shrinkage of one of the tables. The version and, in some embodiments, the numslots value can be used by a reader to determine when either the shared tables 224 or the notification queue 208 has changed. In another embodiment, each of the shared tables 224 and the notification queue 208 can be independently shrunk in-place by deleting a segment from each of the component tables. In this embodiment, the deletion of the segment is accomplished by remapping that component table, as will be discussed below in FIGS. 3-7. After the table is shrunk in-place, the table characteristics are updated. In one embodiment, the numslots is updated and the version is incremented. In a further embodiment, the reader local tables can grow and/or shrink in-place as well. In this embodiment, if the reader local tables need to grow and/or shrink, the shadow table 220 and shadow bucket table 222 grow and/or shrink commensurately. In one embodiment, the reader local table is remapped as described in FIG. 11A below. As described above, the shared memory hash table includes a dynamic bucket table. FIG. 3 is a block diagram of one embodiment of a dynamic bucket table 300 of the shared memory hash table. The bucket table 300 serves as the hash function range: the hashing function will hash a key into a position in the bucket table 300. The bucket table entry contains a versioned offset, linking the bucket to a chain in the slot table. In one embodiment, versioned offsets are used in the shared memory hash table data structures.
The versioned offsets allow for a lock and wait free mechanism for writers and readers to safely access shared state. For example and in one embodiment, the writer and reader use a lock and wait free mechanism as described in U.S. patent application Ser. No. 14/270,226, entitled “System and Method of a Shared Memory Hash Table with Notifications”, filed on May 5, 2014. In one embodiment, the bucket table 300 can grow in-place by adding a segment at the end of the table 300 in response to the shared tables growing in-place. In this embodiment, a bucket add segment 306 is added to the bucket table 300. This additional segment 306 can be used to store additional entries. In another embodiment, the bucket table 300 can shrink in-place by deleting a segment at the end of the table 300 by remapping the table 300. In this embodiment, a segment is identified at the end of the table, such as the bucket delete segment 308. Before the segment is deleted, the network element copies the active entries in this bucket delete segment 308 into an active segment of the bucket table 300. Furthermore, because the copied entry locations are changing, the network element issues notifications for the changes to these entries. Similar to the bucket table growth, the bucket table 300 shrinks in increments of page sizes. While in one embodiment the bucket add and delete segments 306, 308 are illustrated as having the same size, in alternate embodiments, the bucket add and delete segments 306, 308 can have different sizes. Each of the bucket entries can reference a slot entry in a slot table. FIG. 4 is a block diagram of one embodiment of a dynamic slot table 400 for the shared memory hash table. In FIG. 4, the slot table 400 is an array of slot entries, where each entry contains a versioned offset to the key/value data in shared memory, plus a versioned link. The versioned link is used for building hash chains on occupied entries, and for free list management on unoccupied entries:

typedef struct {
    uint32_t valueOffset;
    uint32_t valueVersion;
    uint32_t next;
    uint32_t nextVersion;
}

In one embodiment, the slot table 400 includes a header that stores the number of slots and a version. In this embodiment, the header is updated atomically upon a growing or shrinking of the slot table 400. Initially, the slot table 400 has the entries linked onto a writer-owned freelist. When a new key/value is inserted into the table, a slot entry is allocated from the freelist, with the index of the entry being the slot identifier. This automatic allocation and mapping of slot identifiers, used in the notification mechanism, is a feature of this coalesced hashing algorithm. If the newly inserted key/value has collided with an existing slot linked to the bucket, the new allocation is linked to the existing chain in key order. Ordering the chains by key helps preserve important iteration properties (such as no duplicates) and allows for faster key lookup. In one embodiment, the slots table 400 can grow in-place by adding a segment at the end of the slots table 400 in response to the shared tables growing in-place. In this embodiment, a slots add segment 410 is added to the slots table 400. This additional segment 410 can be used to store additional entries. In another embodiment, the slots table 400 can shrink in-place by deleting a segment at the end of the table 400 by remapping the slots table 400. In this embodiment, a segment is identified at the end of the table, such as the slots delete segment 410.
Before the segment is deleted, the network element copies the active entries in this slots delete segment 410 into an active segment of the slots table 400. Furthermore, because the copied entry locations are changing, the network element issues notifications for the changes to these entries. Similar to the bucket table growth, the slots table 400 shrinks in increments of page sizes. While in one embodiment the slots add and delete segments 410 are illustrated as having the same size, in alternate embodiments, the slots add and delete segments 410 can have different sizes. In one embodiment, the values table is the region where the value data are stored in shared memory. In this embodiment, the versioned offsets in the slot table reference the values stored in the values table. In one embodiment, the value types are statically sized, and thus, the values table is a dynamically sized table that can grow and/or shrink depending on the state of the network element. In addition, each entry has a link for a freelist, making entry allocation and deallocation easy. In another embodiment, dynamically sized value types are used and a dynamic memory allocator is used. In this embodiment, the allocator need not worry about concurrency issues as the readers are not aware of allocator metadata. FIG. 5 is a block diagram of one embodiment of a dynamic shadow table 500 for the shared memory hash table. In FIG. 5, the shadow table 500 is a reader-local “shadow” of the shared memory slot table. It represents the reader's sanitized copy of the constantly changing slot table state, as updated exclusively by the received slot-id notifications. In one embodiment, the shadow table is sized with the same number of entries, N, and has matching slot-id indexes. In one embodiment, each of the entries includes a pointer 502 to a value for the slot, a previous pointer 504 to the previous slot, and a next pointer 506 to the next slot. When a slot-id notification for slot S is first processed by the reader, the reader compares its shadow table key in slot S with the slot key in shared memory in slot S. If the values of the two keys are the same, or if the shadow table entry for slot S is empty, then the key (say, key A) can be delivered to the process as an update. If the keys are different, say if key B occupies the shadow slot, the reader knows key B is being deleted and key A is being created, so both keys B and A are delivered to the process as updates (separately, of course). In either case, prior to delivering any updates to the process, the shadow table is updated to the current state: that is, key A now occupies the shadow slot.
The following pseudocode illustrates this algorithm:

// retrieve next slot notification
uint32_t slot = conquer.getSlot();
VALUE value;
uint32_t version;
// performs a lock free retrieval of key/value at a given slot
do {
    version = slots[ slot ].version;
    value = slots[ slot ].getValue();
} while ( version != slots[ slot ].version );
// retrieve old shadow table key
KEY shadowKey = shadow[ slot ].getKey();
// is entry deleted?
if ( value.isEmpty() ) {
    // yes, also delete from shadow index
    deleteShadowEntry( shadowKey );
    // tell Process about possibly deleted key
    deliverProcessUpdateFor( shadowKey );
} else {
    // is the old shadow key and new key different?
    if ( shadowKey != value.getKey() ) {
        // delete old shadow key from table
        deleteShadowEntry( shadowKey );
        // yes, deliver old (possibly deleted) key update to the Process
        deliverProcessUpdateFor( shadowKey );
    }
    // insert new key into shadow at given slot
    insertShadowEntry( value.getKey(), slot );
    // tell Process about changed key/value
    deliverProcessUpdateFor( value );
}

If, as part of the update notification, the process wishes to look up keys A, B, or any other key in the table, the infrastructure restricts lookups to be local, and not to the shared memory hash table. If the shadow lookup succeeds, then a subsequent lookup into the shared memory hash table can proceed to retrieve the most up-to-date value. Otherwise, the reader risks the “lost delete” race condition. This is one of the reasons why the shadow table maintains a snapshot copy of the keys. For example and in one embodiment, a reader compares the shadow slot with the writer slot and copies the writer copy if different. In this embodiment, readers do local lookups into the shadow table to avoid the “lost delete” race condition. Since the shadow table is local to the reader and is accessed only by that reader, this shadow table does not need to use versioned offsets. Instead, the shadow table can use local 32-bit pointers to the local key buffer. In one embodiment, the shadow table 500 can grow in-place by adding a segment at the end of the shadow table 500 in response to a slot notification of an unmapped segment. In this embodiment, a shadow add segment 510 is added to the shadow table 500. This additional segment 510 can be used to store additional entries. In another embodiment, the shadow table 500 can shrink in-place by deleting a segment at the end of the table 500 by remapping the shadow table 500. In this embodiment, a segment is identified at the end of the table, such as the shadow delete segment 510. Before the segment is deleted, the network element copies the active entries in this shadow delete segment 510 into an active segment of the shadow table 500. Furthermore, because the copied entry locations are changing, the network element issues notifications for the changes to these entries. Similar to the bucket table growth, the shadow table 500 shrinks in increments of page sizes. While in one embodiment the shadow add and delete segments 510 are illustrated as having the same size, in alternate embodiments, the shadow add and delete segments 510 can have different sizes. In one embodiment, and in addition to the shadow table, each reader includes a shadow bucket table. FIG. 6 is a block diagram of one embodiment of a dynamic shadow bucket table 600 for the shared memory hash table. In FIG. 6, the shadow bucket table 600 provides a hash index into the shadow slot table so that readers can perform lookups on their local sanitized state.
The hash function indexes into this table, allowing the lookup to follow the chain. In one embodiment, the shadow bucket table dynamically grows and/or shrinks depending on the state of the network element. In one embodiment, this table is private to the reader and does not reside in shared memory. In this embodiment, because each shadow bucket table 600 corresponds to one reader, the shadow entries do not need a versioned offset. In one embodiment, the shadow bucket table 600 can grow in-place by adding a segment at the end of the shadow bucket table 600 in response to a reader local table growing in-place. In this embodiment, a shadow bucket add segment 606 is added to the shadow bucket table 600. This additional segment 606 can be used to store additional entries. In another embodiment, the shadow bucket table 600 can shrink in-place by deleting a segment at the end of the table 600 by remapping the shadow bucket table 600. In this embodiment, a segment is identified at the end of the table, such as the shadow bucket delete segment 606. Before the segment is deleted, the network element copies the active entries in this shadow bucket delete segment 606 into an active segment of the shadow bucket table 600. Furthermore, because the copied entry locations are changing, the network element issues notifications for the changes to these entries. Similar to the bucket table growth, the shadow bucket table 600 shrinks in increments of page sizes. While in one embodiment the shadow bucket add and delete segments 606 are illustrated as having the same size, in alternate embodiments, the shadow bucket add and delete segments 606 can have different sizes. FIG. 7 is a block diagram of one embodiment of a dynamic notification queue 700 for the shared memory hash table. In FIG. 7, the notification queue 700 is a dual shared notification queue 700 for any number of readers, with writers being unaware of any reader state. The notification queue 700 includes a primary and a secondary queue 702A-B. In one embodiment, a writer publishes slot changes to the primary notification queue 702A. In one embodiment, each entry in the queue is a uint32_t slot-id plus a uint64_t sequence. The sequence is a virtual timer that increments each time the writer inserts something in the queue. On every slot identifier insertion to the notification queue, the writer invalidates the old entries occupied by the same slot. This is part of the coalescing mechanism: old, prior entries are wiped out, while new recent entries are in the front. To locate a slot's position in the queue, a slot identifier to position table is maintained privately by the writer to provide direct lookup. In one embodiment, the notification queue 700 can fill up with invalidated entries and slot identifiers, at which time the writer initiates a compaction phase to sweep out the invalidated entries. To notify sleeping readers that a new slot is available for consumption, the writer employs an out-of-band “reader-kick” mechanism. A single byte is sent over a Unix domain socket, giving the reader a hint that notification data is available. In one embodiment, the notification queue 700 can grow and shrink as needed, depending on the running state of the network element. In this embodiment, if the notification queue 700 runs out of empty entries, the notification queue 700 can be increased. In this embodiment, the notification queue 700 is grown by growing the primary and secondary queues in-place. In one embodiment, the notification queue 700 can be increased in page-sized increments.
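For example and in one embodiment, the coalescing publication described above can be sketched as follows. This is an in-process illustration of the logic only, extending the NotificationEntry sketched earlier with a validity flag; a real implementation would place the queue and its entries in shared memory and update them wait-free, and the names and the array-based positions table are illustrative assumptions:

#include <stdint.h>

#define NO_POSITION 0xffffffffu

struct NotificationEntry {
    uint32_t slotId;
    uint64_t sequence;
    uint32_t valid;     /* cleared when a newer entry supersedes this one */
};

struct NotificationQueue {
    struct NotificationEntry *entries;  /* primary queue storage */
    uint32_t tail;                      /* next free entry */
    uint64_t nextSequence;              /* virtual timer */
    uint32_t *positions;  /* writer-private: slot-id -> queue position */
};

/* Publish a change for slotId, invalidating any older entry for the
 * same slot so readers only act on the most recent notification. */
void publish(struct NotificationQueue *q, uint32_t slotId) {
    uint32_t old = q->positions[slotId];
    if (old != NO_POSITION)
        q->entries[old].valid = 0;      /* coalesce: wipe the old entry */
    q->positions[slotId] = q->tail;
    q->entries[q->tail].slotId = slotId;
    q->entries[q->tail].sequence = q->nextSequence++;
    q->entries[q->tail].valid = 1;
    q->tail++;   /* a full queue triggers compaction, described below */
}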
Growing the notification queue is further described in FIG. 8B below. In addition, the notification queue 700 can be shrunk: if the number of active entries in the primary queue falls below a threshold (e.g., 25%), the primary and secondary queues 702A-B are shrunk. In one embodiment, each of the queues 702A-B is shrunk in-place. Shrinking the notification queue 700 is further described in FIG. 10 below. In one embodiment, the notification queue can be compacted when the end of the queue is reached by removing the invalidated entries. In this embodiment, the secondary notification queue 702B, of the same size as the primary notification queue 702A, is maintained. The secondary notification queue 702B allows the writer to sweep the queue and copy over the active slot identifier entries, while readers continue to concurrently process entries. When the writer finishes compacting to the secondary queue 702B, the writer flips a master versioned pointer and the secondary queue 702B becomes the primary queue 702A. Readers that are concurrently looking at the notification queue 700 should validate that the master notification queue pointer has not changed before and after reading from the queue. If a reader discovers that the notification queue was compacted while a slot/sequence was being read, the reader repositions itself. In one embodiment, the reader repositions itself by doing a binary search on the queue sequences to find the new position. The reader finds its position in the newly-compacted queue by searching for the first sequence that is larger than the sequence of the entry that it last processed. Once the new position is found, the reader can continue consuming slot notifications. Similar to the other tables, the notification queue 700 is dynamically sized, and can grow and/or shrink depending on the state of the network element. In one embodiment, each of the primary and secondary queues 702A-B can grow in-place by adding a segment to the end of the respective queue by remapping the queue. The notification queue further includes a numslots and a version. Similar to the numslots and version for the shared tables, the numslots is the number of slots in the notification queue 700 and the version is a version of the notification queue 700. The notification queue 700 can also shrink as needed. Shrinking the notification queue is further described in FIG. 10 below. In one embodiment, the size of each notification slot is smaller than the size of the entries for the shared table. With a notification queue 700 allocated in page size increments, the number of slots available for the notification queue 700 can be greater than the number of slots available in the shared table above. Thus, the notification queue does not need to be grown and/or shrunk on the same schedule as the tables in the shared table. In one embodiment, the notification queue 700 grows and/or shrinks as needed and not at the same time as the tables in the shared table. As described above, the shared tables can be grown in-place, depending on the running state of the network element. FIG. 8A is a flow diagram of one embodiment of a process 800 to grow the shared tables. In one embodiment, a writer performs process 800 to grow the shared tables, such as writer 108 as described in FIG. 1 above. In FIG. 8A, process 800 begins by receiving an entry to be stored in the shared tables at block 802. At block 804, process 800 determines if process 800 is to grow the table.
In one embodiment, process 800 grows the table if the table is full and without an available entry for a new slot, or is within a threshold of being full. If the table does not need to grow, at block 806, process 800 adds the entry. For example and in one embodiment, if the table is the shared tables, process 800 adds an entry to the bucket and slots tables, so as to add an entry to the values table, using a lock and wait free mechanism as described in U.S. patent application Ser. No. 14/270,226, entitled “System and Method of a Shared Memory Hash Table with Notifications”, filed on May 5, 2014. If the table does need to grow, process 800 initializes a new segment for the table growth. In one embodiment, process 800 determines a size for the new segment. In this embodiment, the segment size can be a static increase (e.g., double the existing size, add 50%, or add one or more page size increments) or an adjustable increase (e.g., double the existing size initially, and use a smaller size as the size of the table gets closer to a maximum size). In one embodiment, the table growth is done in page size increments. With this segment size, process 800 increases the size of the table by remapping each of the component tables. In one embodiment, if the shared tables grow by the segment size, process 800 grows each of the bucket and the slots tables based on the segment size. For example and in one embodiment, for shared tables growth, if each entry in the bucket and slots tables is 64 bits and 128 bits, respectively, increasing the shared tables by one page of memory (4096 bytes) would increase the shared tables by approximately 170 new slots. In this example, the bucket table and slots table would be grown in a 1:2 ratio. Process 800 grows the bucket and slots tables by approximately 170 slots by remapping each of these tables. In one embodiment, each of the bucket and slots tables is a memory mapped file and remapping is done via a system call. In this embodiment, by remapping the table, each of the component tables appears as a contiguous range of memory to the writer and each of the readers, with the same starting reference. Thus, neither the writer nor the reader needs to reset a reference to access the remapped table. At block 810, process 800 adds the entry to the shared tables or the notification queue. If the entry was a new value in the shared tables, process 800 allocates one of the bucket and slots entries and adds the value as described above with reference to FIGS. 3 and 4. Process 800 updates the numslots and version of the table at block 812. In one embodiment, process 800 updates these values atomically. In addition to growing the shared memory hash table, the notification queue can grow as needed as well. FIG. 8B is a flow diagram of one embodiment of a process 850 to grow the notification queues. In one embodiment, a writer performs process 850 to grow the notification queue, such as writer 108 as described in FIG. 1 above. In FIG. 8B, process 850 begins by receiving an entry to be stored in the notification queue at block 852. At block 854, process 850 determines if process 850 is to grow the notification queue. In one embodiment, process 850 grows the notification queue if the queue is full and without an available entry for a new entry, or is within a threshold of being full. If the notification queue does not need to grow, at block 856, process 850 adds the entry into the notification queue. For example and in one embodiment, process 850 adds an entry in the notification queue as described in FIG. 7 above.
If the notification queue does need to grow, process 850 initializes a new segment for the notification queue growth at block 858. In one embodiment, process 850 determines a size for the new segment. In this embodiment, the segment size can be a static increase (e.g., double the existing size, add 50%, or add one or more page size increments) or an adjustable increase (e.g., double the existing size initially, and use larger or smaller sizes as the size of the table gets larger). In one embodiment, the notification queue growth is done in page size increments. With this segment size, process 850 increases the size of the notification queue by remapping each of the notification queues (e.g., the primary and secondary queues). In one embodiment, if the notification queue grows by the segment size, process 850 grows each of the primary and secondary queues based on the segment size. In one embodiment, each of the primary and secondary queues is a memory mapped file and remapping is done via a system call. In this embodiment, by remapping the notification queues, each of the queues appears as a contiguous range of memory to the writer and each of the readers, with the same starting reference. Thus, neither the writer nor the reader needs to reset a reference to access the remapped queue. At block 860, process 850 prunes the notification queue to remove the notification entries that have been read by the readers. In one embodiment, each of the readers maintains a position in the notification queue. This reader position is the last position in the notification queue at which this reader found no more notifications. In this embodiment, each of the readers further communicates this position to the writer. Since the writer knows each of the reader's last accessed positions, the writer knows which of the notifications have been processed by all of the readers and which notifications have been processed by some or none of the readers. Thus, process 850 can prune the notification entries that have been processed by all of the readers. For example and in one embodiment, if the readers have positions 113, 150, and 200, process 850 can prune the notification entries 1-112. By pruning only the notification entries that have been processed by all the readers, process 850 makes sure that no notification is pruned before a reader has a chance to access it. Process 850 compacts the notification queue at block 860. In one embodiment, process 850 compacts the primary notification queue by reading each queue entry, starting from the lowest reader position as described above, and copies over the live entries to the secondary queue. In one embodiment, the compaction of the notification queue occurs without disruption of read access for a reader. In this embodiment, a reader can still have access to the notification entries while the compaction of the notification queue occurs. In addition, process 850 updates the number of slots and version information for the notification queue. Furthermore, process 850 switches the active pointer for the notification queues, making the secondary queue the primary and the primary queue the secondary. Process 850 adds the notification entry at block 862. In addition to growing the shared memory hash table, the shared memory hash table can be shrunk. FIG. 9A is a flow diagram of one embodiment of a process 900 to shrink the shared tables. In one embodiment, a writer performs process 900 to shrink the shared tables, such as writer 108 as described in FIG. 1 above. In FIG. 9A, process 900 begins by deleting an entry in the table at block 902.
In one embodiment, process 900 deletes an entry in the shared tables. For example and in one embodiment, process 900 deletes an entry as described in U.S. patent application Ser. No. 14/270,226, entitled “System and Method of a Shared Memory Hash Table with Notifications”, filed on May 5, 2014. At block 904, process 900 determines if the table should be shrunk. In one embodiment, process 900 shrinks the shared table if the number of active entries is less than a threshold (e.g., the threshold is 25%). If the table is not to be shrunk, process 900 returns at block 906. If the table is to be shrunk, process 900 identifies a segment to be shrunk for each of the fixed sized tables at block 908. In one embodiment, the segment is a contiguous segment at the end of the table. For example and in one embodiment, process 900 determines to reduce the bucket and slots tables in half. At block 910, process 900 copies active entries in the segment of the table that is to be deleted to empty slots in an active segment of the table. In one embodiment, process 900 copies entries from a delete segment of the bucket table and entries from the delete segment of the slots table. For example and in one embodiment, if process 900 is to delete slot-ids 26-50 of the shared fixed sized table, and there are active entries in slot-ids 30 and 45, process 900 copies the bucket and slot entries for slot-ids 30 and 45 to a slot-id that is less than 26 in the bucket and slots tables. Process 900 remaps the table to reduce the table size by the identified segment at block 916. In one embodiment, process 900 makes a system call to remap the table. Process 900 updates the numslots and version of the table at block 916. In one embodiment, process 900 updates these values atomically. In this embodiment, process 900 increments the version of the table and updates the numslots value based on the new table size. In addition to shrinking the shared memory hash tables, the local shadow table can be shrunk. In one embodiment, the local shadow is not shrunk until a reader determines that a key is missing from the shadow. FIG. 9B is a flow diagram of one embodiment of a process 950 to shrink local tables. In one embodiment, a reader performs process 950 to shrink the local tables, such as readers 112A-C as described in FIG. 1 above. In FIG. 9B, process 950 begins by determining that a key is missing from the shared tables. In one embodiment, the key can be missing because the shared memory hash table has shrunk and this key corresponds to a slot that is at a position larger than the number of slots the shared memory hash table currently holds. At block 954, process 950 determines if the number of slots in the shadow is greater than the number of slots in the shared memory hash table. If so, process 950 will resize the shadow. If the shadow is not resized, process 950 moves or removes the missing key from the shadow table at block 956. Execution proceeds to block 974, where process 950 updates the segment counters for the shadow. In one embodiment, segment counters are counters that are used to track the number of entries in the shadow table for each segment of the shadow table. In this embodiment, the shadow table includes one or more segments that are used to grow or shrink the shadow table. In one embodiment, the shadow table includes segments that are sized in increasing powers of two. For example and in one embodiment, the shadow table could be sized with segments of one page, two pages, four pages, and so on.
In this example, by using successively larger segment sizes, the amount of growing or shrinking can be reduced. If the shadow table is to be resized, process 950 executes a processing loop (blocks 958-970) to move each entry in the shadow table that has a slot position that is greater than the number of slots in the shared memory hash table. At block 960, process 950 looks up the slot position for an entry in the shared memory hash table. In one embodiment, that entry may have been moved because the shared memory hash table shrunk, or that entry may have been deleted. At block 962, process 950 determines whether an entry exists in the slots table. In one embodiment, process 950 looks up the key for that entry in the slots table. If there is a result for that key, process 950 will receive the new slot position for that key. If there is no entry, then that key has likely been deleted from the slots table. If there is no entry, at block 966, process 950 knows the key has been deleted from the slots table and proceeds to remove the entry from the shadow. In addition, process 950 sends a notification to the process corresponding to process 950 that this key has been deleted. If an entry in the slots table does exist, process 950 moves the entry in the shadow to the new slots position. For example and in one embodiment, if the key K is moved from the slots position 110 to 25, process 950 moves the entry for key K from the slots position 110 in the shadow table to the new position of 25. Because this key possibly has been moved from one segment to another of the shadow table, process 950 updates the segment counters accordingly at block 968. For example and in one embodiment, process 950 decrements the segment counter for the segment corresponding to shadow position 110 and increments the segment counter corresponding to the shadow position 25. Process 950 ends the processing loop at block 970. As a result of the entries being moved in the shadow table, some of the segments of the shadow table may have no entries. This is reflected in the segment counters for the segments of the shadow table. At block 972, process 950 removes the segments with zero entries. In one embodiment, this shrinks the size of the shadow table to have the same number of slots as the slot table. In one embodiment, the shadow is a memory mapped file, which can be remapped to a smaller size using a system call. FIG. 10 is a flow diagram of one embodiment of a process to shrink (or grow) a notification queue. Shrinking a notification queue is different than shrinking the fixed-sized tables, because shrinking a notification queue involves compacting the notification queue prior to shrinking it. In addition, there are no reader notifications used in shrinking the notification queue as are used in shrinking the shared tables. In one embodiment, a writer performs process 1000 to shrink the notification queue, such as writer 108 as described in FIG. 1 above. In FIG. 10, process 1000 begins by receiving an entry, which can be an add, modify, or delete entry. At block 1004, process 1000 generates the notification for the notification queue. In one embodiment, process 1000 generates the notification as described in FIG. 8B, block 856, above. Process 1000 determines if the primary queue of the notification queue is full at block 1006. In one embodiment, if the primary queue is full, process 1000 determines if the notification queue can be pruned by removing invalidated notifications and/or determines if the notification queue should be resized.
If the primary queue is not full, process 1000 writes the notification in the notification queue at block 1024. If the primary queue is full, process 1000 determines if the secondary queue is to be resized at block 1008. In one embodiment, process 1000 determines this by counting the number of valid notifications in the primary queue. If the number of valid notifications in the primary queue is greater than an upper threshold (e.g., 50% of the size of the secondary queue) or smaller than a lower threshold (e.g., 25% of the size of the secondary queue), the queue will be resized. However, if the number of valid notifications is between these two thresholds, the queue will not be resized. If the secondary queue is not to be resized, execution proceeds to block 1014 below. If the secondary queue is to be resized, process 1000 resizes the secondary queue by remapping the secondary queue to an increased size or a decreased size. In one embodiment, if process 1000 is to reduce the size of the secondary queue, process 1000 determines a segment size by which to reduce the secondary queue. For example and in one embodiment, the segment size can be a constant percentage or size (e.g., 50%, 25%, or another percentage; or a particular size, such as in page increments). Alternatively, the segment size can be variable, depending on the current size of the secondary queue (e.g., reduce the secondary queue more when the secondary queue is larger and reduce the secondary queue less when the secondary queue is smaller). For example and in one embodiment, process 1000 can reduce the secondary queue such that the secondary queue has at least twice the number of valid notifications in the primary queue. Using the segment size, process 1000 remaps the secondary queue to be smaller. In one embodiment, the secondary queue is a memory mapped file, which can be remapped to a smaller size using a system call. In another embodiment, if process 1000 is to increase the size of the secondary queue, process 1000 can grow the secondary queue in-place by adding a segment to the end of this queue by remapping the queue. For example and in one embodiment, process 1000 increases the size as described in FIG. 7 above. Process 1000 updates the number of notifications and the version of the queue at block 1012. Execution proceeds to block 1014 below. In one embodiment, the number of active notifications in the primary queue may indicate to process 1000 that the secondary queue is a candidate for pruning. At block 1014, process 1000 determines if the primary queue is to be pruned. In one embodiment, whether process 1000 prunes the notification queue is decided based on the slowest reader, e.g., the lowest sequence number read among the sequence numbers received from readers. In this embodiment, pruning is skipping valid notifications that have been read by all of the readers. In one embodiment, process 1000 does not delete the notifications read by all of the readers, because readers may be in the process of reading them. Instead, process 1000 does not copy these notifications over to the secondary queue when compacting, opting to update the internal count of valid notifications for the primary queue after pruning, which might affect resizing decisions. If the queue is not to be pruned, execution proceeds to block 1018 below. If the queue is to be pruned, process 1000 compacts the secondary queue. In one embodiment, process 1000 compacts the secondary queue by removing the invalidated entries as described in FIG. 7 above.
In addition, process 1000 prunes the secondary queue by remapping the secondary queue to be smaller at block 1016. At block 1018, process 1000 copies the valid notifications from the primary to the secondary queue. At block 1020, process 1000 makes the secondary queue the primary queue and the primary queue the secondary queue by swapping the primary and secondary queues. In one embodiment, by swapping the queues, the network element now uses the smaller queue for the notifications. Process 1000 updates the current queue number and version at block 1022. Process 1000 writes the notification at block 1024. As the tables grow or shrink, the reader will periodically need to update its view of these tables. FIG. 11A is a flow diagram of one embodiment of a process 1100 to remap a table for a reader. In one embodiment, the table to remap is the local table the reader maintains. For example and in one embodiment, the local tables are the local values 218, shadow 220, and shadow bucket 222 tables as described in FIG. 2 above. In this embodiment, if the shared memory changes in a way that needs to be reflected in the local reader table, process 1100 remaps the local table. In one embodiment, a reader performs process 1100 to remap a table for the reader, such as readers 112A-C as described in FIG. 1 above. In FIG. 11A, process 1100 begins by receiving a notification to read an entry at block 1102. In one embodiment, the notification entry is used to publish slot changes to the reader. Alternatively, process 1100 can begin by receiving a request to read a value for a key at block 1103. Process 1100 checks the numslots and/or version of the shared table to determine if these values have changed since the last time process 1100 accessed the shared table at block 1104. In one embodiment, the shared table may have grown since the last access by process 1100, may have shrunk, or a combination thereof. Each time the shared table grows or shrinks, the number of slots and the version of the shared table change. In one embodiment, the number of slots may not have changed, but the version of the shared table changed. In this embodiment, the shared table may have grown and shrunk by the same amount since the last time process 1100 accessed the shared table. In one embodiment, using the numslots and version number information, process 1100 determines if the table should be remapped. In one embodiment, process 1100 remaps the local table if the number of slots in the shared table is less than the number of slots in the local table, if the number of slots is the same and the version number is different, or if the notification entry references a slot number that is greater than the number of slots in the local table. For example and in one embodiment, if the number of slots in the shared table is less than the number of slots in the local table, this means that the shared table has been shrunk and one or more of the entries in a delete segment of the shared table have been copied into different slots. In this embodiment, the reader will need to update the local reader table by remapping the local table. In another embodiment, if the number of slots is equal, but the version has changed, this means that the shared table has shrunk at some point, with entries being copied from one slot to another. Similar to above, the reader will need to update the local reader table by remapping the local table.
In a further embodiment, if process 1100 receives a notification for a slot number that is greater than the number of slots that the local table has, process 1100 does not have a corresponding slot in the local table for that value. In this embodiment, process 1100 will need to remap the local table so as to expand the local table so that the value can be stored in the slot indicated by the notification entry. For example and in one embodiment, if the notification entry is for slot 300 and the number of slots in the local table is 200, process 1100 remaps the local table so as to grow the local table to have the same number of slots as the shared table. Furthermore, in this embodiment, if the notification or read request references a slot that is smaller than the number of slots known to the reader in the shadow (even though the number of slots in the slots table is greater), then the shadow does not need to grow, as this slot is still accessible. This is an example of a “lazy” reader growth, where the reader grows the shadow table when the reader attempts to access a slot that is greater than the number of slots the reader knows about. If the table needs to be remapped, process 1100 remaps the table at block 1106. In one embodiment, process 1100 remaps the tables by calling a system function to remap the table. In addition, process 1100 updates the table header, including the number of slots and the version. In one embodiment, process 1100 remaps the local table by determining the number of slots of the shared table and remapping the local table to have the same number of slots. In this embodiment, process 1100 remaps the local table by remapping the shadow table and shadow bucket tables, so as to grow or shrink these tables as needed. In one embodiment, these tables are remapped in page size increments. In addition, process 1100 saves the version of the shared table for later version number comparisons. Execution proceeds to block 1104. In one embodiment, if the number of slots in the shared table is greater than the number of slots in the local table, but the notification entry indicates a slot that is less than or equal to the number of slots in the local table, process 1100 does not have to remap the local table. In this embodiment, since the slot in the notification entry is a slot process 1100 knows about, process 1100 can simply read the value from the slot indicated in the notification entry without having to remap the local table. This makes maintaining the local table more efficient, as the local table does not always have to have the same size as the shared table. In this embodiment, process 1100 maintains the local table as needed. For example and in one embodiment, if the reader local table has 200 slots and process 1100 receives a notification for slot 150, process 1100 checks the numslots value of the dynamic shared memory hash table. In this example, process 1100 determines that the numslots is 300, which means that the shared memory hash table is larger than the reader local table. However, because the value of the notification (150) is less than the number of slots in the reader local table, the reader does not need to remap the local table so as to grow that table. In this example, process 1100 just reads the value corresponding to slot 150. If the local table does not need to be remapped, at block 1108, process 1100 reads the value. In one embodiment, process 1100 reads the value from the local table using the slot value in the notification.
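For example and in one embodiment, the remap decision described above for FIG. 11A can be sketched as follows; the names and the atomic-access details are illustrative assumptions only:

#include <stdint.h>

struct TableHeader {
    uint32_t numslots;   /* updated atomically by the writer */
    uint32_t version;
};

/* Returns nonzero if the reader should remap its local table: the
 * shared table shrank below the reader's view, changed size and came
 * back to the same numslots (version differs), or a notification
 * references a slot beyond what the reader has mapped ("lazy" growth). */
int needsRemap(const struct TableHeader *shared,
               uint32_t localSlots, uint32_t localVersion,
               uint32_t notifiedSlot) {
    if (shared->numslots < localSlots)
        return 1;
    if (shared->numslots == localSlots && shared->version != localVersion)
        return 1;
    return notifiedSlot >= localSlots;
}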
In one embodiment, because the shared memory table could change during the read, process 1100 re-checks the numslots and version at block 1110. If the numslots and version have not changed, the read is successful; otherwise, the value should be re-read. At block 1112, process 1100 determines if the numslots and version have changed. If there is no change, execution proceeds to block 1114. If the numslots and version have changed, execution proceeds to block 1118, where process 1100 remaps the table. If there is no change in the numslots and version, process 1100 detects if there is corruption in the table at block 1114. In one embodiment, process 1100 detects corruption in the table. In this embodiment, corruption can be detected if a reader tries to read from a slot in shared memory that does not exist. In one embodiment, process 1100 knows the table is corrupted if, on this out of bounds condition, the table does not need to be resized. If there is no corruption (e.g., the table was not remapped, or the table was remapped and no corruption was detected), process 1100 stores the value in the slot indicated by the notification or read request at block 1120. If there is detected corruption, process 1100 performs the corruption recovery at block 1116. FIG. 11B is a flow diagram of one embodiment of a process 1150 to remap a notification queue view for a reader. In one embodiment, a reader performs process 1150 to remap a notification queue view for the reader, such as readers 112A-C as described in FIG. 1 above. In FIG. 11B, process 1150 begins by receiving a notification entry for a slot at block 1152. In one embodiment, the notification entry is used to publish slot changes to the reader. At block 1154, process 1150 checks if the primary notification queue should be swapped. In one embodiment, process 1150 checks the primary queue status by reading, atomically, a 32-bit number that gets incremented every time the table gets swapped; odd numbers mean the secondary queue is active, even numbers mean the primary queue is active. If the primary queue has been swapped, process 1150 swaps the primary notification queue pointer for the reader and finds the position for the reader in the new primary queue. Execution proceeds to block 1154 above. If the primary queue does not need to be swapped, process 1150 retrieves and processes the notification at block 1160. In addition, process 1150 increments the position for the reader. In one embodiment, the position is the last entry in the notification queue that the reader has read. At block 1162, process 1150 performs a range check on the position. In one embodiment, process 1150 compares the position with the total number of entries that are in the reader's view of the notification queue. For example and in one embodiment, if the position is one less than the number of total entries in the reader's view of the notification queue, process 1150 should attempt to resize the reader's view of the notification queue. At block 1164, process 1150 determines if the reader's view of the notification queue should be resized. If not, process 1150 returns at block 1166. If the reader's view of the notification queue is to be resized, process 1150 resizes this view at block 1168. In one embodiment, process 1150 determines the total number of entries in the notification queue and resizes the reader's view to be this size. In this embodiment, the notification queue is a memory mapped file and the remapping is done via a system call. At block 1170, process 1150 detects if there is corruption in the notification queue.
In one embodiment, process 1150 detects corruption in the table. In one embodiment, there is a maximum slot identifier that can be notified, stored in the header. For example, a reader validates that the slot-id pulled from the queue does not exceed that number. In one embodiment, process 1150 performs corruption detection after remapping the notification queue. If there is no corruption, process 1150 returns at block 1174. In one embodiment, the reader will perform a retry-on-resize loop so that the reader can learn and adjust the mapping of the notification table if the number of slots in the notification queue has shrunk. For example and in one embodiment, the reader will optimistically try running a function, such as reading a slot in the notification queue. If the function fails due to shared memory growth or shrinkage, the reader catches the exception, tries to resize the notification queue, and attempts to run the function again. As another example, below is pseudo-code:

/**
 * Convenient function to perform an operation and retry if the slot table
 * has shrunk.
 */
void retryOnResize( std::function< void() > func ) const {
    bool retry;
    do {
        try {
            retry = false;
            func();
        } catch ( const CorruptionDetectedException& e ) {
            if ( doResize() ) {
                retry = true;
                continue;
            }
            throw;
        }
    } while ( retry || doResize() );
}

In this example pseudo-code, when the retryOnResize() function is called, a function pointer is passed in and invoked inside the retryOnResize() function. For example and in one embodiment, when the retryOnResize() function is invoked, the function passed in (e.g., reading a notification slot) is tried. If the function returns cleanly, the retryOnResize() function returns without an error. If there is an exception (e.g., due to shared memory growth or shrinkage), the doResize() function is called to try to resize the reader notification queue. If the resize is successful, the variable retry is set to true and the function is called again. If the retry fails, the retryOnResize() function fails and throws an exception. In general, a reader can start up anytime during the lifetime of the writer. During the startup, the reader starts up and copies all the slots from the shared memory table to the reader local table. For example and in one embodiment, the reader copies the slots from the shared memory table to the local shadow as illustrated in FIG. 2 above. In one embodiment, a problem can occur in reader synchronization, where the writer prunes the notification queue while the reader is starting up or getting the copy of the notification queue. If this happens, the copy of the local table the newly started reader has and the shared memory table maintained by the writer can be inconsistent. For example and in one embodiment, at initialization time, the reader attempts to establish a connection to the writer. The reader proceeds with retrieving the latest sequence number from the notification queue header in the shared memory table, and copying the slots from the shared memory table into the local table of the reader. At that point, the reader is synchronized up to at least that sequence number, which is saved internally by the reader so as to let the reader know where to start consuming notifications in the notification queue. However, while the reader is copying slots, the writer may be pruning slots that the reader has not copied over. Thus, the reader will actually be more up to date than the tentative synchronization sequence number.
If the attempt to establish a connection during initialization was unsuccessful, the reader waits until it receives a trigger from the writer, and then tries reconnecting. When activities eventually run and there is an active connection, the writer handles that new reader connection and sends a first kick to the reader. Upon processing this first kick, the reader consumes the available notifications in the notification queue and sends the sequence number of the last consumed notification to the writer. The writer handles this data by kicking the reader again if the received sequence number is not equal to the latest sequence number. In one embodiment, the sequence numbers that the writer collects from each reader serve a dual purpose: (1) determining which readers need to be kicked to process new notifications and (2) determining the sequence number of the slowest connected reader.

In one embodiment, at any time, the writer may need to compact the notification queue by skipping invalidated entries when copying over to the other queue. In addition, the writer can optimize the compaction by looking at its collection of connected readers to find the sequence number of the slowest reader. For readers trying to initialize or connect, however, there are windows of opportunity for the writer to put such readers in an inconsistent state, because the writer may be unaware of the readers at the moment the writer decides to prune notifications underneath them. These readers must be able to detect and recover from such events. In one embodiment, the reader just needs to make sure that (1) the writer is aware of the sequence number of the reader and (2) the writer has not pruned any notification higher than that sequence number. Thus, when the writer receives the initial sequence number of a reader, the writer sends an acknowledgement kick. Upon receiving this acknowledgement kick, the reader checks the sequence number of the slowest connected reader at the time of the last pruning. To that end, the writer updates a shared minimum sequence number in the primary notification queue header every time the writer prunes notifications. At any time, this number represents the shared minimum sequence number required to process the notification queue. If the reader's sequence number is lower than the shared minimum sequence number, the reader resynchronizes the shared memory table. In one embodiment, at this point the writer is aware of the reader's position. Thus, a resynchronization may be needed at most once in a connection's lifetime. The resynchronization, similarly to the synchronization, copies all the slots from the shared memory table into the local table, except that it avoids notifying unchanged values in the case of a value shadow.

FIG.12Ais a flow diagram of an embodiment of a process1200to synchronize a shared memory table for a reader. In one embodiment, a reader performs process1200to synchronize a shared memory table, such as readers112A-C as described inFIG.1above. InFIG.12A, process1200begins by starting the reader at block1202. In one embodiment, the reader establishes a connection to the writer. Process1200sends the last sequence number processed by the reader to the writer at block1204. In this embodiment, the writer receives the last sequence number from this reader (and the other readers) and determines the lowest sequence number. This lowest sequence number is used by the writer to determine whether to prune the notification queue during a compaction.
If there is a pruning, the writer will subsequently advertise the shared minimum sequence number to the readers. At block1206, process1200synchronizes the reader's local memory with the shared memory table maintained by the writer. In one embodiment, process1200synchronizes the local table by copying all of the slots from the shared memory table into the local table of the reader. With the synchronized shared memory table, the reader can start to process the notifications stored in the notification queue. At block1208, process1200receives an acknowledgement kick from the writer. Process1200reads the shared minimum sequence number at block1210. Process1200determines if the shared minimum sequence number is greater than the last sequence number at block1212. If the shared minimum sequence number is greater than the last sequence number, then the local table being processed by the reader is inconsistent with the shared memory table being maintained by the writer. In this case, the reader will need to resynchronize the reader's local table with the shared memory table. If the shared minimum sequence number is greater than the last sequence number, process1200resynchronizes the reader's copy of the local table at block1214. In one embodiment, process1200resynchronizes the reader's local table by copying the slots from the shared memory table into the local table of the reader. By resynchronizing, the reader will have an updated local table that is likely to be consistent with the shared memory table maintained by the writer. Execution proceeds to block1206above, where process1200rechecks if the local table is consistent with the shared memory table. If the reader's last sequence number is greater than or equal to the shared minimum sequence number, process1200processes the notification queue at block1216. As per the above, the reader communicates with the writer to determine if the local table maintained by the reader is consistent with the shared memory table of the writer.

FIG.12Bis a flow diagram of an embodiment of a process1250to synchronize the shared memory table for a reader by a writer. In one embodiment, a writer performs process1250to synchronize the shared memory table, such as writer108as described inFIG.1above. InFIG.12B, process1250begins by handling a new reader starting up at block1252. In one embodiment, process1250registers this new reader as a reader to be notified of new additions to the shared memory. At block1254, process1250receives the reader sequence data. In one embodiment, the reader sequence data is the last sequence number that the reader has processed in the notification queue. Process1250determines the lowest sequence number at block1256and advertises this lowest sequence number to the reader. In one embodiment, the lowest sequence number is the smallest sequence number that has been processed by the readers known to the writer. For example and in one embodiment, if the writer knows that reader 1 has a last sequence number of 26, reader 2 has a last sequence number of 32, and reader 3 has a last sequence number of 20, then the lowest sequence number is 20. In one embodiment, the writer uses the lowest sequence number to prune the notification queue.

In one embodiment, when a reader starts up, the reader synchronizes the local table and checks to determine if the local table is consistent with the shared memory table.FIGS.13A-Cillustrate the behavior of the reader startup under various timing conditions; before turning to those examples, the consistency check itself is sketched below.
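The following is a minimal sketch of the reader-side check performed upon the acknowledgement kick inFIG.12A; the function and parameter names are hypothetical, chosen for exposition. If the shared minimum sequence number advertised by the writer exceeds the reader's last processed sequence number, notifications the reader never saw have been pruned and the reader must resynchronize; otherwise the reader processes the queue normally.

#include <cstdint>
#include <functional>

// Hypothetical acknowledgement-kick handler (blocks 1208-1216).
// Returns true if the reader had to resynchronize its local table.
bool onAcknowledgementKick(uint64_t sharedMinSequence,
                           uint64_t lastSequence,
                           const std::function<void()>& resynchronize,
                           const std::function<void()>& processQueue) {
    if (sharedMinSequence > lastSequence) {
        // The writer pruned past our position (block 1212): the local
        // table may be missing updates, so copy the shared table again.
        resynchronize(); // block 1214
        return true;
    }
    processQueue(); // block 1216: the local table is consistent
    return false;
}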
For example and in one embodiment, inFIG.13A, the reader starts and connects with the writer. In this example, the writer has an initial minimum sequence of 20. In one embodiment, the lowest sequence number is maintained by the writer internally. In addition, the writer can update the shared minimum sequence number after a pruning of the notifications in the notification queue, which represents the sequence number of notifications available in the notification queue. After attempting to connect with the writer, the reader has synchronized slots up to sequence number22. Concurrently, the writer determines that the minimum sequence is 24 and prunes notification entries up to that sequence number. After pruning, the writer handles the reader connection. At this point, the local table is not consistent with the shared memory table. The writer could decide to compact and compute the lowest sequence based on its current set of connected readers, before handling the connection of the new reader. As soon as the writer handles the reader's connection, the writer instantiates a reader client state machine, which has a sequence of 0 initially. The reader sends the last sequence number to the writer, which, in this embodiment, is 22. The writer handles the reader's data and sends an acknowledgement. The reader receives this acknowledgement and reads that the shared minimum sequence number is 24. The reader compares the shared minimum sequence number received from the writer with the last sequence number of the reader. Since, in this example, the shared minimum sequence number is larger than the last sequence number, the local table of the reader is not consistent with the shared memory table maintained by the writer. In this case, the reader resynchronizes the local table with the shared memory table. If the reader's last sequence number were greater than the shared minimum sequence number, the reader's local table would be consistent with the shared memory table and the reader could process the notification queue.

InFIG.13B, the reader connects with the writer as above. The writer handles the reader connection and determines that the lowest sequence number is 0. At this point, a notification queue compaction would compute a lowest sequence of 0 and postpone any pruning. The reader sends its last sequence number (in this case, 26) to the writer. The writer receives this sequence number from the reader and determines the lowest sequence number (e.g., 20). The writer will advertise the shared minimum sequence number to the reader, and the reader will determine that the reader's last sequence number is greater than the shared minimum sequence number. In this case, the local table is consistent with the shared memory table and the reader proceeds with processing the notification queue. Alternatively, if the reader's last sequence number is equal to the shared minimum sequence number, the local table is consistent with the shared memory table and the reader can process the notification queue.

InFIG.13C, the reader connects with the writer as above. The writer handles the reader connection and determines that the lowest sequence number is 0 because the writer has not received the reader's sequence number. At this point, a notification queue compaction would compute a lowest sequence of 0 and postpone any pruning. The reader sends its last sequence number (in this case, 26) to the writer.
The writer receives this sequence number from the reader and determines the lowest sequence number (e.g., 26). The writer will advertise the shared minimum sequence number to the reader, and the reader will determine that the reader's last sequence number is greater than or equal to the shared minimum sequence number. In this case, the local table is consistent with the shared memory table and the reader proceeds with processing the notification queue.

While in one embodiment the reader synchronization mechanism above is described with reference to the shared memory hash table illustrated inFIG.2, in alternate embodiments this reader synchronization mechanism can be applied to other types of shared data structures where notifications are used to signal changes made by one or more writers to readers that wish to read up-to-date values in the data structure. For example and in one embodiment, this reader synchronization mechanism can be applied to different types of data structures such as dictionaries, linked lists, trees, vectors, and/or other types of data structures.

FIG.14is a block diagram of one embodiment of a grow table module1400that grows shared tables or a notification queue. In one embodiment, the grow table module1400is part of the writer, such as the writer108as described inFIG.1above. In one embodiment, the grow table module1400includes a receive entry module1402, a grow table decision module1404, an initialize new segment module1406, an add entry module1408, and an update module1410. In one embodiment, the receive entry module1402receives an entry to be stored in the shared table as described inFIG.8, block802above. The grow table decision module1404determines whether to grow the table as described inFIG.8, block804above. The initialize new segment module1406initializes a new segment for the table as described inFIG.8, block808above. The add entry module1408adds an entry to the table as described inFIG.8, blocks806and810above. The update module1410updates the table characteristics as described inFIG.8, block812above.

FIG.15is a block diagram of one embodiment of a shrink table module1500that shrinks shared tables. In one embodiment, the shrink table module1500is part of the writer, such as the writer108as described inFIG.1above. In one embodiment, the shrink table module1500includes a delete entry module1502, shrink table decision module1504, identify module1506, copy entries module1508, issue notifications module1510, remap table module1512, and update module1514. In one embodiment, the delete entry module1502deletes an entry as described inFIG.9, block902above. The shrink table decision module1504determines whether to shrink the table as described inFIG.9, block904above. The identify module1506identifies a segment to be shrunk as described inFIG.9, block908above. The copy entries module1508copies entries as described inFIG.9, block910above. The issue notifications module1510issues notifications as described inFIG.9, block912above. The remap table module1512remaps the table as described inFIG.9, block914above. The update module1514updates the table as described inFIG.9, block916above.

FIG.16is a block diagram of one embodiment of a shrink notification queue module1600to shrink a notification queue. In one embodiment, the shrink notification queue module1600is part of the writer, such as the writer108as described inFIG.1above.
In one embodiment, the shrink notification queue module1600includes a receive entry module1602, generate notification module1604, primary queue full module1606, resize secondary queue module1608, prune queue module1610, copy slots module1612, swap queue module1614, write notification module1616, and update queue module1618. In one embodiment, the receive entry module1602receives the entry as described inFIG.10, block1002above. The generate notification module1604generates the notification as described inFIG.10, block1004above. The primary queue full module1606determines if the primary queue is full as described inFIG.10, block1006above. The resize secondary queue module1608resizes the secondary queue as described inFIG.10, block1010above. The prune queue module1610prunes the primary queue as described inFIG.10, block1016above. The copy slots module1612copies the slots as described inFIG.10, block1018above. The swap queue module1614swaps the queue as described inFIG.10, block1020above. The write notification module1616writes the notification as described inFIG.10, block1024above. The update queue module1618updates the queue as described inFIG.10, block1022above.

FIG.17is a block diagram of one embodiment of a reader remap module1700to remap a table for a reader. In one embodiment, the reader remap module1700is part of the reader, such as the reader(s)112A-C as described inFIG.1above. In one embodiment, the reader remap module1700includes a receive notification module1702, check module1704, remap decision module1706, remap table module1708, read module1710, numslot/version difference module1712, corruption detection module1714, and store value module1716. In one embodiment, the receive notification module1702receives a notification as described inFIG.11, block1102above. The check module1704checks the number of slots and version as described inFIG.11, blocks1104and1110above. The remap decision module1706determines if the table should be remapped as described inFIG.11, block1106above. The remap table module1708remaps the table as described inFIG.11, block1118above. The read module1710reads the value as described inFIG.11, block1108above. The numslot/version difference module1712determines if the numslot/version is different as described inFIG.11, block1112above. The corruption detection module1714detects if there is corruption as described inFIG.11, block1114above. The store value module1716stores the value as described inFIG.11, block1120above.

FIG.18shows one example of a data processing system1800, which may be used with one embodiment of the present invention. For example, the system1800may be implemented as a network element100as shown inFIG.1. Note that whileFIG.18illustrates various components of a computer system, it is not intended to represent any particular architecture or manner of interconnecting the components, as such details are not germane to the present invention. It will also be appreciated that network computers and other data processing systems or other consumer electronic devices, which have fewer components or perhaps more components, may also be used with the present invention. As shown inFIG.18, the computer system1800, which is a form of a data processing system, includes a bus1803which is coupled to a microprocessor(s)1805and a ROM (Read Only Memory)1807and volatile RAM1809and a non-volatile memory1811. The microprocessor1805may retrieve the instructions from the memories1807,1809,1811and execute the instructions to perform operations described above.
The bus1803interconnects these various components together and also interconnects these components1805,1807,1809, and1811to a display controller and display device1815and to peripheral devices such as input/output (I/O) devices, which may be mice, keyboards, modems, network interfaces, printers, and other devices which are well known in the art. In one embodiment, the system1800includes a plurality of network interfaces of the same or different type (e.g., Ethernet copper interface, Ethernet fiber interfaces, wireless, and/or other types of network interfaces). In this embodiment, the system1800can include a forwarding engine to forward network data received on one interface out another interface. Typically, the input/output devices1815are coupled to the system through input/output controllers1819. The volatile RAM (Random Access Memory)1809is typically implemented as dynamic RAM (DRAM), which requires power continually in order to refresh or maintain the data in the memory. The mass storage1811is typically a magnetic hard drive or a magnetic optical drive or an optical drive or a DVD RAM or a flash memory or other types of memory systems, which maintain data (e.g., large amounts of data) even after power is removed from the system. Typically, the mass storage1811will also be a random access memory, although this is not required. WhileFIG.18shows that the mass storage1811is a local device coupled directly to the rest of the components in the data processing system, it will be appreciated that the present invention may utilize a non-volatile memory which is remote from the system, such as a network storage device which is coupled to the data processing system through a network interface such as a modem, an Ethernet interface, or a wireless network. The bus1803may include one or more buses connected to each other through various bridges, controllers, and/or adapters as is well known in the art.

Portions of what was described above may be implemented with logic circuitry such as a dedicated logic circuit or with a microcontroller or other form of processing core that executes program code instructions. Thus, processes taught by the discussion above may be performed with program code such as machine-executable instructions that cause a machine that executes these instructions to perform certain functions. In this context, a “machine” may be a machine that converts intermediate form (or “abstract”) instructions into processor specific instructions (e.g., an abstract execution environment such as a “process virtual machine” (e.g., a Java Virtual Machine), an interpreter, a Common Language Runtime, a high-level language virtual machine, etc.), and/or electronic circuitry disposed on a semiconductor chip (e.g., “logic circuitry” implemented with transistors) designed to execute instructions such as a general-purpose processor and/or a special-purpose processor. Processes taught by the discussion above may also be performed by (in the alternative to a machine or in combination with a machine) electronic circuitry designed to perform the processes (or a portion thereof) without the execution of program code.

The present invention also relates to an apparatus for performing the operations described herein. This apparatus may be specially constructed for the required purpose, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.
Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), RAMs, EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. A machine readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine readable medium includes read only memory (“ROM”); random access memory (“RAM”); magnetic disk storage media; optical storage media; flash memory devices; etc. An article of manufacture may be used to store program code. An article of manufacture that stores program code may be embodied as, but is not limited to, one or more memories (e.g., one or more flash memories, random access memories (static, dynamic, or other)), optical disks, CD-ROMs, DVD ROMs, EPROMs, EEPROMs, magnetic or optical cards, or other types of machine-readable media suitable for storing electronic instructions. Program code may also be downloaded from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a propagation medium (e.g., via a communication link (e.g., a network connection)).

FIG.19is a block diagram of one embodiment of an exemplary network element1900that reads and writes data with a dynamic shared memory hash table using notifications. InFIG.19, the midplane1906couples to the line cards1902A-N and controller cards1904A-B. While in one embodiment the controller cards1904A-B control the processing of the traffic by the line cards1902A-N, in alternate embodiments the controller cards1904A-B perform the same and/or different functions (e.g., writing data with a dynamic shared memory hash table using reader notifications, etc.). In one embodiment, the line cards1902A-N process and forward traffic according to the network policies received from the controller cards1904A-B. In one embodiment, the controller cards1904A-B write data to the dynamic shared memory hash table using reader notifications as described inFIGS.8-10above. In this embodiment, one or both of the controller cards include a writer hash module to write data to the shared memory hash table using reader notifications, such as the writer108as described inFIG.1above. In another embodiment, the line cards1902A-N read data from the dynamic shared memory hash table using notifications as described inFIG.11. In this embodiment, one or more of the line cards1902A-N include the reader hash module to read data from the shared memory hash table using notifications, such as the readers112A-C as described inFIG.1above. It should be understood that the architecture of the network element1900illustrated inFIG.19is exemplary, and different combinations of cards may be used in other embodiments of the invention.

The preceding detailed descriptions are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the tools used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities.
Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be kept in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “storing,” “deleting,” “determining,” “copying,” “reading,” “updating,” “adding,” “remapping,” “receiving,” “publishing,” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. The processes and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the operations described. The required structure for a variety of these systems will be evident from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein. The foregoing discussion merely describes some exemplary embodiments of the present invention. One skilled in the art will readily recognize from such discussion, the accompanying drawings and the claims that various modifications can be made without departing from the spirit and scope of the invention.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION

Embodiments of the present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. Indeed, the invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Where possible, any terms expressed in the singular form herein are meant to also include the plural form and vice versa, unless explicitly stated otherwise. Also, as used herein, the term “a” and/or “an” shall mean “one or more,” even though the phrase “one or more” is also used herein. Furthermore, when it is said herein that something is “based on” something else, it may be based on one or more other things as well. In other words, unless expressly indicated otherwise, as used herein “based on” means “based at least in part on” or “based at least partially on.” Like numbers refer to like elements throughout.

As used herein, an “entity” may be any institution employing information technology resources and particularly technology infrastructure configured for processing large amounts of data. Typically, these data can be related to the people who work for the organization, its products or services, the customers or any other aspect of the operations of the organization. As such, the entity may be any institution, group, association, financial institution, establishment, company, union, authority or the like, employing information technology resources for processing large amounts of data.

As described herein, a “user” may be an individual associated with an entity. As such, in some embodiments, the user may be an individual having past relationships, current relationships or potential future relationships with an entity. In some embodiments, a “user” may be an employee (e.g., an associate, a project manager, an IT specialist, a manager, an administrator, an internal operations analyst, or the like) of the entity or enterprises affiliated with the entity, capable of operating the systems described herein. In some embodiments, a “user” may be any individual, entity or system who has a relationship with the entity, such as a customer or a prospective customer. In other embodiments, a user may be a system performing one or more tasks described herein.

As used herein, a “user interface” may be any device or software that allows a user to input information, such as commands or data, into a device, or that allows the device to output information to the user. For example, the user interface includes a graphical user interface (GUI) or an interface to input computer-executable instructions that direct a processor to carry out specific functions. The user interface typically employs certain input and output devices to receive input data from a user or to output data to a user. These input and output devices may include a display, mouse, keyboard, button, touchpad, touch screen, microphone, speaker, LED, light, joystick, switch, buzzer, bell, and/or other user input/output device for communicating with one or more users.

As used herein, an “engine” may refer to core elements of an application, or part of an application that serves as a foundation for a larger piece of software and drives the functionality of the software.
In some embodiments, an engine may be self-contained, but externally-controllable, code that encapsulates powerful logic designed to perform or execute a specific type of function. In one aspect, an engine may be underlying source code that establishes file hierarchy, input and output methods, and how a specific part of an application interacts or communicates with other software and/or hardware. The specific components of an engine may vary based on the needs of the specific application as part of the larger piece of software. In some embodiments, an engine may be configured to retrieve resources created in other applications, which may then be ported into the engine for use during specific operational aspects of the engine. An engine may be configurable to be implemented within any general purpose computing system. In doing so, the engine may be configured to execute source code embedded therein to control specific features of the general purpose computing system to execute specific computing operations, thereby transforming the general purpose system into a specific purpose computing system.

As used herein, “authentication credentials” may be any information that can be used to identify a user. For example, a system may prompt a user to enter authentication information such as a username, a password, a personal identification number (PIN), a passcode, biometric information (e.g., iris recognition, retina scans, fingerprints, finger veins, palm veins, palm prints, digital bone anatomy/structure and positioning (distal phalanges, intermediate phalanges, proximal phalanges, and the like)), an answer to a security question, or a unique intrinsic user activity, such as making a predefined motion with a user device. This authentication information may be used to authenticate the identity of the user (e.g., determine that the authentication information is associated with the account) and determine that the user has authority to access an account or system. In some embodiments, the system may be owned or operated by an entity. In such embodiments, the entity may employ additional computer systems, such as authentication servers, to validate and certify resources inputted by the plurality of users within the system. The system may further use its authentication servers to certify the identity of users of the system, such that other users may verify the identity of the certified users. In some embodiments, the entity may certify the identity of the users. Furthermore, authentication information or permission may be assigned to or required from a user, application, computing node, computing cluster, or the like to access stored data within at least a portion of the system.

It should also be understood that “operatively coupled,” as used herein, means that the components may be formed integrally with each other, or may be formed separately and coupled together. Furthermore, “operatively coupled” means that the components may be coupled directly to each other, or to each other with one or more components located between the components that are operatively coupled together. Furthermore, “operatively coupled” may mean that the components are detachable from each other, or that they are permanently coupled together. Furthermore, operatively coupled components may mean that the components retain at least some freedom of movement in one or more directions or may be rotated about an axis (i.e., rotationally coupled, pivotally coupled).
Furthermore, “operatively coupled” may mean that components may be electronically connected and/or in fluid communication with one another.

As used herein, an “interaction” may refer to any communication between one or more users, one or more entities or institutions, and/or one or more devices, nodes, clusters, or systems within the system environment described herein. For example, an interaction may refer to a transfer of data between devices, an accessing of stored data by one or more nodes of a computing cluster, a transmission of a requested task, or the like.

As used herein, “determining” may encompass a variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, ascertaining, and/or the like. Furthermore, “determining” may also include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory), and/or the like. Also, “determining” may include resolving, selecting, choosing, calculating, establishing, and/or the like. Determining may also include ascertaining that a parameter matches a predetermined criterion, including that a threshold has been met, passed, exceeded, and so on.

As used herein, a “distributed ledger” may refer to a consensus of replicated and synchronized data geographically shared across multiple nodes on a network. Without using centralized data storage, each distributed ledger database replicates and saves an identical copy of the ledger. A distributed ledger may employ executing codes, also known as smart contracts, to manage transactions and store records of transactions among disparate participants in the distributed ledger-based network (DLN) without the need for a central authority.

As used herein, a “non-fungible token” or “NFT” may refer to a digital unit of data used as a unique digital identifier for a resource. An NFT may be stored on a distributed ledger that certifies ownership and authenticity of the resource, and as such, cannot be copied, substituted, or subdivided. In specific embodiments, the NFT may include at least a relationship layer, a token layer, one or more metadata layers, and a licensing layer. The relationship layer may include a map of various users that are associated with the NFT and their relationship to one another. For example, if the NFT is purchased by buyer B1from a seller S1, the relationship between B1and S1as a buyer-seller is recorded in the relationship layer. In another example, if the NFT is owned by O1and the resource itself is stored in a storage facility by storage provider SP1, then the relationship between O1and SP1as owner-file storage provider is recorded in the relationship layer. The token layer may include a smart contract that points to a series of metadata associated with the resource, and provides information about supply, authenticity, lineage, and provenance of the resource. The metadata layer(s) may include resource descriptors that provide information about the resource itself (e.g., resource information). These resource descriptors may be stored in the same metadata layer or grouped into multiple metadata layers. The licensing layer may include any restrictions and licensing rules associated with purchase, sale, and any other types of transfer of the resource from one person to another. Those skilled in the art will appreciate that various additional layers and combinations of layers can be configured as needed without departing from the scope and spirit of the invention.
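For illustration only, the four layers described above might be represented as follows; every type and field name in this sketch is hypothetical, and the sketch is not intended to prescribe any particular on-ledger encoding.

#include <map>
#include <string>
#include <utility>
#include <vector>

// Hypothetical, illustrative layering of an NFT record.
struct RelationshipLayer {
    // Maps a pair of users to their relationship,
    // e.g. {"B1", "S1"} -> "buyer-seller".
    std::map<std::pair<std::string, std::string>, std::string> relationships;
};

struct TokenLayer {
    // Smart contract pointing to the metadata for the resource.
    std::string smartContractAddress;
};

struct MetadataLayer {
    // Resource descriptors: information about the resource itself.
    std::map<std::string, std::string> resourceDescriptors;
};

struct LicensingLayer {
    // Restrictions and licensing rules for transfers of the resource.
    std::vector<std::string> transferRules;
};

struct NFT {
    RelationshipLayer relationship;
    TokenLayer token;
    std::vector<MetadataLayer> metadata; // one or more metadata layers
    LicensingLayer licensing;
};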
As used herein, a “resource” may generally refer to objects, products, devices, goods, commodities, services, and the like, and/or the ability and opportunity to access and use the same within a virtual medium. Some example implementations herein contemplate property held by a user, including property that is stored and/or maintained by a third-party entity. For purposes of this invention, a resource is typically stored in a resource repository, a storage location where one or more resources are organized, stored, and retrieved electronically using a computing device.

With the ongoing digitalization of the world, non-fungible tokens (NFTs) are becoming a very viable solution for tokenizing ownership and property. By leveraging NFT technology, highly valued electronic media can be partitioned in terms of ownership. Accordingly, the present invention: (i) receives, from a user input device, a request to generate a non-fungible token (NFT) for a first portion of a resource (e.g., a digital image). A digital image may comprise a collection of pixels arranged in the form of a coordinate grid. The first portion of the digital resource may include a cluster of pixels (or individual pixels) identified by their location within the coordinate grid; (ii) retrieves information associated with the first portion of the resource by isolating one or more pixels associated with the cluster of pixels defining the first portion of the digital resource, extracting at least a red color value (R), a green color value (G), and a blue color value (B) from each of the one or more pixels, determining the coordinate positions of each of the one or more pixels to determine their location on the digital image, and defining the first portion of the digital resource as an array of RGB values and corresponding coordinate positions of each of the one or more pixels; (iii) generates, using the NFT engine, an NFT for the first portion of the resource. The NFT may include resource descriptors, i.e., information associated with the resource, to be stored in one of its many metadata layers. The resource descriptors may include at least the array of RGB values and corresponding coordinate positions of each of the one or more pixels. In addition, the NFT may include a value, to be stored in one of its many metadata layers. The value for the NFT is determined using one or more attributes, such as a security status level of the distributed ledger in which the NFT is recorded, metadata storage type, lifetime of the NFT, rarity of the resource, identification features of the resource, and/or the like; (iv) records the NFT in a distributed ledger; (v) tags the first portion of the resource with a descriptor indicating that the first portion of the resource is associated with an NFT. The descriptor may include the distributed ledger address for the transaction object associated with that portion of the resource; and (vi) displays a notification to the user indicating that the NFT has been generated and recorded in the distributed ledger.

FIG.1illustrates technical components of a system for identification and recordation of base components of a resource within a virtual medium100, in accordance with an embodiment of the invention.FIG.1provides a unique system that includes specialized servers and systems communicably linked across a distributive network of nodes required to perform the functions of the process flows described herein in accordance with embodiments of the present invention.
As illustrated, the system environment100includes a network110, a system130, and a user input device140. In some embodiments, the system130and the user input device140may be used to implement the processes described herein, in accordance with an embodiment of the present invention. In this regard, the system130and/or the user input device140may include one or more applications stored thereon that are configured to interact with one another to implement any one or more portions of the various user interfaces and/or process flows described herein.

In accordance with embodiments of the invention, the system130is intended to represent various forms of digital computers, such as laptops, desktops, video recorders, audio/video players, radios, workstations, servers, wearable devices, Internet-of-things devices, electronic kiosk devices (e.g., automated teller machine devices), blade servers, mainframes, or any combination of the aforementioned. In accordance with embodiments of the invention, the user input device140is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, augmented reality (AR) devices, virtual reality (VR) devices, extended reality (XR) devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.

In accordance with some embodiments, the system130may include a processor102, memory104, a storage device106, a high-speed interface108connecting to memory104, and a low-speed interface112connecting to low speed bus114and storage device106. Each of the components102,104,106,108,111, and112is interconnected using various buses, and may be mounted on a common motherboard or in other manners as appropriate. The processor102can process instructions for execution within the system130, including instructions stored in the memory104or on the storage device106as part of an application that may perform the functions disclosed herein, display graphical information for a GUI on an external input/output device, such as display116coupled to a high-speed interface108, and/or the like. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple systems, the same as or similar to system130, may be connected, with each system providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system). In some embodiments, the system130may be a server managed by the business. The system130may be located at the facility associated with the business or remotely from the facility associated with the business.

The memory104stores information within the system130. In one implementation, the memory104is a volatile memory unit or units, such as volatile random access memory (RAM) having a cache area for the temporary storage of information. In another implementation, the memory104is a non-volatile memory unit or units. The memory104may also be another form of computer-readable medium, such as a magnetic or optical disk, which may be embedded and/or may be removable. The non-volatile memory may additionally or alternatively include an EEPROM, flash memory, and/or the like.
The memory104may store any one or more pieces of information and data used by the system in which it resides to implement the functions of that system. In this regard, the system may dynamically utilize the volatile memory over the non-volatile memory by storing multiple pieces of information in the volatile memory, thereby reducing the load on the system and increasing the processing speed.

The storage device106is capable of providing mass storage for the system130. In one aspect, the storage device106may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier may be a non-transitory computer- or machine-readable storage medium, such as the memory104, the storage device106, or memory on processor102.

In some embodiments, the system130may be configured to access, via the network110, a number of other computing devices (not shown) in addition to the user input device140. In this regard, the system130may be configured to access one or more storage devices and/or one or more memory devices associated with each of the other computing devices. In this way, the system130may implement dynamic allocation and de-allocation of local memory resources among multiple computing devices in a parallel or distributed system. Given a group of computing devices and a collection of interconnected local memory devices, the fragmentation of memory resources is rendered irrelevant by configuring the system130to dynamically allocate memory based on availability of memory either locally, or in any of the other computing devices accessible via the network. In effect, it appears as though the memory is being allocated from a central pool of memory, even though the space is distributed throughout the system. This method of dynamically allocating memory provides increased flexibility when the data size changes and allows memory reuse for better utilization of the memory resources when the data sizes are large.

The high-speed interface108manages bandwidth-intensive operations for the system130, while the low speed controller112manages lower bandwidth-intensive operations. Such allocation of functions is exemplary only. In some embodiments, the high-speed interface108is coupled to memory104, display116(e.g., through a graphics processor or accelerator), and to high-speed expansion ports111, which may accept various expansion cards (not shown). In such an implementation, low-speed controller112is coupled to storage device106and low-speed expansion port114. The low-speed expansion port114, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.

The system130may be implemented in a number of different forms, as shown inFIG.1. For example, it may be implemented as a standard server, or multiple times in a group of such servers.
Additionally, the system130may also be implemented as part of a rack server system or a personal computer such as a laptop computer. Alternatively, components from system130may be combined with one or more other same or similar systems, and an entire system130may be made up of multiple computing devices communicating with each other.

FIG.1also illustrates a user input device140, in accordance with an embodiment of the invention. The user input device140includes a processor152, memory154, an input/output device such as a display156, a communication interface158, and a transceiver160, among other components. The user input device140may also be provided with a storage device, such as a microdrive or other device, to provide additional storage. Each of the components152,154,158, and160is interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.

The processor152is configured to execute instructions within the user input device140, including instructions stored in the memory154, which in one embodiment includes the instructions of an application that may perform the functions disclosed herein. The processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor may be configured to provide, for example, for coordination of the other components of the user input device140, such as control of user interfaces, applications run by user input device140, and wireless communication by user input device140.

The processor152may be configured to communicate with the user through control interface164and display interface166coupled to a display156. The display156may be, for example, a TFT LCD (Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface166may comprise appropriate circuitry configured for driving the display156to present graphical and other information to a user. The control interface164may receive commands from a user and convert them for submission to the processor152. In addition, an external interface168may be provided in communication with processor152, so as to enable near area communication of user input device140with other devices. External interface168may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used.

The memory154stores information within the user input device140. The memory154can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory may also be provided and connected to user input device140through an expansion interface (not shown), which may include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory may provide extra storage space for user input device140or may also store applications or other information therein. In some embodiments, expansion memory may include instructions to carry out or supplement the processes described above and may include secure information also. For example, expansion memory may be provided as a security module for user input device140and may be programmed with instructions that permit secure use of user input device140.
In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner. In some embodiments, the user may use the applications to execute processes described with respect to the process flows described herein. Specifically, the application executes the process flows described herein.

The memory154may include, for example, flash memory and/or NVRAM memory. In one aspect, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described herein. The information carrier is a computer- or machine-readable medium, such as the memory154, expansion memory, memory on processor152, or a propagated signal that may be received, for example, over transceiver160or external interface168.

In some embodiments, the user may use the user input device140to transmit and/or receive information or commands to and from the system130via the network110. Any communication between the system130and the user input device140(or any other computing devices) may be subject to an authentication protocol allowing the system130to maintain security by permitting only authenticated users (or processes) to access the protected resources of the system130, which may include servers, databases, applications, and/or any of the components described herein. To this end, the system130may require the user (or process) to provide authentication credentials to determine whether the user (or process) is eligible to access the protected resources. Once the authentication credentials are validated and the user (or process) is authenticated, the system130may provide the user (or process) with permissioned access to the protected resources. Similarly, the user input device140(or any other computing devices) may provide the system130with permissioned access to the protected resources of the user input device140(or any other computing devices), which may include a GPS device, an image capturing component (e.g., camera), a microphone, a speaker, and/or any of the components described herein.

The user input device140may communicate with the system130(and one or more other devices) wirelessly through communication interface158, which may include digital signal processing circuitry where necessary. Communication interface158may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver160. In addition, short-range communication may occur, such as using a Bluetooth, Wi-Fi, or other such transceiver (not shown). In addition, a GPS (Global Positioning System) receiver module170may provide additional navigation- and location-related wireless data to user input device140, which may be used as appropriate by applications running thereon, and in some embodiments, one or more applications operating on the system130.

The user input device140may also communicate audibly using audio codec162, which may receive spoken information from a user and convert it to usable digital information. Audio codec162may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of user input device140.
Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.), and may also include sound generated by one or more applications operating on the user input device140, and in some embodiments, one or more applications operating on the system130.

Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.

These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a technical environment that includes a back end component (e.g., as a data server), that includes a middleware component (e.g., an application server), that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components.

As shown inFIG.1, the components of the system130and the user input device140are interconnected using the network110. The network110, which may include one or more separate networks, may be a form of digital communication network such as a telecommunication network, a local area network (“LAN”), a wide area network (“WAN”), a global area network (“GAN”), the Internet, or any combination of the foregoing.
It will also be understood that the network110may be secure and/or unsecure and may also include wireless and/or wired and/or optical interconnection technology. In accordance with an embodiment of the invention, the components of the system environment100, such as the system130and the user input device140, may have a client-server relationship, where the user input device140makes a service request to the system130, the system130accepts the service request, processes the service request, and returns the requested information to the user input device140, and vice versa. This relationship of client and server typically arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. It will be understood that the embodiment of the system environment100illustrated inFIG.1is exemplary and that other embodiments may vary. For example, in some embodiments, the system environment may include more, fewer, or different components. As another example, in some embodiments, some or all of the portions of the system environment100may be combined into a single portion. Likewise, in some embodiments, some or all of the portions of the system130may be separated into two or more distinct portions. FIG.2illustrates a process flow for identification and recordation of base components of a resource within a virtual medium200, in accordance with an embodiment of the invention. As shown in block202, the process flow includes electronically receiving, from a user input device, a request to generate a non-fungible token (NFT) for a first portion of a resource. In some embodiments, the resource may be a digital image. A digital image may comprise a collection of pixels, the smallest controllable elements of an image represented on a display, arranged in the form of a coordinate grid. In such embodiments, the first portion of the digital resource may include a cluster of pixels (or individual pixels) identified by their location within the coordinate grid of pixels. This cluster of pixels may be predetermined by the user. It is to be understood that while this application focuses on using base components of a digital image, i.e., pixels, that is not to be construed as limiting the application in any way and that the application does not exclude the possibility of applying the same techniques to process other electronic media capable of being partitioned into their base components, such as video, audio, and/or the like. Next, as shown in block204, the process flow includes, in response, retrieving information associated with the first portion of the resource. In some embodiments, each pixel may be a sample of an original digital image, where more samples typically provide more-accurate representations of the original. The intensity of each pixel is variable; in color systems, each pixel typically has three or four components such as red, green, and blue, or cyan, magenta, yellow, and black. Accordingly, to retrieve information associated with the first portion of the resource, the system may be configured to isolate one or more pixels associated with the cluster of pixels defining the first portion of the digital resource. Then, the system may be configured to extract at least a red color value (R), a green color value (G), and a blue color value (B) from each of the one or more pixels. In addition, the system may be configured to determine the coordinate positions of each of the one or more pixels to determine their location on the digital image.
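For purposes of illustration only, this extraction step may be sketched in Python as follows; the Pillow imaging library, the function and variable names, and the example cluster are assumptions introduced for the sketch and do not reflect any particular implementation contemplated by the disclosure.

from PIL import Image  # assumption of the sketch: the Pillow library is installed

def extract_portion(image_path, coords):
    # Isolate the cluster of pixels defining the first portion of the digital
    # resource and extract the R, G, and B color values of each pixel along
    # with its coordinate position in the grid.
    img = Image.open(image_path).convert("RGB")
    portion = []
    for (x, y) in coords:
        r, g, b = img.getpixel((x, y))
        portion.append({"x": x, "y": y, "rgb": (r, g, b)})
    return portion

# Example: a user-predetermined 2x2 cluster identified by coordinate-grid locations.
cluster = [(10, 10), (11, 10), (10, 11), (11, 11)]
# portion = extract_portion("resource.png", cluster)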
In response to extracting the RGB values and the coordinate positions, the system may be configured to define the first portion of the digital resource as an array of RGB values and corresponding coordinate positions of each of the one or more pixels. Next, as shown in block206, the process flow includes initiating an NFT engine on the first portion of the resource. In some embodiments, the NFT engine may be used to create tokenized representations (NFTs) of the first portion of the resource that are capable of being exchanged on public distributed ledger-based platforms. Next, as shown in block208, the process flow includes generating, using the NFT engine, an NFT for the first portion of the resource, wherein the NFT comprises at least the information associated with the first portion of the resource. In some embodiments, the NFT engine may be configured to record resource descriptors, i.e., information associated with the resource for which the NFT is being generated, to be stored in one of its many metadata layers. Here, the resource descriptors may include at least the array of RGB values and corresponding coordinate positions of each of the one or more pixels. In some embodiments, the system may be configured to determine a value for the NFT using one or more attributes. In some embodiments, the attributes may include at least a security status level of the distributed ledger in which the NFT is recorded, metadata storage type, lifetime of the NFT, rarity of the resource, identification features of the resource, and/or the like. In some embodiments, the system may be configured to implement an NFT valuation engine that analyzes the various attributes to determine the value for the NFT. This value is then stored in one of the many metadata layers of the NFT. Next, as shown in block210, the process flow includes recording the NFT in a distributed ledger. In this regard, the system may be configured to generate a new transaction object (e.g., block) for the NFT. Each transaction object may include the NFT, a nonce (a randomly generated 32-bit whole number created when the transaction object is generated), and a hash value wedded to that nonce. Once generated, the NFT is considered signed and forever tied to its nonce and hash. Then, the system may be configured to deploy the new transaction object for the NFT on the distributed ledger. In some embodiments, when the new transaction object is deployed on the distributed ledger, a distributed ledger address is generated for that new transaction object, i.e., an indication of where it is located on the distributed ledger. This distributed ledger address is captured for recording purposes. In some embodiments, in response to recording the NFT in the distributed ledger, the first portion of the resource may be tagged with a descriptor indicating that the first portion of the resource is associated with an NFT. Here, the descriptor may include the distributed ledger address for the transaction object associated with that portion of the resource. Next, as shown in block212, the process flow includes transmitting control signals configured to cause the user input device to display a notification indicating that the NFT has been generated and recorded in the distributed ledger.
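For purposes of illustration only, the generation and recordation steps described above may be sketched in Python as follows; the use of SHA-256, the dictionary layout, and all names are assumptions of the sketch rather than the claimed NFT engine.

import hashlib
import json
import secrets

def generate_nft(resource_descriptors, value=None):
    # The resource descriptors (the array of RGB values and coordinate
    # positions) and the output of a valuation engine are stored in
    # metadata layers of the NFT.
    return {"metadata": {"resource_descriptors": resource_descriptors, "value": value}}

def make_transaction_object(nft):
    nonce = secrets.randbits(32)  # randomly generated 32-bit whole number
    payload = json.dumps({"nft": nft, "nonce": nonce}, sort_keys=True)
    hash_value = hashlib.sha256(payload.encode()).hexdigest()  # hash wedded to the nonce
    return {"nft": nft, "nonce": nonce, "hash": hash_value}

nft = generate_nft([{"x": 10, "y": 10, "rgb": [255, 0, 0]}])
transaction_object = make_transaction_object(nft)  # signed and tied to its nonce and hash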
As will be appreciated by one of ordinary skill in the art in view of this disclosure, the present invention may include and/or be embodied as an apparatus (including, for example, a system, machine, device, computer program product, and/or the like), as a method (including, for example, a business method, computer-implemented process, and/or the like), or as any combination of the foregoing. Accordingly, embodiments of the present invention may take the form of an entirely business method embodiment, an entirely software embodiment (including firmware, resident software, micro-code, stored procedures in a database, or the like), an entirely hardware embodiment, or an embodiment combining business method, software, and hardware aspects that may generally be referred to herein as a “system.” Furthermore, embodiments of the present invention may take the form of a computer program product that includes a computer-readable storage medium having one or more computer-executable program code portions stored therein. As used herein, a processor, which may include one or more processors, may be “configured to” perform a certain function in a variety of ways, including, for example, by having one or more general-purpose circuits perform the function by executing one or more computer-executable program code portions embodied in a computer-readable medium, and/or by having one or more application-specific circuits perform the function. It will be understood that any suitable computer-readable medium may be utilized. The computer-readable medium may include, but is not limited to, a non-transitory computer-readable medium, such as a tangible electronic, magnetic, optical, electromagnetic, infrared, and/or semiconductor system, device, and/or other apparatus. For example, in some embodiments, the non-transitory computer-readable medium includes a tangible medium such as a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a compact disc read-only memory (CD-ROM), and/or some other tangible optical and/or magnetic storage device. In other embodiments of the present invention, however, the computer-readable medium may be transitory, such as, for example, a propagation signal including computer-executable program code portions embodied therein. One or more computer-executable program code portions for carrying out operations of the present invention may include object-oriented, scripted, and/or unscripted programming languages, such as, for example, Java, Perl, Smalltalk, C++, SAS, SQL, Python, Objective C, JavaScript, and/or the like. In some embodiments, the one or more computer-executable program code portions for carrying out operations of embodiments of the present invention are written in conventional procedural programming languages, such as the “C” programming language and/or similar programming languages. The computer program code may alternatively or additionally be written in one or more multi-paradigm programming languages, such as, for example, F#. Some embodiments of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of apparatus and/or methods. It will be understood that each block included in the flowchart illustrations and/or block diagrams, and/or combinations of blocks included in the flowchart illustrations and/or block diagrams, may be implemented by one or more computer-executable program code portions.
These one or more computer-executable program code portions may be provided to a processor of a general purpose computer, special purpose computer, and/or some other programmable data processing apparatus in order to produce a particular machine, such that the one or more computer-executable program code portions, which execute via the processor of the computer and/or other programmable data processing apparatus, create mechanisms for implementing the steps and/or functions represented by the flowchart(s) and/or block diagram block(s). The one or more computer-executable program code portions may be stored in a transitory and/or non-transitory computer-readable medium (e.g., a memory) that can direct, instruct, and/or cause a computer and/or other programmable data processing apparatus to function in a particular manner, such that the computer-executable program code portions stored in the computer-readable medium produce an article of manufacture including instruction mechanisms which implement the steps and/or functions specified in the flowchart(s) and/or block diagram block(s). The one or more computer-executable program code portions may also be loaded onto a computer and/or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer and/or other programmable apparatus. In some embodiments, this produces a computer-implemented process such that the one or more computer-executable program code portions which execute on the computer and/or other programmable apparatus provide operational steps to implement the steps specified in the flowchart(s) and/or the functions specified in the block diagram block(s). Alternatively, computer-implemented steps may be combined with, and/or replaced with, operator- and/or human-implemented steps in order to carry out an embodiment of the present invention. Although many embodiments of the present invention have just been described above, the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Also, it will be understood that, where possible, any of the advantages, features, functions, devices, and/or operational aspects of any of the embodiments of the present invention described and/or contemplated herein may be included in any of the other embodiments of the present invention described and/or contemplated herein, and/or vice versa. In addition, where possible, any terms expressed in the singular form herein are meant to also include the plural form and/or vice versa, unless explicitly stated otherwise. Accordingly, the terms “a” and/or “an” shall mean “one or more,” even though the phrase “one or more” is also used herein. Like numbers refer to like elements throughout. While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention not be limited to the specific constructions and arrangements shown and described, since various other changes, combinations, omissions, modifications and substitutions, in addition to those set forth in the above paragraphs, are possible.
Those skilled in the art will appreciate that various adaptations, modifications, and combinations of the just described embodiments can be configured without departing from the scope and spirit of the invention. Therefore, it is to be understood that, within the scope of the appended claims, the invention may be practiced other than as specifically described herein.
11860863
While embodiments are described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the embodiments are not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit embodiments to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). The words “include,” “including,” and “includes” indicate open-ended relationships and therefore mean including, but not limited to. Similarly, the words “have,” “having,” and “has” also indicate open-ended relationships, and thus mean having, but not limited to. The terms “first,” “second,” “third,” and so forth as used herein are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.) unless such an ordering is otherwise explicitly indicated. “Based On.” As used herein, this term is used to describe one or more factors that affect a determination. This term does not foreclose additional factors that may affect a determination. That is, a determination may be solely based on those factors or based, at least in part, on those factors. Consider the phrase “determine A based on B.” While B may be a factor that affects the determination of A, such a phrase does not foreclose the determination of A from also being based on C. In other instances, A may be determined based solely on B. The scope of the present disclosure includes any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. Accordingly, new claims may be formulated during prosecution of this application (or an application claiming priority thereto) to any such combination of features. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims. DETAILED DESCRIPTION Various embodiments described herein relate to a journal-based database that allows data redaction in records of the database as well as in blocks of blockchain-based journals of the database. In some embodiments, the database may be implemented using one or more computing devices of a network-accessible or cloud-based service provider network. For example, the database may be part of a multi-tenant data storage service offered by the service provider network. The computing devices of the service provider network may implement a database management system, accessible by multiple tenants or users via network connections, to perform various database-related functions including data redaction. In addition, the computing devices may provide memory and storage spaces for storing records and/or other data of databases.
To simplify description, the terms “database” and “database management system” may be used interchangeably in the disclosure. One shall understand that a database may store records, and a database management system may perform various functions for managing a database. In some embodiments, the database may include records, e.g., organized in one or more tables corresponding to respective indices. Each table may also be associated with a corresponding journal. For a given table, the journal for the table may include a set of blocks. Each block may include user-generated data and other system-generated data. For purposes of illustration, the user-generated data may be referred to as “user data” or “application data,” whereas the system-generated data may be referred to as “metadata.” One example of the user data may be individual transactions from a user committed to update records in a database and/or corresponding versions of the records produced by the individual transactions, whereas one example of the metadata may be hash values automatically generated by the database. Given that the blocks include the user data representing the transactions and associated records of the transactions, in some embodiments, the blocks in the journal may be considered a log to record the transactions committed to the database. In some embodiments, each block of the journal may include at least one hash value, which may be generated based not only on the user data (e.g., corresponding version of records and/or transactions) inside this block but also on the user data (e.g., previous versions of records and/or previous transactions) inside one or more previously-generated block(s), e.g., the block generated immediately before this block. The hash value may uniquely represent and serve as an “ID” identifying this block. Because the hash value of one block may be generated based at least in part on the hash value(s) of previous block(s), the hash values of the set of blocks may provide an inter-dependency. For example, the hash value of a first block depends on the hash value of a second block preceding the first block, the hash value of the second block further depends on the hash value of a third block preceding the second block, and so on, until the very first and original block. As a result, with the hash values, the blocks may be able to refer to each other and thus may be visually considered “connected” together to form a hash chain, like a blockchain. Also, with the inter-dependency between the hash values, the contents of the blocks, such as the history of the records and/or transactions, may be checked and cryptographically verified. In some embodiments, a user may store records in a database, e.g., in a table of the database, and the database may accordingly create a journal to track the transactions to the records of the database.
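For purposes of illustration only, the hash inter-dependency described above may be sketched in Python as follows; SHA-256 and the field names are assumptions of the sketch, as the database is not limited to any particular hash algorithm.

import hashlib
import json

def block_hash(user_data, previous_hash):
    # The hash of a block depends on its own user data and on the hash of the
    # previously-generated block, producing the inter-dependency described above.
    body = json.dumps(user_data, sort_keys=True) + previous_hash
    return hashlib.sha256(body.encode()).hexdigest()

def append_block(journal, user_data):
    previous_hash = journal[-1]["hash"] if journal else ""
    journal.append({"user_data": user_data,
                    "previous_hash": previous_hash,
                    "hash": block_hash(user_data, previous_hash)})

def verify_chain(journal):
    # Re-derive every hash from the first and original block onward; any
    # altered user data or broken link causes verification to fail.
    prev = ""
    for b in journal:
        if b["previous_hash"] != prev or b["hash"] != block_hash(b["user_data"], prev):
            return False
        prev = b["hash"]
    return True

journal = []
append_block(journal, {"transaction": "INSERT ...", "revision": "version 1"})
append_block(journal, {"transaction": "UPDATE ...", "revision": "version 2"})
assert verify_chain(journal)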
The changes to the records and the recording of the changes to the records may be considered as two parallel “processes.” On one hand, the user may request the database to perform the individual transactions to update one or more records in a database. For example, the user may create a table, and insert, update, and/or delete record(s) in the table. On the other hand, the transactions and changes to the records may be stored in respective blocks of the journal. As a result, using information of the blocks, the transaction history, and thus the integrity of the database, may be checked and cryptographically verified. As described above, traditionally, the blocks of a journal are immutable, meaning that their contents are not allowed to change. However, sometimes there may be a need to change data, e.g., redact data, in one or more blocks of the journal. For example, some regulatory requirements, e.g., General Data Protection Regulation (GDPR), California Consumer Privacy Act (CCPA), etc., may require businesses to provide consumers control over the personal information that the businesses have collected. In that case, if a business uses the journal-based database to collect records of consumers, the personal information may exist not only in the records but also in blocks of the journal. Therefore, to meet the regulatory requirement, the contents of the blocks may not be strictly immutable anymore. Instead, data such as personal information in the block(s) may need to be redactable. Moreover, in addition to regulatory requirements, other types of sensitive information may also need to be redacted sometimes. For example, sometimes financial information (e.g., credit card information) of a person or organization, confidential and/or proprietary information of an organization, or other nonpublic information of a person or organization may also need to be redacted from a journal-based database. Therefore, to meet the various requirements, the journal-based database described in this disclosure may provide the ability to allow a user to redact data from both records in a table and blocks in a journal, but still maintain the cryptographic verifiability of the blocks. In some embodiments, to implement the data redaction, a user may provide a request to the database, specifying the data to be redacted. In response to the request, the database may search the table and journal to determine where the data exists. If the data exists in a block of the journal, the database may redact the data in the block. If the data exists in the table, the database may also redact the data in the table. In addition, to maintain cryptographic verifiability, the data redaction may not alter existing metadata, or at least the portion of existing metadata in the block that is associated with the cryptographic verification of the hash-chained set of blocks. For example, the data redaction may retain the hash value identifying the block and leave it intact. Also, in some embodiments, the position of the block in the chain may stay the same. As a result, after the redaction, only the data requested to be redacted may get redacted in a block, whereas the remaining data of the hash-chained set of blocks and therefore the cryptographic verifiability of the chain may remain unchanged. The journal-based, data-redactable database described in the disclosure can provide at least several benefits.
One, in contrast to traditional non-blockchain-based databases, the database may use a journal including a hash-chained set of blocks to provide cryptographic verifiability. Two, in contrast to traditional blockchain-based databases, the database may allow users to redact data, e.g., personal information and/or other sensitive information, thoroughly from the database to meet regulatory and/or business requirements. FIG.1is a block diagram showing data redaction in an example journal-based database, according to some embodiments. For purposes of illustration, in this example, database100may include table102associated with journal110. Table102may include a current version of one or more records. In addition, for purposes of illustration, it may be assumed that records of the table have gone through a series of changes, as indicated inFIG.1. For example, at first, a user may create the table and insert one or more records into the table to generate table104with version 1 or original version of records. Next, the user may update the records in table104to generate another table106with version 2 of records. For example, the user may insert or add new record(s), update or change value(s) of existing record(s), and/or delete or remove existing record(s). The user may perform the updates to the records repeatedly, e.g., over a period of time, to finally result in table102that includes the current version of records. In some embodiments, table102with the current version of records may always be stored in database100, whereas tables104and106with the previous versions of records may or may not be kept in database100. To track the change history to the records, database100may generate journal110to store the above-described updates and associated versions of records of the table. For example, when the user generates table104and inserts version 1 of the records, database100may generate block114in journal110. In some embodiments, block114may include user data124and metadata126. In some embodiments, user data124may include entry objects that represent the record revisions that are inserted, updated, and/or deleted, along with the transactions (e.g., statements in query languages) that committed the revisions. For example, as indicated inFIG.1, user data124may include the transaction of the updates (e.g., the creation of the table, and the insertion of records), as well as version 1 of the records after the updates are committed. In some embodiments, metadata126may include at least one hash value generated based on user data124. Therefore, the hash value may be considered a representation of user data124of block114. In addition, the hash value may serve as a unique ID identifying block114. In some embodiments, various hash algorithms may be applied to calculate the hash value, for example, Secure Hash Algorithm (SHA) algorithms (e.g., SHA-0, SHA-1, SHA-2, SHA-256, etc.), RACE Integrity Primitives Evaluation Message Digest algorithms (e.g., RIPEMD, RIPEMD-128, RIPEMD-160, RIPEMD-256, RIPEMD-320, etc.), Message-Digest algorithms (e.g., MD2, MD4, MD5, MD6, etc.), BLAKE algorithms (e.g., BLAKE2, BLAKE3, etc.), the Whirlpool algorithm, and so on. Similarly, when the user updates the records of table104to generate table106with version 2 of records, database100may accordingly generate another block116in journal110. Block116may include user data128to store the transaction of the corresponding updates to the records and version 2 of the records after the committed transaction, and metadata130.
In some embodiments, metadata130may include at least one hash value that is generated based not only on user data128of block116but also on user data124of block114, because block116is the second block, unlike the first and original block114. In some embodiments, metadata130may also include a copy of the hash value of metadata126that identifies block114. Therefore, with the hash values, block116may be able to refer to block114, and may be visually considered appended to block114. Also, as described, the hash values of block116may depend on the hash value of block114. Therefore, with the hash values, the contents of the two blocks may be checked against each other and thus cryptographically verifiable. The above may be repeated along with the changes to records of the table, where database100may generate one or more corresponding blocks in journal110to track the corresponding transactions and versions of records, until block112that is generated corresponding to table102with the current version of records. Similarly, block112may include user data120and metadata122. In some embodiments, user data120may include data representing the transaction committed to make the current version of records of table102, and data representing the current version of the records. In some embodiments, metadata122may include at least a hash value representing user data120and uniquely identifying block112. In some embodiments, metadata122may also include a copy of the hash value of the previous block immediately preceding block112. All the blocks114,116, . . . ,112may thus form a hash-chained set of blocks, with cryptographic verifiability, in journal110. In some embodiments, the user may need to redact data from database100. In that case, the user may send a request to database100, specifying the data to be redacted. According to the foregoing description, in some embodiments, the data may exist in the current version of records in table102, the previous versions of records in tables104and106, and/or one or more blocks in journal110. In some embodiments, database100may search tables102-106and the individual blocks of journal110to determine whether any of them contains the specific data. For example, the data may exist in user data128of block116. Accordingly, database100may cause the specific data in user data128of block116to be redacted. Meanwhile, database100may still retain metadata130of block116, as indicated inFIG.1. As a result, user data128of block116may change into “new” or updated user data132in which the specific data is redacted, but the remaining data of block116, including metadata130, may stay the same. In addition, in some embodiments, the position of block116in the hash-chained set of blocks may remain the same. For example, as indicated inFIG.1, block116with updated user data132may still remain as the second block in journal110. In some embodiments, database100may further generate block118to track the transaction that requests the data redaction. As indicated inFIG.1, block118may include user data134and metadata136. In some embodiments, user data134may include data representing the request, but not the data that is redacted. In some embodiments, metadata136may include at least one hash value generated based on user data134and user data120of block112. In some embodiments, metadata136may also include a copy of the hash value in metadata122that identifies block112. As a result, with the hash values, block118may be appended to block112in journal110.
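For purposes of illustration only, the redaction invariant described above, in which user data128changes into updated user data132while metadata130and the position of block116are retained, may be sketched in Python as follows; all names are illustrative.

import copy

def redact_block(block, redact_fn):
    # Only the requested user data changes; the metadata, including the hash
    # value identifying the block, is retained verbatim, and the block keeps
    # its position in the hash-chained set.
    updated = copy.deepcopy(block)
    updated["user_data"] = redact_fn(updated["user_data"])
    updated["metadata"] = block["metadata"]  # retained, not recomputed
    return updated

def remove_personal_information(user_data):
    cleaned = dict(user_data)
    cleaned.pop("owner", None)  # the field name is an assumption of the sketch
    return cleaned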
In some embodiments, if the data to be redacted exists in a previous version of records, e.g., in the records of table106corresponding to the above-described block116, database100may redact the data in the records of table106. In some embodiments, if the data to be redacted exists in the current version of records of table102, database100may also cause table102to be updated, such that the data becomes redacted. In some embodiments, the data redaction may not necessarily be allowed to apply directly to the current version of records of table102. In that case, database100may generate a “new” table in the database to include the current version of records in table102except that the specific data is redacted, and then delete table102from the database. In addition, database100may further redact data from user data120of block112corresponding to table102. In some embodiments, database100may generate a new block corresponding to the “new” table, similar to what is described above, and append the new block to block118. In some embodiments, when data requested to be redacted exists in multiple blocks of journal110, redaction of data may be allowed to be applied to the individual blocks all at once or one at a time. In the first case, database100may redact the specific data from all the identified blocks at or around the same time. Alternatively, in the second case, database100may perform the data redaction to the individual blocks that include the specific data, one after another. For example, database100may target a first block that includes the specific data, and redact the data in the first block. Next, database100may move to a second block to redact the data in the second block, and so on, until the data in the corresponding blocks gets redacted. In some embodiments, database100may perform the redaction in a specific order, e.g., from the block at the front of the chain towards blocks at the end of the chain. Alternatively, in some embodiments, the order in which the blocks get redacted may not necessarily matter, and thus database100may not necessarily perform the redaction in a fixed order. In addition, similar to what is described above, after the data redaction is completed for all the blocks, database100may generate a block, similar to block118, to capture the transaction for the redaction request. In some embodiments, redaction of data in tables102-106and/or journal110may include replacing the data with a hash value, or deleting the data. Either way, after the redaction, the data may no longer be accessible. For purposes of illustration, consider an example where a user may create a table called “VehicleRegister” in a journal-based database (e.g., similar to database100) to record vehicle registration information. The user may insert a new vehicle's registration in the table. Next, the user may update the vehicle's registration to change the owner from one person to another. Finally, the user may delete the vehicle's registration from the table. For purposes of illustration,FIGS.2A-2Cillustrate example pseudo-statements in query languages to perform these transactions to update the records of the table. For example, as indicated inFIG.2A, the user may use the pseudo-statements in block240to create the table and add registration records for a Carmaker Model S vehicle. As the result of the transaction, table204(e.g., similar to table104) may be created with version 1 of records. The transaction may be committed on Jul. 16, 2012.
As shown in the records, the owner of the vehicle is “John Doe” at that time. Next, the user may perform another transaction with the pseudo-statements in block242to update the vehicle's registration to change the owner from “John Doe” to “Jane Doe.” The transaction may be committed on Aug. 3, 2013. As the result of the transaction, table204may be updated into table206(e.g., similar to table106) with version 2 of records, as shown in the figure. Finally, the user may perform another transaction with the pseudo-statements in block244, to delete the vehicle's registration from the table. The transaction may be committed on Sep. 2, 2016. As the result of the transaction, table206may be further updated into table202(e.g., similar to table102) with the current version of records. Also, for purposes of illustration, a history of the changes to the table is also provided in table246. As indicated in table246, these three transactions are committed respectively on Jul. 16, 2012, Aug. 3, 2013, and Sep. 2, 2016, to result in the respective three versions of records. As described above, along with the updates to the records, the database may generate a journal with respective blocks to track the change history. In this example, given that there are three transactions, the database may generate at least three respective blocks, e.g., blocks314,316, and312(e.g., similar to blocks114,116, and112) as shown inFIGS.3A-3C. As shown inFIG.3A, block314may include user data324(e.g., similar to user data124) and metadata326(e.g., similar to metadata126). As described above, user data324may include entry objects that represent the record revisions that are inserted, updated, and/or deleted, along with the transactions (e.g., statements in query languages) that committed the revisions. For purposes of illustration, user data324may include data organized in one or more fields. For example, user data324may include data in the “transactionInfo” field and “revisions” field. The “transactionInfo” field may include data representing the transaction (e.g., statements in query languages) committed in block240, such as the creation of the table, the addition of the indices, and the insertion of the vehicle's registration. The “revisions” field may include data representing at least a partial version of the records of the table after the transaction is committed to the database. For example, the “revisions” field may include data representing the “latest” non-deleted version of records in the table after the transaction is committed. In this example, the “revisions” field may include data representing version 1 of the vehicle's registration records in table204ofFIG.2A. Therefore, the data in the “revisions” field may be considered a “snapshot” of at least a partial version of the records. In addition to user data324, metadata326may include one or more items of system-generated data descriptive of user data324of the block. For example, metadata326may include an “entriesHash” that is a hash value calculated based on user data324. For example, the “entriesHash” may be the root hash of a Merkle tree in which the leaf nodes include user data324. In addition, metadata326may include a “previousBlockHash” that is the hash value of the previous block preceding block314.
In this case, because block314is the first and original block generated after creation of the table, there are no other blocks preceding block314, and thus the “previousBlockHash” is empty. Moreover, metadata326may include a “blockHash” that is a hash value that is generated based on the “entriesHash” and “previousBlockHash.” For example, the “blockHash” may be the hash of the concatenation of “entriesHash” and “previousBlockHash.” As a result, the “blockHash” of block314may depend not only on user data324of block314, but also on the user data of the preceding block (if there is one). In some embodiments, the user and/or database may use the “blockHash” of block314to uniquely identify the block. In some embodiments, metadata326may also include data representing the date (e.g., Jul. 16, 2012) when the transaction is committed. As described above, various hash algorithms may be used to determine these hash values, e.g., SHA algorithms, RIPEMD algorithms, MD algorithms, BLAKE algorithms, the Whirlpool algorithm, etc. Note thatFIG.3Ais only provided as an example for purposes of illustration. In some embodiments, one or more other user data and/or metadata may be included in block314as well. For example, in some embodiments, the “revisions” field of user data324may also include system-generated metadata, such as hash value(s) generated based on the version of item(s) in the “revisions” field. These hash value(s) may be used as an ID for the particular version of item(s) in block314and may be further used to generate the above-described “entriesHash” and “blockHash.” In addition, in some embodiments, the metadata in the “revisions” field of user data324may include other metadata, e.g., a “version number” for the version of item(s) in the “revisions” field of block314. Referring back to the example, when the user performs the transaction in block242to change the vehicle's owner from “John Doe” to “Jane Doe,” similarly, the database may generate another block316(e.g., similar to block116), as shown inFIG.3B, to capture the transaction and updated records. As indicated inFIG.3B, block316may include user data328(e.g., similar to user data128) and metadata330(e.g., similar to metadata130). Also, user data328may include data, e.g., in the “transactionInfo” and “revisions” fields, representing the corresponding transaction in block242and the “latest” non-deleted version of records in the table after the transaction is committed. In this example, the “latest” non-deleted version of the records is version 2 of the vehicle's registration records in table206ofFIG.2B. For purposes of illustration, each row of the records in a table may be considered a data structure or a “document.” In this example, the transaction in block242changes only partial data of the data structure or document, e.g., only the name of the owner is changed, but the rest of the registration information stays the same. In some embodiments, user data328may include data representing the entire data structure or document, e.g., the entire version of the records, within which data is changed in the transaction, as shown inFIG.3B. Alternatively, in some embodiments, user data328may not necessarily include data representing the entire data structure or document. Instead, user data328may include only data representing the changed or updated data or records, e.g., only a partial version of the records.
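For purposes of illustration only, such a partial revision may be sketched in Python as follows; the field names follow the example tables, while the function itself is an assumption of the sketch.

def changed_fields(previous_version, new_version):
    # Record only the fields whose values differ between the two versions of
    # the document, rather than a snapshot of the entire document.
    return {key: value for key, value in new_version.items()
            if previous_version.get(key) != value}

version_1 = {"ID": 1, "Model": "Model S", "Owner": "John Doe"}
version_2 = {"ID": 1, "Model": "Model S", "Owner": "Jane Doe"}
print(changed_fields(version_1, version_2))  # {'Owner': 'Jane Doe'}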
For example, user data328may only include data representing that the owner of the vehicle, after the transaction in block242committed, is “Jane Doe.” In addition, as indicated inFIG.3A, in some embodiments, data in the “revisions” field of user data324may not necessarily be organized as structured data, e.g., according to predetermined columns and/or rows in a table. Instead, the data may be in a semi-structured or non-structured form, e.g., in JSON or a superset of JSON. Therefore, in the disclosure, the data in the “revisions” field of a block may also be referred to as an “item.” Referring back to the example, when the user performs the transaction in block244to delete the vehicle's registration from the table, similarly, the database may generate yet another block312(e.g., similar to block112), as shown inFIG.3C, to capture the transaction and updated records. As indicated inFIG.3C, block312may include user data320(e.g., similar to user data120) and metadata322(e.g., similar to metadata122). Also, user data320may include data, e.g., in the “transactionInfo” and “revisions” fields, representing the corresponding transaction in block244and the “latest” non-deleted version of records in the table after the transaction is committed. In this example, the “latest” non-deleted version of the records is the current version of the vehicle's registration records in table202ofFIG.2C. Since the vehicle's registration is deleted from the table by the transaction in block244, the “revisions” field of user data320may thus be empty, as indicated inFIG.3C. Note thatFIGS.3A-3C, as well as the blocks described in following sections, are provided only as an example for purposes of illustration. In some embodiments, the content of a block in the journal-based database may not necessarily include exactly the same information as illustrated. Instead, a block may include less or more information. For example, in some embodiments, a block may also include, in the metadata, data representing the position of the block in the chain of the journal, data representing an ID number (different from the blockHash) that is automatically assigned by the database, data representing an ID number assigned to the table, data representing an ID number assigned to the data structure or document within which records are updated, etc. In some embodiments, the user may need to redact data. For example, the user may request to redact personal information like the name of the owner “John Doe,” or to redact a version of an item that includes the personal information such as the name “John Doe,” from the database. In other words, the data requested by the user to be redacted may correspond to a specific version of item(s) (that further includes specific data), or to specific data which is only a partial portion of a version of item(s). According to the foregoing description, the name “John Doe” is part of previous versions of records, and does not exist in the current version of records of table202, as illustrated inFIG.2C. However, the name “John Doe” may be part of the data in one or more blocks of the journal, e.g., existing in (e.g., in the “revisions” field of) user data324of block314that represents at least a partial version (e.g., version 1) of records of the table. Therefore, in some embodiments, the database may search the records of tables202-206(if the database stores tables204-206with the previous versions of records) and blocks312,314, and316of the journal to determine where the data exists.
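For purposes of illustration only, such a search may be sketched in Python as follows; representing the journal as a list of blocks and serializing user data to JSON for matching are assumptions of the sketch.

import json

def find_blocks_containing(journal, specific_data):
    # Scan the user data of the individual blocks to determine where the data
    # requested to be redacted exists.
    return [index for index, block in enumerate(journal)
            if specific_data in json.dumps(block["user_data"])]

# e.g., find_blocks_containing(journal, "John Doe") might return the index of block314.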
For example, the database may determine that the data exists in block314, e.g., in user data324of block314. Accordingly, the database may redact the data in user data324. For purposes of illustration,FIG.3Dis provided as an example to illustrate the update of the user data of the block after the data redaction. As shown inFIGS.3A and3D, after the redaction, the version of the item in the “revisions” field of user data324, which represents at least a partial version of version 1 of records in table204including the name “John Doe,” may be replaced with a hash value “xxxxxx.” As described above, in some embodiments, the redaction may not necessarily replace the data with a hash value, but instead delete the data (e.g., version 1 that includes the name “John Doe”) from the block. In addition, the “revisions” field of user data332may include data representing the date (e.g., “Oct. 23, 2021”) when the redaction is committed. However, as described above, to preserve cryptographic verifiability, metadata326of block314may stay the same. For example, as indicated inFIGS.3A and3D, the “blockHash,” “entriesHash,” and “previousBlockHash” in metadata326of block314may still stay the same. As a result, after the data redaction, block314inFIG.3Dmay still be cryptographically verifiable together with the other blocks of the hash chain in the journal. In addition, if block314includes one or more other metadata, e.g., data representing the position of block314in the chain, data representing an ID number of block314, data representing an ID of the table, data representing an ID of the data structure or document within which records are updated, etc., the additional metadata may also be retained during the data redaction. Moreover, in some embodiments, data in the “transactionInfo” field of block314may also be preserved. For example, the “transactionInfo” field of user data332may still include the same data as user data324, representing the transaction committed in block240(e.g., the creation of the table, and the insertion of the records). Note thatFIG.3Dis provided only as an example for purposes of illustration. In this example, the redaction of the data may replace data representing the entire data structure or document (or the entire version of an item) in the “revisions” field with a hash value. In some embodiments, the data redaction may not necessarily redact the entire data structure or document or the entire version of an item. Instead, it may redact only the data representing the name “John Doe” (e.g., replace only the name “John Doe” with a hash value, or delete only the name), but leave the rest of the data structure or document (or the rest of the version of the item) of the “revisions” field unchanged. In that case, when the user accesses block314, the user may still be able to obtain other information of the vehicle's registration, such as the “ID,” “Manufacturer,” “Model,” “Year,” and/or “VIN” that are not requested to be redacted, but the name “John Doe” may not be accessible anymore. In addition, as described above, in some embodiments, the database may keep tables204-206with the previous versions of records. In that case, when the database searches for existence of the data to be redacted, the database may determine that the data also exists in table204. Accordingly, the database may redact the data in table204, e.g., by replacing the data with a hash value or removing the data from table204.
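For purposes of illustration only, the following Python sketch shows one way in which replacing a revision with a hash value can leave the “entriesHash” verifiable; the assumption that the Merkle leaf nodes are hashes of the individual revisions is introduced by the sketch and is not mandated by the disclosure.

import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaf_hashes):
    level = list(leaf_hashes)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

revisions = [b'{"Owner": "John Doe"}', b'{"transactionInfo": "INSERT ..."}']
root_before = merkle_root([h(r) for r in revisions])
# Redaction stores the hash value of the revision in place of its plaintext,
# so the leaf hash, and therefore the root, can still be recomputed.
redacted_leaf = h(revisions[0])
root_after = merkle_root([redacted_leaf, h(revisions[1])])
assert root_before == root_after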
In some embodiments, the database may further generate block318(e.g., similar to block118), as shown inFIG.3E, to track the transaction that requests the data redaction. In some embodiments, block318may include user data334(e.g., similar to user data134) and metadata336(e.g., similar to metadata136). In some embodiments, user data334may include a “transactionInfo” field that may include data recording the transaction that requests the redaction, and a “redactionInfo” field that may include data associated with the redaction request. However, user data334may not necessarily include data representing the data requested to be redacted. For example, as indicated inFIG.3E, the “transactionInfo” field may include data representing the pseudo-statements committed to perform the data redaction, which may specify the block, item ID, and/or version number of the item corresponding to the data that is to be redacted. In addition, metadata336of block318may include an “entriesHash” that is calculated based on user data334, a “previousBlockHash” that is the hash value of the last block of the chain, e.g., the hash value of block312, and a “blockHash” that is the hash value calculated based on the “entriesHash” and “previousBlockHash” of block318. Therefore, block318may refer back to and be appended to block312in the journal of the database. FIG.4is a block diagram showing example interactions between a user and a database to perform data redaction, according to some embodiments. In this example, database400(e.g., similar to the databases described above) may include query engine404, journal manager406, and journal408. As described above, in some embodiments, journal408may include a hash-chained set of blocks that are generated to track the change history to items in database400. Also, in some embodiments, a current version of the items may be stored in a table (not shown inFIG.4) in database400. In some embodiments, database400may be accessible via interface410, through which user402may access database400via network connections to request performance of various database-related functions. In some embodiments, interface410may include various appropriate types of interfaces, e.g., an application programming interface (API), command line interface (CLI), a website-based interface, etc. In addition, in some embodiments, user402may use various computing devices, e.g., a laptop, desktop, smartphone, tablet, etc., to interact with database400. As indicated inFIG.4, the vertical lines represent time, and the horizontal lines represent individual interactions between user402and database400. In this example, to perform data redaction, user402may provide request420to database400, e.g., via interface410, to request database400to redact specific data. In addition, in some embodiments, request420may specify a version of an item and/or a block that includes the version of the item, in which the specific data resides and is requested to be redacted. In some embodiments, the item and/or block may be specified by the user using the above-described metadata in the blocks of journal408, e.g., data representing the position of a block, data representing the IDs of the item or block, etc. In some embodiments, in response to receiving request420, query engine404of database400may send request422to journal manager406to perform the redaction. Accordingly, journal manager406may validate424whether the specific data exists in the blocks of journal408.
As described above, in some embodiments, request420from the user may specify the item and/or block, in which the specific data resides and is requested to be redacted. In that case, journal manager406may identify the item and/or block in journal408to verify that the specific data exists. In some embodiments, journal manager406may further verify that the data has not been redacted yet. In addition, in some embodiments, request420from the user may not necessarily specify the location of the specific data in the database. Therefore, as part of validation424, journal manager406may search the individual blocks of journal408to determine whether the specific data exists in any of the blocks. Similarly, in some embodiments, if the specific data is identified in a block of journal408, journal manager406may further validate that the specific data in the block has not been redacted before. In some embodiments, the data redaction may be performed in an asynchronous mode. The asynchronous mode may perform the redaction in multiple steps. For example, in a first step, the request to redact the specific data may be committed to journal408. In some embodiments, the first step may be finished in a relatively short period of time, e.g., in a few seconds or minutes. For example, upon obtaining completion426of the validation, journal manager406may command428the redaction request to be committed to journal408. In some embodiments, the commitment of the request to journal408may include generating and appending a new block to the hash-chained blocks of journal408. In some embodiments, the new block may include data, e.g., data similar to user data334inFIG.3E, to record the transaction that requests the redaction of the specific data. In some embodiments, the commitment may also include blocking future accesses to the specific data in the identified block. For example, if database400receives a request to read the specific data, an error message may be returned to indicate that the specific data is not accessible or does not exist anymore. In some embodiments, when the redaction transaction is recorded in journal408, journal manager406may send indication430of completion of the commitment to query engine404. In response, query engine404may send acknowledgement432to user402, e.g., to notify user402that the redaction request is committed. Referring back to the above-described asynchronous mode, in a second step of the asynchronous mode, after acknowledgement432is provided, the redaction may be materialized. The materialization may include actually redacting434the specific data from the identified block of journal408. In some embodiments, the second step may be finished in a relatively long period of time, e.g., in a few days or months. Therefore, one purpose of the asynchronous mode is to provide user402acknowledgement432in a relatively short period of time (e.g., seconds or minutes as described above) after receiving request420, without requiring user402to actually wait until the specific data is redacted in journal408. This is especially beneficial when the specific data requested to be redacted includes a large amount of data. In some embodiments, once the redaction of the actual data is completed in journal408, journal manager406may obtain indication436from journal408. In some embodiments, journal manager406may further send optional indication438to inform query engine404of completion of the redaction materialization.
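For purposes of illustration only, the two-step asynchronous mode may be sketched in Python as follows; the use of a thread and an in-memory queue is an illustrative stand-in for the service's actual execution model, and all names are assumptions of the sketch.

import queue
import threading

pending = queue.Queue()

def commit_redaction(journal, block_id):
    # Step one: commit the redaction request and block future accesses to the
    # specific data; this returns in a relatively short period of time.
    journal.setdefault("blocked", set()).add(block_id)
    pending.put(block_id)
    return "acknowledged"

def materializer(journal):
    # Step two: materialize the redaction later, potentially much later.
    while True:
        block_id = pending.get()
        journal["blocks"][block_id]["user_data"] = {"redacted": True}
        pending.task_done()

journal = {"blocks": {"block-1": {"user_data": {"Owner": "John Doe"}}}}
threading.Thread(target=materializer, args=(journal,), daemon=True).start()
print(commit_redaction(journal, "block-1"))  # acknowledged before materialization
pending.join()  # in the actual service, materialization may take days or months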
As described above, in some embodiments, the specific data to be redacted may exist not only in the block(s) of journal408, but also in other locations of database400, e.g., in a previous and/or current version(s) of the item stored in database400. In that case, in some of the above interactions, database400may also identify the other storage locations for the specific data, e.g., in a version of the item in a table stored in database400, and accordingly redact the data from the identified storage locations. Similar to what is described above, the redaction of the data from the other storage locations may also be performed in an asynchronous mode. For example, query engine404and/or journal manager406may first identify and validate existence of the specific data in the other storage locations, and redact the actual data afterwards. Note that the above-described asynchronous mode is only provided as an example for purposes of illustration. In some embodiments, the data redaction may be performed in a different, synchronous mode. For example, in the synchronous mode, database400may not necessarily divide the data redaction into multiple steps and provide acknowledgement432in the middle of the redaction. Instead, database400may combine the operations indicated by422-430and434-438, and provide acknowledgement432only after the specific data in journal408is redacted (e.g., as indicated by436-438). FIG.5is a block diagram showing an example provider network that provides a data storage service including journal-based databases with data redaction abilities, according to some embodiments. InFIG.5, provider network500may be a private or closed system or may be set up by an entity such as a company or a public sector organization to provide one or more services (such as various types of cloud-based storage) accessible via the Internet and/or other networks to one or more user(s)505. Provider network500may be implemented in a single location or may include numerous data centers hosting various resource pools, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment and the like (e.g., computing system910described below with regard toFIG.9), needed to implement and distribute the infrastructure and storage services offered by provider network500. In some embodiments, provider network500may implement various computing resources or services, such as data storage service(s)510(e.g., object storage services, block-based storage services, or data warehouse storage services), database service520, and other service(s)520, which may include a virtual compute service, data processing service(s) (e.g., map reduce, data flow, and/or other large scale data processing techniques), and/or any other type of network based services (which may include various other types of storage, processing, analysis, communication, event handling, visualization, and security services not illustrated). Data storage service(s)510may implement different types of data stores for storing, accessing, and managing data on behalf of client(s)505as a network-based service that enables one or more user(s)505to operate a data storage system in a cloud or network computing environment. For example, data storage service(s)510may include various types of database storage services (both relational and non-relational) or data warehouses for storing, querying, and updating data. Such services may be enterprise-class database systems that are scalable and extensible.
Queries may be directed to a database or data warehouse in data storage service(s)510that is distributed across multiple physical resources, and the database system may be scaled up or down on an as-needed basis. The database system may work effectively with database schemas of various types and/or organizations, in different embodiments. In some embodiments, clients/subscribers may submit queries in a number of ways, e.g., interactively via an SQL interface to the database system. In other embodiments, external applications and programs may submit queries using Open Database Connectivity (ODBC) and/or Java Database Connectivity (JDBC) driver interfaces to the database system.

Data storage service(s)510may also include various kinds of object or file data stores for putting, updating, and getting data objects or files, which may include data files of unknown file type. Such data storage service(s)510may be accessed via programmatic interfaces (e.g., APIs) or graphical user interfaces. Data storage service(s)510may provide virtual block-based storage for maintaining data as part of data volumes that can be mounted or accessed similar to local block-based storage devices (e.g., hard disk drives, solid state drives, etc.) and may be accessed utilizing block-based data storage protocols or interfaces, such as internet small computer systems interface (iSCSI).

In some embodiments, database service520may allow user(s)505to create journal-based databases (e.g., similar to the databases described above). The journal-based databases may include tables individually associated with respective journals. Each journal may include a hash-chained set of blocks with cryptographic verifiability. Database service520may allow users to redact data thoroughly from the journal-based databases, e.g., from tables and/or blocks of the journals, as described above.

Other service(s)520may include various types of data processing services to perform different functions (e.g., anomaly detection, machine learning, querying, or any other type of data processing operation). For example, in at least some embodiments, data processing services may include a map reduce service that creates clusters of processing nodes that implement map reduce functionality over data stored in one of data storage service(s)510. Various other distributed processing architectures and techniques may be implemented by data processing services (e.g., grid computing, sharding, distributed hashing, etc.). Note that in some embodiments, data processing operations may be implemented as part of data storage service(s)510(e.g., query engines processing requests for specified data).

Generally speaking, user(s)505may use any type of computing devices configurable to submit network-based requests to provider network500via network525, including requests for storage services (e.g., a request to create, read, write, obtain, or modify data in data storage service(s)510, including requests to redact data in journal-based databases as described above). For example, a given user505may use a suitable version of a web browser, or may include a plug-in module or other type of code module configured to execute as an extension to or within an execution environment provided by a web browser.
Alternatively, a user505may use an application such as a database application (or user interface thereof), a media application, an office application or any other application that may make use of storage resources in data storage service(s)510to store and/or access the data to implement various applications. In some embodiments, such an application may include sufficient protocol support (e.g., for a suitable version of Hypertext Transfer Protocol (HTTP)) for generating and processing network-based services requests without necessarily implementing full browser support for all types of network-based data. That is, user505may use an application configured to interact directly with provider network500. In some embodiments, user(s)505may use computing devices configured to generate network-based services requests according to a Representational State Transfer (REST)-style network-based services architecture, a document- or message-based network-based services architecture, or another suitable network-based services architecture.

In various embodiments, network525may encompass any suitable combination of networking hardware and protocols necessary to establish network-based communications between user(s)505and provider network500. For example, network525may generally encompass the various telecommunications networks and service providers that collectively implement the Internet. Network525may also include private networks such as local area networks (LANs) or wide area networks (WANs) as well as public or private wireless networks. For example, both a given user505and provider network500may be respectively provisioned within enterprises having their own internal networks. In such an embodiment, network525may include the hardware (e.g., modems, routers, switches, load balancers, proxy servers, etc.) and software (e.g., protocol stacks, accounting software, firewall/security software, etc.) necessary to establish a networking link between given user505and the Internet as well as between the Internet and provider network500. It is noted that in some embodiments, user(s)505may communicate with provider network500using a private network rather than the public Internet.

FIG.6is a flowchart showing example operations to perform data redaction in a journal-based database, according to some embodiments. In this example, in some embodiments, a request may be received at a journal-based database (e.g., similar to the databases described above) to request redaction of specific data from an item in the database, as indicated in block602. As described above, in some embodiments, the database may be part of a network-accessible or cloud-based data storage service of a provider network, and the request may be received via an interface of the data storage service and/or the provider network. In some embodiments, the received request may specify the location of the item. For example, the request may specify one of a hash-chained set of blocks of the journal of the database in which the specific data resides and is requested to be redacted. As described above, in some embodiments, each of the blocks of the journal may individually include a previous or current version of one or more items of the database. For example, each block may include user data that further includes the “transactionInfo” and “revisions” fields.
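To make the block layout concrete, the following is a hypothetical sketch of one journal block's contents, loosely modeled on the “transactionInfo” and “revisions” fields named above. Every field name other than those two, and the use of SHA-256 over the serialized user data, are illustrative assumptions rather than the actual on-disk format.

```python
# Hypothetical sketch of one journal block. Only "transactionInfo" and
# "revisions" come from the description above; all other details (field
# layout, SHA-256, JSON serialization) are assumptions for illustration.
import hashlib, json

block = {
    "userData": {
        "transactionInfo": {   # the transaction committed to update the item
            "statement": "UPDATE people SET phone = '555-0100' WHERE id = 7",
        },
        "revisions": [         # the version of the item produced by it
            {"id": 7, "name": "A. Person", "phone": "555-0100"},
        ],
    },
}
# Metadata such as a hash value uniquely identifies the block and supports
# the cryptographic verifiability of the chain.
block["hash"] = hashlib.sha256(
    json.dumps(block["userData"], sort_keys=True).encode()
).hexdigest()
print(block["hash"])
```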
Data in the “transactionInfo” field may represent one or more transactions committed to the database to update one or more corresponding items, whereas data in the “revisions” field may represent a version of the corresponding items produced by those transactions. Therefore, when the request includes an indication of the particular block that comprises a version of the item including the specific data, the database may identify the block from the journal, as indicated by block604. Alternatively, in some embodiments, the received request may not necessarily specify the location of the specific data requested to be redacted. In that case, the database may automatically search the database, e.g., the individual blocks of the journal, to identify the particular block that comprises a version of the item including the specific data, as indicated by block604.

In some embodiments, the database may redact the specific data in the identified block, as indicated in block606. As described above, in some embodiments, the redaction may replace the version of the item in the identified block that includes the specific data with a hash value. Alternatively, in some embodiments, the redaction may delete the version of the item including the specific data from the block. In addition, in some embodiments, if the specific data is only part of the version of the item, rather than the entire version of the item, the redaction may redact only the specific data and leave the other data in the version of the item unchanged. Furthermore, as described above, the redaction may not alter existing metadata of the block, or at least the portion of the metadata that is associated with cryptographic verifiability of the block relative to the other blocks in the chain, e.g., the hash value identifying the block, so that the cryptographic verifiability of the hash chain of the journal (including the block after the data is redacted) may still be preserved.

In some embodiments, the specific data may exist in more than one block of the journal. As described above, in some embodiments, the database may only allow data redaction on these blocks one at a time. In that case, the database may redact the specific data from the individual blocks one after another, in a specific or random order. In addition, in some embodiments, the specific data requested to be redacted may exist in a previous version or a current version of records in the database. In that case, the database may also redact the specific data in the previous or current version of records, as described above. In some embodiments, the data redaction may not be allowed on the current version of records. In that case, the database may perform the data redaction in more than one step. For example, as described below inFIG.8, the database may first redact the specific data in the current version of records, and then redact the specific data in the block corresponding to the (previously) “current” version of records.

FIG.7is a flowchart showing example operations to perform data redaction in a journal-based database, according to some embodiments. In this example, in some embodiments, a request may be received at a journal-based database (e.g., similar to the databases described above) to request redaction of specific data from an item in the database, as indicated in block702. The database may include one or more items, and a journal associated with the records.
The journal may include a hash-chained set of blocks individually including data that represent at least a partial version of the items produced after corresponding transactions to update the items. The partial version of an item may be a partial previous version of the item, or a partial current version of the item. In addition, the blocks may individually include metadata, such as a hash value generated based on and representing the contents of the individual blocks, which may also uniquely identify the individual blocks.

As described above, in some embodiments, the data redaction may be performed in an asynchronous mode. For example, in response to the request, the database may identify one of the blocks comprising a version of the item including the specific data, as indicated in block704. In some embodiments, the database may also validate that the specific data in the identified block has not been previously redacted. In some embodiments, in response to identifying the block, the database may first record the received transaction that requests the data redaction in the journal, as indicated by block706. As described above, to record the redaction transaction, the database may generate and append a new block to the hash-chained set of blocks of the journal. The new block may include data, e.g., data similar to user data334inFIG.3E, to record the transaction that requests the redaction of the specific data. In some embodiments, the database may further block future accesses to the specific data, as described above. In some embodiments, after the database records the redaction request, e.g., in the appended block, in the journal, the database may provide an acknowledgement to indicate that the redaction is committed, as indicated by block706. As described above, the transaction may be recorded and the acknowledgement may be provided in a relatively short period of time after receiving the redaction request.

In some embodiments, after providing the acknowledgement, the database may materialize the redaction, as indicated in block708. For example, in some embodiments, the database may replace the version of the item including the specific data in the identified block with a hash value. Alternatively, in some embodiments, the database may delete the version of the item including the specific data from the identified block. As described above, the materialization of the redaction may be performed over a relatively longer period of time, e.g., in days or months.

FIG.8is a flowchart showing example operations to perform data redaction in a journal-based database, according to some embodiments. In this example, in some embodiments, a request may be received at a journal-based database (e.g., similar to the databases described above) to request redaction of specific data from an item in the database, as indicated in block802. As described above, in some embodiments, a current version of the items may be contained in a table, and the specific data requested to be redacted may also exist in the current version of an item in the table, as indicated by block804. In some embodiments, the data redaction may not necessarily be allowed to apply to the current version of the records directly. Therefore, in that case, the database may first delete the current version of the item including the specific data from the table to generate another table with a newer version of items, where the specific data does not exist anymore in the newer version of the items, as indicated by block806.
Further, the database may generate and append a new block to the hash-chained set of blocks of the journal to record the deletion of the specific data and the newer version of the item produced by the deletion. Since the current version of the item including the specific data is deleted, the newer version of the item in the new block may be empty, e.g., as indicated by data in the “revisions” field of user data320inFIG.3C. Because the newer version is generated, the previously “current” version of the item may now become an old version of the item, and therefore the data redaction may be applied, e.g., in similar ways as described above. For example, in some embodiments, the database may identify a block of the journal that comprises the previously “current” version of the item including the specific data, and replace the previously “current” version of the item in the identified block with a hash value, as indicated by block808.

The database system described above may in various embodiments be implemented by any combination of hardware and software, for instance, a computer system as inFIG.9that includes one or more processors executing program instructions stored on a computer-readable storage medium coupled to the processors. In the illustrated embodiment, computer system900includes one or more processors910coupled to a system memory920via an input/output (I/O) interface930. Computer system900further includes a network interface940coupled to I/O interface930. WhileFIG.9shows computer system900as a single computing device, in various embodiments a computer system900may include one computing device or any number of computing devices configured to work together as a single computer system900.

In various embodiments, computer system900may be a uniprocessor system including one processor910, or a multiprocessor system including several processors910(e.g., two, four, eight, or another suitable number). Processors910may be any suitable processors capable of executing instructions. For example, in various embodiments, processors910may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors910may commonly, but not necessarily, implement the same ISA.

System memory920may be one embodiment of a computer-accessible medium configured to store instructions and data accessible by processor(s)910. In various embodiments, system memory920may be implemented using any non-transitory storage media or memory media, such as magnetic or optical media, e.g., disk or DVD/CD coupled to computer system900via I/O interface930. A non-transitory computer-accessible storage medium may also include any volatile or non-volatile media such as RAM (e.g. SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computer system900as system memory920or another type of memory. Further, a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface940. In the illustrated embodiment, program instructions (e.g., code) and data implementing one or more desired functions, such as data redaction in a journal-based database described above, are shown stored within system memory920as code926and data927.
In one embodiment, I/O interface930may be configured to coordinate I/O traffic between processor910, system memory920, and any peripheral devices in the device, including network interface940or other peripheral interfaces. In some embodiments, I/O interface930may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory920) into a format suitable for use by another component (e.g., processor910). In some embodiments, I/O interface930may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface930may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface930, such as an interface to system memory920, may be incorporated directly into processor910.

Network interface940may be configured to allow data to be exchanged between computer system900and other devices960attached to a network or networks950. In various embodiments, network interface940may support communication via any suitable wired or wireless general data networks, such as types of Ethernet network, for example. Additionally, network interface940may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol.

In some embodiments, system memory920may be one embodiment of a computer-accessible medium configured to store program instructions and data as described above forFIG.1—xx. Generally speaking, a computer-accessible medium may include non-transitory storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD coupled to computer system900via I/O interface930. A non-transitory computer-accessible storage medium may also include any volatile or non-volatile media such as RAM (e.g. SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computer system900as system memory920or another type of memory. Further, a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface940.

Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Generally speaking, a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM, volatile or non-volatile media such as RAM (e.g. SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as network and/or a wireless link.

The various systems and methods as illustrated in the figures and described herein represent example embodiments of methods. The systems and methods may be implemented manually, in software, in hardware, or in a combination thereof.
The order of any method may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Although the embodiments above have been described in considerable detail, numerous variations and modifications may be made as would become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such modifications and changes and, accordingly, the above description is to be regarded in an illustrative rather than a restrictive sense.
DETAILED DESCRIPTION

In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of embodiments. However, it will be understood by those of ordinary skill in the art that the embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the embodiments.

One or more specific embodiments of the present invention will now be described. In an effort to provide a concise description of these embodiments, all features of an actual implementation may not be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.

FIG.1is a block diagram of a database architecture according to some embodiments. Embodiments are not limited to theFIG.1architecture. The database system100includes a DataBase Management System (“DBMS”)105, a query processor110, and a data store120. Generally, the database system100operates to receive queries and return results based on data stored within the data store120. A received query may include instructions to create, read, update, or delete one or more records. The database system100may comprise any single-node or distributed database system that is or becomes known.

Generally, the database management system105includes program code to perform administrative and management functions of the database system100. Such functions may include external communication, lifecycle management, snapshot and backup, indexing, optimization, garbage collection, and/or any other database functions that are or become known. The query processor110processes received Structured Query Language (“SQL”) and Multi-Dimensional eXpression (“MDX”) statements. The query processor110comprises program code executable to pre-process a received query, generate a query execution plan, and execute the plan. As will be described, the query processor110may operate in some embodiments to replace a named view within a query with a materialized view.

The data store120comprises data tables storing data and system tables storing metadata such as a database catalog, as is known in the art. The data store120of the present example also stores persisted tables of the above-described materialized views. The data store120may also comprise a distributed system using any combination of storage devices that are or become known. In some embodiments, the data of data store120may comprise one or more of conventional tabular data, row-based data, column-based data, and object-based data. Moreover, the data may be indexed and/or selectively replicated in an index to allow fast searching and retrieval thereof. The database system100may support multi-tenancy to separately support multiple unrelated clients by providing multiple logical database systems which are programmatically isolated from one another.
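As one concrete illustration of the view replacement mentioned above, the following hypothetical Python sketch rewrites a query so that a named view is answered from its persisted materialized table. The view and table names are invented, and the regex-based text rewrite is a simplification: a real query processor performs this substitution on a parsed query plan, not on raw SQL text.

```python
# Hypothetical sketch of replacing a named view in a query with its
# materialized table. Names are illustrative; real systems rewrite the
# parsed plan rather than the SQL string.
import re

MATERIALIZED = {"sales_by_region": "mat_sales_by_region_data"}

def substitute_views(sql: str) -> str:
    for view, table in MATERIALIZED.items():
        sql = re.sub(rf"\b{re.escape(view)}\b", table, sql)
    return sql

print(substitute_views("SELECT region, total FROM sales_by_region WHERE total > 0"))
# -> SELECT region, total FROM mat_sales_by_region_data WHERE total > 0
```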
The database system100may implement an “in-memory” database, in which a full database is stored in volatile (e.g., non-disk-based) memory such as Random Access Memory (“RAM”). The full database may be persisted in and/or backed up to fixed disks (not shown). Embodiments are not limited to an in-memory implementation. For example, data may be stored in RAM (e.g., cache memory for storing recently used data) and one or more fixed disks (e.g., persistent memory for storing their respective portions of the full database).

An administrative application130may be operated by an analyst to configure and manage the database system100. The administrative application130may communicate with the DBMS105via a graphical user interface and/or console. Configuration of the database system100may include configuration of user permissions, specification of backup parameters, definition of logical schemas, definition of views, definition of materialized views, etc. These permissions, parameters and definitions may be stored within system tables of the data store120and used during operation of the database system100.

FIG.2illustrates system tables210and data tables220of a database system according to some embodiments. The system tables210store database objects of two different views, View1and View2. Each database object includes a SELECT statement specifying underlying base tables of each view. The system tables210also store a database object associated with a materialized view Mat View. This database object includes a SELECT statement specifying underlying base tables of the materialized view. The materialized view Mat View is associated with a persisted table Mat View Data of the data tables220.

FIG.3is a high-level block diagram of a system300according to some embodiments. The system300includes system tables310, a transaction compute unit320, a query parser330, and a materialized view compute unit350. The system tables310may, according to some embodiments, include tables that store system metadata associating a first materialized view with a first view and a first table. At (A), the transaction compute unit320receives an update request and may process an update request relevant to the first view. The query parser330may capture the update request from the transaction compute unit320at (B). The query parser330may then detect which system tables are associated with the update request relevant to the first view at (C) and, responsive to the request, arrange for first table data to be replicated. At (D) the query parser330transmits a materialized view request to the materialized view compute unit350. The materialized view compute unit350may be created, according to some embodiments, as a Materialized View-as-a-Service (“MVaaS”) independent of the transaction compute unit. The materialized view compute unit350receives the materialized view request and refreshes the first materialized view at (E). The materialized view compute unit350may then automatically compute the first materialized view and store a result of the computation. As used herein, the term “automatically” may refer to a device or process that can operate with little or no human interaction.
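The metadata relationship ofFIG.2and the detection step at (C) can be sketched as follows. This is a minimal, hypothetical illustration; the dictionary layout and all table names are assumptions, since real system tables store full SELECT definitions rather than a simple mapping.

```python
# Sketch of system-table metadata associating a materialized view with a
# defining view and its base tables, plus the detection step: given an
# update to a base table, find which materialized views need refreshing.
# Data layout and names are illustrative assumptions only.
SYSTEM_TABLES = {
    # materialized view -> (defining view, underlying base tables)
    "mat_view": ("view1", {"base_table_a", "base_table_b"}),
}

def affected_materialized_views(updated_table: str):
    return [mv for mv, (_view, bases) in SYSTEM_TABLES.items()
            if updated_table in bases]

# An update captured from the transaction compute unit:
print(affected_materialized_views("base_table_a"))   # -> ['mat_view']
```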
According to some embodiments, devices, including those associated with the system300and any other device described herein, may exchange data via any communication network, which may be one or more of a Local Area Network (“LAN”), a Metropolitan Area Network (“MAN”), a Wide Area Network (“WAN”), a proprietary network, a Public Switched Telephone Network (“PSTN”), a Wireless Application Protocol (“WAP”) network, a Bluetooth network, a wireless LAN network, and/or an Internet Protocol (“IP”) network such as the Internet, an intranet, or an extranet. Note that any devices described herein may communicate via one or more such communication networks.

The elements of the system300may store data into and/or retrieve data from various data stores (e.g., the system tables310), which may be locally stored or reside remote from the transaction compute unit320. Although a materialized view compute unit350is shown inFIG.3, any number of such devices may be included. Moreover, various devices described herein might be combined according to embodiments of the present invention. For example, in some embodiments, the transaction compute unit320and system tables310might comprise a single apparatus. Some or all of the system300functions may be performed by a constellation of networked apparatuses, such as in a distributed processing or cloud-based architecture.

An analyst may access the system300via a remote device (e.g., a Personal Computer (“PC”), tablet, or smartphone) to view data about and/or manage operational data in accordance with any of the embodiments described herein. In some cases, an interactive graphical user interface display may let the analyst define and/or adjust certain parameters (e.g., to set up or adjust various mapping relationships) and/or provide or receive automatically generated recommendations, results, and/or alerts from the system300.

FIG.4illustrates a method to perform big data analytics for a cloud computing environment in a secure and efficient manner according to some embodiments. The flow charts described herein do not imply a fixed order to the steps, and embodiments of the present invention may be practiced in any order that is practicable. Note that any of the methods described herein may be performed by hardware, software, an automated script of commands, or any combination of these approaches. For example, a computer-readable storage medium may store instructions that when executed by a machine result in performance according to any of the embodiments described herein.

At S410, a transaction compute unit may process an update request relevant to a first view. Moreover, a plurality of system tables may store system metadata that associates a first materialized view with the first view and a first table. At S420, a computer processor of a query parser may capture the update request from the transaction compute unit. At S430, the system may detect which system tables are associated with the update request relevant to the first view. Responsive to the request, the system may arrange for first table data to be replicated at S440. At S450, the system may transmit a materialized view request to a materialized view compute unit. The materialized view compute unit may, according to some embodiments, be created as a Materialized View-as-a-Service (“MVaaS”) independent of the transaction compute unit.
As used herein, the phrase “as-a-service” may refer to something being presented to a consumer as a service (e.g., providing endpoints, usually API driven, for customers or consumers to interface). As a result, compute unit resources of the materialized view compute unit (e.g., CPU, memory, etc.) can be scaled independently of those associated with the transaction compute unit. According to some embodiments, a plurality of materialized view compute units are associated with the transaction compute unit. In some embodiments, the transaction compute unit comprises a database transaction compute unit.

At S460, the system may refresh the first materialized view. At S470, the materialized view compute unit computes the first materialized view and stores a result of the computation at S480. According to some embodiments, the computation of the first materialized view comprises serverless execution such that a process is only spun up on demand. Note that the serverless execution might be achieved using a Linux container or a Web-assembly (“Wasm”) module in a Wasm browser sandbox that has a memory heap not accessible from outside the Wasm browser sandbox.

Thus, embodiments may create an MVaaS that separates the management of materialized views into different compute units which can be scaled independently of transaction compute units520of the database.

FIG.5is a system500architecture in accordance with some embodiments. The system500includes a transaction compute unit520that sends an update event to a materialized view compute unit550. The system500may also include a means of capturing and transferring the updates relevant to a specific view from the transaction compute unit520to the materialized view compute unit550. The computation can then run on the materialized view compute unit550(and be transferred from the same unit or transferred back to the transaction compute unit520).

A query parser may detect which tables get used to create the specific view. For example, a materialized view may capture API hits for tenants grouped by different tenants. By parsing the query for the view, the system can determine that this query selects API hits from a certain table and then aggregates by tenant. If there is a replica of the database running on the materialized view compute unit550, the transaction compute unit520(e.g., continuously and/or when it receives a request for updating the view) does the following:
replicate the table data on the replica (note that replication may be a continuous process and not happen on receipt of the query; instead, the materialized view request may only be sent to the materialized view compute unit550on receipt of the SQL query), and
send a materialized view request to the materialized view compute unit550.

As part of this process, the system may run a refresh of the materialized view on the materialized view compute unit550. According to some embodiments, refresh of the materialized view can be performed in a scheduled way (e.g., combined with data change detection on the materialized view compute units550). Although a single materialized view compute unit550is illustrated inFIG.5, note that embodiments may support multiple units for multiple views. For example,FIG.6is a system600architecture that supports multiple materialized view compute units650according to some embodiments. The system600includes a transaction compute unit620that sends update events to the materialized view compute units650as appropriate.
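The transaction-side behavior listed above (continuous replication, with a materialized view request sent only when a query for the view arrives) can be sketched as follows. The queues merely stand in for whatever transport connects the two compute units; all names are assumptions for illustration.

```python
# Sketch of the transaction-side behavior described above: replication is
# continuous, while the materialized view request is sent only on receipt
# of the SQL query for the view. Transport and names are illustrative.
import queue

replication_stream = queue.Queue()   # stands in for continuous replication
mv_requests = queue.Queue()          # requests to the MV compute unit

def on_table_update(table: str, row: dict):
    replication_stream.put((table, row))   # continuous, per update

def on_view_query(view: str):
    mv_requests.put({"refresh": view})     # only on receipt of the query

on_table_update("api_hits", {"tenant": "t1", "endpoint": "/v1/x"})
on_view_query("api_hits_by_tenant")
print(mv_requests.get())   # -> {'refresh': 'api_hits_by_tenant'}
```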
According to some embodiments, on demand updates are a scheduled job (e.g., combined with data changes detected) which can trigger the serverless unit for refreshing the materialized views. Since the updates happen only on demand, a materialized view compute unit650can be designed in a way to execute the update process in a serverless way. That is, the process is only spun up on demand to save compute resources (e.g., CPU and memory).

FIG.7is a serverless method in accordance with some embodiments. At S710, the system may refresh a first materialized view using a materialized view compute unit serverless function. At S720, the first materialized view is computed by the materialized view compute unit serverless function. At S730, the materialized view compute unit serverless function can then store a result of the computation.

According to some embodiments, the serverless execution is achieved using Linux containers. In other embodiments, more granular serverless execution is achieved using WebAssembly (“Wasm”) modules. Wasm provides a portable binary-code format and a corresponding text format for executable programs as well as software interfaces to facilitate interactions between programs and the host environment. Wasm may enable high-performance applications on web pages and can be employed in other environments. It is an open standard that supports multiple languages on multiple operating systems.

For example,FIG.8is a high-level block diagram of a Wasm system800where a client browser may execute a Wasm module in a Wasm browser sandbox (associated with a memory heap that is not accessible from outside the Wasm browser sandbox). In particular, a browser sandbox850may execute a WebAssembly module820. For the WebAssembly module820, the browser sandbox850may utilize a decode element855before executing a Just-In-Time (“JIT”) compiler856that also receives browser APIs890. The output of the JIT compiler856may comprise machine code860. According to some embodiments, the WebAssembly module820is a portable binary format designed to be: compact and fast to parse/load so it can be efficiently transferred, loaded, and executed by the browser; compatible with existing web platforms (e.g., to allow calls to/from existing code, access to browser APIs890, etc.); and run in a secure browser sandbox850. Note that higher-level languages can be compiled to a WebAssembly module820that is then run by the browser in the same sandboxed environment. Moreover, WebAssembly modules820compiled from higher-level languages may have already been parsed and compiled/optimized so they can go through a fast-decoding phase (as the module is already in bytecode format close to machine code) before being injected into the JIT compiler856. As a result, WebAssembly may represent a more efficient/faster way of running code in a browser, using any higher-level language that can target it for development, while being compatible with existing web technologies.

To save storage costs, a transaction compute unit and a materialized view compute unit can share a filesystem. For example,FIG.9is a system900architecture in which a transaction compute unit920and a materialized view compute unit950share a filesystem990. The filesystem990may be associated with a method and data structure that are used to control how data is stored and retrieved. In this system900, the transaction compute unit920may just send an update event as and when specific tables get updated.
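One way such an update event could travel over the shared filesystem ofFIG.9is sketched below. The file paths and the event format are purely illustrative assumptions; the description above does not specify how events are encoded.

```python
# Sketch of update-event signaling over a shared filesystem: the
# transaction compute unit drops an event file, and the materialized view
# compute unit picks it up. Paths and event format are assumptions.
import json, pathlib, tempfile

shared_fs = pathlib.Path(tempfile.mkdtemp())      # stands in for filesystem990

def send_update_event(table: str):
    # Transaction compute unit: signal that a specific table was updated.
    event = shared_fs / f"update-{table}.json"
    event.write_text(json.dumps({"table": table}))

def pending_refreshes():
    # Materialized view compute unit: collect events and refresh views.
    return [json.loads(p.read_text())["table"]
            for p in shared_fs.glob("update-*.json")]

send_update_event("api_hits")
print(pending_refreshes())   # -> ['api_hits']
```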
The materialized view compute unit950can then run the view refresh code on the materialized view compute unit950independently. Thus, embodiments may separate the processing of materialized view updates and refreshes into a separate set of compute resources as compared to the transaction compute unit. These materialized view compute units can then be independently scaled on-demand as appropriate. The materialized view can also be created in a serverless way (depending on the frequency of updates to the views).

Note that the embodiments described herein may be implemented using any number of different hardware configurations. For example,FIG.10is a block diagram of an apparatus or platform1000that may be associated with the systems300,600ofFIGS.3and6, respectively (and/or any other system described herein). The platform1000comprises a processor1010, such as one or more commercially available CPUs in the form of one-chip microprocessors, coupled to a communication device1020configured to communicate via a communication network (not shown inFIG.10). The communication device1020may be used to communicate, for example, with one or more remote user platforms or a query device1024via a communication network1022. The platform1000further includes an input device1040(e.g., a computer mouse and/or keyboard to input data about a monitored system or data sources) and an output device1050(e.g., a computer monitor to render a display, transmit views and/or create monitoring reports). According to some embodiments, a mobile device and/or PC may be used to exchange data with the platform1000.

The processor1010also communicates with a storage device1030. The storage device1030can be implemented as a single database, or the different components of the storage device1030can be distributed using multiple databases (that is, different deployment data storage options are possible). The storage device1030may comprise any appropriate data storage device, including combinations of magnetic storage devices (e.g., a hard disk drive), optical storage devices, mobile telephones, and/or semiconductor memory devices. The storage device1030stores a program1012and/or a data analytics engine1014for controlling the processor1010. The processor1010performs instructions of the programs1012,1014, and thereby operates in accordance with any of the embodiments described herein.

For example, the processor1010may facilitate data analytics for a cloud computing environment. A plurality of system tables1060may store system metadata that associates a first materialized view with a first view and a first table. A transaction compute unit may process an update request relevant to the first view. A query parser may capture the update request from the transaction compute unit and detect which system tables are associated with the update request relevant to the first view. Responsive to the request, the query parser arranges for first table data to be replicated and transmits a materialized view request to a materialized view compute unit. The materialized view compute unit may be created as an MVaaS independent of the transaction compute unit. The materialized view compute unit may receive the materialized view request, refresh the first materialized view, compute the first materialized view, and store a result of the computation. The programs1012,1014may be stored in a compressed, uncompiled and/or encrypted format.
The programs1012,1014may furthermore include other program elements, such as an operating system, clipboard application, a database management system, and/or device drivers used by the processor1010to interface with peripheral devices. As used herein, data may be “received” by or “transmitted” to, for example: (i) the platform1000from another device; or (ii) a software application or module within the platform1000from another software application, module, or any other source. In some embodiments (such as the one shown inFIG.10), the storage device1030further stores the system tables1060and an MVaaS database1100. An example of a database that may be used in connection with the platform1000will now be described in detail with respect toFIG.11. Note that the database described herein is only one example, and additional and/or different data may be stored therein. Moreover, various databases might be split or combined in accordance with any of the embodiments described herein.

Referring toFIG.11, a table is shown that represents the MVaaS database1100that may be stored at the platform1000according to some embodiments. The table may include, for example, entries identifying queries received in connection with a cloud computing environment. The table may also define fields1102,1104,1106,1108for each of the entries. The fields1102,1104,1106,1108may, according to some embodiments, specify a query identifier1102, a view identifier1104, an MVaaS identifier1106, and a computation result1108. The MVaaS database1100may be created and updated, for example, when a new query is received, when a computation result1108is generated, etc. The query identifier1102might be a unique alphanumeric label or link that is associated with a materialized view in the analytics domain. The view identifier1104may define the view (e.g., based on information in system tables) and the MVaaS identifier1106may specify a serverless materialized view compute unit spun up separately from a transaction compute unit. The computation result1108may represent a computation of the view as generated by the MVaaS.

FIG.12is a human-machine interface display1200in accordance with some embodiments. The display1200includes a graphical representation1210or dashboard that might be used to manage or monitor a query service for analytics framework (e.g., associated with a cloud provider). In particular, selection of an element (e.g., via a touchscreen or computer mouse pointer1220) might result in the display of a popup window that contains configuration data. The display1200may also include a user selectable “Edit System” icon1230that an analyst may use to request system changes (e.g., to investigate or improve system performance).

Thus, embodiments may help perform analytics for a cloud computing environment in a secure and efficient manner. Although some embodiments have been described in connection with an SAP® HANA database, embodiments may be associated with other databases (e.g., PostgreSQL or ORACLE®) that separate the creation of materialized views from the actual transaction processing unit. This may help build a separation of concerns and an ability to independently scale these units based on different requirements. Embodiments may also improve system performance and resource utilization as compared to traditional materialized view approaches.

The following illustrates various additional embodiments of the invention.
These do not constitute a definition of all possible embodiments, and those skilled in the art will understand that the present invention is applicable to many other embodiments. Further, although the following embodiments are briefly described for clarity, those skilled in the art will understand how to make any changes, if necessary, to the above-described apparatus and methods to accommodate these and other embodiments and applications.

Although specific hardware and data configurations have been described herein, note that any number of other configurations may be provided in accordance with some embodiments of the present invention (e.g., some of the data associated with the databases described herein may be combined or stored in external systems). Moreover, although some embodiments are focused on particular types of queries, any of the embodiments described herein could be applied to other types of queries. Moreover, the displays shown herein are provided only as examples, and any other type of user interface could be implemented. For example,FIG.13shows a handheld tablet computer1300rendering an MVaaS for analytics display1310that may be used to view or adjust existing system framework components and/or to request additional data about the system configuration (e.g., via a “More Info” icon1320).

The present invention has been described in terms of several embodiments solely for the purpose of illustration. Persons skilled in the art will recognize from this description that the invention is not limited to the embodiments described but may be practiced with modifications and alterations limited only by the spirit and scope of the appended claims.
Throughout the drawings, reference numbers may be reused to indicate correspondence between referenced elements. Nevertheless, use of different numbers does not necessarily indicate a lack of correspondence between elements. And, conversely, reuse of a number does not necessarily indicate that the elements are the same.

DETAILED DESCRIPTION

Definitions

In order to facilitate an understanding of the systems and methods discussed herein, a number of terms are defined below. The terms defined below, as well as other terms used herein, should be construed to include the provided definitions, the ordinary and customary meaning of the terms, and/or any other implied meaning for the respective terms. Thus, the definitions below do not limit the meaning of these terms, but only provide example definitions.

A “document” refers to an electronically stored paper or other written item furnishing information and includes, without limitation, electronically stored books, articles, letters, passports, deeds, bills of sale, bills of lading, forms, and any other documents referred to herein.

“Structured documents” are documents in which information is uniformly positioned in the same location. An example of a structured document is the Internal Revenue Service Form W-2. Employees in the United States fill out the same Form W-2, which includes information types such as social security number (SSN), name, and wages, in the same location.

“Semi-structured documents” may have similar information on them, but the information is not necessarily positioned in the same location for all variations. Examples of semi-structured documents are invoices. Most companies create invoices, and these invoices tend to include similar information, such as invoice amount, invoice date, part numbers, shipping date, etc. But this information is not positioned in the same location across the many vendors or companies that create invoices.

“Unstructured documents” are documents that do not include similar information as other documents and in which the information is not positioned in a particular location. An example of an unstructured document is the message body of an email, a blog post, or a TWEET® communication (Twitter, Inc., San Francisco, California). The message body of an email may have information about opening an accident claim with an insurance company. Other emails and letters relating to this claim may contain information such as name, account number, address, and accident date, but no document will look like any other document.

A “pre-defined value” is a value of interest.

A “contender value” is a value that can possibly be associated with a pre-defined value. Before the system makes a decision whether a contender value is positively associated with a pre-defined value, the system will evaluate the contender value across many dimensions. At the beginning, each word on a page of a document is a contender value. After going through each dimension, the contender values will be upgraded to values of interest, and the contender with the highest score will be deemed as positively associated with the pre-defined value. For example, when evaluating the textual string “Ephesoft agrees to pay $1,000 for taxes and $200 for interest on Jan. 1, 2015,” the system may be instructed to locate information positively associated with “tax amount.” The system will consider all 15 words as contender values. When the software is evaluating amounts, the formatting dimension will reduce the contender values to two ($1,000 and $200).
Other dimensions like keyword dimensions will finally decide $1,000 is the best choice for tax amount.

A “block” is a textual grouping of one or more words and may include a contender value or a pre-defined value.

An “anchor block” is a block that includes or appears spatially near a specific contender value or a specific pre-defined value on a page of a document.

A “compilation” is a collection of one or more electronically stored documents.

A “confidence” is a numerical likelihood that a contender value is positively associated with a pre-defined value.

A “field type” represents the data type for a particular value.

A “keyword” is a word assigned by a user as associated with a pre-defined value.

A “page” is an electronically stored sheet in a compilation.

A “pre-selected page” is a page of interest.

A “weight” is a number assigned to a data item that reflects its relative importance.

A “word” is a single distinct meaningful element on a page typically shown with a space and/or punctuation element(s) on either side.

Technological Improvements

Various embodiments of the present disclosure provide improvements to various technologies and technological fields. For example, various aspects can enable users to mine document stores for information that can be charted, graphed, studied, and compared to help make better decisions. These could be financial documents, patient records, contracts, HR records, or other types of documents typically stored in an enterprise content management system, a large file store, and the like. In another aspect, the improvements can be deployed such that the system does not require information technology experts or specialized document management experts to run it. It should be understood that the invention can be embodied or carried out in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other advantages as can be taught or suggested herein.

Various embodiments of the present disclosure discuss recently arisen technological problems and solutions inextricably tied to those technologies. For example, some parts of the specification disclose technology that allows for identification of specific data in huge electronic repositories of unstructured or semi-structured documents, a recently arisen technological problem. Such a usage of electronic documents is not possible in a system without computer technology, and therefore is inextricably tied to at least specialized systems featuring electronic document storage.

In addition, certain embodiments address the realization that modern computing is both a blessing and a curse. It has reduced the need to store and maintain paper records. But modern computers have saddled entities with a serious problem. Entities can now cheaply store electronic data in an infinitesimal fraction of the space required for equivalent paper records. And now that entities can easily store vast amounts of electronic data, they do, often without regard for what to do with those overwhelming data stores later. The analysts tasked with reviewing such large pluralities of data cannot keep up with the influx, and time-sensitive information can remain undetected until it is too late to do anything. Simply put, modern computing created a problem, and various embodiments address this computer-centric problem of processing haystacks of electronic transaction data, allowing analysts to quickly find needles in those haystacks.
In other words, such embodiments solve a computer-centric problem with a solution that is necessarily rooted in computer technology. Parts of the specification disclose how to implement specific technological solutions that are otherwise difficult to implement on a computer. Some parts of the specification discuss computer-implementable solutions to non-mathematical problems such as determining “Is this the data I am looking for?”

Parts of the specification disclose improvements to existing technological solutions. For example, some embodiments implement document analysis systems that are far faster to set up or require less manual input than prior solutions. As another example, some embodiments feature improved data location accuracy over previous solutions.

Parts of the specification disclose the use of computer systems to solve problems that cannot be inherently solved by humans alone. The disclosed system can constantly learn from human feedback. As a starting point, initial algorithm parameters dictate how each dimension should be evaluated and weighted by the system. For example, when the system is looking for a field, the parameters might be initially programmed such that certain keywords might be more important than the page on which the field is located. Multiple users interact with the disclosed system, and the system will learn from their feedback and automatically adjust the dimensions and their weights and importance. Such processing on every field, on every document, and for every user interacting with the system is not something a human can do. For example, in American mortgage documents, there are about 450 document types, and each document type can have somewhere between 10 and 1,000 fields. If we have 500 users, the system can fine-tune the extraction for 2.25 billion things to track per feedback. No human can do this.

Description of the Drawings

A computer system to positively associate a pre-defined value with contender values from a compilation of one or more electronically stored documents is disclosed herein. The system can include one or more computer readable storage devices. The one or more computer readable storage devices can be configured to store one or more software modules including computer executable instructions. The one or more computer readable storage devices also can be configured to store the compilation. It was inventively realized that the disclosed system is particularly desirable for processing semi-structured documents and unstructured documents, in which important data may not be included in an expected location on a page or in which the placement of important data may be seemingly arbitrary. Accordingly, in certain embodiments, the electronically stored documents can comprise one or more semi-structured document(s) and/or one or more unstructured document(s). It should be understood, however, that processing of structured documents is not specifically excluded. In any event, each of the one or more electronically stored documents comprises one or more pages. As discussed below, the one or more electronically stored documents are advantageously processed page-by-page.

The computer system can also comprise a network configured to distribute information to a user workstation. The user workstation can be local to or remote from the computer system. Accordingly, the network can comprise internal wiring, a locally connected cable, or an external network such as the Internet.
The computer system can further include one or more hardware computer processors in communication with the one or more computer readable storage devices and configured to execute the one or more software modules in order to cause the computer system to perform various functions. For example, a function can be accessing the compilation from the one or more computer readable storage devices. Such computer readable storage devices can be incorporated in a variety of electronic devices, including mobile devices like tablets or smartphones, and computers like laptops, desktops, and servers. Another function can be receiving information regarding the pre-defined value. For instance, the information can include information about the pre-defined value's format, any keywords associated with the pre-defined value, and/or the two-dimensional spatial relationship to words in a pre-selected page. Yet another function can include, for each page of the compilation, identifying words and contender values on the subject page using optical character recognition (OCR) and post-OCR processing. A related function can include, for each page of the compilation, receiving an identification of words and contender values on the subject page determined using processes such as OCR and post-OCR processing.

As used herein, OCR generally refers to electronic conversion of images of typed, handwritten, or printed text into machine-encoded text. Post-OCR processing generally refers to a process of identifying words in the machine-encoded text. Such post-OCR processing can include comparing strings of the machine-encoded text to a lexicon (a list of words that are allowed to occur in a document). Example lexicons include, for example, all the words in the English language, or a more technical lexicon for a specific field. Post-OCR processing can also include more sophisticated processing such as “near-neighbor analysis” that makes use of co-occurrence frequencies to correct errors, based on the realization that certain words are often seen together. For example, “Washington D.C.” is generally far more common than “Washington DOC.” Knowledge of the grammar of the language being processed can also help determine if a word is likely to be a verb or a noun, for example, allowing for even greater accuracy.

Still another function can include, over all the pages of the compilation, extracting positive contender values as positively associated with the pre-defined value based at least in part on numerical confidence values that certain contender values are associated with the pre-defined value. Optionally, the system can store the positive contender values in the one or more computer readable storage devices and/or transmit the positive contender values over the network to the user workstation in response to a search for values associated with the pre-defined value at the user workstation. The processing to positively identify contender values does not necessarily occur in response to any search for values. Rather, the processing can occur independently of any search and quickly return requested data on demand. Additional functions are discussed below with reference to the figures.
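The lexicon comparison described above can be as simple as a fuzzy lookup against the allowed word list. The following is a minimal Python sketch using the standard-library difflib module and an illustrative toy lexicon; it is an assumed approach for illustration, not the patent's actual implementation.

    import difflib

    # Illustrative lexicon; a real deployment would load a full English or
    # domain-specific word list.
    LEXICON = {"washington", "d.c.", "borrower", "social", "security", "number"}

    def correct_token(token: str, cutoff: float = 0.8) -> str:
        """Snap an OCR token to its closest lexicon entry, if one is close enough."""
        lowered = token.lower()
        if lowered in LEXICON:
            return token
        match = difflib.get_close_matches(lowered, LEXICON, n=1, cutoff=cutoff)
        return match[0] if match else token

    print(correct_token("Borrnwer"))  # -> "borrower"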
1. Blocks

1.1 Block Generation

In at least one embodiment, the system is capable of identifying a block on a page of a document. That the system can process data as blocks, rather than solely individual words, is an important advance because it allows the system to process data having many possible formats. For example, an address can appear on a page on one line. On other pages, the address may be split across multiple lines. The system can recognize both as blocks, based on their spatial properties. For example, a multi-word block can be identified by identifying groups of words. A white-space limit can be calculated for each line based on the spacing between neighboring words. Starting with a first word, a spatial distance to a second word and a third word can be calculated. The shortest distance is used to form blocks. Because font size may affect the space between words in each line, font size can also be factored in when calculating the minimum space. (A sketch of this grouping appears at the end of this section.) FIG. 1 illustrates an excerpt of a page of a document. Block 1 and Block 2 are generated by using white space as the parameter to determine block boundaries.

Blocks are used both for multi-word extraction and for identifying anchor blocks near values. The pre-defined value under inquiry may contain more than one word. For example, address, name, title, and some dates correspond with multi-word pre-defined values that require multi-word extraction. Accordingly, contender values are not necessarily single words. The anchor blocks are used to give equal weight to words in a phrase. For example, in an anchor block “Borrower's Name,” both the words “Borrower's” and “Name” are equally important in identifying the desired value. Thus, in certain embodiments, the one or more hardware computer processors can be configured to execute the one or more software modules in order to cause the computer system to perform grouping the identified words and the identified contender values (from OCR and post-OCR processing) into anchor blocks based on their spatial positioning on the subject page, such that the subject page comprises a plurality of anchor blocks and each anchor block comprises one or more words, one contender value, or a combination thereof.

1.2 Graph-Based Representation of Related Blocks

The blocks on a page can be interrelated by representing them as nodes of a connected graph. These nodes have a bidirectional relationship with each other. For example, FIG. 2 displays text blocks connected to each other forming a graph.

2. Dimensional Model for Extraction

In at least one embodiment, the system employs a multi-dimensional algorithm to extract data from documents. Each dimension is independently applied on pages, and the results from each dimension are aggregated using a weighted-mean calculation. Desirably, the results from each dimension are represented by a numerical value in the range of 0.0 to 1.0. The final confidence score is associated with a contender value for a pre-defined value, and the contender value with the highest confidence score is chosen as positively associated with the pre-defined value. Each dimension has a certain weight, such as a numerical value in the range of 0.0 to 1.0, associated with it, and this weight is multiplied with the result of applying a dimension while extracting contender values for a pre-defined value:

\[ \text{final confidence} = \frac{\sum_i w_i c_i}{\sum_i w_i} \]

where i ranges from 1 to the total number of dimensions, \(w_i\) represents the weight for dimension i, and \(c_i\) represents the confidence for dimension i.

Dimensions are divided into three broad categories: anchor dimensions, value dimensions, and use-case specific dimensions. The next sections discuss each category of dimensions in greater detail.
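Two of the mechanics just described, grouping a line of OCR words into blocks with a font-size-scaled white-space limit (section 1.1) and aggregating per-dimension confidences with the weighted mean above, can be sketched in Python as follows. The Word fields and the gap factor are assumptions for illustration, not parameters prescribed by this disclosure.

    from dataclasses import dataclass

    @dataclass
    class Word:
        text: str
        x: float          # left edge of the word's bounding box
        width: float
        font_size: float

    def group_line_into_blocks(line_words, gap_factor=0.6):
        """Split one text line into blocks wherever the horizontal gap
        exceeds a limit scaled by font size."""
        if not line_words:
            return []
        words = sorted(line_words, key=lambda w: w.x)
        blocks, current = [], [words[0]]
        for prev, nxt in zip(words, words[1:]):
            gap = nxt.x - (prev.x + prev.width)
            if gap > gap_factor * prev.font_size:  # font size factors into the limit
                blocks.append(current)
                current = [nxt]
            else:
                current.append(nxt)
        blocks.append(current)
        return blocks

    def final_confidence(weights, confidences):
        """Weighted mean of per-dimension confidences, per the formula above."""
        return sum(w * c for w, c in zip(weights, confidences)) / sum(weights)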
2.1 Anchor Block Dimensions

Anchor blocks can help positively identify candidate values as associated with pre-defined values, as both have relationships with their respective anchor blocks. The importance of an anchor block is a function of various dimensions.

2.1.1 Location

In certain embodiments, the one or more hardware computer processors can be configured to execute the one or more software modules in order to cause the computer system to numerically determine a first confidence that a contender value is associated with the pre-defined value based at least in part on a comparison of a calculated two-dimensional spatial relationship between the subject contender value and the anchor blocks on the subject page with the pre-defined two-dimensional spatial relationship between the pre-defined value and the words in the pre-selected page. An anchor block's location relative to a contender value is an important property for quantifying the anchor block's relevance to the contender value. Certain embodiments contemplate at least two phases for determining and refining the weight and confidence values assigned to an anchor block with respect to a contender value: a training phase and a testing phase.

2.1.1.1 Training Phase

In the training phase, the two-dimensional spatial relationship between the pre-defined value and words in a pre-selected page is determined. During the training phase, a user provides a sample of (pre-defines) the pre-defined value. For example, a user can choose a particular value on a selected page of a mortgage or deed of trust as a sample of a mortgage information number (MIN). Weights are then assigned to words in anchor blocks on the same page as the pre-defined value. The weights are assigned based on the location of each anchor block relative to the pre-defined value. In at least one embodiment, all words in an anchor block are given the same weight.

In FIG. 3, the value block represents the particular anchor block containing the pre-defined value. Words in the value block are assigned a high weight, such as 1.0. Block 1 and Block 4 are spatially close to the value block in the horizontal and vertical directions. Words in Block 1 and Block 4 are assigned a high weight, such as 1.0. Block 2, Block 5, and Block 7 are spatially close to the value block but farther from the value block than Block 1 and Block 4. Words in Block 2, Block 5, and Block 7 are assigned a moderately high weight, such as 0.8. Block 3, Block 6, and Block 8 are farther from the value block than Block 2, Block 5, and Block 7, and words in Block 3, Block 6, and Block 8 are assigned a lower weight than the words in Block 2, Block 5, and Block 7. In this example, the words in Block 3, Block 6, and Block 8 are assigned a weight such as 0.25. Block 9, Block 10, and Block 11 are spatially close to the value block in the horizontal and vertical directions. But it was inventively realized that, in language scripts read left-to-right, a block positioned to the left of a value block is more likely to be related to that value block than a block positioned to the right of a value block. Because Block 9, Block 10, and Block 11 are positioned to the right of the value block, they are assigned a lower weight than the blocks positioned directly above or to the left of the value block. In this example, Block 9, Block 10, and Block 11 are assigned a weight such as 0.125. It should be understood that the numerical weights discussed with reference to FIG. 3 are non-limiting examples. Other suitable numerical weights are within the scope of the invention.
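The training-phase weighting can be sketched as a simple function of an anchor block's offset from the value block. The cut-offs below merely echo the illustrative FIG. 3 weights; the grid-step distance measure and thresholds are assumptions for illustration, not fixed parameters of the disclosure.

    def anchor_weight(dx, dy):
        """Weight an anchor block by its offset from the value block, measured
        in block-grid steps (dx > 0 means the anchor sits to the right of the
        value block in a left-to-right script)."""
        if dx > 0:
            return 0.125                  # right of the value block: least informative
        distance = abs(dx) + abs(dy)      # simple grid distance
        if distance <= 1:
            return 1.0                    # the value block itself, or adjacent left/above
        if distance == 2:
            return 0.8
        return 0.25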
For every pre-defined value, the system learns a set \(X_s\), as defined below:

\[ X_s = \{(A_1, C_1), (A_2, C_2), (A_3, C_3), \ldots, (A_M, C_M)\} \]

where M represents the total number of anchor blocks for the set s, \(A_1\) represents the first anchor block in the set, \(C_1\) represents the weight for the first anchor block \(A_1\), \(A_2\) represents the second anchor block in the set, \(C_2\) represents the weight for the second anchor block \(A_2\), \(A_3\) represents the third anchor block in the set, \(C_3\) represents the weight for the third anchor block \(A_3\), \(A_M\) represents the last anchor block in the set, and \(C_M\) represents the weight for the last anchor block \(A_M\).

2.1.1.2 Testing Phase

In the testing phase, the two-dimensional spatial relationship between a subject contender value and the anchor blocks on the subject page is determined. The system first identifies contender values on a page. For each contender value of a pre-defined value, the system identifies anchor blocks near that contender value in the same manner described above with respect to the training phase and prepares a set \(Y_i\) defined below:

\[ Y_i = \{(A_1, C'_1), (A_2, C'_2), (A_3, C'_3), \ldots, (A_N, C'_N)\} \]

where N represents the total number of anchor blocks for this contender value i, \(A_1\) represents the first anchor block near the contender value, \(C'_1\) represents the weight for the first anchor block \(A_1\), \(A_2\) represents the second anchor block in the set, \(C'_2\) represents the weight for the second anchor block \(A_2\), \(A_3\) represents the third anchor block in the set, \(C'_3\) represents the weight for the third anchor block \(A_3\), \(A_N\) represents the last anchor block in the set, and \(C'_N\) represents the weight for the last anchor block \(A_N\).

The system compares set \(Y_i\) with training set \(X_s\). In certain embodiments, all anchor blocks from the training phase that are missing in the testing phase are given zero weight. A final confidence score in the range of 0 to 1 is calculated as follows. First, based on the inventive realization that it is undesirable to give an unusually high score to values having few anchors, an anchor count threshold K is defined to divide the process. In certain embodiments, K is equal to 5. When N ≥ K in the testing phase, anchor blocks identified in the testing phase are weighed against the training phase, with the consideration that, as the training set grows, confidence should not be lowered to a great extent. Thus,

\[ \text{confidence of value for index field} = \frac{\sum_{j=1}^{N} x_j}{N}, \qquad x_j = \frac{\min(C_j, C'_j)}{\max(C_j, C'_j)}, \]

where \(C_j\) represents the weight for the j-th anchor block in the training phase and \(C'_j\) represents the weight for the j-th anchor block in the testing phase. When N < K in the testing phase,

\[ \text{confidence of value for index field} = \frac{\sum_{j=1}^{N} x_j}{\min\left(N + \tfrac{1}{2}(K - N),\; L\right)}, \qquad x_j = \frac{\min(C_j, C'_j)}{\max(C_j, C'_j)}, \]

where \(C_j\) represents the weight for the j-th anchor block in the training phase, \(C'_j\) represents the weight for the j-th anchor block in the testing phase, and L represents the total number of anchor blocks learned in the training phase.

The foregoing algorithm provides an example method for numerically determining a first confidence that the subject contender value is associated with the pre-defined value based at least in part on a comparison of a calculated two-dimensional spatial relationship between the subject contender value and the anchor blocks on the subject page with the pre-defined two-dimensional spatial relationship between the pre-defined value and the words in the pre-selected page.
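Expressed in Python, the two-branch calculation can be sketched as below, with the training set \(X_s\) and testing set \(Y_i\) represented as anchor-to-weight dictionaries; anchors absent from the training phase contribute zero, as stated above. The dictionary representation is an assumption for illustration.

    K = 5  # anchor-count threshold from the text

    def location_confidence(train, test):
        """train: {anchor: C_j} learned for the pre-defined value.
        test: {anchor: C'_j} observed around the contender value."""
        if not train or not test:
            return 0.0
        # Anchors missing from training would give x_j = 0, so only matches matter.
        x = [min(train[a], test[a]) / max(train[a], test[a])
             for a in test if a in train]
        n, L = len(test), len(train)
        if n >= K:
            return sum(x) / n
        return sum(x) / min(n + 0.5 * (K - n), L)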
More specifically, in the method, the one or more hardware computer processors can be configured to execute the one or more software modules in order to cause the computer system to, for each of the anchor blocks comprising a contender value: assign a first anchor block weight to all words in the subject anchor block; assign a second anchor block weight to all words in a second anchor block above and immediately adjacent to the subject anchor block, such that there are no anchor blocks between the second anchor block and the subject anchor block in the vertical direction; assign a third anchor block weight to all words in a third anchor block to the left of and immediately adjacent to the subject anchor block, such that there are no anchor blocks between the third anchor block and the subject anchor block in the horizontal direction; and assign various other anchor block weights, lower than the first anchor block weight, the second anchor block weight, and the third anchor block weight, to the remaining anchor blocks, each based on a respective two-dimensional spatial distance to the subject anchor block. Assigning the various other anchor block weights to the remaining anchor blocks can comprise assigning lower anchor block weights to anchor blocks located to the right of the value block than to anchor blocks located an equivalent two-dimensional spatial distance to the left of the value block.

2.1.2 Anchor Imprecision

Certain embodiments include the inventive realization that there may be some words in anchor blocks which are misread during OCR, and hence certain characters may not match between the training and evaluation phases. As explained above, in certain embodiments, all anchor blocks from the training phase that are missing in the testing phase are given zero weight. To avoid the potentially undesirable result that an anchor block is given zero weight during the testing phase solely because of a misreading during OCR, the system can allow for imprecision in the matching of anchors. Thus, the system can compensate for typographical differences between words in the anchor blocks on the subject page and the words in the pre-selected page not exceeding a numerical threshold. For example, the system may recognize words as a match when they have greater than or equal to 70% of the same characters.

2.1.3 Root-Stem

Root-stems of words in anchor blocks were discovered to decrease highly coupled dependence on exact word matching during the training and evaluation phases. As used herein, the term “root-stem” refers to a part of a word to which affixes can be attached. The root-stem is common to all inflected variants. Consider, for example, “Borrower Name,” “Name of Borrower,” “Borrowing Party,” and “Borrower.” “Borrow” is the root-stem for “Borrower” and “Borrowing” in each of these phrases. As explained above, in certain embodiments, all anchor blocks from the training phase that are missing in the testing phase are given zero weight. To avoid the potentially undesirable result that an anchor block is given zero weight during the testing phase solely because two phrases use different variants of the same root-stem, the system can incorporate root-stem matching while looking for words in anchor blocks near contender values and pre-defined values. Thus, in certain embodiments, a numerical confidence can be based at least in part on a compensation for root-stem associations between words in the anchor blocks on the subject page and the words in the pre-selected page.
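Both tolerances can be sketched together in Python: a 70% character-similarity test for OCR imprecision, and a crude suffix-stripping stand-in for a real stemmer (such as the Porter stemmer). The suffix list and the length guard are assumptions for illustration only.

    import difflib

    def chars_match(a, b, threshold=0.7):
        """True when two anchor words share at least 70% of their characters."""
        return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

    def root_stem(word):
        """Strip common suffixes so 'Borrower' and 'Borrowing' both map to 'borrow'."""
        lowered = word.lower()
        for suffix in ("er's", "ing", "ers", "er", "ed", "s"):
            if lowered.endswith(suffix) and len(lowered) - len(suffix) >= 4:
                return lowered[: -len(suffix)]
        return lowered

    def anchors_match(a, b):
        return chars_match(a, b) or root_stem(a) == root_stem(b)

    print(anchors_match("Borrower", "Borrowing"))  # -> True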
2.1.4 Relative Position of Words

The relative position of a word in an anchor block can be given importance. The position of each word as compared to other words is learned during the training phase, and this knowledge is applied during the evaluation phase. For example, for the anchor block “Borrowing Name,” the system learns that the word “Borrowing” appears before “Name.” Thus, in certain embodiments, a numerical confidence can be based at least in part on relative positions of words in the anchor blocks.

2.1.5 Anchor Quantization

Generally, the words in anchor blocks in a document follow the same convention in terms of font size, font face, and other characteristics. Hence, this information can be used to separate anchors from contender values automatically. This feature can remove or lessen the need to train empty documents to distinguish words in anchor blocks from empty spaces that will eventually be filled by values. Thus, in certain embodiments, grouping the identified words and the identified contender values into anchor blocks is further based on typographical characteristics of the identified words and identified contender values, the typographical characteristics comprising font size and/or font face.

2.1.6 Pre-Determined Value Keywords

It was inventively realized that name words or other keywords associated with a pre-determined value during the training phase can be highly correlated to the words in anchor blocks around a contender value likely associated with the pre-determined value. For example, a pre-determined value “123-45-6789” can be assigned a keyword “SSN” during the training phase. Words in anchor blocks associated with that pre-determined value may be “Borrower,” “Social,” “Security,” and “Number.” During the testing phase, while evaluating a contender value, the system encounters the words “Borrower” and “SSN” in associated anchor blocks. In this example, the word “SSN” is recognized from the pre-defined value keyword defined during the training phase, and therefore the system is able to give more confidence to this contender value as associated with the pre-defined value. Thus, in certain embodiments, a numerical confidence that the subject contender value is associated with the pre-defined value can be based at least in part on a comparison of words in the anchor blocks on the subject page with the one or more keywords associated with the pre-defined value.

2.2 Value Dimensions

Contender values can also be evaluated along with their properties to generate confidence values.

2.2.1 Value Imprecision

A contender value may not exactly match the format of the pre-defined value due to errors during OCR. For example, the format of the pre-defined value may include an “Integer” field type designation. The actual value of a contender value being evaluated is “69010.” Due to an error during OCR, the recorded value of that contender value is “690J0.” It would be undesirable to ignore or give little weight to the recorded value because of the error; it is instead advantageous to consider the recorded value despite the type mismatch, because the imprecision is within limits. Thus, in certain embodiments, a numerical confidence can be based at least in part on a compensation for typographical differences between the subject contender value and the pre-defined value not exceeding a numerical threshold. For example, the system may recognize a contender value when it has a format match of 70% or greater.
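Such a format-tolerant comparison can be sketched in Python as below, using an assumed mini-format language in which “9” stands for any digit and “A” for any letter; the 70% threshold matches the example above. This is illustrative only, not the disclosure's format notation.

    def format_similarity(value, pattern):
        """Fraction of positions whose character class matches the pattern
        ('9' = any digit, 'A' = any letter, anything else matches literally)."""
        if len(value) != len(pattern):
            return 0.0
        def ok(ch, p):
            return (p == "9" and ch.isdigit()) or (p == "A" and ch.isalpha()) or ch == p
        return sum(ok(c, p) for c, p in zip(value, pattern)) / len(pattern)

    # OCR read "690J0" for the integer "69010": 4 of 5 positions still match
    # the "99999" pattern (0.8 >= 0.7), so the value is not discarded.
    print(format_similarity("690J0", "99999"))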
2.2.2 Type Hierarchy

A hierarchy of a field type of the pre-determined value can be learned during the training phase. For example, a field type “USA ZIP” can be part of an example hierarchy, as shown in FIG. 4. If the pre-determined value is associated with the field type “USA ZIP” during training, and the contender value under consideration is not a USA ZIP but a UK ZIP, less confidence is given to that contender value. Conversely, if the hierarchies of the field types of the pre-determined value and a subject contender value match exactly, then more confidence that the subject contender value is positively associated with the pre-determined value is given. In general, a numerical confidence that the subject contender value is associated with the pre-defined value can be based at least in part on a comparison of a format of the contender value with the certain format of the pre-defined value.

2.2.3 Value Quantization

It was discovered that similar contender values in a document tend to follow the same convention in terms of font size, font face, and other characteristics. Certain words which are of a specific type, such as numbers (whole, 69010, or character-delimited, 123-45-6789), dates (01/12/2001), and so forth, are likely to be contender values. The properties of these words can be used to identify the pattern being followed by most other values on a page. Thus, in certain embodiments, the post-OCR processing is configured to identify a contender value based on formatting including one or more of numerical formatting, date formatting, and delimiting character formatting.

2.2.4 Page Zone

This dimension takes into consideration the zone inside a page in which a subject contender value and a pre-defined value appear. The page is divided into the following nine zones.

TABLE 1. Page Zones

Top Left | Top Center | Top Right
Middle Left | Middle Center | Middle Right
Bottom Left | Bottom Center | Bottom Right

If, during the testing phase, a contender value appears in the same zone that the pre-defined value appeared in during the training phase, a higher confidence can be given to the subject contender value than when a contender value appears in a different zone. Thus, in certain embodiments, the format of the pre-defined value can comprise a location of the pre-defined value in a zone on the pre-selected page, and a numerical confidence that a subject contender value is associated with the pre-defined value can be based at least in part on a location of the subject contender value being in the zone. (A sketch of the zone mapping follows this section.)
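Mapping a value's coordinates to one of the nine zones of Table 1 is straightforward; the Python sketch below assumes page coordinates with the origin at the top-left corner, which is an assumption of this illustration.

    def page_zone(x, y, page_width, page_height):
        """Return the Table 1 zone containing the point (x, y)."""
        cols = ("Left", "Center", "Right")
        rows = ("Top", "Middle", "Bottom")
        col = cols[min(int(3 * x / page_width), 2)]
        row = rows[min(int(3 * y / page_height), 2)]
        return f"{row} {col}"

    print(page_zone(100, 700, 612, 792))  # -> "Bottom Left" on a US Letter page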
2.2.5 Page Number

Page number also can be taken into consideration while assessing contender values. If a contender value appears on the same page number within a document that the pre-defined value appeared on during the training phase, the contender value can be given higher confidence in this dimension. Thus, in certain embodiments, a numerical confidence that the subject contender value is associated with the pre-defined value can be based at least in part on a page number of the compilation.

2.2.6 Fixed Value Location

For documents with fixed text block areas for values, the system can learn the exact coordinates of rectangular areas during training. This feature is particularly useful for structured documents where data of interest repeatedly appears at a certain location. The words appearing inside the defined area will be preferred over the rest of the words in the document. Thus, in certain embodiments, the electronically stored documents can further comprise one or more structured document(s), and a numerical confidence that the subject contender value is associated with the pre-defined value can be based at least in part on the subject contender value's position in a pre-defined location.

2.3 Use-Case Specific Dimensions

The following three dimensions are use-case specific and solve a niche area of extraction: ZIP code location, ZIP code dictionary, and geo-location.

2.3.1 ZIP Code Location

This dimension is particularly useful for extracting ZIP codes from within a block comprising an address. It was realized that a ZIP code ordinarily appears after a city and a state in an address. Based on this realization, the system can use the information that the ZIP code should appear after the city and state inside an address to assign relevant weights and/or confidence for a contender value. For example, in FIG. 5, the system would give more weight to the actual ZIP code “92653” than to the street address “23041,” using the fact that the former is at the expected place inside the address. Thus, in certain embodiments, when the pre-defined value is a ZIP code, a numerical confidence that the subject contender value is associated with the pre-defined value can be based at least in part on an evaluation of a position of the subject contender value within its associated anchor block.

2.3.2 ZIP Code Dictionary

This dimension is also particularly useful for extracting ZIP codes. Embodiments comprising this dimension can incorporate a locally or remotely stored dictionary of all valid ZIP codes in a country-specific manner. For example, only the five-digit numbers that are valid US ZIP codes as per the dictionary will be considered for this dimension. Thus, in certain embodiments, when the pre-defined value is a ZIP code, a numerical confidence that the subject contender value is associated with the pre-defined value is based at least in part on a comparison of the subject contender value to valid ZIP codes.

2.3.3 Geo-Location

Fields related to location, like addresses and ZIP codes, can be validated against one or more locally or remotely stored geolocation libraries. This would serve as yet another dimension supporting a conclusion that a subject contender value is positively associated with a pre-defined value. Thus, in certain embodiments, when the pre-defined value is an address or a portion thereof, the system can further comprise a network connection configured to access a geolocation library. A numerical confidence that the subject contender value is associated with the pre-defined value can be based at least in part on a validation of the subject contender value against the geolocation library.
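The two ZIP code dimensions can be sketched together in Python: a position score favoring tokens late in the address block, and a dictionary check against valid codes. The tiny dictionary, the equal weighting, and the sample address (a hypothetical string echoing the numbers from FIG. 5) are illustrative assumptions only.

    import re

    VALID_US_ZIPS = {"92653", "10001"}  # stand-in for a full country-specific dictionary

    def zip_candidates(address_block):
        """Score each five-digit token in an address block."""
        tokens = address_block.split()
        scored = []
        for i, tok in enumerate(tokens):
            if re.fullmatch(r"\d{5}", tok):
                position = (i + 1) / len(tokens)           # later tokens score higher
                in_dictionary = 1.0 if tok in VALID_US_ZIPS else 0.0
                scored.append((tok, 0.5 * position + 0.5 * in_dictionary))
        return scored

    # The trailing ZIP "92653" outscores the leading street number "23041".
    print(zip_candidates("23041 Hypothetical St Laguna Hills CA 92653"))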
2.4 Additional Comments on Dimensions

The system will compute at least one, more advantageously several, and in some embodiments all, of the foregoing dimensions to conclude whether a contender value should be positively identified as associated with the pre-defined value itself. Furthermore, as discussed herein, the system is also able to adjust which dimension has more weighting and which has less based on the samples (pre-determined values) users provide. Thus, each confidence can be associated with a distinct dimension (and vice versa), and each dimension can be associated with a distinct weight. The system can adjust the weight assigned to each dimension based on the pre-determined value when extracting positive contender values as positively associated with the pre-defined value.

Example Implementation Mechanisms

FIG. 6 illustrates an example system architecture according to at least one embodiment. FIG. 7 illustrates an example system technology stack according to at least one embodiment. The techniques described herein are implemented through special processing capabilities on the back-end. The system can be built on an Apache™ Hadoop® platform. The Hadoop® platform is advantageous because it enables multiple off-the-shelf PCs to be connected such that they perform like a single supercomputer, providing powerful CPU functionality at a lower cost than a supercomputer. An example Hadoop® cluster is shown in FIG. 8. The cluster includes rack servers populated in racks (Rack 1, Rack 2, Rack 3, Rack 4, and Rack N), each connected to a top-of-rack switch 801, 803, 805, 807, 809, usually with 1 or 2 GE bonded links. The rack switches 801, 803, 805, 807, 809 have uplinks connected to another tier of switches 811, 813 connecting all the other racks with uniform bandwidth, forming the cluster. The majority of the servers will be Slave nodes with local disk storage and moderate amounts of CPU and DRAM. Some of the machines will be Master nodes that might have a slightly different configuration favoring more DRAM and CPU, and less local storage. The Hadoop® platform is desirable not only for handling the large volumes of documents that the system is contemplated to process, but also for powering the recognition algorithms described above. It should be understood, however, that although embodiments disclosed herein use the Hadoop® framework as a representative example, embodiments are not limited to the Hadoop® framework. Rather, it is broadly contemplated that embodiments can be extended to all types of distributed file systems, known or unknown.

The system can additionally leverage the Apache™ Spark™ platform, an open source technology that accelerates data processing by loading data into memory instead of writing from the clustered servers' disks in the Hadoop® distributed file system (the approach used by MapReduce, the primary processing engine used by Hadoop®). The efficiency of the Spark™ framework comes from optimizing processing jobs by writing output to resilient distributed datasets (RDDs). The system disclosed herein takes the clustered computing of the Spark™ framework and uses it to run MLlib, the Spark™ platform's scalable machine learning library, to perform iterative computations that produce more accurate results while enabling the disclosed system to process document volumes at a pace almost 100 times faster than those observed with MapReduce. It was discovered that the combination of high volume and velocity allows the disclosed system to identify content faster and more accurately.

One or more databases may be used or referred to by one or more embodiments of the invention. It should be understood that such databases may be arranged in a wide variety of architectures and using a wide variety of data access and manipulation means. For example, in various embodiments, one or more databases may comprise a relational database system using a structured query language (SQL), while others may comprise an alternative data storage technology such as those referred to in the art as “NoSQL.” A NoSQL database provides a mechanism for storage and retrieval of data that is modeled by means other than the tabular relations used in relational databases.
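As a rough illustration of the Spark™-style clustered processing described above, the following PySpark sketch distributes per-page candidate scoring across a cluster and keeps the best contender per field. The toy records stand in for the real page loader and dimension-scoring logic; this is an assumed workflow, not the system's actual code.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("field-extraction").getOrCreate()

    # Toy stand-ins for real pages; each page yields (field, (confidence, value)).
    pages = [
        {"page": 1, "candidates": [("SSN", (0.95, "123-45-6789"))]},
        {"page": 2, "candidates": [("SSN", (0.40, "690J0"))]},
    ]

    rdd = spark.sparkContext.parallelize(pages)
    best = rdd.flatMap(lambda p: p["candidates"]).reduceByKey(max).collect()
    print(best)  # [('SSN', (0.95, '123-45-6789'))]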
A MongoDB (NoSQL-type) database (MongoDB Inc., New York City, NY) was discovered to be particularly advantageous for the disclosed system, owing to its simplicity and feasibility for this application. MongoDB is characterized by a number of potential advantages, including scalability, open source architecture, a NoSQL database structure, document-oriented storage (JSON-style document storage), quick retrieval of data, easy replication, rich queries, on-the-fly indexes that can be created with a single command, and a flexible key-value document structure. It should be understood, however, that variant database architectures such as column-oriented databases, in-memory databases, clustered databases, distributed databases, or even flat file data repositories may be used. It may be appreciated by one having ordinary skill in the art that any combination of known or future database technologies may be used as appropriate, unless a specific database technology or a specific arrangement of components is specified for a particular embodiment herein. Moreover, it should be appreciated that the term “database” as used herein may refer to a physical database machine, a cluster of machines acting as a single database system, or a logical database within an overall database management system. Unless a specific meaning is specified for a given use of the term “database,” it should be construed to mean any of these senses of the word, all of which are understood as a plain meaning of the term “database” by those having ordinary skill in the art.

The disclosed system is designed to work with RESTful APIs that make integration into third-party document stores and repositories fairly straightforward. One example is an Elasticsearch database (Elasticsearch BV, Amsterdam, Netherlands), which exposes a RESTful API and can be incorporated in certain embodiments. An Elasticsearch database allows for searching across all the data, all the columns and rows, and provides fast access to desired data. This integration, along with the Hadoop® platform, can be set up ahead of time with the help of a trained services team. After that, executing the system is in the hands of end-users, such as business analysts, who do not have specialized training.
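Persisting extracted values in the MongoDB store described above can be sketched with the pymongo driver as follows; the connection string, database, and field names are assumptions for illustration only.

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")   # assumed local instance
    fields = client["extraction"]["fields"]

    # One record per positively associated contender value.
    fields.insert_one({"document_id": "loan-0001", "field": "SSN",
                       "value": "123-45-6789", "confidence": 0.95})

    # An index created with a single command, as noted above.
    fields.create_index([("field", 1), ("value", 1)])

    for hit in fields.find({"field": "SSN"}):
        print(hit["document_id"], hit["value"])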
Document Preparation and Analytics

At least one embodiment comprises a user interface, such as a multi-step user interface. For example, the system can comprise a six-step user interface for interacting with the user to gather user feedback. In general, the first three interface screens (FIGS. 9, 10, and 11) are for classification of documents in preparation for data extraction. The last three screens (FIGS. 12, 13, and 14) are implemented to acquire user feedback on fields to be extracted. The disclosed system can classify multiple document types in preparation for data extraction. Multiple data characteristics for each document can be extracted and made available for analytics. The system, for example, understands that “Shell,” when used in context with the terms “oil” or “gasoline,” is referring to the energy corporation and not to the beach. These algorithms make document classification and data extraction simple and straightforward for the end-user.

FIG. 11 shows a user what the system did with the initial knowledge. With this interface, the user can provide feedback by dragging and dropping files to different categories. This feedback improves the algorithm with every new classification. In FIG. 12, the user labels what is important for the system to capture. During this process, the user picks something to be extracted. For example, the user could define a loan amount as important information. The user clicks around various pages of a document or compilation and labels fields relevant to the loan amount. Multiple data characteristics from each document can be selected and made available for eventual extraction and analytics. FIG. 13 illustrates how a page will appear after it has been set up by a user, with the fields to be captured being highlighted on the document. In the interface of FIG. 14, the system asks the user to give feedback on what the system extracted with the initial knowledge. As the user gives more feedback on the screens, the system readjusts and fine-tunes its algorithms, features, and the importance of those features.

After the system has been trained, the system will process all relevant documents, potentially millions of documents, to identify the learned field. The user has to train the system that a particular field is important; otherwise the system will not necessarily recognize that field as important. In the event the user does not properly define a field, the user will have to go back, set it up, and reprocess. It is important to note, however, that while the system must be trained to learn to identify important information, only a small training set is required in order to do so.

In various embodiments, the system can include a user interface for displaying information resembling the form of a mind map to a user and a control circuit that controls the displayed content of the mind map. As used herein, a mind map refers to a diagram used to represent words, ideas, tasks, or other items linked to and arranged radially around a central key word or idea. As shown in FIGS. 20 and 21, such mind maps can be used to generate, visualize, structure, and classify ideas, and as an aid in study, organization, problem solving, decision making, and writing. The elements of a given mind map are arranged intuitively according to the importance of the concepts, and are classified into groupings, branches, or areas, with the goal of representing semantic or other connections between portions of information. It was inventively recognized that, by presenting ideas in a non-linear manner, mind maps encourage a brainstorming approach to planning and organizational tasks. Though the branches of a mind map represent hierarchical tree structures, their radial arrangement disrupts the prioritizing of concepts typically associated with hierarchies presented with more linear visual cues. The disclosed techniques of data extraction and association build on the use of mind maps to facilitate display of important information.

Thus, according to at least one aspect of the disclosure, the system includes a browsing tool with modified mind map functionality. A user can, for example, choose to analyze five (or any number of) documents. From these five documents, the user can extract names, SSNs, and ZIP codes. The user interface would display these labels. The user could click on the label SSN shown on the user interface, and all identified SSNs (for example, matching the structure but not necessarily the exact number of a pre-determined SSN) would be displayed. It should be understood that such SSNs were identified using the algorithms described herein. The user could then click on a specific SSN from the displayed identified SSNs. The user interface would then show a mind map view of all connected fields.
For example, if someone used two different names with the same SSN, the user could easily see the discrepancy. A mind map style display is particularly advantageous because one field, in this example the SSN, can be connected to all other fields extracted along with it. Continuing this example, the SSN is in the center and all connected information is around it like a mind map. But when the user clicks on a connected field, in this example a name, or a connected document, the user interface would change, put the name in the center, and start to show all connected fields and documents from which this information comes. In other words, the center of the mind map changes based on the selected data. Because the center of the mind map and the fields that are connected to it will constantly be changing as the user clicks on them, the user interface will always show the selected datum in the middle and connected data around it. For this reason, the functionality of this user interface is significantly different from currently existing mind maps. Everything the user clicks will be centered, and connected data will be automatically readjusted. Although the user interface may resemble a mind map when first opened, the way the user interface operates and organizes data is specifically related to data captured from documents.

Such mind map functionality can be useful, for example, in fraud investigation and missing document identification, among other things. In fraud investigation, an investigator can easily find and visualize when the same SSN or address is being fraudulently used in other documents across millions of documents. In missing document identification, a user can find a page or document misfiled with the wrong document or in the wrong folder. With a user interface with mind map functionality, the user could quickly explore documents from the data extracted. Stated another way, the user can find documents based on data, rather than data based on documents.

Thus, in certain embodiments, an electronic device comprises a display for displaying contender values that have been positively associated with a pre-defined value from a compilation of one or more electronically stored documents in the form of a mind map to a user. The electronic device can further comprise a control circuit that controls the displayed content of the mind map, the control circuit configured to: receive a starting field input from the user and associate the starting field input with a center of the mind map, analyze the starting field input to establish branches of additional data from fields connected with the starting field input, and receive a selection from the additional data and re-associate the selected additional data with the center of the mind map. And in certain embodiments, an electronic device comprises a display for displaying contender values that have been positively associated with a pre-defined value from a compilation of one or more electronically stored documents in the form of a mind map to a user. The electronic device can further comprise a control circuit that controls the displayed content of the mind map, the control circuit configured to: receive a starting field input from the user and associate the starting field input with a center of the mind map, analyze the starting field input to establish branches of documents from fields connected with the starting field input, receive a selection from the documents, and display the selected document.
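The recentering behavior can be sketched as a small graph structure in Python; clicking a node simply requests a new view centered on that node. The class and method names are illustrative assumptions, not part of the disclosure.

    from collections import defaultdict

    class MindMapView:
        """Recenterable view over fields and documents extracted from a compilation."""

        def __init__(self):
            self.edges = defaultdict(set)

        def connect(self, a, b):
            self.edges[a].add(b)
            self.edges[b].add(a)

        def view(self, center):
            """Center the map on one node; a click on any branch calls view() again."""
            return {"center": center, "branches": sorted(self.edges[center])}

    m = MindMapView()
    m.connect("SSN:123-45-6789", "Name:John Doe")
    m.connect("SSN:123-45-6789", "Name:J. Smith")   # two names, one SSN: a visible discrepancy
    m.connect("Name:John Doe", "Doc:loan-0001.pdf")
    print(m.view("SSN:123-45-6789"))
    print(m.view("Name:John Doe"))                  # recentered after a click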
It should be understood that such embodiments and user interfaces for displaying positively associated contender values can be incorporated into any other embodiments described herein.

Machine Learning

In at least one embodiment, the system uses machine learning techniques to positively associate contender values with a pre-defined value. Machine learning comprises at least two phases: training and evaluation. During the training phase, a corpus of training data is used to derive a model. The corpus comprises one or more vectors and a disposition relating to a contender value. It is important to note that any single vector might not yield any conclusive evidence over whether a contender value is positively associated with a pre-defined value, but examining a plurality of such vectors could provide conclusive evidence. Thus, it is desirable that the model include data for a plurality of the above-described vectors. It is desirable for the data inputted to the machine learning to be representative of the real world scenarios in which the machine learning techniques will ultimately be applied. Thus, as discussed above, the data used to derive the model can be taken directly from actual compilations. The model also takes as input a disposition determined by a human analyst that can positively associate a contender value with a pre-defined value. The human analyst reviews the vectors, makes a determination regarding the contender value, and enters the disposition into the machine learning algorithm along with the vectors. It is desirable to have fewer unknown samples, though at the same time it is understood in the art that conclusively resolved contender value dispositions can be difficult and expensive to obtain.

Next, a machine learning method is applied to the corpus. The methods by which training can be done include, but are not limited to, Support Vector Machines, Neural Networks, Decision Trees, Naïve Bayes, Logistic Regression, and other techniques from supervised, semi-supervised, and unsupervised training. The training or “model-derivation” may be practiced with any of the above techniques so long as they can yield a method for associating contender values with a pre-defined value. The corpus need not be analyzed in one batch. Machine learning can be refined over time by inputting additional vectors and associated dispositions. Suitable program instructions stored on a non-transitory computer readable storage medium are executed by a computer processor in order to cause the computing system to store the resulting model to a server or other appropriate storage location.

Once the training is sufficient and a model is derived, the model can be used to automatically evaluate new instances of contender values that are presented to the computer or computer network in practice. In this regard, there is a second evaluation phase, wherein the model is applied to the vectors to determine whether a contender value is likely associated with a pre-defined value. The server can output a disposition based on the model. The output can be a binary classification (associated or not associated). Advantageously, however, the output is a score that represents the likelihood of or confidence in this distinction, such as a score from 0 to 1, where 0 represents an overwhelming likelihood that the contender value is not associated with the pre-defined value and 1 represents an overwhelming likelihood that the contender value is associated with the pre-defined value. As another example, the output might be an encoding of the form (“associated”, 0.95), which can be taken to mean that the model believes that a contender value has a 95% chance of being associated with the pre-defined value.
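A minimal sketch of the training and evaluation phases follows, assuming scikit-learn and logistic regression (one of the techniques listed above): each row is a vector of per-dimension confidences, the label is the analyst's disposition, and the toy numbers are invented for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Training corpus: per-dimension confidences (e.g., location, keyword,
    # zone, page, format) with analyst dispositions (1 = associated).
    X_train = np.array([[0.9, 0.8, 1.0, 1.0, 0.9],
                        [0.2, 0.1, 0.0, 0.0, 0.6],
                        [0.7, 0.9, 1.0, 0.0, 0.8],
                        [0.1, 0.0, 0.0, 1.0, 0.3]])
    y_train = np.array([1, 0, 1, 0])

    model = LogisticRegression().fit(X_train, y_train)

    # Evaluation phase: score a new contender value's vector.
    p = model.predict_proba([[0.8, 0.7, 1.0, 1.0, 0.9]])[0, 1]
    print(("associated" if p >= 0.5 else "not associated", round(p, 2)))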
Multi-User Environment

In at least one embodiment, the system allows each user in an organization to look at the same repository or repositories, or the same compilation or compilations, but come to different conclusions about the data therein. For example, an employee in a company's marketing department can look at a compilation from a marketing perspective, utilize the system to process years of mortgage applications, and, with the results, devise new marketing promotions that will address the company's consumers. But an employee in the same company's fraud department might want to look at the same documents to find fraud. The system allows every distinct user to mine the same set of documents differently.

As an example, an analyst at a mortgage lender may be given the task of preparing a report to help reduce the risk of loans being issued. The mortgage company may have millions of loans on file that could provide valuable data for this task, but with each loan file containing several hundred pages, manually examining them would be out of the question. The analyst's first task may be determining which files contain loans that are in default, indicated when the file contains some sort of default notice. Providing the system with a few samples of these notices would enable it to go through and locate which files contain similar notices. Once this has been accomplished, the analyst can separate the loan files into good and defaulted and begin mining them for data and looking for trends. Data that might be helpful could include the average income of the person or persons the loans were issued to, the number of people in the household, the assessed value of the properties, the geographic region of the property, the year a house was built, and so forth. Assuming this information is contained somewhere in the hundreds of pages associated with a loan file and the analyst thinks it might be useful, the disclosed system can find it and extract it.

To find average income data, for example, the analyst could submit some samples of W-2s, 1099s, and other tax forms to the disclosed system, which can then identify similar forms. On each sample, the analyst could also highlight the field where the income total is located, and the disclosed system can locate the totals in a high percentage of the tax forms within the loan files. The disclosed system typically requires only a small number of samples before it can start classifying documents and extracting data. The process of submitting the samples, running the classifier, highlighting the desired fields, and running the extractor typically takes only a few minutes due to the intuitive interface and desirable processing power of the Hadoop® platform. After the desired data is extracted, it is output into an analytics tool that is optionally included in certain embodiments of the disclosed system. In the mortgage loan example, the data could be used to produce two tables, one for defaulted loans and one for good loans. Each table could contain a column for each loan and a row for each piece of data. These data sets can also be used to produce graphs.
A graph could help the analyst determine where the greatest and least risk lies in issuing a mortgage loan related to factors like income, value of the property, number of people in the household, and so forth. Other data visualizations are shown in FIGS. 15-19. FIG. 15 illustrates that data points from millions of document sets can be incorporated in reports that can be easily visualized. FIG. 16 shows that data visualizations can be configured to graphically represent changing market conditions in geographical and time-period context. FIG. 17 shows that multiple visualizations can be combined on a single dashboard user interface. The tools also enable the analyst to make projections about the future, based on past results. For example, if the analyst wants to project the effect an upcoming plant closing in a large city will have on mortgage defaults, the analyst can examine results from cities where similar events occurred in the past. FIG. 18 shows that data can be graphed and modeled to create predictive forecasts. FIG. 19 shows another aspect of the analytics tool. As shown here, datasets can be represented in “heatmaps,” allowing users to identify areas of interest or concern and drill down for more specific information.

Fraud prevention is another potential use case for the disclosed system. A security analyst could set up the system to find all Social Security numbers on loan applications and then look for any duplicates. If a particular ID number was used multiple times, it could alert the analyst to possible fraud. Another potential use case is searching across a company's expense reports and receipts to determine which vendors an organization is spending the most money with. This information could be used to negotiate better discounts. While organizations in industries like financial services, insurance, government, healthcare, energy, and transportation, where paper documents are an important part of transactions, are going to have a clear need for the disclosed system, it is contemplated that the system can also be valuable across industries for mining documents like HR forms, invoices, contracts, and other types of legal documents.

Additional Implementation Mechanisms

In general, the foregoing computing system can include one or more computer readable storage devices, one or more software modules including computer executable instructions, a network connection, and one or more hardware computer processors in communication with the one or more computer readable storage devices. According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices. The special-purpose computing devices can be hard-wired to perform the techniques, or can include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or can include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices can also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques.
The special-purpose computing devices can be desktop computer systems, server computer systems, portable computer systems, handheld devices, networking devices, or any other device or combination of devices that incorporate hard-wired and/or program logic to implement the techniques. Computing device(s) are generally controlled and coordinated by operating system software, such as iOS, Android, Chrome OS, Windows XP, Windows Vista, Windows 7, Windows 8, Windows Server, Windows CE, Unix, Linux, SunOS, Solaris, Blackberry OS, VxWorks, or other compatible operating systems. In other embodiments, the computing device can be controlled by a proprietary operating system. Conventional operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, and I/O services, and provide user interface functionality, such as a graphical user interface (“GUI”), among other things.

For example, FIG. 22 illustrates a block diagram of a computer system 2000 upon which various embodiments can be implemented. For example, any of the computing devices discussed herein can include some or all of the components and/or functionality of the computer system 2000. Computer system 2000 includes a bus 2002 or other communication mechanism for communicating information, and a hardware processor, or multiple processors, 2004 coupled with bus 2002 for processing information. Hardware processor(s) 2004 can be, for example, one or more general purpose microprocessors.

Computer system 2000 also includes a main memory 2006, such as a random access memory (RAM), cache, and/or other dynamic storage devices, coupled to bus 2002 for storing information and instructions to be executed by processor 2004. Main memory 2006 also can be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 2004. Such instructions, when stored in storage media accessible to processor 2004, render computer system 2000 into a special-purpose machine that is customized to perform the operations specified in the instructions. Main memory 2006 can also store cached data, such as zoom levels and maximum and minimum sensor values at each zoom level.

Computer system 2000 further includes a read only memory (ROM) 2008 or other static storage device coupled to bus 2002 for storing static information and instructions for processor 2004. A storage device 2010, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to bus 2002 for storing information and instructions. For example, the storage device 2010 can store measurement data obtained from a plurality of sensors.

Computer system 2000 can be coupled via bus 2002 to a display 2012, such as a cathode ray tube (CRT) or LCD display (or touch screen), for displaying information to a computer user. For example, the display 2012 can be used to display any of the user interfaces described herein with respect to FIGS. 3A-3B. An input device 2014, including alphanumeric and other keys, is coupled to bus 2002 for communicating information and command selections to processor 2004. Another type of user input device is cursor control 416, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 2004 and for controlling cursor movement on display 2012.
This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane. In some embodiments, the same direction information and command selections as cursor control can be implemented via receiving touches on a touch screen without a cursor.

Computing system 2000 can include a user interface module to implement a GUI that can be stored in a mass storage device as executable software codes that are executed by the computing device(s). This and other modules can include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.

In general, the word “module,” as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, Lua, C, or C++. A software module can be compiled and linked into an executable program, installed in a dynamic link library, or can be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software modules can be callable from other modules or from themselves, and/or can be invoked in response to detected events or interrupts. Software modules configured for execution on computing devices can be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and can be originally stored in a compressed or installable format that requires installation, decompression, or decryption prior to execution). Such software code can be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions can be embedded in firmware, such as an EPROM. It will be further appreciated that hardware modules can be comprised of connected logic units, such as gates and flip-flops, and/or can be comprised of programmable units, such as programmable gate arrays or processors. The modules or computing device functionality described herein are preferably implemented as software modules, but can be represented in hardware or firmware. Generally, the modules described herein refer to logical modules that can be combined with other modules or divided into sub-modules despite their physical organization or storage.

Computer system 2000 can implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware, and/or program logic which, in combination with the computer system, causes or programs computer system 2000 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 2000 in response to processor(s) 2004 executing one or more sequences of one or more instructions contained in main memory 2006. Such instructions can be read into main memory 2006 from another storage medium, such as storage device 2010. Execution of the sequences of instructions contained in main memory 2006 causes processor(s) 2004 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry can be used in place of or in combination with software instructions.
The term “non-transitory media,” and similar terms, as used herein refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media can comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device2010. Volatile media includes dynamic memory, such as main memory2006. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same. Non-transitory media is distinct from but can be used in conjunction with transmission media. Transmission media participates in transferring information between non-transitory media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus2002. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Various forms of media can be involved in carrying one or more sequences of one or more instructions to processor2004for execution. For example, the instructions can initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system2000can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus2002. Bus2002carries the data to main memory2006, from which processor2004retrieves and executes the instructions. The instructions received by main memory2006can optionally be stored on storage device2010either before or after execution by processor2004. Computer system2000also includes a communication interface2018coupled to bus2002. Communication interface2018provides a two-way data communication coupling to a network link2020that is connected to a local network2022. For example, communication interface2018can be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface2018can be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or WAN component to communicate with a WAN). Wireless links can also be implemented. In any such implementation, communication interface2018sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. Network link2020typically provides data communication through one or more networks to other data devices. For example, network link2020can provide a connection through local network2022to a host computer2024or to data equipment operated by an Internet Service Provider (ISP)2026. 
ISP2026in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet”2028. Local network2022and Internet2028both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link2020and through communication interface2018, which carry the digital data to and from computer system2000, are example forms of transmission media. Computer system2000can send messages and receive data, including program code, through the network(s), network link2020and communication interface2018. In the Internet example, a server2030might transmit a requested code for an application program through Internet2028, ISP2026, local network2022and communication interface2018. The received code can be executed by processor2004as it is received, and/or stored in storage device2010, or other non-volatile storage for later execution. Terminology Each of the processes, methods, and algorithms described in the preceding sections can be embodied in, and fully or partially automated by, code modules executed by one or more computer systems or computer processors comprising computer hardware. The processes and algorithms can be implemented partially or wholly in application-specific circuitry. The various features and processes described above can be used independently of one another, or can be combined in various ways. All possible combinations and sub-combinations are intended to fall within the scope of this disclosure. In addition, certain method or process blocks can be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate. For example, described blocks or states can be performed in an order other than that specifically disclosed, or multiple blocks or states can be combined in a single block or state. The example blocks or states can be performed in serial, in parallel, or in some other manner. Blocks or states can be added to or removed from the disclosed example embodiments. The example systems and components described herein can be configured differently than described. For example, elements can be added to, removed from, or rearranged compared to the disclosed example embodiments. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The term “comprising” as used herein should be given an inclusive rather than exclusive interpretation. For example, a general purpose computer comprising one or more processors should not be interpreted as excluding other computer components, and can possibly include such components as memory, input/output devices, and/or network interfaces, among others. 
The term “a” as used herein should also be given an inclusive rather than exclusive interpretation. For example, unless specifically noted, the term “a” should not be understood to mean “one” or “one and only one”; instead, the term “a” generally means “one or more” in open-ended claims or embodiments when used with language such as “comprising” or “including.” Any process descriptions, elements, or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions can be deleted, executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art. Furthermore, the embodiments illustratively disclosed herein may be suitably practiced in the absence of any element or aspect which is not specifically disclosed herein. It should be emphasized that many variations and modifications can be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure. The foregoing description details certain embodiments of the invention. It will be appreciated, however, that no matter how detailed the foregoing appears in text, the invention can be practiced in many ways. As is also stated above, it should be noted that the use of particular terminology when describing certain features or aspects of the invention should not be taken to imply that the terminology is being re-defined herein to be restricted to including any specific characteristics of the features or aspects of the invention with which that terminology is associated. The scope of the invention should therefore be construed in accordance with the appended claims and any equivalents thereof.
DETAILED DESCRIPTION The various concepts, architectures, methods, and modes of operation described herein are intended as illustrative examples that can be implemented singly or in any suitable combination with one another. Some may be omitted and others included, as suitable for various embodiments. Accordingly, the following description and accompanying Figures merely set forth a subset of the possible embodiments, and are not intended to limit scope. System Architecture According to various embodiments, the system can be implemented on any electronic device or devices equipped to receive, store, and present information. Such electronic devices may be, for example, desktop computers, laptop computers, smartphones, tablet computers, smart watches, wearable devices, or the like. Although the system is primarily described herein in connection with an implementation in a client/server context wherein the client is a computer, smartphone, tablet, or similar device, one skilled in the art will recognize that the techniques described herein can be implemented in other contexts, and indeed in any suitable device capable of receiving and/or processing user input, and/or communicating with other components over an electronic network. Accordingly, the following description is intended to illustrate various embodiments by way of example, rather than to limit scope. Referring now toFIG.1, there is shown a block diagram depicting a system100for implementing the techniques described herein according to one embodiment. As shown inFIG.1, in at least one embodiment, the system is implemented in a client/server environment wherein client device102can exchange communications with any number of e-commerce web servers109via communications network113. In at least one embodiment, server110can also be provided to implement universal cart115, although such functionality is not required in order to implement the techniques described herein. Server110, if provided, receives and responds to requests from client device102. Client device102may be any electronic device equipped to receive, store, and/or present information, and to receive user input in connection with such information, such as a desktop computer, laptop computer, personal digital assistant (PDA), cellular telephone, smartphone, music player, handheld computer, tablet computer, kiosk, game system, smart watch, wearable device, or the like. In at least one embodiment, client device102has a number of hardware components well known to those skilled in the art. Input device(s)103can be any element or elements capable of receiving input from user101, including, for example, a keyboard, mouse, stylus, touch-sensitive screen (touchscreen), touchpad, trackball, accelerometer, five-way switch, microphone, or the like. Input can be provided via any suitable mode, including for example, one or more of: pointing, tapping, typing, dragging, and/or speech. Processor106can be a conventional microprocessor for performing operations on data under the direction of software, according to well-known techniques. Memory105can be random-access memory, having a structure and architecture as are known in the art, for use by processor106in the course of running software. Browser107is an example of a software application that can be used by user101to access and interact with websites over communications network113. 
In at least one embodiment, user101can view and interact with ecommerce web servers109via browser107, for example by clicking on links to view items and to place items in a shopping cart. In other embodiments, any suitable app (software application) or other component can be used in place of browser107. In at least one embodiment, browser107includes plug-in108(or browser extension) which performs certain functions in connection with the system and method described herein. For example, as described in more detail below, in at least one embodiment, plug-in108records requests made during interactions between browser107and web servers109(and/or between browser107and server110). Alternatively, such operations can be performed by another component that need not be a part of browser107. In at least one embodiment, plug-in108or some other software application runs in the background no matter what browser or application user101is running. The background application can thereby see and record any relevant interactions with websites run by web servers109, regardless of which software is being used to perform the interactions. In at least one embodiment, client device102also runs analysis tool117, which is used to interpret and filter the recorded interactions with web pages run by web servers109. Alternatively, analysis tool117can run on another device, such as server110or any other client device communicatively coupled to server110. In at least one embodiment, client device102also runs request identification module118, which is used to review recorded requests and identify those that are necessary to complete the process flow. Alternatively, request identification module118can run on another device, such as server110or any other client device communicatively coupled to server110. In at least one embodiment, client device102also runs automated site navigation module116, which is able to automatically extract information from various websites (such as those run by web servers109) without the need for rendering on a browser. In at least one embodiment, module116operates using information generated by analysis tool117, based on recorded interactions with web pages run by web servers109. In at least one embodiment, automated site navigation module116can function on a device102that also runs browser107; alternatively, module116can be implemented on a separate device that does not run a browser. In at least one embodiment, module116operates without any need for human interaction. As depicted and described herein, components116,117, and118can be implemented as software running on processor106. However, these components need not be implemented as separate modules, and can instead be part of a single software application. Alternatively, some or all of these components can run on devices other than client device102. Alternatively, these components can be implemented as hardware, or they can be omitted, with their functionality assigned to other components. Display screen104can be any element that graphically displays information such as items presented by browser107, user interface elements, and/or the like. Such output may include, for example, descriptions and images depicting items that user101places in a shopping cart, navigational elements, search results, pricing and shipping information, graphical elements, forms, or the like. 
In at least one embodiment where only some of the desired output is presented at a time, a dynamic control, such as a scrolling mechanism, may be available via input device103to change which information is currently displayed, and/or to alter the manner in which the information is displayed. In at least one embodiment, the information displayed on display screen104may include data in text and/or graphical form. Data store111can be any magnetic, optical, or electronic storage device for data in digital form; examples include flash memory, magnetic hard drive, CD-ROM, DVD-ROM, thumbdrive, or the like. Data store111may be fixed or removable. In at least one embodiment, device102can include additional components. For example, a camera114can be included, as is well known for devices such as smartphones. Camera114is optional and can be omitted. Additional input mechanisms, sensors, and/or devices can also be included in device102, such as a speaker (for voice commands), accelerometer (to detect shaking and changes in position or orientation), GPS sensor (to detect location), and/or the like. As mentioned above,FIG.1depicts an example of a system implementation in a client/server environment. An example of such a client/server environment is a web-based implementation, wherein client device102runs automated site navigation module116that automatically interacts with web pages and/or other web-based resources from e-commerce web servers109. Information, images, and/or text from websites of e-commerce web servers109can be transmitted to module116as part of such web pages and/or other web-based resources, using known protocols and languages such as Hypertext Markup Language (HTML), Java, JavaScript, and the like. In addition, such information, images, and/or text from websites of e-commerce web servers109can be presented in browser107, or in some other software application (app) or other component running on client device102, as part of user interactions with websites of e-commerce web servers109. As described in more detail below, plug-ins108can record such interactions. Any suitable type of communications network113, such as the Internet, can be used as the mechanism for transmitting data among client device102, server110, and web servers109, according to any suitable protocols and techniques. In addition to the Internet, other examples include cellular telephone networks, EDGE, 3G, 4G, long term evolution (LTE), Session Initiation Protocol (SIP), Short Message Peer-to-Peer protocol (SMPP), SS7, Wi-Fi, Bluetooth, ZigBee, Hypertext Transfer Protocol (HTTP), Secure Hypertext Transfer Protocol (SHTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), and/or the like, and/or any combination thereof. Communications network113can be wired or wireless, or any combination thereof. Communications across network113can be encrypted or unencrypted. In at least one embodiment, client device102transmits requests for data via communications network113, and receives responses from server110and/or e-commerce web servers109containing the requested data. In at least one embodiment, some components of system100can be implemented as software written in any suitable computer programming language. Alternatively, such components may be implemented and/or embedded in hardware. As described in more detail below, in at least one embodiment, plug-in108(or other software) records interactions between browser107and websites of e-commerce web servers109, and/or between browser107and server110containing universal cart115. 
Recorded interactions, or data extracted therefrom, are stored in data store111or elsewhere. Automated site navigation module116then uses such recorded information to automatically navigate websites of e-commerce web servers109without the need for human interaction, so as to extract useful information from such websites. As depicted inFIG.1, in at least one embodiment, the system can be implemented in connection with a server110. Server110can operate a universal cart115; items are added to universal cart115according to techniques described in related U.S. Utility application Ser. No. 14/933,173 for “Universal Electronic Shopping Cart”, filed Nov. 5, 2015, the disclosure of which is incorporated by reference herein. However, universal cart115is optional and need not be included to implement the techniques discussed herein. In addition, universal cart115is not necessarily a physical component of server110, but is, in at least one embodiment, a data structure or dataset that can be stored in a database or other suitable storage architecture on an electronic storage device. Universal cart115need not be maintained at server110itself, but can be maintained at another component to which server110has access, such as a separate server or data storage device. Additional details concerning the structure and organization of server110, and the operation of universal cart115, are described in the above-referenced related application. In another embodiment, as discussed in the above-referenced related application, the functionality for recording interactions with web server109and for performing automated sequential site navigation can be built into browser107itself, or into an operating system running at client device102. Alternatively, such functionality can be implemented as a separate software application (app) running on device102. In another embodiment, server110can be omitted entirely, and the described system can be implemented as a technique to perform automatic website navigation without the use of a server110. Indeed, in at least one embodiment, the described system can be implemented entirely within one or more client device(s)102. Method Referring now toFIG.2, there is shown a flowchart depicting a method for implementing automated sequential site navigation according to one embodiment. Although described herein in terms of tangible goods, the system and method can be implemented for any type of online purchases, including for example services, travel, event tickets, media and entertainment content, and/or the like. In at least one embodiment, the method depicted inFIG.2can be performed using the architecture depicted inFIG.1. However, one skilled in the art will recognize that the method can be performed using other architectures and arrangements. In at least one embodiment, the method ofFIG.2can be implemented on any client device(s)102or other device(s) capable of interacting with web server(s)109. In at least one embodiment, the system records201all network requests made while performing a web-based operation that involves multiple steps, such as checking out on a website of an e-commerce web server109. In at least one embodiment, the stored records of network requests are analyzed, such as by automated analysis tool117that can be implemented as a web-based application or in some other manner, either on client device102or on some other device. 
In at least one embodiment, recording step201is performed by plug-in108or other software component that monitors and records interactions between client device102and websites of e-commerce web servers109, as initiated by user101using browser107. Alternatively, plug-in108can record interactions that take place automatically, for example by “bots” interacting with web servers109. By recording exactly what information was entered during checkout operations and/or other interaction(s) with websites of e-commerce web servers109, the system is able to determine which requests contain which information, and to swap the information in each request for a key that can be referenced for future requests. Analysis tool117analyzes202the results of the recording, so as to determine how the data was encoded in the form. This allows substitute data to be encoded in the same way when formulating new requests. Once information has been found in a request, the same information can be searched for in all previous responses. Information in requests can come from a number of sources, including for example response HTML, cookies, formData within HTML, XML, JSON and/or plain text. In at least one embodiment, applied encryption can be reverse-engineered, particularly if it takes place at client device102. In at least one embodiment, once all requests have been made in the correct sequence, analysis tool117can automatically extract the relevant data (such as subtotal, shipping options, tax, discounts, fees and total for an e-commerce transaction) by entering values ahead of time and automatically generating a parser to extract the data from the responses that contain the correct values. In at least one embodiment, coupons for use in ecommerce sites can be tested, for example by running multiple checkout flows in parallel, one for each coupon code, and one without a coupon code. This results in the checkout process taking roughly the same amount of time as it would to test a single coupon code, and adds the ability to try any number of coupons without significantly increasing processing time. In at least one embodiment, after pricing information has been extracted, contents of the cart are cleared, so that any items added to the cart are returned back to the retailer's inventory and are not held unnecessarily. Based on the analysis performed in step202, request identification module118(or some other component) identifies203those requests that are necessary to complete the operation, by determining which requests are sending required data for completing e-commerce-related tasks. Once such requests have been identified, the system can rewrite204requests as needed to include new information; for example, a new address or a different coupon code can be substituted for previous data, thus allowing for automated site navigation. Automated site navigation module116then executes205the rewritten requests, for example by transmitting the rewritten requests to web server(s)109, and receiving and interpreting the responses. An example of this might be changing the address to an address in a different state and reading the response that includes the tax rate for that state, or changing the request to try a different coupon and reading the total in the response to determine whether the coupon changed the price. 
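As a minimal sketch of this rewrite-and-execute idea (steps 204 and 205), assuming a simplified request shape: the names templatize, renderRequest, execute, testCoupons, and the JSON “total” field below are hypothetical illustrations, not the actual interfaces of module116.

```typescript
// Hypothetical sketch: swap a recorded input value for a placeholder key,
// then replay the request with substitute data (e.g., coupon codes).

interface RecordedRequest {
  url: string;
  method: string;
  body: string; // form data or JSON captured during recording
}

// Replace every occurrence of the recorded value (raw or URI-encoded) with a
// placeholder such as "{{coupon}}", producing a reusable request template.
function templatize(req: RecordedRequest, recordedValue: string, key: string): RecordedRequest {
  const variants = [recordedValue, encodeURIComponent(recordedValue)];
  const swap = (s: string) =>
    variants.reduce((acc, v) => acc.split(v).join(`{{${key}}}`), s);
  return { ...req, url: swap(req.url), body: swap(req.body) };
}

// Fill the placeholders with new, session-specific values. A real
// implementation must encode each value the same way the site encoded it.
function renderRequest(tpl: RecordedRequest, values: Record<string, string>): RecordedRequest {
  const fill = (s: string) => s.replace(/\{\{(\w+)\}\}/g, (_, k: string) => values[k] ?? "");
  return { ...tpl, url: fill(tpl.url), body: fill(tpl.body) };
}

async function execute(req: RecordedRequest): Promise<string> {
  const res = await fetch(req.url, {
    method: req.method,
    body: req.body || undefined, // omit empty bodies (e.g., for GET requests)
  });
  return res.text();
}

// Assumed parser: pull a "total" field out of a JSON response. In the flow
// described above, such a parser would be generated automatically.
function extractTotal(responseBody: string): number {
  return Number(JSON.parse(responseBody).total ?? NaN);
}

// Try several coupon codes in parallel and compare the returned totals, so
// that N coupons take roughly the time of a single checkout.
async function testCoupons(tpl: RecordedRequest, coupons: string[]): Promise<number[]> {
  return Promise.all(
    coupons.map(async (coupon) => extractTotal(await execute(renderRequest(tpl, { coupon }))))
  );
}
```

Substituting at the level of the recorded, already-encoded request text is what lets the replayed request match the site's encoding scheme, as noted above. 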
In at least one embodiment, this process can be run without the use of a browser, for example by keeping track of recorded cookies during normal website interaction, and passing such recorded information from one request to the next. In at least one embodiment, results of analysis step202are made available to other client devices102, so that multiple client devices102can rewrite their requests using the data collected from one client device102. In at least one embodiment, data collected from analysis step202performed at multiple client devices102can be combined, so as to provide more effective, reliable, and accurate aggregate data for rewriting requests. Any suitable means can be used for sharing such data among devices102; these include peer-to-peer approaches, where data is sent directly from one device to another, as well as centralized approaches, where data is stored at a central server (such as server110) and retrieved by client devices102as needed. This approach is particularly useful in cases where a site is A/B testing checkout flows. By reviewing requests made by a client within the A test and other requests within the B test, it is possible to automatically determine whether the test will affect the results, and if so, what data is the same or different within the requests to support both flows. Recording Step201 Referring now toFIG.3, there is shown a flowchart depicting a method of recording201all of the requests being made during typical interaction with a website of an e-commerce web server109using browser107. Recording201can be done with a utility such as a browser extension or plug-in108that is able to view all requests and responses being made by browser107. In at least one embodiment, requests can be filtered301in order to cut down on the content being produced. Filtering out images and CSS helps reduce the size, while having little impact on completing the flow. By default, it is generally useful to exclude any requests that are purely media or styling, such as CSS, images, videos, or pure text files. On the other hand, it is often useful to identify requests that contain JSON. In at least one embodiment, the type of request and type of response are determined by reviewing the request and response headers, which contain a content-type field that can be used to filter unnecessary requests. In at least one embodiment, during the recording of requests, annotations can be made302to keep track of information that may help determine which requests contain what information. For example, it may be useful to receive input indicating the subtotal, tax, shipping cost and speed, as well as the grand total, during a checkout flow, in order to automatically tag those requests that return those specific data items. In at least one embodiment, annotation302is performed based on user input indicating what information is currently displayed on the web page, or what information was entered by the user. Alternatively, annotation302can be performed based on automatically derived information, e.g., by scraping or by other means. For example, in at least one embodiment, annotations can be made302to track input information, such as first name, last name, address, and any other dynamic information that can be used to identify requests that send that content. Depending on the flow, it may be useful to swap out some content with new content in order to complete the flow with different information; annotations from step302can help to identify which content is to be swapped out. 
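As a rough sketch, the recording and filtering just described might look like the following; recordingFetch, CapturedRequest, and EXCLUDED_TYPES are hypothetical names used for illustration (a real plug-in108would hook the browser's extension request APIs rather than wrapping fetch), and the annotation strings mirror the button labels discussed below.

```typescript
// Sketch of recording step 201 with content-type filtering (step 301).
// Wraps fetch for simplicity; a browser extension would use request APIs.

interface CapturedRequest {
  url: string;
  method: string;
  requestBody?: string;
  status: number;
  contentType: string;
  responseBody: string;
  annotations: string[]; // e.g. "ATC", "Pricing", "Coupon", "ClearCart"
}

// Media and styling responses add bulk while rarely mattering for the flow.
const EXCLUDED_TYPES = ["image/", "video/", "font/", "text/css"];

const recording: CapturedRequest[] = [];

async function recordingFetch(
  url: string,
  init: RequestInit = {},
  annotations: string[] = []
): Promise<Response> {
  const response = await fetch(url, init);
  const contentType = response.headers.get("content-type") ?? "";
  // Step 301: skip purely media/styling requests.
  if (!EXCLUDED_TYPES.some((t) => contentType.startsWith(t))) {
    recording.push({
      url,
      method: init.method ?? "GET",
      requestBody: typeof init.body === "string" ? init.body : undefined,
      status: response.status,
      contentType,
      responseBody: await response.clone().text(), // clone: caller can still read it
      annotations, // step 302: tag requests made while, e.g., pricing is visible
    });
  }
  return response;
}
```

A call such as recordingFetch(url, { method: "POST", body }, ["Pricing"]) would then both perform the request and capture it with a “Pricing” annotation. 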
Example of Recording Referring now toFIGS.4through12, there are shown screen shots depicting an example in which a checkout process is recorded, as described in step201above, according to one embodiment. Similar techniques can be used to record other types of website interactions. In each ofFIGS.4through12, the left side is a display of a web page401of an e-commerce website, shown during the recording process. The right side is a user interface402that enables control of the recording process, as provided for example by a browser extension or plug-in108. The depicted layout is merely exemplary, however. In the following description, user101refers to the individual controlling or administering the recording process. In at least one embodiment, such process can be performed automatically, with little or no human interaction; alternatively, a human can be in control of the process. Typically, user101controlling or administering the recording process is a different person than an ordinary customer of the website, though this may or may not be the case. InFIG.4, user101enters the URL of the website to be recorded in field403, and clicks “Start Recording” button404. The system then begins recording201network requests, as described above in connection withFIG.2. InFIG.5, recording201of the interaction with web page401is proceeding. All requests to the web server109of the associated website, and responses received from the web server109of the associated website, are recorded. “Stop Recording” button405is shown, allowing user101to stop the recording. Additional buttons406are provided, such as “ATC” button406A, “Pricing” button406B, “Coupon” button406C, “ClearCart” button406D, and the like, to allow user101to denote different requests as belonging to a particular type, based on the interaction that is taking place when the request is performed; this allows requests to be grouped together, and makes it easier to categorize and find specific requests later. Alternatively, such categorization can be performed automatically. Fields407are provided for user101to enter output information displayed on web page401, as described in more detail below. InFIG.6, 177 requests have been recorded. Here, user101clicks on “Pricing” button406B to indicate to the system that, at this point in the flow, pricing information409is visible on web page401. As can be seen in the Figure, “ATC” button406A has also been clicked, to indicate that an item has been added to user's101shopping cart. In this manner, the information just received from web server109can be properly categorized as pricing information. As mentioned above, in at least one embodiment, such categorization can be automatic, without requiring user101to manually identify the type of information currently being displayed. Such automatic categorization can be based on analysis of the displayed data, scraping, and/or on other factors. InFIG.7, user101has entered output information in fields407, such as pricing in field407A, tax in field407B, shipping speed in field407C, shipping cost in field407D, and total amount in field407E. User101enters this information based on what is seen on web page401itself, in order summary410. In at least one embodiment, the system uses this information as entered by user101in fields407, in order to determine exactly which request produced the specified values. InFIG.8, user101has entered an invalid coupon code, namely IVTESTCOUPON, as indicated by message411. 
User101clicks “Coupon” button406C, so as to indicate that a coupon code has been entered. In at least one embodiment, the system can thereby record and identify those interactions that take place upon entry of a coupon code, whether valid or invalid. FIGS.9and10show an example of recording the clear cart process. InFIG.9, there is an item412in the cart. InFIG.10, user101has removed the item412from the cart, and has clicked “ClearCart” button406D to indicate that the cart was just cleared. In at least one embodiment, the system can thereby record and identify those interactions that take place when the cart is cleared. In at least one embodiment, the cart is cleared so that the site's inventory is not artificially affected by the recording process. InFIG.11, user101has clicked “Stop Recording” button405. “Export Recording” button408has replaced “Stop Recording” button405; clicking “Export Recording” button408causes the record of requests to be exported, either by saving it to local storage (such as data store111), or by transmitting it to a server (such as server110or another server) for further analysis. For example, in at least one embodiment, the system stores information about interactions with web servers109, and categorization information (whether entered by user101or automatically generated), on a storage device associated with server110or some other hardware device. In at least one embodiment, analysis tool117, such as a web-based application, can be used to interpret and filter the recorded results. The analysis tool can run on any client device102, or on server110, or on some other hardware device that has access to stored information describing user interactions with web pages.FIG.12depicts an example of a user interface1200for such an analysis tool, which may be used to identify requests generated from a recording, and rewrite the requests to allow for dynamic data, such as a new address or different coupon code. User101, who may be a website administrator or other individual, can use analysis tool117to choose a file containing recorded interactions, load in a request, perform searches, and apply various filters. Examples include the following:
Filter buttons:
Analytics: filters out analytics requests, such as those used for monitoring traffic to ecommerce sites.
Assets: removes all images, CSS, and the like.
Scripts: removes JavaScript files and the like.
Customer tags: removes any requests that do not include information entered by user101, such as address, name, ZIP code, and the like.
Request tags: removes ATC tags or clear cart tags.
Output tags: removes requests that do not include output information such as subtotal, total, tax, and the like.
Filter request field: allows user101to remove requests that include particular text.
Find request field: allows user101to view requests that include particular text.
Find response field: allows user101to view responses that include particular text.
Referring now toFIG.13, there is shown an example of a user interface1300displaying request1302as viewed via analysis tool117. Request1302can be named in field1303, if desired, for ease of reference. Check box1301indicates that request1302has been selected for export. A response code of “201” is indicated1304, along with the URL1305that the response was posted to. User101can expand the display by clicking on triangles1306and1307next to the words “request” and “response”, respectively, so as to cause the corresponding information to be shown. 
User101can also click on various links/buttons1308, such as “Copy full request”, “View Response”, etc., to cause the display of every request that took place up to a certain point as identified by the button1308user101clicks on. When user101expands one of the groups by clicking on a button1308, analysis tool117displays a list of requests for that group, filtered based on whatever current filter is applied. Referring now toFIG.14, there is shown a screen1400within analysis tool117, for reviewing, extracting, and mapping information from a request. Boxes1401indicate information that has been automatically extracted and determined to be present in the request. Once the information has been so identified, it can be automatically replaced with any desired values before submitting an automated request. FIG.14also shows how web server109is encoding the information, so that the automated request can include information encoded in the same way. Boxes1402represent user-created labels for data that is known to be part of the response. User101can select boxes1402to view information about that part of the response; this causes the display of a path to the field1403that has the same value. In the displayed example, these fields1403are JSON fields. For example, if user101clicks “subtotal” box1402a, the system generates and displays the path for that field1403. In at least one embodiment, the information for each of these fields1403is entered during checkout, as described above. Because the information is entered during the recording process, it can be automatically extracted by analysis tool117. Specifically, analysis tool117is able to determine which value is associated with which field, so that it can generate a path to the value that maps to the key. If the automatically generated paths are incorrect, they can be adjusted manually. Some fields1403are indicated by asterisks1404(in this example, the subtotal field1403A and shippingCost field1403D). This means the field1403contains a string that includes the data entered by user101, but is not an exact match for the entered data. Referring now toFIG.15, there is shown a screen1500within analysis tool117, for displaying particular extracted data for a field (in this case, a coupon field). A coupon was entered; user101can mark the corresponding field as a “coupon” field. If multiple coupons have been entered, the request may be run multiple times. In this manner, the coupon request can be included or excluded when the checkout process is replayed. Since the system is able to determine what coupon was entered during checkout, it can automatically identify the request that contains the coupon value as part of the sent data. Referring now toFIG.16, there is shown a screen1600within an analysis tool, for displaying a request key. Here, the 13-digit number is a key1601that is necessary to make one of the requests at the website run by web server109. Various web servers109may generate key1601in different ways, as a checksum, a security token, or the like. Typically, such keys1601may last for the duration of a session, or they may be generated anew for each request. In at least one embodiment, analysis tool117identifies such keys1601, and looks for input values that generate particular keys1601. The appropriate key1601for an input value can then be copied and added (using “Add a key” button1602). Subsequently, analysis tool117looks through all requests, and replaces all instances of the key with a variable name (such as “_iv_checksum”). 
Analysis tool117also looks through previous requests to see where that value came from. In this case, analysis tool117determines that the value came from a previous request in a field called orderChecksum1603. This allows the system to make the association between the orderChecksum field1603and the variable _iv_checksum. Specifically, when it sees a response that includes orderChecksum, the system extracts the value, assigns it to the variable _iv_checksum, and then substitutes it where appropriate. Additionally, in at least one embodiment, the _iv_checksum variable is auto-assigned a value from the previous request, and future requests use the auto-assigned value. In this manner, information can be automatically mapped from one request to the next. In at least one embodiment, the system can handle several different request keys1601from different sources, so that it can determine what information is needed from the different sources, and what requests are required. In at least one embodiment, the system can also detect changes to the website, because the returned data is not what is expected. In such a situation, the system can alert user101that further analysis is needed to re-associate variables, request keys1601, values, and the like, in view of the new website architecture. One example of this is an “out of stock” message. In such a situation, web server109may respond with an error that can be parsed and mapped to an “out of stock” error format that is then automatically reported by the system. Identifying Requests203 Once the process flow has been completed and all requests have been recorded and saved, the request identification module118reviews the recorded requests and identifies requests203that are necessary to complete the process flow. In at least one embodiment, request identification module118keeps track of the sequence for such requests, so that they can be run in the correct order. In many cases, this sequence is critical, as the response of one request can be used in the header of a subsequent request. Cookies often come back in the response from requests. In at least one embodiment, request identification module118determines which cookies are necessary to make a request, and further determines where these cookies are generated. In some cases, a cookie comes back as a set-cookie in the response header, and in other cases it may come back in the response body. If the cookie is in the response body, request identification module118parses it and then uses that information to set the cookie when making its own requests. Request identification module118likewise processes other data in a request, such as post data or query strings. Request identification module118identifies the request being made, as well as the parts of the request that need to be substituted with session-specific data. In at least one embodiment, the system inputs a unique value when recording session data, so that request identification module118can review prior requests for a particular data item and thereby determine where it originated. After the origin is determined, a parser can be generated to automatically grab the value and set that value on future requests. In that case, both requests are used to complete the flow. In at least one embodiment, the system swaps out input data when making new requests, in order to complete the flow with the correct inputs. 
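One possible realization of this origin-tracing step is sketched below; the Exchange and OriginParser shapes, the findOrigin name, and the 24-character anchor window are illustrative assumptions rather than the system's actual internals.

```typescript
// Sketch of tracing where a request value originated (the _iv_checksum idea
// above): scan earlier responses for the value, and generate a small parser
// that can re-extract the equivalent value from future responses.

interface Exchange {
  url: string;
  requestBody: string;
  responseBody: string;
}

interface OriginParser {
  sourceIndex: number; // which earlier response produced the value
  pattern: RegExp;     // re-extracts the value from a future response
}

function escapeRegExp(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// Search backwards from the request that used the value, so the most recent
// producing response wins. The characters immediately before the value
// (e.g., `"orderChecksum":"`) become the anchor of a generated regex, and
// the value itself becomes the capture group.
function findOrigin(exchanges: Exchange[], uptoIndex: number, value: string): OriginParser | null {
  for (let i = uptoIndex - 1; i >= 0; i--) {
    const body = exchanges[i].responseBody;
    const at = body.indexOf(value);
    if (at < 0) continue;
    const anchor = body.slice(Math.max(0, at - 24), at);
    const pattern = new RegExp(escapeRegExp(anchor) + "([^\"'&<\\s]+)");
    return { sourceIndex: i, pattern };
  }
  return null; // not produced by a prior response (e.g., direct user input)
}
```

In a later session, matching a fresh response against the generated pattern (match?.[1]) would yield the new key to substitute into subsequent requests. 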
To swap out input data accurately, request identification module118determines how the site handles encoding, and ensures that the input data matches the site's encoding scheme.FIG.12depicts an example user interface1200for a tool for identifying requests according to one embodiment. This tool is used for both analysis and rewriting of the requests to support dynamic data.FIG.12illustrates the ability to filter requests, and to expand requests that were captured during various parts of the checkout flow, such as during an add to cart event, while pricing is available on the screen, while a coupon code is being applied, and while the cart is being cleared. Exporting the identified requests is also possible, once requests have been identified and selected. Rewriting204and Executing205Requests Once all of the requests have been identified and rules have been generated to swap the necessary inputs and outputs from one request to the next, automated site navigation module116can rewrite204and execute205requests to complete the process flow. In at least one embodiment, the rewriting step204can be performed by automated site navigation module116; alternatively, it can be performed by any other component of the system. In at least one embodiment, automated site navigation module116executes requests in virtually any suitable context that allows for requests and responses to be handled manually. In at least one embodiment, the system uses a browser extension that can run in a headless browser, or within a standard browser that supports extensions and the ability to manipulate cookies. Alternatively, requests can be executed without the use of a browser, such as through a server-based node process. This allows the module to run server side or client side. In at least one embodiment, when executing requests, automated site navigation module116maintains control of both the request data and response data. Often, sites will respond with a “302” redirect response, which signals for a new request to be made at a different location. In at least one embodiment, automated site navigation module116follows redirects in order to successfully complete certain flows. In at least one embodiment, automated site navigation module116is configured to properly process request-response content, such as a set-cookie response which signals the cookie to be updated before sending the next request. In at least one embodiment, automated site navigation module116records responses that can be passed to the next request, so as to rewrite the request appropriately before sending it. Responses may be in any suitable format, such as HTML, JSON, plain text, compressed data, or the like. In at least one embodiment, automated site navigation module116is configured so that it can parse responses in whatever format they appear, so as to extract the necessary output values. In some embodiments, automated site navigation module116can be configured to enable parallel execution of multiple requests, in order to speed up execution. It may be that a few requests can run in parallel and their combined output is necessary for the following request. Automated site navigation module116may be configured to take such rules into account, so that it is capable of coordinating such parallel processing. In at least one embodiment, once execution is complete, automated site navigation module116cleans up and resets in order to run future flows. 
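One way such an execution engine might be sketched is shown below, assuming a server-side Node-style environment (in line with the node-process option above); the Step shape, the cookie jar, and the variable map are illustrative assumptions, and getSetCookie() on response headers is only available in recent fetch implementations (browsers hide set-cookie from fetch entirely).

```typescript
// Sketch of execution step 205: replay identified requests in order, with a
// cookie jar, manual 302 handling, and response-to-request variable mapping
// (in the spirit of the _iv_checksum example above). Names are illustrative.

interface Step {
  url: string;
  method: string;
  body?: string; // may contain {{variable}} placeholders
  // variable name -> regex with one capture group, applied to the response
  // body to harvest values (e.g., orderChecksum) for later steps
  extractVars?: Record<string, RegExp>;
}

const cookieJar = new Map<string, string>();
const variables = new Map<string, string>();

function cookieHeader(): string {
  return [...cookieJar].map(([name, value]) => `${name}=${value}`).join("; ");
}

function storeCookies(res: Response): void {
  // getSetCookie() exists in, e.g., Node 20+; cast keeps older typings happy.
  const lines: string[] = (res.headers as any).getSetCookie?.() ?? [];
  for (const line of lines) {
    const [pair] = line.split(";");
    const [name, ...rest] = pair.split("=");
    cookieJar.set(name.trim(), rest.join("="));
  }
}

function fill(s: string): string {
  return s.replace(/\{\{(\w+)\}\}/g, (_, name: string) => variables.get(name) ?? "");
}

async function runStep(step: Step): Promise<string> {
  let url = fill(step.url);
  for (;;) {
    const res = await fetch(url, {
      method: step.method,
      headers: { cookie: cookieHeader() },
      body: step.body ? fill(step.body) : undefined,
      redirect: "manual", // handle 302s ourselves so set-cookie headers are kept
    });
    storeCookies(res);
    const location = res.headers.get("location");
    if (res.status >= 300 && res.status < 400 && location) {
      url = new URL(location, url).toString(); // resolve relative redirects
      continue;
    }
    const text = await res.text();
    // Harvest values for later steps, e.g. variables.set("_iv_checksum", ...).
    for (const [name, pattern] of Object.entries(step.extractVars ?? {})) {
      const match = text.match(pattern);
      if (match) variables.set(name, match[1]);
    }
    return text;
  }
}

// Steps run strictly in sequence, since later requests may depend on values
// harvested from earlier responses; independent steps could run in parallel.
async function runFlow(steps: Step[]): Promise<void> {
  for (const step of steps) await runStep(step);
}
```

A real module would additionally coordinate parallel request groups, decompress or parse non-text response formats, and report errors such as unexpected response data, as described above. 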
Applications As mentioned above, the described techniques can be used in many different contexts, including e-commerce as well as other domains. For example, the techniques can be used in any situation for automated navigation of a website-based process, particularly when multiple requests are to be performed in a specified sequence in order to obtain desired information. The described system and method thus provide a generalized approach for performing automated web-based processing in an efficient manner. In one example, the system can be used for automated navigation of web-enabled processes related to products, travel, lodging, automobile shopping, and/or the like, from any number of disparate sources. The present system and method have been described in particular detail with respect to possible embodiments. Those of skill in the art will appreciate that the system and method may be practiced in other embodiments. First, the particular naming of the components, capitalization of terms, the attributes, data structures, or any other programming or structural aspect is not mandatory or significant, and the mechanisms and/or features may have different names, formats, or protocols. Further, the system may be implemented via a combination of hardware and software, or entirely in hardware elements, or entirely in software elements. Also, the particular division of functionality between the various system components described herein is merely exemplary, and not mandatory; functions performed by a single system component may instead be performed by multiple components, and functions performed by multiple components may instead be performed by a single component. Reference in the specification to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment. The appearances of the phrase “in at least one embodiment” in various places in the specification are not necessarily all referring to the same embodiment. Various embodiments may include any number of systems and/or methods for performing the above-described techniques, either singly or in any combination. Another embodiment includes a computer program product comprising a non-transitory computer-readable storage medium and computer program code, encoded on the medium, for causing a processor in a computing device or other electronic device to perform the above-described techniques. Some portions of the above are presented in terms of algorithms and symbolic representations of operations on data bits within the memory of a computing device. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps (instructions) leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic or optical signals capable of being stored, transferred, combined, compared and otherwise manipulated. It is convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. 
Furthermore, it is also convenient at times to refer to certain arrangements of steps requiring physical manipulations of physical quantities as modules or code devices, without loss of generality. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “displaying” or “determining” or the like, refer to the action and processes of a computer system, or similar electronic computing module and/or device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices. Certain aspects include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions can be embodied in software, firmware and/or hardware, and when embodied in software, can be downloaded to reside on and be operated from different platforms used by a variety of operating systems. The present document also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computing device. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, DVD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, flash memory, solid state drives, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus. The program and its associated data may also be hosted and run remotely, for example on a server. Further, the computing devices referred to herein may include a single processor or may be architectures employing multiple processor designs for increased computing capability. The algorithms and displays presented herein are not inherently related to any particular computing device, virtualized system, or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will be apparent from the description provided herein. In addition, the system and method are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings described herein, and any references above to specific languages are provided for disclosure of enablement and best mode. Accordingly, various embodiments include software, hardware, and/or other elements for controlling a computer system, computing device, or other electronic device, or any combination or plurality thereof. 
Such an electronic device can include, for example, a processor, an input device (such as a keyboard, mouse, touchpad, track pad, joystick, trackball, microphone, and/or any combination thereof), an output device (such as a screen, speaker, and/or the like), memory, long-term storage (such as magnetic storage, optical storage, and/or the like), and/or network connectivity, according to techniques that are well known in the art. Such an electronic device may be portable or non-portable. Examples of electronic devices that may be used for implementing the described system and method include: a desktop computer, laptop computer, television, smartphone, tablet, music player, audio device, kiosk, set-top box, game system, wearable device, consumer electronic device, server computer, and/or the like. An electronic device may use any operating system such as, for example and without limitation: Linux; Microsoft Windows, available from Microsoft Corporation of Redmond, Washington; Mac OS X, available from Apple Inc. of Cupertino, California; iOS, available from Apple Inc. of Cupertino, California; Android, available from Google, Inc. of Mountain View, California; and/or any other operating system that is adapted for use on the device. While a limited number of embodiments have been described herein, those skilled in the art, having benefit of the above description, will appreciate that other embodiments may be devised. In addition, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the subject matter. Accordingly, the disclosure is intended to be illustrative, but not limiting, of scope.
For simplicity and clarity of illustration, the drawing figures illustrate the general manner of construction, and descriptions and details of well-known features and techniques may be omitted to avoid unnecessarily obscuring the present disclosure. Additionally, elements in the drawing figures are not necessarily drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of embodiments of the present disclosure. The same reference numerals in different figures denote the same elements. The terms “first,” “second,” “third,” “fourth,” and the like in the description and in the claims, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms “include,” and “have,” and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, device, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, system, article, device, or apparatus. The terms “left,” “right,” “front,” “back,” “top,” “bottom,” “over,” “under,” and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the apparatus, methods, and/or articles of manufacture described herein are, for example, capable of operation in other orientations than those illustrated or otherwise described herein. The terms “couple,” “coupled,” “couples,” “coupling,” and the like should be broadly understood and refer to connecting two or more elements mechanically and/or otherwise. Two or more electrical elements may be electrically coupled together, but not be mechanically or otherwise coupled together. Coupling may be for any length of time, e.g., permanent or semi-permanent or only for an instant. “Electrical coupling” and the like should be broadly understood and include electrical coupling of all types. The absence of the word “removably,” “removable,” and the like near the word “coupled,” and the like does not mean that the coupling, etc. in question is or is not removable. As defined herein, two or more elements are “integral” if they are comprised of the same piece of material. As defined herein, two or more elements are “non-integral” if each is comprised of a different piece of material. As defined herein, “approximately” can, in some embodiments, mean within plus or minus ten percent of the stated value. In other embodiments, “approximately” can mean within plus or minus five percent of the stated value. In further embodiments, “approximately” can mean within plus or minus three percent of the stated value. In yet other embodiments, “approximately” can mean within plus or minus one percent of the stated value. As defined herein, “real-time” can, in some embodiments, be defined with respect to operations carried out as soon as practically possible upon occurrence of a triggering event. 
A triggering event can include receipt of data necessary to execute a task or to otherwise process information. Because of delays inherent in transmission and/or in computing speeds, the term “real-time” encompasses operations that occur in “near” real-time or somewhat delayed from a triggering event. In a number of embodiments, “real-time” can mean real-time less a time delay for processing (e.g., determining) and/or transmitting data. The particular time delay can vary depending on the type and/or amount of the data, the processing speeds of the hardware, the transmission capability of the communication hardware, the transmission distance, etc. However, in many embodiments, the time delay can be less than 1 minute, 5 minutes, 10 minutes, or another suitable time delay period.

DESCRIPTION OF EXAMPLES OF EMBODIMENTS

Turning to the drawings,FIG.1illustrates an exemplary embodiment of a computer system100, all of which or a portion of which can be suitable for (i) implementing part or all of one or more embodiments of the techniques, methods, and systems and/or (ii) implementing and/or operating part or all of one or more embodiments of the non-transitory computer readable media described herein. As an example, a different or separate one of computer system100(and its internal components, or one or more elements of computer system100) can be suitable for implementing part or all of the techniques described herein. Computer system100can comprise chassis102containing one or more circuit boards (not shown), a Universal Serial Bus (USB) port112, a Compact Disc Read-Only Memory (CD-ROM) and/or Digital Video Disc (DVD) drive116, and a hard drive114. A representative block diagram of the elements included on the circuit boards inside chassis102is shown inFIG.2. A central processing unit (CPU)210inFIG.2is coupled to a system bus214inFIG.2. In various embodiments, the architecture of CPU210can be compliant with any of a variety of commercially distributed architecture families.

Continuing withFIG.2, system bus214also is coupled to memory storage unit208that includes both read only memory (ROM) and random access memory (RAM). Non-volatile portions of memory storage unit208or the ROM can be encoded with a boot code sequence suitable for restoring computer system100(FIG.1) to a functional state after a system reset. In addition, memory storage unit208can include microcode such as a Basic Input-Output System (BIOS). In some examples, the one or more memory storage units of the various embodiments disclosed herein can include memory storage unit208, a USB-equipped electronic device (e.g., an external memory storage unit (not shown) coupled to universal serial bus (USB) port112(FIGS.1-2)), hard drive114(FIGS.1-2), and/or CD-ROM, DVD, Blu-Ray, or other suitable media, such as media configured to be used in CD-ROM and/or DVD drive116(FIGS.1-2). Non-volatile or non-transitory memory storage unit(s) refer to the portions of the memory storage unit(s) that are non-volatile memory and not a transitory signal. In the same or different examples, the one or more memory storage units of the various embodiments disclosed herein can include an operating system, which can be a software program that manages the hardware and software resources of a computer and/or a computer network. The operating system can perform basic tasks such as, for example, controlling and allocating memory, prioritizing the processing of instructions, controlling input and output devices, facilitating networking, and managing files.
Exemplary operating systems can include one or more of the following: (i) Microsoft® Windows® operating system (OS) by Microsoft Corp. of Redmond, Washington, United States of America, (ii) Mac® OS X by Apple Inc. of Cupertino, California, United States of America, (iii) UNIX® OS, and (iv) Linux® OS. Further exemplary operating systems can comprise one of the following: (i) the iOS® operating system by Apple Inc. of Cupertino, California, United States of America, (ii) the Blackberry® operating system by Research In Motion (RIM) of Waterloo, Ontario, Canada, (iii) the WebOS operating system by LG Electronics of Seoul, South Korea, (iv) the Android™ operating system developed by Google, of Mountain View, California, United States of America, (v) the Windows Mobile™ operating system by Microsoft Corp. of Redmond, Washington, United States of America, or (vi) the Symbian™ operating system by Accenture PLC of Dublin, Ireland.

As used herein, “processor” and/or “processing module” means any type of computational circuit, such as but not limited to a microprocessor, a microcontroller, a controller, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a graphics processor, a digital signal processor, or any other type of processor or processing circuit capable of performing the desired functions. In some examples, the one or more processors of the various embodiments disclosed herein can comprise CPU210.

In the depicted embodiment ofFIG.2, various I/O devices such as a disk controller204, a graphics adapter224, a video controller202, a keyboard adapter226, a mouse adapter206, a network adapter220, and other I/O devices222can be coupled to system bus214. Keyboard adapter226and mouse adapter206are coupled to a keyboard104(FIGS.1-2) and a mouse110(FIGS.1-2), respectively, of computer system100(FIG.1). While graphics adapter224and video controller202are indicated as distinct units inFIG.2, video controller202can be integrated into graphics adapter224, or vice versa in other embodiments. Video controller202is suitable for refreshing a monitor106(FIGS.1-2) to display images on a screen108(FIG.1) of computer system100(FIG.1). Disk controller204can control hard drive114(FIGS.1-2), USB port112(FIGS.1-2), and CD-ROM and/or DVD drive116(FIGS.1-2). In other embodiments, distinct units can be used to control each of these devices separately.

In some embodiments, network adapter220can comprise and/or be implemented as a WNIC (wireless network interface controller) card (not shown) plugged or coupled to an expansion port (not shown) in computer system100(FIG.1). In other embodiments, the WNIC card can be a wireless network card built into computer system100(FIG.1). A wireless network adapter can be built into computer system100(FIG.1) by having wireless communication capabilities integrated into the motherboard chipset (not shown), or implemented via one or more dedicated wireless communication chips (not shown), connected through a PCI (Peripheral Component Interconnect) or a PCI express bus of computer system100(FIG.1) or USB port112(FIG.1). In other embodiments, network adapter220can comprise and/or be implemented as a wired network interface controller card (not shown). Although many other components of computer system100(FIG.1) are not shown, such components and their interconnection are well known to those of ordinary skill in the art.
Accordingly, further details concerning the construction and composition of computer system100(FIG.1) and the circuit boards inside chassis102(FIG.1) are not discussed herein.

When computer system100inFIG.1is running, program instructions stored on a USB drive in USB port112, on a CD-ROM or DVD in CD-ROM and/or DVD drive116, on hard drive114, or in memory storage unit208(FIG.2) are executed by CPU210(FIG.2). A portion of the program instructions, stored on these devices, can be suitable for carrying out all or at least part of the techniques described herein. In various embodiments, computer system100can be reprogrammed with one or more modules, systems, applications, and/or databases, such as those described herein, to convert a general purpose computer to a special purpose computer. For purposes of illustration, programs and other executable program components are shown herein as discrete systems, although it is understood that such programs and components may reside at various times in different storage components of computer system100, and can be executed by CPU210. Alternatively, or in addition, the systems and procedures described herein can be implemented in hardware, or a combination of hardware, software, and/or firmware. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein. For example, one or more of the programs and/or executable program components described herein can be implemented in one or more ASICs.

Although computer system100is illustrated as a desktop computer inFIG.1, there can be examples where computer system100may take a different form factor while still having functional elements similar to those described for computer system100. In some embodiments, computer system100may comprise a single computer, a single server, or a cluster or collection of computers or servers, or a cloud of computers or servers. Typically, a cluster or collection of servers can be used when the demand on computer system100exceeds the reasonable capability of a single server or computer. In certain embodiments, computer system100may comprise a portable computer, such as a laptop computer. In certain other embodiments, computer system100may comprise a mobile device, such as a smartphone. In certain additional embodiments, computer system100may comprise an embedded system.

Turning ahead in the drawings,FIG.3illustrates a block diagram of a system300that can be employed for optimizing scans using query planning on batch data, according to an embodiment. System300is merely exemplary and embodiments of the system are not limited to the embodiments presented herein. The system can be employed in many different embodiments or examples not specifically depicted or described herein. In some embodiments, certain elements, modules, or systems of system300can perform various procedures, processes, and/or activities. In other embodiments, the procedures, processes, and/or activities can be performed by other suitable elements, modules, or systems of system300. System300can be implemented with hardware and/or software, as described herein. In some embodiments, part or all of the hardware and/or software can be conventional, while in these or other embodiments, part or all of the hardware and/or software can be customized (e.g., optimized) for implementing part or all of the functionality of system300described herein.
In many embodiments, system300can include a query planning system310, and in some embodiments, can include data producer computers (e.g.,340) operated by data producers (e.g.,350), data client computers (e.g.,341) operated by data clients (e.g.,351), and/or a network330. Query planning system310, data producer computer340, and/or data client computer341can each be a computer system, such as computer system100(FIG.1), as described above, and can each be a single computer, a single server, or a cluster or collection of computers or servers, or a cloud of computers or servers. In another embodiment, a single computer system can host two or more of, or all of, query planning system310, data producer computer340, and/or data client computer341. Additional details regarding query planning system310, data producer computer340, and/or data client computer341are described herein. In a number of embodiments, each of query planning system310, data producer computer340, and/or data client computer341can be a special-purpose computer programmed specifically to perform specific functions not associated with a general-purpose computer, as described in greater detail below. In some embodiments, query planning system310can be in data communication through network330with data producer computer340and/or data client computer341. Network330can be a public network (e.g., the Internet), a private network, or a hybrid network.

In some embodiments, the operator and/or administrator of system300can manage system300, the processor(s) of system300, and/or the memory storage unit(s) of system300using the input device(s) and/or display device(s) of system300. In several embodiments, query planning system310can include one or more input devices (e.g., one or more keyboards, one or more keypads, one or more pointing devices such as a computer mouse or computer mice, one or more touchscreen displays, a microphone, etc.), and/or can include one or more display devices (e.g., one or more monitors, one or more touch screen displays, projectors, etc.). In these or other embodiments, one or more of the input device(s) can be similar or identical to keyboard104(FIG.1) and/or a mouse110(FIG.1). Further, one or more of the display device(s) can be similar or identical to monitor106(FIG.1) and/or screen108(FIG.1). The input device(s) and the display device(s) can be coupled to query planning system310in a wired manner and/or a wireless manner, and the coupling can be direct and/or indirect, as well as locally and/or remotely. As an example of an indirect manner (which may or may not also be a remote manner), a keyboard-video-mouse (KVM) switch can be used to couple the input device(s) and the display device(s) to the processor(s) and/or the memory storage unit(s). In some embodiments, the KVM switch also can be part of query planning system310. In a similar manner, the processors and/or the non-transitory computer-readable media can be local and/or remote to each other.

Meanwhile, in many embodiments, query planning system310also can be configured to communicate with and/or include one or more databases. The one or more databases can include a query planning database, a data producer database, or a data client database, for example. The one or more databases can be stored on one or more memory storage units (e.g., non-transitory computer readable media), which can be similar or identical to the one or more memory storage units (e.g., non-transitory computer readable media) described above with respect to computer system100(FIG.1).
Also, in some embodiments, for any particular database of the one or more databases, that particular database can be stored on a single memory storage unit or the contents of that particular database can be spread across multiple ones of the memory storage units storing the one or more databases, depending on the size of the particular database and/or the storage capacity of the memory storage units. The one or more databases can each include a structured (e.g., indexed) collection of data and can be managed by any suitable database management systems configured to define, create, query, organize, update, and manage database(s). Exemplary database management systems can include MySQL (Structured Query Language) Database, PostgreSQL Database, Microsoft SQL Server Database, Oracle Database, SAP (Systems, Applications, & Products) Database, and IBM DB2 Database. Meanwhile, communication between query planning system310, network330, data producer computer340, data client computer341, and/or the one or more databases can be implemented using any suitable manner of wired and/or wireless communication. Accordingly, query planning system310can include any software and/or hardware components configured to implement the wired and/or wireless communication. Further, the wired and/or wireless communication can be implemented using any one or any combination of wired and/or wireless communication network topologies (e.g., ring, line, tree, bus, mesh, star, daisy chain, hybrid, etc.) and/or protocols (e.g., personal area network (PAN) protocol(s), local area network (LAN) protocol(s), wide area network (WAN) protocol(s), cellular network protocol(s), powerline network protocol(s), etc.). Exemplary PAN protocol(s) can include Bluetooth, Zigbee, Wireless Universal Serial Bus (USB), Z-Wave, etc.; exemplary LAN and/or WAN protocol(s) can include Institute of Electrical and Electronic Engineers (IEEE) 802.3 (also known as Ethernet), IEEE 802.11 (also known as WiFi), etc.; and exemplary wireless cellular network protocol(s) can include Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/Time Division Multiple Access (TDMA)), Integrated Digital Enhanced Network (iDEN), Evolved High-Speed Packet Access (HSPA+), Long-Term Evolution (LTE), WiMAX, etc. The specific communication software and/or hardware implemented can depend on the network topologies and/or protocols implemented, and vice versa. In many embodiments, exemplary communication hardware can include wired communication hardware including, for example, one or more data buses, such as, for example, universal serial bus(es), one or more networking cables, such as, for example, coaxial cable(s), optical fiber cable(s), and/or twisted pair cable(s), any other suitable data cable, etc. Further exemplary communication hardware can include wireless communication hardware including, for example, one or more radio transceivers, one or more infrared transceivers, etc. Additional exemplary communication hardware can include one or more networking components (e.g., modulator-demodulator components, gateway components, etc.). 
In many embodiments, query planning system310can include a scheduling system311, a scanning system312, a generating system313, a communication system314, a defragmenting system315, and/or a translating system316. In many embodiments, the systems of query planning system310can be modules of computing instructions (e.g., software modules) stored at non-transitory computer readable media that operate on one or more processors. In other embodiments, the systems of query planning system310can be implemented in hardware. Query planning system310can be a computer system, such as computer system100(FIG.1), as described above, and can be a single computer, a single server, or a cluster or collection of computers or servers, or a cloud of computers or servers. In another embodiment, a single computer system can host query planning system310. Additional details regarding query planning system310and the components thereof are described herein. Turning ahead in the drawings,FIG.4illustrates a flow chart for a method400of optimizing scans using query planning on batch data, according to another embodiment. Method400is merely exemplary and is not limited to the embodiments presented herein. Method400can be employed in many different embodiments and/or examples not specifically depicted or described herein. In some embodiments, the procedures, the processes, and/or the activities of method400can be performed in the order presented. In other embodiments, the procedures, the processes, and/or the activities of method400can be performed in any suitable order. In still other embodiments, one or more of the procedures, the processes, and/or the activities of method400can be combined or skipped. In several embodiments, system300(FIG.3) and/or query planning system310(FIG.3) can be suitable to perform method400and/or one or more of the activities of method400. In these or other embodiments, one or more of the activities of method400can be implemented as one or more computing instructions configured to run at one or more processors and configured to be stored at one or more non-transitory computer-readable media. Such non-transitory computer-readable media can be part of a computer system such as query planning system310. The processor(s) can be similar or identical to the processor(s) described above with respect to computer system100(FIG.1). Referring toFIG.4, method400optionally can include a block410of defragmenting event records received in event streams from one or more producers by assigning user identifiers of users to the event records in a user domain object model. The term “producers” can be used interchangeably with the term “data producers,” which can be similar or identical to data producer350. In several embodiments, each producer of the one or more producers can have independent objectives and/or goals that are separate from another producer. In various embodiments, one or more of the objectives and/or goals of each producer of the one or more producers can overlap with the objectives and/or goals of other producers, even when the producers are interacting with the same user or users. In many embodiments, each producer of the one or more producers can operate without the benefit of a full or collective view of each user profile and/or a history of interactions with each user. For example, producer A can send an email campaign to one or more users for a certain product on day one, while producer B sends a coupon for a certain product on day two to the same user. 
In both events, producer A and producer B, each acting without knowledge of the other's purpose and/or actions, receive different sets of data from the same user, which can result in fragmented views of the same user. In some embodiments, data producers can be viewed as different modules that can produce data from various interactions with users. In many embodiments, user interactions can be captured by the producers then shared via a downstream pipeline or repository of data. In several embodiments, block410of defragmenting data from event records can include using a type of data defragmentation process (e.g., module) that can receive data from multiple different sources and with different identifiers (e.g., an email, a cookie, a device). In some embodiments, a defragmenting process can determine whether each source and/or identifier is mapped (e.g., identified) to a user. In various embodiments, after identifying that the same user is mapped to the different sources from different producers, the defragmenting process can assign (e.g., attach) a common identifier to that user for each of the events associated with each of the producers. In a number of embodiments, block410can include generating a defragmented single view of each user as a semantic representation for managing large volumes of data. In some examples, the dataset can exceed 10 billion data points. In several embodiments, a defragmented single view of each user of multiple users can include a coherent way to represent a user (e.g., a customer) comprehensively (e.g., in its entirety) by incorporating or mapping events, such as a user identifier, a user profile, an attribute, an inferred attribute, an interaction, a touchpoint, an attribution signal, and/or another suitable interaction. In various embodiments, the defragmented single view additionally can provide a definitive set of privacy-safe, flexible logical rules used to map current and new customer attributes in a scalable fashion. For example, an internal and/or external user can feed data and/or responses to a producer, a provider, a modeler, an architect, and/or another suitable data producer, which can represent some aspect of a user and/or a representation of the user.

Conventionally, data producers relied on relational databases via entity relational modeling, first transforming data representations to a set of real-time application programming interfaces (APIs) that enabled real-time serving. The entity relational modeling would produce a batch table or a Hadoop Distributed File System table for the serving. Such a conventional process is time-consuming and uses extensive computing resources during implementation. In some embodiments, using non-relational (NoSQL) databases for large data domains can be advantageous as a technical improvement over the conventional technology and/or systems for managing large unrelated documents and/or data records. In various embodiments, block410can include combining the event records into a single state of a customer domain object model (CDOM) for serving in batch data or real-time. In some embodiments, block410also can include identifying which respective records of the event records can be associated with each respective user of the users using a set of rules for mapping scalable representations. In various embodiments, block410further can include assigning each respective user of the users a respective user identifier.
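By way of a non-limiting illustration, the following Python sketch shows one way the defragmenting of block410could operate. The class and field names (IdentityMap, identifiers, and so on) are hypothetical assumptions made for illustration and are not taken from the disclosure:

```python
# Minimal sketch of block 410-style defragmentation: events arriving from
# multiple producers carry different identifiers (email, cookie, device);
# a shared identity map resolves each to a common user identifier.
# All names here are illustrative assumptions, not the patented implementation.
from collections import defaultdict

class IdentityMap:
    """Maps producer-specific identifiers to a common user id."""
    def __init__(self):
        self._id_to_user = {}
        self._next_user_id = 0

    def resolve(self, identifiers):
        # If any identifier is already mapped, reuse that user id;
        # otherwise allocate a new one.
        for ident in identifiers:
            if ident in self._id_to_user:
                user_id = self._id_to_user[ident]
                break
        else:
            user_id = self._next_user_id
            self._next_user_id += 1
        # Attach the common identifier to every identifier seen for this user.
        for ident in identifiers:
            self._id_to_user[ident] = user_id
        return user_id

def defragment(event_streams, identity_map):
    """Assign a common user id to each event and group events per user."""
    single_view = defaultdict(list)   # user id -> defragmented event list
    for producer, events in event_streams.items():
        for event in events:
            user_id = identity_map.resolve(event["identifiers"])
            single_view[user_id].append({**event, "user_id": user_id,
                                         "producer": producer})
    return single_view

# Example: producer A's email campaign and producer B's coupon resolve to
# one user once the shared email identifier is seen.
streams = {
    "producer_a": [{"identifiers": ["a@x.com"], "type": "email_campaign"}],
    "producer_b": [{"identifiers": ["cookie42", "a@x.com"], "type": "coupon"}],
}
view = defragment(streams, IdentityMap())
```

In this sketch, the common identifier assigned by the identity map plays the role of the respective user identifier described above, and the grouped events approximate the defragmented single view of each user.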
In several embodiments, block410additionally can include converting the scalable representations to automated artifacts for the producers to provide data into a central representation. In some embodiments, generating a defragmented single view of each user also can use a predefined set of rules to convert the representation of the user to automated artifacts (e.g., software) of different types. For example, an automated artifact can include an instrumentation stub for producers to provide data into a central representation without first coordinating with peer producers.

In several embodiments, method400also optionally can include a block420of translating the event records into a non-relational (NoSQL) schema. In many embodiments, the NoSQL schema can include dataset layers. In some embodiments, block420can be implemented as shown inFIG.5and described below. In various embodiments, tracking query performances via NoSQL and/or denormalized state implementation can include using translation rules. In some embodiments, the translation rules can be applied to data, including, for example, large amounts of data of users on a large scale accumulated over a number of years. In many embodiments, the translation rules can include distilling the rules of translating from a user domain object model representation to a NoSQL schema-based state. Such translation rules can provide advantages over the conventional system, such as increased cost efficiency and securing privacy of the data. In a number of embodiments, generating a user domain object model can be implemented in three stages, such as an L1 state, an L2 state, and an L3 state, as discussed further below.

In various embodiments, the L1 state can represent a physical implementation of the CDOM. In some embodiments, the L1 state can represent a snapshot of the relevant data at a point in time of a user. In some embodiments, relevant data can include such data points as a user attribute, a user profile, a historical interaction, and/or other suitable information about a user. In several embodiments, the L2 state can include generating intermediate aggregations and/or transformations, which can be servable to cover a wide variety of use cases and/or access patterns of data clients and/or client teams. In some embodiments, the intermediate aggregations and/or transformations produced for L2 can be produced via one or more translations from the L1 state to the L2 state. In various embodiments, the intermediate aggregations and/or transformations can include deploying cost-efficient algorithms suited to large data. For example, a cost-efficient algorithm can include a sliding-window algorithm used for large data instead of the smaller states used in regular cases, as illustrated in the sketch below. In various embodiments, the L3 state can include enabling data clients (e.g., client teams, tenants) to define their own materialized view rules for the L3 state, which can include data from the L1 state or L2 state. For example, the L3 state can represent a “Bring your own model” concept for batching big data systems. In some embodiments, using the L3 state can help data clients host a particular L3 state that is part of a central system. In many embodiments, one or more advantages of hosting a particular L3 state for data clients can include saving state, a decrease in computing costs, and/or avoiding duplicate and/or redundant processing among data clients.
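As a rough illustration of the L1-to-L2 translation and the sliding-window style of cost-efficient aggregation mentioned above, the following Python sketch pre-computes a windowed aggregate from granular L1 rows. The schema (user_id, timestamp, amount) and the seven-day window are assumptions for illustration, not part of the disclosure:

```python
# Illustrative L1 -> L2 translation under an assumed schema: L1 rows are
# (user_id, ts, amount) interaction snapshots, and the L2 table pre-computes
# a 7-day sliding-window spend per user, i.e., one of the
# "commonly-asked-for" aggregations described above.
from collections import defaultdict, deque

WINDOW = 7 * 24 * 3600  # 7 days in seconds

def l1_to_l2_sliding_spend(l1_rows):
    """l1_rows: iterable of (user_id, ts, amount), sorted by ts."""
    window = defaultdict(deque)       # user_id -> deque of (ts, amount)
    running = defaultdict(float)      # user_id -> current window sum
    l2_rows = []
    for user_id, ts, amount in l1_rows:
        q = window[user_id]
        q.append((ts, amount))
        running[user_id] += amount
        # Evict events older than the window instead of re-summing the
        # window, which keeps the cost amortized O(1) per event and makes
        # the algorithm practical on large data.
        while q and q[0][0] < ts - WINDOW:
            _, old_amount = q.popleft()
            running[user_id] -= old_amount
        l2_rows.append((user_id, ts, running[user_id]))
    return l2_rows
```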
In many embodiments, using the L3 state also can be advantageous by enabling privacy compliance by avoiding sensitive user tokens, allowing attributes to permeate into data client spaces, and/or another suitable advantage. In some embodiments, method400further can include a block430of bundling multiple registered queries of a dataset using a scheduling technique. In many embodiments, the dataset can be homogenous in schema. In various embodiments, block430of bundling multiple registered queries of a dataset using a scheduling technique can include optimizing read query access, via an application of one or more algorithms, by recasting the problem as a scheduling problem. Conventionally, optimizing access and/or read query access via a clever state implementation includes studying data access patterns. In some embodiments, optimizing read query access can include determining (i) types of queries (e.g., registered queries) being fired and (ii) a proportion of queries relative to an amount of data to be accessed and/or scanned. In various embodiments, block430can provide a technological improvement over conventional query planning by determining whether or not to combine queries with requests for similar attributes and/or another suitable request metric. In some embodiments, block430can include first registering queries and then parsing each registered query to extract attributes from each request, creating a homogenous dataset of the registered queries. For example, a registered query 1 can request all user records and/or user identifiers associated with a particular age group, a registered query 2 can request all users associated with specific demographics, and a registered query 3 can request all users associated with (i) a specific preference and (ii) a particular geographical location, as well as a variety of other types of conditional filters. In various embodiments, block430can include extracting the relevant attributes responsive to each registered query and inputting the relevant attributes into a homogenous schema.

In various embodiments, method400additionally can include a block440of running a single table scan of the dataset to process the multiple registered queries of the dataset in parallel. In several embodiments, block440can include extracting attributes from each respective row of the dataset responsive to the multiple registered queries. In several embodiments, an advantage of block440can include running infrequent single full table scans versus running multiple full table scans per query. In some embodiments, a precondition to implementing block440can be that the data (e.g., each record) is homogenous in schema and an assumption that every query generally is run using a full table scan as part of a domain. In many embodiments, additional advantages of block440can include (i) decreasing the use of computer resources by running fewer scans and (ii) increasing the speed of running full table scans by running the full table scans in parallel. In various embodiments, block440can provide an improvement over the conventional method of running each of the queries separately, which can be time-consuming and inefficient due to redundant scans for similar attributes of each request. In many embodiments, block440can involve a few hundred registered queries used to build different user segments, where running a full table scan for each of the queries separately can involve accessing billions of rows of user records.
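A minimal Python sketch of blocks430and440follows: registered queries are parsed into predicates over a homogenous schema, and one full table scan evaluates every registered query against each row. The example queries and row fields are hypothetical:

```python
# Sketch of blocks 430/440: bundle registered queries and answer all of
# them with one full table scan. The predicate representation and row
# schema here are assumptions for illustration.
registered_queries = {
    "q1": lambda row: 18 <= row["age"] <= 25,                 # age group
    "q2": lambda row: row["segment"] == "new_parent",         # demographics
    "q3": lambda row: row["pref"] == "organic"
                      and row["region"] == "US-NE",           # pref + location
}

def single_scan(table, queries):
    """One pass over the dataset; every registered query sees every row."""
    outputs = {name: [] for name in queries}
    for row in table:                       # the only full table scan
        # Every registered query is evaluated against the row in the same
        # pass, so the scan cost is shared across all queries.
        for name, predicate in queries.items():
            if predicate(row):
                outputs[name].append(row["user_id"])
    return outputs

table = [
    {"user_id": 1, "age": 21, "segment": "student", "pref": "organic",
     "region": "US-NE"},
    {"user_id": 2, "age": 34, "segment": "new_parent", "pref": "none",
     "region": "US-W"},
]
results = single_scan(table, registered_queries)
# results == {"q1": [1], "q2": [2], "q3": [1]}
```

The design choice here mirrors the advantage described above: the table is read once, and the per-row work is shared across all registered queries instead of repeating a full scan per query.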
In a number of embodiments, method400also can include a block450of generating a respective output responsive to each of the multiple registered queries.

Turning ahead in the drawings,FIG.5illustrates a flow chart of block420of translating the event records into a NoSQL schema. Block420is merely exemplary and is not limited to the embodiments presented herein. Block420can be employed in many different embodiments and/or examples not specifically depicted or described herein. In some embodiments, the procedures, the processes, and/or the activities of block420can be performed in the order presented. In other embodiments, the procedures, the processes, and/or the activities of block420can be combined or skipped.

Referring toFIG.5, block420also can include a block510of determining access patterns of data clients. In some embodiments, the dataset layers can be based on the access patterns of the data clients. Block510can be similar or identical to the activities described below in connection with blocks630-650(FIG.6). In a number of embodiments, block420further can include a block520of generating, based on the access patterns, a first layer of the dataset layers including user profiles of the users and historical interactions of the users. The term first layer can be used interchangeably with an L1 layer and/or an L1 table layer. Block520can be similar or identical to the activities described below in connection with a block630(FIG.6). In some embodiments, the L1 table layer can include data presented in a granular format. In various embodiments, access patterns known from client teams and/or teams of data clients (e.g.,351(FIG.3)) can define each of the layers of the table schemas.

In several embodiments, block420additionally can include a block530of generating, based on the access patterns, a second layer of the dataset layers including intermediate states for a subset of queries of the access patterns that exceed a predetermined threshold. In some embodiments, the intermediate states can include one or more of aggregations or transformations responsive to the subset of the queries. The term second layer can be used interchangeably with an L2 layer and/or an L2 table layer. Block530can be similar or identical to the activities described below in connection with a block640(FIG.6). For example, the second L2 layer can be expressed in a table format where the table can include 100 bundled registered queries for each data client (e.g., data client660,665, or670(FIG.6), described below), as follows:

TABLE 1: L2 layer

User Number    Dataset Number    Query Number    Data Client Number
User 1         Dataset 1         Query 1         660
User 2         Dataset 1         Query 2         665
User 3         Dataset 1         Query 3         670

In various embodiments, the L2 table layer can include a layer of tables with previously completed “commonly-asked-for” aggregations and/or transformations made available for access by data clients and/or client teams. In some embodiments, the L2 state also can be referred to as a “known-pattern-query state.” In several embodiments, the L2 layer can include proactively updating additional patterns of data access by data clients and/or client teams in real time. In many embodiments, when one or more client teams repeatedly conduct aggregations and/or transformations or a specific kind of data access, such as accessing the same query from the L1 tables, a new table can be created proactively in L2, where the newly created table in L2 can save compute and/or can be pre-computed for access at a later date or time.
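The proactive creation of L2 tables for repeated access patterns described above can be pictured with the following hedged Python sketch; the counter, the threshold value, and the materialize callback are illustrative assumptions rather than disclosed components:

```python
# Sketch of threshold-based L2 promotion: count repeated access patterns
# against L1 and promote a pattern to a pre-computed L2 table once it
# crosses a threshold. Names and the threshold are assumed for illustration.
from collections import Counter

PROMOTION_THRESHOLD = 100   # stands in for the "predetermined threshold"

class L2Promoter:
    def __init__(self, materialize):
        self.pattern_counts = Counter()
        self.l2_tables = {}
        self.materialize = materialize   # callback that builds the L2 table

    def record_access(self, pattern_key):
        self.pattern_counts[pattern_key] += 1
        if (pattern_key not in self.l2_tables
                and self.pattern_counts[pattern_key] >= PROMOTION_THRESHOLD):
            # Pre-compute once; later queries with this pattern read the L2
            # table instead of re-aggregating from granular L1 rows.
            self.l2_tables[pattern_key] = self.materialize(pattern_key)

    def serve(self, pattern_key):
        self.record_access(pattern_key)
        return self.l2_tables.get(pattern_key)  # None -> fall back to L1
```

Re-running record_access as additional queries arrive gives a simple picture of the periodic updating of the second layer described in connection with block540below.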
For example, client teams (e.g., data clients) can access the L1 table and the L2 table to run a number of registered queries for data responsive to the queries. In this example, client teams can directly access the L1 tables if they believe that a use case would benefit from granular data or if the L2 tables do not support the transformations and/or aggregations requested. In this example, a client team can directly access the L1 tables and then perform the transformations on their own. In another example, client teams can access and/or use the known-pattern-query state of L2 when a use case includes one of the known patterns in the known-pattern-query state; thus, the client teams can run one or more queries using the L2 state to access already transformed and/or processed states. In another example, client teams that have specific transformations previously stored as data in the L3 tables can access L3 directly. In another example, client teams can submit queries to a query planning engine to optimize multiple queries. In this example, multiple client teams can run different queries on a particular L2 table also in parallel. In one such example, client teams submit their queries to the query planning engine with a selected dataset, a number of specific transformations, filters, and/or fields from the dataset, and an expected run time and completion time. In another such example, a query planning engine can collect multiple such registered queries and/or requests from client teams, then schedule each job by bundling all data from registered queries associated with a specific dataset. Continuing this example, executing the registered queries can include reading each row from the dataset once, where all the registered queries are executed on the row of data in parallel, which can advantageously minimize data access times.

In various embodiments, block420also can include a block540of periodically updating the second layer as additional queries of the access patterns exceed the predetermined threshold. In a number of embodiments, block420additionally can include a block550of generating, based on the access patterns, a third layer of the dataset layers including transformed data specific to one or more of the data clients. In several embodiments, the transformed data can include the event records from another one of the dataset layers. The term third layer can be used interchangeably with an L3 layer and/or an L3 table layer. Block550can be similar or identical to the activities described below in connection with a block650(FIG.6). In various embodiments, the L3 table layer can be available for any client team and/or data clients to store transformed data specific to a client team and/or data client. In some embodiments, the L3 table can allow the client teams to avail themselves of all the advantages that come with being part of the data object model; such advantages can include privacy, access control, and/or another suitable advantage, while simultaneously providing the client teams the freedom to store data in whichever format is selected.

Turning ahead in the drawings,FIG.6illustrates a flow chart for a method600of running a single table scan of a dataset homogenous in schema, according to an embodiment. Method600can illustrate defragmenting event records received from one or more producers, including translating the event records into a NoSQL schema. Method600also can illustrate bundling multiple registered queries of a dataset.
Method600can be similar to method400(FIG.4) and/or block420(FIG.5), and various activities of method600can be similar or identical to various activities of method400(FIG.4) and/or block420(FIG.5). Method600can be employed in many different embodiments and/or examples not specifically depicted or described herein. In some embodiments, the procedures, the processes, and/or the activities of method600can be performed in the order presented or in parallel. In other embodiments, the procedures, the processes, and/or the activities of method600can be performed in any suitable order. In still other embodiments, one or more of the procedures, the processes, and/or the activities of method600can be combined or skipped. In several embodiments, system300(FIG.3) and query planning system310(FIG.3) can be suitable to perform method600and/or one or more of the activities of method600. In these or other embodiments, one or more of the activities of method600can be implemented as one or more computing instructions configured to run at one or more processors and configured to be stored at one or more non-transitory computer-readable media. Such non-transitory computer-readable media can be part of a computer system such as query planning system310. The processor(s) can be similar or identical to the processor(s) described above with respect to computer system100(FIG.1).

In various embodiments, method600can include block625of defragmenting data, which can include defragmenting data received from multiple data producers, such as data producers605,610,615, and/or620. In a number of embodiments, after defragmenting the data, block625can include assigning identifiers to data mapped to a user or a group of users. Block625can be similar or identical to the activities described in connection with block410(FIG.4). In some embodiments, method600can proceed after block625to a block630of generating a first NoSQL table L1. In various embodiments, block630can include generating the L1 table of data defragmented from the data producers where the rows of each table can be associated with an identifier of a user. In some embodiments, generating the L1 table can include using table schema based on data client access patterns. In various embodiments, method600can proceed after block630to a block640and/or a block650. In some embodiments, block630can skip block640and go directly to block650, as described further below. In some embodiments, method600can include block640of generating a second NoSQL table L2. In several embodiments, block640can include generating an L2 table of known pattern-query state data. In various embodiments, the L2 tables can include data aggregations, transformations, and/or joins. In many embodiments, joins can be pre-calculated based on known query-access patterns and/or known-pattern query states. In several embodiments, method600can proceed after block640to a block650of generating a third NoSQL table L3. In various embodiments, block650can include generating an L3 table of data with client specific logic. In various embodiments, one or more data clients can be grouped together, or represented, as client teams, such as client teams660,665, and/or670. In several embodiments, the client teams can access one or more data layers (L1, L2, or L3 tables) by one or more data clients in parallel. For example, client team660can access the L1 table data while client team665accesses the L2 table data, and while client team670accesses the L3 table data, in parallel.
Returning to the drawings,FIG.3illustrates a block diagram of query planning system310. Query planning system310is merely exemplary and is not limited to the embodiments presented herein. Query planning system310can be employed in many different embodiments or examples not specifically depicted or described herein. In some embodiments, certain elements or systems of query planning system310can perform various procedures, processes, and/or acts. In other embodiments, the procedures, processes, and/or acts can be performed by other suitable elements or systems.

In a number of embodiments, a scheduling system311can at least partially perform block430(FIG.4) of bundling multiple registered queries of a dataset using a scheduling technique. In several embodiments, scanning system312can at least partially perform block440(FIG.4) of running a single table scan of the dataset to process the multiple registered queries of the dataset in parallel. In various embodiments, generating system313can at least partially perform block450(FIG.4) of generating a respective output responsive to each of the multiple registered queries, block630(FIG.6) of generating a first NoSQL table, block640(FIG.6) of generating a second NoSQL table, and/or block650(FIG.6) of generating a third NoSQL table. In some embodiments, communication system314can at least partially perform block450(FIG.4) of generating a respective output responsive to each of the multiple registered queries. In a number of embodiments, defragmenting system315can at least partially perform block410(FIG.4) of defragmenting event records received in event streams from one or more producers by assigning user identifiers of users to the event records in a customer domain object model, and/or block625(FIG.6) of defragmenting data, which can include defragmenting data received from multiple data producers and/or multiple sources. In several embodiments, translating system316can at least partially perform block420(FIG.4) of translating the event records into a non-relational (NoSQL) schema, block510(FIG.5) of determining access patterns of data clients, wherein the dataset layers are based on the access patterns of the data clients, block520(FIG.5) of generating, based on the access patterns, a first layer of the dataset layers comprising user profiles of the users and historical interactions of the users, block530(FIG.5) of generating, based on the access patterns, a second layer of the dataset layers comprising intermediate states for a subset of queries of the access patterns that exceed a predetermined threshold, block540(FIG.5) of periodically updating the second layer as additional queries of the access patterns exceed the predetermined threshold, block550(FIG.5) of generating, based on the access patterns, a third layer of the dataset layers comprising transformed data specific to one or more of the data clients, block630(FIG.6) of generating a first NoSQL table, block640(FIG.6) of generating a second NoSQL table, and/or block650(FIG.6) of generating a third NoSQL table.

In many embodiments, the techniques described herein can provide several technological improvements. In some embodiments, the techniques described herein can provide for running a single table scan of a dataset to process multiple registered queries using a NoSQL schema for large data systems in parallel.
In a number of embodiments, the techniques described herein can advantageously enable efficient utilization of a query planning system, such as310, which can beneficially result in a reduction in processor use and memory cache usage. In many embodiments, the techniques described herein can be used continuously at a scale that cannot be handled using manual techniques. For example, each full table scan can be run on a dataset that can exceed 10 billion rows of data. In a number of embodiments, the techniques described herein can solve a technical problem that arises only within the realm of computer networks, as running a single table scan based on bundled registered queries in parallel does not exist outside the realm of computer networks. Moreover, the techniques described herein can solve a technical problem that cannot be solved outside the context of computer networks. Specifically, the techniques described herein cannot be used outside the context of computer networks.

Various embodiments can include a system including one or more processors and one or more non-transitory computer-readable media storing computing instructions configured to run on the one or more processors and perform certain acts. The acts can include bundling multiple registered queries of a dataset using a scheduling technique. The dataset can be homogenous in schema. The acts also can include running a single table scan of the dataset to process the multiple registered queries of the dataset in parallel. The acts further can include generating a respective output responsive to each of the multiple registered queries.

A number of embodiments can include a method being implemented via execution of computing instructions configured to run at one or more processors and stored at one or more non-transitory computer-readable media. The method can include bundling multiple registered queries of a dataset using a scheduling technique. The dataset can be homogenous in schema. The method also can include running a single table scan of the dataset to process the multiple registered queries of the dataset in parallel. The method additionally can include generating a respective output responsive to each of the multiple registered queries.

Although optimizing scans using query planning on batch data has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes may be made without departing from the spirit or scope of the disclosure. Accordingly, the disclosure of embodiments is intended to be illustrative of the scope of the disclosure and is not intended to be limiting. It is intended that the scope of the disclosure shall be limited only to the extent required by the appended claims. For example, to one of ordinary skill in the art, it will be readily apparent that any element ofFIGS.1-6may be modified, and that the foregoing discussion of certain of these embodiments does not necessarily represent a complete description of all possible embodiments. For example, one or more of the procedures, processes, or activities ofFIGS.3-6may include different procedures, processes, and/or activities and be performed by many different modules, in many different orders, and/or one or more of the procedures, processes, or activities ofFIGS.3-6may include one or more of the procedures, processes, or activities of another different one ofFIGS.3-6.
As another example, the systems within query planning system310, such as scheduling system311, generating system313, communication system314, defragmenting system315, and/or translating system316(seeFIG.3), can be interchanged or otherwise modified; additional details regarding query planning system310, scheduling system311, generating system313, communication system314, defragmenting system315, and/or translating system316are described herein. Replacement of one or more claimed elements constitutes reconstruction and not repair. Additionally, benefits, other advantages, and solutions to problems have been described with regard to specific embodiments. The benefits, advantages, solutions to problems, and any element or elements that may cause any benefit, advantage, or solution to occur or become more pronounced, however, are not to be construed as critical, required, or essential features or elements of any or all of the claims, unless such benefits, advantages, solutions, or elements are stated in such claim. Moreover, embodiments and limitations disclosed herein are not dedicated to the public under the doctrine of dedication if the embodiments and/or limitations: (1) are not expressly claimed in the claims; and (2) are or are potentially equivalents of express elements and/or limitations in the claims under the doctrine of equivalents.
11860868
DETAILED DESCRIPTION

DBaaS (Database-as-a-Service) involves client devices that store and manage data on a server, where the server provides the service. Typically, the server cannot be trusted, and so a trusted proxy is used to encrypt and/or decrypt the data between the client devices and the server. Existing encryption techniques for providing data confidentiality in databases with dynamic content include property-preserving encryption schemes. Such schemes enable range queries with logarithmic amortized overhead. Order-Preserving Encryption (OPE) preserves order between cipher texts, but otherwise reveals nothing about the plaintexts. OPE is required to be stateful and also requires mutations (i.e., re-encryptions of previously encrypted data) to achieve the claimed security. Mutations in OPE are computationally expensive because, for example, they can cause periodic re-encryption of a block of user data that already exists in a database, increase the number of reads and writes on the system, and modify user data (as opposed to just the OPE state). Mutations can also conflict with concurrent ongoing insertions (such as by blocking reads and writes on the user data being re-encrypted and increasing the number of failed transactions) and hamper throughput and latency of the system. By way of example, a typical system using IND-OCPA (Indistinguishability under Ordered Chosen Plaintext Attack) compliant OPE generally has over three mutations per encryption. Other systems for IND-OCPA compliant OPE re-encrypt the entire database periodically to redistribute the cipher text. This translates to re-encrypting an entire column in an RDBMS (Relational Database Management System) and re-encrypting the entire database for a Key-Value (KV) store.

As described herein, an exemplary embodiment includes building a secure DBaaS that allows support for performing range queries on an encrypted database for a dynamically changing dataset and support for creating indexes over encrypted data to improve the performance of runtime processing. The secure DBaaS, in some embodiments, provides IND-OCPA security guarantees and enables mutation-less query processing with practical storage overheads. Some embodiments can include generating variable bit length cipher text from a fixed-length OPE cryptosystem. The parameters for the OPE system can be dynamically changed on each call to an encrypt function, based on the implicit statistics. At least one embodiment includes performing hyper-parameter tuning based on local in-memory statistics. Multiple independent OPE instances may be used simultaneously for encrypting values belonging to a single user database column, where the number of OPE instances is, theoretically in the worst case, linear in the number of unique elements encrypted.

FIG.1is a diagram illustrating a system architecture in accordance with exemplary embodiments. By way of illustration,FIG.1depicts a client101which is configured to execute one or more applications102. The one or more applications102enable the client101to securely store and manage data in the one or more databases104based at least in part on the encryption manager system105. For example, the one or more databases104and the encryption manager system105, in at least some embodiments, are configured to use a mutation-less OPE scheme that enables variable-length cipher texts, as discussed in more detail elsewhere herein.
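To make the mutation problem described above concrete, the following toy Python sketch (not the disclosed scheme) assigns ciphertexts by the midpoint rule over a fixed-length ciphertext space; a run of adjacent insertions halves the remaining gap each time, and once the gap is exhausted the only remedy for a classical OPE is a mutation, i.e., re-encrypting existing ciphertexts:

```python
# Toy illustration of why fixed-length OPE needs mutations. All details
# (8-bit ciphertext space, midpoint rule) are assumptions for illustration.
class ToyOPE:
    def __init__(self, cipher_bits=8):
        self.max_cipher = (1 << cipher_bits) - 1
        self.state = {}                      # plaintext -> ciphertext

    def encrypt(self, x):
        if x in self.state:
            return self.state[x]
        # Find the ciphertexts of the nearest smaller and larger plaintexts.
        lo = max((c for p, c in self.state.items() if p < x), default=0)
        hi = min((c for p, c in self.state.items() if p > x),
                 default=self.max_cipher)
        if hi - lo < 2:
            raise RuntimeError("gap exhausted: a mutation (re-encryption "
                               "of existing ciphertexts) would be required")
        ciphertext = (lo + hi) // 2          # the usual midpoint rule
        self.state[x] = ciphertext
        return ciphertext

ope = ToyOPE()
for v in range(1, 9):   # sequential inserts halve the remaining gap each time
    ope.encrypt(v)      # succeeds up to v=8; v=9 would exhaust the gap
```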
FIG.2is a block-diagram of an example architecture for an encryption manager system200, which may correspond to the encryption manager system105, for example. The encryption manager system200includes a multi-OPE manager202, multiple variable-length OPE modules204-1. . .204-N (referred to collectively herein as variable-length OPE modules204), a hyperparameter tuner210, a fixed-length OPE module214, a signaling manager216, a state manager218, a mapping manager220, a metadata manager222, a modeling manager224, and a rebalance service226. The encryption manager system200, in some embodiments, can be associated with a fully managed, distributed JSON document database architecture (e.g., a Cloudant architecture).

Generally, the signaling manager216, the state manager218, the mapping manager220, the metadata manager222, the modeling manager224, and the rebalance service226may operate in an expected manner. For example, the state manager218manages the state of the OPE scheme and stores it in a database (e.g., the one or more databases104), and the mapping manager220is responsible for determining how data is mapped to plaintext keys of the encryption scheme. As an example, if the data corresponding to two different documents have the same value for an attribute, then the mapping manager220determines whether the two documents will be assigned the same plaintext with respect to the encryption scheme or be assigned different plaintexts. The metadata manager222manages the metadata associated with encryption. For example, the metadata may include encryption keys, an encryption algorithm, and the location of the state. If multiple secondary indexes have been defined for given data, then the metadata manager222can enable these secondary indexes to have independent encryption keys and encryption algorithms, for example. For an RDBMS, the metadata manager222can enable different columns to have different encryption keys and encryption algorithms. The modeling manager224manages the serialization and deserialization of the state into the databases where the cipher texts are stored. The modeling manager224also tracks the parts of the state being accessed, their respective mappings, and the number of conflicts. Based on these inputs, the modeling manager224may dynamically decide the mapping for new parts of the state and dynamically remap existing parts of the state. It is noted that this does not affect other data (e.g., user data). Accordingly, the mapping manager220is responsible for converting the user data to OPE plaintext. To do this, the mapping manager220tracks various parameters including, for example, the conversion used for various parts of the data, the size of the state, and data statistics. When a new user data key needs to be converted, the mapping manager220checks these parameters and decides on the conversion.

The fixed-length OPE module214assumes the OPE is a fixed-length construction (e.g., a fixed plaintext size and a fixed cipher text size). Accordingly, all cipher texts have the same bit length, and this is required for OPE to correctly work. The cipher texts are eventually stored in at least one database, which supports variable-length data types. As examples, in FoundationDB, a value is a byte array of up to 10 KB, and Db2 provides a varbinary data type, which is a variable-length binary string data type. Thus, in some embodiments, some form of ordering amongst variable-length byte arrays can be present in each database.
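One concrete example of such an ordering rule, written as a hedged Python sketch: lengths are equalized by prepending zero bytes, and the padded values are compared byte-wise. Whether a given database applies exactly this rule is an assumption; as described below, the adapters must match whatever comparison rule the target store actually uses:

```python
# Example comparison rule for variable-length ciphertexts: treat shorter
# values as zero-extended on the left, then compare. This particular rule
# is an illustrative assumption, not necessarily any specific database's.
def compare_var_length(a: bytes, b: bytes) -> int:
    """Return -1, 0, or 1 under the zero-prepend comparison rule."""
    width = max(len(a), len(b))
    a_pad = a.rjust(width, b"\x00")   # prepend zeros to the shorter value
    b_pad = b.rjust(width, b"\x00")
    return (a_pad > b_pad) - (a_pad < b_pad)

assert compare_var_length(b"\x7f", b"\x01\x00") == -1   # 0x7f < 0x0100
assert compare_var_length(b"\x02", b"\x00\x02") == 0    # equal after padding
```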
A given one of the variable-length OPE modules204in the encryption manager system200defines an order preserving encryption scheme whose cipher texts are of variable bit length. These variable bit length cipher texts have the property that the cipher texts are order preserving under a specific rule for comparing data of different bit lengths, which can vary from instantiation to instantiation. The comparison rule may be the same as the rule followed by the database system which will eventually be used to store the cipher texts. The variable-length OPE modules204may include respective variable length adapters (VLAs)206-1. . .206-N (collectively referred to herein as VLAs206) and respective norm calculators208-1. . .208-N (collectively referred to as norm calculators208). The functionality of the VLAs206and the norm calculators208is discussed in more detail below. The VLAs206bridge the gap between fixed length OPE encryption schemes and the databases that are capable of storing variable-length data. This is accomplished by calling the fixed length OPE encryption scheme with appropriate parameters, which are computed independently for each call to the encrypt function of the fixed length OPE encryption scheme. The logic for using the fixed length OPE encryption scheme to derive variable-length cipher texts which are order preserving is encapsulated in the VLAs206. The fixed length OPE encryption scheme is unaware of this transformation, which allows, for example, existing fixed length OPE encryption schemes to be used by the VLAs206. The parameters which are used by a given one of the VLAs206during the call to the fixed length OPE encryption scheme are computed in such a manner as to preserve the IND-OCPA security guarantee of the encryption manager system200. Each of the VLAs206may use only the order of the plain text for its computations, thus ensuring that the encryption manager system200remains IND-OCPA compliant. Cipher texts in a fixed length OPE scheme generally have long bit lengths in order to reduce mutations. The optimal bit length of the cipher text which will reduce or eliminate mutations depends on the data that will be encrypted and the order in which the data will be encrypted. Since this information is not available initially, a fixed length OPE scheme is instantiated with the cipher text having a length that is exponential in the length of the plain text in order to guarantee no mutation. However, this is infeasible in practical scenarios. Another option is to set the bit length of the cipher text to be equal to the maximum size of data supported by the underlying database that will eventually be used to store the cipher text. This option minimizes the number of mutations for a fixed length OPE scheme but results in impractical overheads (e.g., an overhead of 1280 times is needed for encrypting and storing a four-byte integer in FoundationDB). Accordingly, in some embodiments, a given one of the VLAs206starts with a cipher text having a small size and then increases the cipher text size (of a subset of data that is encrypted) as needed to reduce a number of mutations in the encryption manager system200. This has the advantage of ensuring that the storage overhead due to variable-length OPE is kept to a minimum, while also reducing the number of mutations encountered by the encryption manager system200. Consider a situation where the encryption manager system200needs to encrypt some data, x, which has not been previously encrypted.
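The role of the comparison rule can be made concrete with a short sketch. Under plain lexicographic byte order (one plausible rule; the actual rule is whatever the target database uses), a shorter byte string can compare above a longer one that is numerically larger, which is why a length normalization such as the zero-prepending described below is needed:

```python
# Sketch of one possible comparison rule for variable-length cipher texts.
# Under plain lexicographic byte order, b"\x02" > b"\x01\xff" even though
# 0x02 < 0x01ff as an integer, so lengths must be normalized before comparing.

def pad_to(value: bytes, length: int) -> bytes:
    """Prepend zero bytes so `value` compares correctly at `length` bytes."""
    return b"\x00" * (length - len(value)) + value

a, b = b"\x02", b"\x01\xff"
print(a > b)                        # True  -- lexicographic order disagrees
n = max(len(a), len(b))
print(pad_to(a, n) < pad_to(b, n))  # True  -- numeric order restored
```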
In some embodiments, a given one of the VLAs206can retrieve the maximum OPE assigned to plaintext that is less than x (denoted x_prev_ope) and the minimum OPE assigned to plaintext that is greater than x (denoted x_next_ope). The x_prev_ope and x_next_ope can be retrieved from the state maintained by the OPE for cases where the given VLA206is OPE specific, or by using a fixed-length OPE and querying a user database, where the given VLA206is OPE agnostic. The OPE length for data x is computed based on a function of x_prev_ope and x_next_ope, which may be expressed as x_ope_length=F(x_prev_ope, x_next_ope). The function, in some embodiments, determines if the bit length of x_prev_ope is equal to the bit length of x_next_ope. If they are equal, then the bit length of x_prev_ope is returned if space is available for a new cipher text. If not, then the bit length of x (the return value) is set to the bit length of x_prev_ope and is incremented by a number of bytes, where the number of bytes is specified (e.g., via a configuration parameter). If the bit length of x_prev_ope is not equal to the bit length of x_next_ope, then x_prev_ope and x_next_ope are set to the same bit length. This is done by transforming whichever of x_prev_ope and x_next_ope has the lower cipher text bit length. This transformation is done in accordance with the comparison rule of the underlying database which will be used for storing the cipher text. In one example embodiment, the lower bit length value is prepended with an appropriate number of zeroes to make its length the same as the length of the other value. The function is then recursively performed to compute the OPE length of x. The process then includes instantiating and calling an instance of a fixed length OPE with x_ope_length as the cipher text length to obtain x_ope. The process then returns x_ope to the caller. For OPE, a function is used for selecting the cipher text from an available range. For example, x_ope is selected given x_prev_ope and x_next_ope. Such a function affects when and how many mutations will be encountered. OPE implementations have a defined way of choosing cipher text, which may be calculated as a midpoint of x_prev_ope and x_next_ope, for example. One or more embodiments described herein modify, for example, existing OPE implementations so that the OPE chooses cipher text based on a normalization (“norm”) parameter. The value of the norm parameter is referred to herein as norm. A given norm, in some embodiments, is calculated by one of the norm calculators208. A norm is specified while encrypting a value. In embodiments that require selecting a midpoint of x_prev_ope and x_next_ope as x_ope, a norm may be equal to 0.5. The optimal norm for a given encryption is data dependent, including the data that will be encrypted after this encryption. A given one of the norm calculators208can calculate the norm for each encryption independently. The optimal value of the norm for each encryption depends not only on the data that has been encrypted so far but also on the data that will be encrypted in the future. Since the future encryptions are not known in advance, the norm calculator208may use implicit statistics to compute the norm value for each encryption. The implicit statistics correspond to the OPE assigned to the neighbors of the current plain text being encrypted (data x).
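A minimal sketch of the length-selection function F(x_prev_ope, x_next_ope) described above, modeling cipher texts as (value, bit length) pairs. GROW_BITS stands in for the configured increment, and the zero-prepend step is modeled by comparing numeric values at a common length, since left-padding a byte string with zeroes does not change its numeric value (all names are hypothetical):

```python
# Hedged sketch of F(x_prev_ope, x_next_ope); not the patented algorithm.
GROW_BITS = 8                        # stand-in for the configured increment

def ope_length(prev, nxt):
    """Return the cipher-text bit length to use for the new plaintext x.
    prev and nxt are (numeric value, bit length) pairs."""
    (pv, pl), (nv, nl) = prev, nxt
    if pl == nl:
        if nv - pv > 1:              # a free code exists between neighbours
            return pl
        return pl + GROW_BITS        # grow the cipher text instead of mutating
    # Unequal lengths: left-pad the shorter value with zeroes (numerically a
    # no-op under the assumed comparison rule) and recurse at the common length.
    common = max(pl, nl)
    return ope_length((pv, common), (nv, common))

print(ope_length((5, 8), (6, 8)))     # adjacent codes -> grow to 16 bits
print(ope_length((5, 8), (300, 16)))  # mixed lengths -> compare at 16 bits
```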
As an example, the implicit statistics may comprise a reverse sorted list of cipher text assigned to plaintext less than x, which can be called “history left,” and a sorted list of cipher text assigned to plaintext greater than x, which can be called “history right.” The implicit statistics can be retrieved from the state maintained by the OPE for cases where the variable-length OPE modules204are OPE specific, or by using a fixed-length OPE and querying a user database, where the variable-length OPE modules204are OPE agnostic. For example, the norm computation can be different for each encryption operation as the implicit statistics are different for each encryption. The norm computation also adapts to different workloads as the implicit statistics depend on the workload, and the same data inserted in a different order will produce a different norm trail. A norm trail refers to the list of norm values that were used as parameters for successive calls to the encrypt function. If the same set of values are inserted in two different orders, then it is possible that the value of norm used for encrypting a value in insert order 1 is different from the norm value used for encrypting the same value in insert order 2. In some examples, the norm computation is performed in such a manner as to maintain the security guarantee (e.g., IND-OCPA) of the system. In such embodiments, the norm computation uses only the order information from the implicit statistics, as described in more detail below. An example of a process for varying OPE parameters in accordance with one or more embodiments is now described. Assume that the encryption manager system200needs to encrypt some data, x, and computes the OPE length for x (x_ope_length) using a given one of the VLAs206, as described in more detail elsewhere herein. Next, the norm is computed by a corresponding one of the norm calculators208. More specifically, the norm calculator208can obtain, as inputs, a history size indicating how much historical data is to be considered and a history weight to be assigned to the historical data. The norm calculator208then retrieves the history left up to the history size and the history right up to the history size. The norm is computed as a function of the history left, history right, and history weight. A fixed-length OPE module (e.g., corresponding to fixed-length OPE module214) is instantiated and called with the x_ope_length and calculated norm to obtain x_ope. The history left and history right can be obtained either by querying the user database, where the variable-length OPE modules204are OPE agnostic, or by accessing the state maintained by the OPE for cases where the variable-length OPE modules204are OPE specific. A parameter length is computed as the maximum length of OPE cipher text in the history right and history left. Then two parameters, leftDensity (which represents the density of OPE cipher text in the history left) and rightDensity (which represents the density of OPE cipher text in the history right) are computed. As an example, the density parameters may be calculated using the following function: length−log((high ope−low ope)/#values), which normalizes the density when variable-length encoding is used. High ope and low ope are the maximum and minimum OPE cipher text in the history whose density is being computed, and #values is the number of elements in the corresponding history.
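A sketch of the density computation just described, under the assumption that the logarithm is base 2 since lengths are measured in bits (the text does not state the base):

```python
import math

def density(history, length):
    """Density of the OPE cipher texts in one history, per the function
    above: length - log2((high_ope - low_ope) / #values). Log base 2 is an
    assumption, chosen because lengths are measured in bits."""
    high, low, n = max(history), min(history), len(history)
    if high == low:                  # degenerate single-code history
        return float(length)
    return length - math.log2((high - low) / n)

left = [100, 99, 98, 97]    # reverse-sorted history left: tightly packed
right = [200, 400, 800]     # history right: spread out
print(density(left, 10))    # ~10.42: higher value -> denser region
print(density(right, 10))   # ~2.36
```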
The process may further include determining a difference in the density parameters (referred to as densityDiff) using the following equation: densityDiff=(leftDensity−rightDensity)/maxDensityDiff, which splits the region between low ope and high ope into (leftDensity−rightDensity) slots. maxDensityDiff in this equation refers to the maximum possible difference between the densities of the history right and history left in the current instantiation of OPE. The norm can be calculated as follows: norm=densityDiff*((maxCtLength−length)/maxCtLength)^historyWeight, where the norm determines how far the cipher text for the current encryption will be from the midpoint. The encoding or OPE cipher text can be set to (low ope+N/2)+(N/2*norm), where N=high ope−low ope. In this example, (low ope+N/2) corresponds to the midpoint, and (N/2*norm) corresponds to the distance from the midpoint based on the norm. The hyperparameter tuner210includes an in-memory statistics module212that maintains statistics about data that has been encrypted up to the present time. The statistics are used for tuning the parameters used by the variable-length OPE modules204, including, for example, history size and/or history weight. The statistics can be used to infer the type of workload (e.g., a sequential workload or a uniform workload). Also, in some embodiments, the statistics are used to create a time-series model that can predict future inserts, which are then used to tune the parameters. By way of example, if sequential inserts are present, then the historical data should have more weight; if uniform inserts are present, then historical data should be given less weight; if bit lengths of the cipher text assigned are increasing, then more weight should be given to the historical data; and if there is high variance in the density of the history left and history right (e.g., based on one or more thresholds), then the history size should be increased. In some implementations, multiple independent encryption manager systems200can be hosted on different servers (e.g., for load balancing), and each such system can maintain its own in-memory statistics. The statistics from the servers can be aggregated periodically to improve the parameter tuning. By way of example, each server can maintain minimum value and maximum value pairs (denoted as [min_value, max_value]) for the previous k inserts, and m additional [min_value, max_value] pairs for the previous: k−1 to 2k inserts, 2k−1 to 3k inserts, . . . , (m−1)k−1 to mk inserts. The minimum and maximum values across the m entries can be used to change the history weight. The history weight can be changed according to one or more rules or criteria. For example, if the variance of the minimum and maximum values is within a particular threshold, then the history weight can be kept substantially constant, and the history weight can be incremented if the variance is increasing or decreasing. The combination of variable-length cipher texts, variable OPE parameters, and hyperparameter tuning, in accordance with embodiments described herein, can substantially reduce mutations; however, it is still possible to have one or more mutations in a worst-case scenario. It is noted that if OPE is IND-OCPA compliant, then mutations cannot be eliminated from the OPE itself. Mutations are further eliminated, in some embodiments, using the multi-OPE manager202. Generally, the multi-OPE manager202uses multiple independent OPE instances (e.g., corresponding to variable-length OPE modules204).
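Reading the reconstructed formulas together, a hedged sketch of the cipher text selection step might look as follows. The exponent on the length ratio reflects one plausible reading of how historyWeight enters the norm, and all configuration values here are assumptions:

```python
# Hedged sketch of norm-based cipher-text selection; parameter values are
# hypothetical, and the historyWeight exponent is an assumed reading.

def choose_cipher_text(low_ope, high_ope, left_density, right_density,
                       length, max_ct_length, history_weight,
                       max_density_diff):
    density_diff = (left_density - right_density) / max_density_diff
    norm = density_diff * ((max_ct_length - length) / max_ct_length) ** history_weight
    n = high_ope - low_ope
    # Midpoint plus a norm-scaled offset; norm = 0 reproduces plain midpoint.
    return int((low_ope + n / 2) + (n / 2) * norm)

# A denser history left pushes the code above the midpoint, leaving room
# for further small values (the bias reverses for a denser history right):
print(choose_cipher_text(0, 1024, left_density=10.4, right_density=2.4,
                         length=16, max_ct_length=64, history_weight=1.0,
                         max_density_diff=16.0))   # 704, above the midpoint 512
```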
The cipher text output from each instance is order preserving, but in some embodiments, there is no guarantee that cipher text will be order preserving across the OPE instances. The multi-OPE manager202identifies the OPE instance corresponding to each cipher text using the additional data or metadata that is stored in the one or more databases104. As examples, separate namespaces can be maintained in FoundationDB for each OPE instance, and an extra column can be maintained to specify the identifier of the OPE instance in an RDBMS (such as Db2). The data that is returned to the caller of the encryption function can include the cipher text (x_ope) and the identifier (x_ope_id). In at least some embodiments, a range query operation now becomes a union of ranges along with their associated ope_ids. For example, consider a user query specifying that x is between 10 and 50. The user query is then transformed, where the actual transformation depends on the underlying data store. For example, the transformed user query can be: (x_ope between 1010 and 1500 and ope_id=1) OR (x_ope between 100 and 150 and ope_id=2). The variable-length cipher text and varying OPE parameters may be implemented for each OPE instance, and the hyperparameters can be tuned across the OPE instances (as is the case in theFIG.2embodiment). Also, in some embodiments, one or more of the OPE instances can be merged in order to improve performance. The merging of two OPE instances is generally equivalent to merging two sorted lists (since each OPE instance can return its cipher texts in sorted order). The merging of OPE instances can lead to mutation in user data, and so can optionally be performed during a regular maintenance window. FIG.3shows an example of a process for managing multiple OPE instances in accordance with exemplary embodiments. The process depicted inFIG.3may be performed by the multi-OPE manager202, for example. Step302includes determining encryption is needed for some data, x, and step304includes a test to determine whether x is present in any OPE instance. If yes, then step306includes returning x_ope and x_ope_id. If no, then step308includes a test to determine if x can be added to an existing OPE instance without mutation. If no, then step310includes creating a new OPE instance, inserting x into the new OPE instance, and returning x_ope and x_ope_id. If yes, then step312includes inserting x into the existing OPE instance and returning x_ope and x_ope_id. An example of a process performed by the multi-OPE manager202for processing a range query can also include converting the low value of the range (q_low) and converting the high value of the range of the query (q_high) by iterating over each of the variable-length OPE modules204to obtain the transformation of q_low and q_high. The transformation of q_low and q_high along with an identifier of the associated OPE instance is added as a conjunction to the transformed range query. FIG.4is a flow diagram illustrating techniques in accordance with exemplary embodiments. Step402includes obtaining, by a database service, data associated with one or more client devices to be stored in at least one encrypted database.
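A runnable toy sketch of theFIG.3flow and the range-query rewrite follows. The instance internals reuse the earlier midpoint toy and are not the patented construction, and the range rewrite is simplified to bound codes by the stored in-range plaintexts:

```python
import bisect

class MutationRequired(Exception):
    pass

class ToyOpeInstance:
    """Toy fixed-range OPE used only to exercise the multi-OPE flow."""
    MAX = 255

    def __init__(self):
        self.codes = {}                      # plaintext -> cipher code

    def has(self, x):
        return x in self.codes

    def encrypt(self, x):
        known = sorted(self.codes)
        i = bisect.bisect_left(known, x)
        low = self.codes[known[i - 1]] if i > 0 else 0
        high = self.codes[known[i]] if i < len(known) else self.MAX
        if high - low < 2:                   # would require a mutation
            raise MutationRequired()
        self.codes[x] = (low + high) // 2
        return self.codes[x]

class MultiOpeManager:
    def __init__(self):
        self.instances = [ToyOpeInstance()]

    def encrypt(self, x):                             # FIG. 3 flow
        for ope_id, inst in enumerate(self.instances):
            if inst.has(x):                           # step 304
                return inst.codes[x], ope_id          # step 306
        for ope_id, inst in enumerate(self.instances):
            try:
                return inst.encrypt(x), ope_id        # step 312
            except MutationRequired:
                continue                              # step 308: try next one
        self.instances.append(ToyOpeInstance())       # step 310
        return self.instances[-1].encrypt(x), len(self.instances) - 1

    def rewrite_range(self, q_low, q_high):
        """Rewrite `x BETWEEN q_low AND q_high` as a union of per-instance
        (ope_id, low_code, high_code) conjuncts (simplified)."""
        parts = []
        for ope_id, inst in enumerate(self.instances):
            codes = [c for p, c in inst.codes.items() if q_low <= p <= q_high]
            if codes:
                parts.append((ope_id, min(codes), max(codes)))
        return parts

mgr = MultiOpeManager()
for v in range(1, 15):
    mgr.encrypt(v)
print(len(mgr.instances))        # > 1 once the first instance fills up
print(mgr.rewrite_range(3, 12))  # union of per-instance code ranges
```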
Step404includes encrypting, without mutation and in accordance with one or more security requirements, at least a portion of the data using an order preserving encryption scheme, wherein the encrypting comprises (i) computing a set of encryption parameters for the portion of the data and (ii) applying a process that converts a fixed-length cipher text corresponding to the portion of the data to a variable-length cipher text. Step406includes storing the encrypted data in the at least one encrypted database, wherein the database service enables one or more indexes to be built over the encrypted data to improve performance of query processing. The process may include a step of maintaining, by the database service, statistics corresponding to previously encrypted data over one or more time periods. The maintaining may include maintaining statistics for predicting data that will be inserted in the at least one encrypted database over one or more future time periods, wherein the predicted data is based at least in part on a time-series model that is generated using the statistics corresponding to previously encrypted data. Computing the set of encryption parameters for the portion of the data may include computing a normalization parameter, wherein the normalization parameter determines a cipher text length to be used for the portion of the data based at least in part on a first list of cipher text lengths assigned to data larger than the portion of the data and a second list of cipher text lengths assigned to data smaller than the portion of the data. The normalization parameter may be computed based at least in part on a first hyperparameter indicating an amount of previously encrypted data to be considered and a second hyperparameter indicating an amount of weight given to the previously encrypted data. The process may include the following steps: inferring at least one of: a type of workload determined based on the statistics; an amount of variance of maximum cipher text lengths between the one or more time periods; and an amount of variance of minimum cipher text lengths between the one or more time periods; and dynamically adjusting, by the database service, at least one of the first and second hyperparameters based on the inferring. The process may include a step of executing, by the database service, multiple independent order preserving encryption instances that simultaneously encrypt data corresponding to a single column of the at least one encrypted database. The process may include a step of triggering a new order preserving encryption instance in response to determining that none of the multiple independent order preserving encryption instances can encrypt the portion of the obtained data without mutation. A maximum number of the multiple independent order preserving encryption instances may be less than a number of unique elements encrypted. The process may include the following steps: processing a range query of a user by transforming the range query independently for each of the multiple independent order preserving encryption instances; and adding transformations resulting from the multiple independent order preserving encryption instances to the transformed range query. The one or more security requirements may correspond to indistinguishability under ordered chosen plaintext attack requirements. 
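One plausible reading of the hyperparameter adjustment described above, sketched with the per-window [min_value, max_value] statistics; the thresholds and increments here are hypothetical:

```python
# Hedged sketch of one possible history-weight tuning rule.

def tune_history_weight(window_stats, weight, var_threshold=0.1, step=0.1):
    """window_stats: list of (min_value, max_value) pairs for the last m
    windows of k inserts each, oldest first. Keeps the weight when the
    per-window spans are stable, increments it otherwise."""
    spans = [hi - lo for lo, hi in window_stats]
    mean = sum(spans) / len(spans)
    variance = sum((s - mean) ** 2 for s in spans) / len(spans)
    if variance <= var_threshold * mean * mean:   # stable workload
        return weight                             # keep weight constant
    return weight + step                          # trending: weight history more

print(tune_history_weight([(0, 100), (0, 101), (1, 99)], weight=1.0))   # 1.0
print(tune_history_weight([(0, 10), (10, 200), (200, 900)], weight=1.0))  # 1.1
```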
The techniques depicted inFIG.4can also, as described herein, include providing a system, wherein the system includes distinct software modules, each of the distinct software modules being embodied on a tangible computer-readable recordable storage medium. All of the modules (or any subset thereof) can be on the same medium, or each can be on a different medium, for example. The modules can include any or all of the components shown in the figures and/or described herein. In an embodiment of the present disclosure, the modules can run, for example, on a hardware processor. The method steps can then be carried out using the distinct software modules of the system, as described above, executing on a hardware processor. Further, a computer program product can include a tangible computer-readable recordable storage medium with code adapted to be executed to carry out at least one method step described herein, including the provision of the system with the distinct software modules. Additionally, the techniques depicted inFIG.4can be implemented via a computer program product that can include computer useable program code that is stored in a computer readable storage medium in a data processing system, and wherein the computer useable program code was downloaded over a network from a remote data processing system. Also, in an embodiment of the present disclosure, the computer program product can include computer useable program code that is stored in a computer readable storage medium in a server data processing system, and wherein the computer useable program code is downloaded over a network to a remote data processing system for use in a computer readable storage medium with the remote system. An exemplary embodiment or elements thereof can be implemented in the form of an apparatus including a memory and at least one processor that is coupled to the memory and configured to perform exemplary method steps. Additionally, an embodiment of the present disclosure can make use of software running on a computer or workstation. With reference toFIG.5, such an implementation might employ, for example, a processor502, a memory504, and an input/output interface formed, for example, by a display506and a keyboard508. The term “processor” as used herein is intended to include any processing device, such as, for example, one that includes a CPU (central processing unit) and/or other forms of processing circuitry. Further, the term “processor” may refer to more than one individual processor. The term “memory” is intended to include memory associated with a processor or CPU, such as, for example, RAM (random access memory), ROM (read only memory), a fixed memory device (for example, hard drive), a removable memory device (for example, diskette), a flash memory and the like. In addition, the phrase “input/output interface” as used herein, is intended to include, for example, a mechanism for inputting data to the processing unit (for example, mouse), and a mechanism for providing results associated with the processing unit (for example, printer). The processor502, memory504, and input/output interface such as display506and keyboard508can be interconnected, for example, via bus510as part of a data processing unit512. Suitable interconnections, for example via bus510, can also be provided to a network interface514, such as a network card, which can be provided to interface with a computer network, and to a media interface516, such as a diskette or CD-ROM drive, which can be provided to interface with media518. 
Accordingly, computer software including instructions or code for performing the methodologies of the present disclosure, as described herein, may be stored in associated memory devices (for example, ROM, fixed or removable memory) and, when ready to be utilized, loaded in part or in whole (for example, into RAM) and implemented by a CPU. Such software could include, but is not limited to, firmware, resident software, microcode, and the like. A data processing system suitable for storing and/or executing program code will include at least one processor502coupled directly or indirectly to memory elements504through a system bus510. The memory elements can include local memory employed during actual implementation of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during implementation. Input/output or I/O devices (including, but not limited to, keyboards508, displays506, pointing devices, and the like) can be coupled to the system either directly (such as via bus510) or through intervening I/O controllers (omitted for clarity). Network adapters such as network interface514may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems and Ethernet cards are just a few of the currently available types of network adapters. As used herein, including the claims, a “server” includes a physical data processing system (for example, system512as shown inFIG.5) running a server program. It will be understood that such a physical server may or may not include a display and keyboard. An exemplary embodiment may include a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out exemplary embodiments of the present disclosure. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. 
A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform embodiments of the present disclosure. Embodiments of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. 
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. It should be noted that any of the methods described herein can include an additional step of providing a system comprising distinct software modules embodied on a computer readable storage medium; the modules can include, for example, any or all of the components detailed herein. The method steps can then be carried out using the distinct software modules and/or sub-modules of the system, as described above, executing on a hardware processor502. Further, a computer program product can include a computer-readable storage medium with code adapted to be implemented to carry out at least one method step described herein, including the provision of the system with the distinct software modules. 
In any case, it should be understood that the components illustrated herein may be implemented in various forms of hardware, software, or combinations thereof, for example, application specific integrated circuit(s) (ASICS), functional circuitry, an appropriately programmed digital computer with associated memory, and the like. Given the teachings provided herein, one of ordinary skill in the related art will be able to contemplate other implementations of the components. Additionally, it is understood in advance that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein are not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (for example, networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models. Characteristics are as follows: On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider. Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (for example, country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (for example, storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported providing transparency for both the provider and consumer of the utilized service. Service Models are as follows: Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (for example, web-based e-mail). 
The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (for example, host firewalls). Deployment Models are as follows: Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (for example, mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (for example, cloud bursting for load-balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure comprising a network of interconnected nodes. Referring now toFIG.6, illustrative cloud computing environment50is depicted. As shown, cloud computing environment50includes one or more cloud computing nodes10with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone54A, desktop computer54B, laptop computer54C, and/or automobile computer system54N may communicate. Nodes10may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment50to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. 
It is understood that the types of computing devices54A-N shown inFIG.6are intended to be illustrative only and that computing nodes10and cloud computing environment50can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring now toFIG.7, a set of functional abstraction layers provided by cloud computing environment50(FIG.6) is shown. It should be understood in advance that the components, layers, and functions shown inFIG.7are intended to be illustrative only and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided: Hardware and software layer60includes hardware and software components. Examples of hardware components include: mainframes61; RISC (Reduced Instruction Set Computer) architecture based servers62; servers63; blade servers64; storage devices65; and networks and networking components66. In some embodiments, software components include network application server software67and database software68. Virtualization layer70provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers71; virtual storage72; virtual networks73, including virtual private networks; virtual applications and operating systems74; and virtual clients75. In one example, management layer80may provide the functions described below. Resource provisioning81provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing82provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal83provides access to the cloud computing environment for consumers and system administrators. Service level management84provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment85provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer90provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation91; software development and lifecycle management92; virtual classroom education delivery93; data analytics processing94; transaction processing95; and providing a secure database-as-a-service96, in accordance with the one or more embodiments of the present disclosure. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of another feature, step, operation, element, component, and/or group thereof. 
At least one embodiment of the present disclosure may provide a beneficial effect such as providing improvements to the performance and security of DBaaS systems by, for example, enabling support for variable-length cipher texts, substantially eliminating or reducing mutations, and ensuring security guarantees. The descriptions of the various embodiments of the present disclosure have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
While embodiments are described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the embodiments are not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit embodiments to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). The words “include,” “including,” and “includes” indicate open-ended relationships and therefore mean including, but not limited to. Similarly, the words “have,” “having,” and “has” also indicate open-ended relationships, and thus mean having, but not limited to. The terms “first,” “second,” “third,” and so forth as used herein are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.) unless such an ordering is otherwise explicitly indicated. “Based On.” As used herein, this term is used to describe one or more factors that affect a determination. This term does not foreclose additional factors that may affect a determination. That is, a determination may be solely based on those factors or based, at least in part, on those factors. Consider the phrase “determine A based on B.” While B may be a factor that affects the determination of A, such a phrase does not foreclose the determination of A from also being based on C. In other instances, A may be determined based solely on B. The scope of the present disclosure includes any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. Accordingly, new claims may be formulated during prosecution of this application (or an application claiming priority thereto) to any such combination of features. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims. DETAILED DESCRIPTION Various techniques for performing queries to a consistent data set across query engine types are described. Data sets are often stored in systems and/or formats that are accessible to a specific database engine (or specific database engine type). For instance, OLTP database data may be stored in row-oriented format, to enable OLTP type query engines to perform efficient updates to records or insertions of new data. OLAP database data, however, may be stored in a different format, column-oriented format, to efficiently perform a query across multiple values from many records over a single column. In scenarios where a data set is stored and accessible via only one type of query engine, like OLTP, some of the performance benefits achieved by another type of query engine, like OLAP, would not be easily available.
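The row-versus-column distinction can be made concrete with a small sketch (illustrative only): the same records laid out row-oriented, favoring OLTP-style point updates, and column-oriented, favoring OLAP-style scans of a single column:

```python
rows = [                           # row-oriented: one record per entry
    {"id": 1, "name": "a", "total": 10},
    {"id": 2, "name": "b", "total": 20},
    {"id": 3, "name": "c", "total": 30},
]
rows[1]["total"] = 25              # OLTP-style in-place update touches one row

columns = {                        # column-oriented: one array per attribute
    "id":    [1, 2, 3],
    "name":  ["a", "b", "c"],
    "total": [10, 25, 30],
}
print(sum(columns["total"]))       # OLAP-style aggregate reads one column
```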
For example, the data might have to be moved to a new data store to be accessible by the other query engine, and yet, data movement creates many costs that might render such data movement infeasible, such as lagging data freshness, performance overhead costs on exporting the data set (e.g., using features such as MySQL's binlog replication technique), and the management or overhead of implementing and maintaining a system to facilitate data movement. In various embodiments, performing queries to a consistent data set across query engine types can avoid or remove the high costs of switching between types of query engines, improving the performance of client applications by making the different, optimized features of different types of query engines available to those applications (e.g., by providing a significant increase in the performance of OLAP style queries to a data set stored for an OLTP database). For example, performing queries across query engine types may include applying automatic transactional consistency between different query engines (e.g., without implementing a data movement plan that has to maintain consistency between two copies of a data set stored for different query engines). Instead of utilizing multiple copies of a data set (e.g., moved according to the data movement plan), a single, common copy of the data set may be accessed by both query engines, which may ensure that either query engine is operating upon the freshest data (and removing the lag when copying between multiple copies of a data set). In some embodiments, a common interface or endpoint may support queries for different types of query engines, simplifying client application development. FIG.1is a logical block diagram illustrating performing queries to a consistent data set across query engine types, according to some embodiments. Different types of query engines, such as query engine110and query engine130, may receive, parse, plan, and execute queries against data set122. For example, query engine110may be a query engine for a relational database and query engine130may be a type of query engine for a non-relational (e.g., NoSQL) database. Both query engine110and query engine130may access the same copy of data set122stored in data store120. However, in some embodiments, the format of data set122in data store120may be optimized for one of the types of query engines (e.g., for query engine110). Performing queries to a consistent data set across query engine types may allow for one query engine to plan and request a query from another query engine. For example, inFIG.1, query engine110may receive query140and generate an initial plan to perform query140(as discussed below with regard toFIGS.5,6, and9). Query engine110may then apply one or more cross-type optimization rules to identify a portion (or all) of query140that could be optimally performed by another type of query engine, query engine130. Query engine110may then send a request152to query engine130to perform the portion of the query (e.g., a query formulated in a query language supported by query engine130). Additionally, query engine110may provide a consistent view154of data set122to query engine130in order to perform the portion of the query—as query engine110may also support multiple concurrent updates or transactions to data set122in addition to receiving and performing query140.
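A toy sketch of this delegation follows (names and mechanism are hypothetical; a visible-prefix length stands in for whatever version token, such as a log sequence number, fixes the consistent view). The delegating engine pins the view before a concurrent insert occurs, so the delegated portion does not observe that insert:

```python
from dataclasses import dataclass

DATA = [("a", 10), ("b", 20), ("a", 30)]   # shared single copy of the data set

@dataclass
class ConsistentView:
    snapshot_len: int          # toy stand-in for a version token (e.g., LSN)

class OlapEngine:
    def sum_by_key(self, key, view):
        visible = DATA[:view.snapshot_len]       # honor the consistent view
        return sum(v for k, v in visible if k == key)

class OltpEngine:
    def __init__(self, olap):
        self.olap = olap

    def query_sum(self, key):
        view = ConsistentView(snapshot_len=len(DATA))  # pin the view first
        DATA.append(("a", 999))                        # concurrent insert...
        return self.olap.sum_by_key(key, view)         # ...is not observed

print(OltpEngine(OlapEngine()).query_sum("a"))         # -> 40, not 1039
```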
For example, query engine110may provide undo log records that can be applied to return item values from data set122to a prior state that is consistent with the view of data set122being queried. In some embodiments, data store120could receive an indication from query engine110of the consistent view so that data store120could send data164to query engine130that is within the consistent view (e.g., by first applying one or more undo log records, which may also be stored as part of data set122, in order to generate a new value of items included in the returned portion164). Query engine130may request162and receive164data for the portion of the query from data store120. Query engine130may, in some embodiments, utilize a computing layer, tier, or service separate from query engine130to obtain data from data set122, which may also perform further optimizations such as reformatting the obtained data into a format optimized for query engine130(e.g., instead of the present format optimized for query engine110). Although not depicted inFIG.1, query engine130may utilize other copies of data set122stored in other locations (e.g., other data stores) in addition to data store120, and thus may perform a query across different data stores (as well as query engine types), as discussed below with regard toFIGS.4and7. Data from other data sets (e.g., only accessible by query engine130and not query engine110) could be accessed and/or included in a result142of query140, in some embodiments. Query engine130may return a result156of the portion of the query to query engine110, in some embodiments. Query engine110may then provide the result142based on the portion result156as a standalone result (e.g., in scenarios where all of the query was performed by query engine130) or in combination with results obtained as a result of query engine110executing another portion of query140. For example, query engine110may request and receive data for another portion of the query to obtain results for query140and incorporate or analyze these results with the result156obtained from query engine130before sending a final result142. Please note,FIG.1is provided as a logical illustration of query engines, a data store, and respective interactions and is not intended to be limiting as to the physical arrangement, size, or number of components, modules, or devices to implement such features. Also note that in some embodiments, a different query could be sent directly to query engine130, which could perform a query to data set122in data store120with a request sent to query engine110, or query engine130could implement similar features to those of query engine110to perform cross type optimization and send a portion of a query to query engine110for execution. The specification first describes an example of a provider network that may implement different query engines as part of different network-based services, according to various embodiments. Included in that description are examples of techniques to perform queries to a consistent data set across query engine types implemented in the different services. The specification then describes a flowchart of various embodiments of methods for performing queries to a consistent data set across query engine types. Next, the specification describes an example system that may implement the disclosed techniques. Various examples are provided throughout the specification.
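A sketch of the undo-log idea in toy form (the structures are hypothetical): each item carries the sequence number of its latest version, and undo records are applied until the value falls within the requested view:

```python
# Hedged sketch of reading a consistent view via undo records.

def read_consistent(items, undo_log, view_seq):
    """items: {key: (value, seq)}; undo_log: {key: [(seq, old_value), ...]}
    newest first. Returns each item's value as of sequence `view_seq`."""
    out = {}
    for key, (value, seq) in items.items():
        for undo_seq, old_value in undo_log.get(key, []):
            if seq > view_seq:                 # version too new: undo one step
                value, seq = old_value, undo_seq
        if seq <= view_seq:                    # item existed as of the view
            out[key] = value
    return out

items = {"x": (300, 7)}                        # x was updated at seq 5 and 7
undo = {"x": [(5, 200), (2, 100)]}             # newest undo record first
print(read_consistent(items, undo, view_seq=6))  # -> {'x': 200}
```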
FIG.2is a block diagram illustrating a provider network offering network-based services implementing different query engine types that can perform queries to a consistent data set across the query engine types, according to some embodiments. Provider network200may be set up by an entity such as a company or a public sector organization to provide one or more services (such as various types of cloud-based computing or storage) accessible via the Internet and/or other networks to clients250. Provider network200may include numerous data centers hosting various resource pools, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment and the like (e.g., computing system2000described below with regard toFIG.11), needed to implement and distribute the infrastructure and services offered by the provider network200. In some embodiments, provider network200may implement various network-based services, including database service(s)210, a storage service(s)220, data warehouse service(s)230and/or one or more other virtual computing services240(which may include various other types of storage, processing, analysis, communication, event handling, visualization, and security services). Database service(s)210may implement various types of database systems and formats (e.g., relational, non-relational, graph, document, time series, etc.) and the respective types of query engines to perform queries to those databases. For example, database service(s)210may implement an OLTP query engine212to provide fast and efficient transaction processing for a relational database stored as database data222in storage service(s)220. Data warehouse service(s)230may implement various types of data warehouses that support various kinds of analytics and other data processing, such as OLAP query engine232. Storage service(s)220may include many different types of data stores, including a log-structured storage service and object storage service as discussed below with regard toFIGS.3and4, in some embodiments. Clients250may access these various services offered by provider network200via network260. Likewise network-based services may themselves communicate and/or make use of one another to provide different services. For example, storage service220may store data222for databases managed by database service210, in some embodiments. It is noted that where one or more instances of a given component may exist, reference to that component herein may be made in either the singular or the plural. However, usage of either form is not intended to preclude the other. In various embodiments, the components illustrated inFIG.2may be implemented directly within computer hardware, as instructions directly or indirectly executable by computer hardware (e.g., a microprocessor or computer system), or using a combination of these techniques. For example, the components ofFIG.2may be implemented by a system that includes a number of computing nodes (or simply, nodes), each of which may be similar to the computer system embodiment illustrated inFIG.8and described below. In various embodiments, the functionality of a given service system component (e.g., a component of the database service or a component of the storage service) may be implemented by a particular node or may be distributed across several nodes. In some embodiments, a given node may implement the functionality of more than one service system component (e.g., more than one database service system component).
Generally speaking, clients250may encompass any type of client configurable to submit network-based services requests to network-based services platform200via network260, including requests for database services (e.g., a request to execute a transaction or query with respect to a database, a request to manage a database, such as a request to enable or disable performing queries across different types of query engines, etc.). For example, a given client250may include a suitable version of a web browser, or may include a plug-in module or other type of code module that can execute as an extension to or within an execution environment provided by a web browser. Alternatively, a client250(e.g., a database service client) may encompass an application, a web server, a media application, an office application or any other application that may make use of provider network200to store and/or access one or more databases. In some embodiments, such an application may include sufficient protocol support (e.g., for a suitable version of Hypertext Transfer Protocol (HTTP)) for generating and processing network-based services requests without necessarily implementing full browser support for all types of network-based data. That is, client250may be an application that can interact directly with network-based services platform200. In some embodiments, client250may generate network-based services requests according to a Representational State Transfer (REST)-style network-based services architecture, a document- or message-based network-based services architecture, or another suitable network-based services architecture. In some embodiments, a client250(e.g., a database service or data warehouse service client) may provide access to a database hosted in database service210or a data warehouse hosted in data warehouse service230to other applications in a manner that is transparent to those applications. For example, client250may integrate with an operating system or file system to provide storage in accordance with a suitable variant of the storage models described herein. However, the operating system or file system may present a different storage interface to applications, such as a conventional file system hierarchy of files, directories and/or folders, in one embodiment. In such an embodiment, applications may not need to be modified to make use of the storage system service model. Instead, the details of interfacing to provider network200may be coordinated by client250and the operating system or file system on behalf of applications executing within the operating system environment. In some embodiments, clients of database service(s)210, data warehouse service(s)230, and storage service(s)220may be other systems, components, or devices implemented as part of or internal to provider network200(e.g., a virtual machine or other compute instance hosted as part of a virtual computing service may act as a client application of database service(s)210and data warehouse service(s)230). Client(s)250may convey network-based services requests (e.g., a request to query a database) to and receive responses from services implemented as part of provider network200via network260, in some embodiments. In various embodiments, network260may encompass any suitable combination of networking hardware and protocols necessary to establish network-based communications between clients250and provider network200. 
For example, network260may generally encompass the various telecommunications networks and service providers that collectively implement the Internet. Network260may also include private networks such as local area networks (LANs) or wide area networks (WANs) as well as public or private wireless networks. For example, both a given client250and provider network200may be respectively provisioned within enterprises having their own internal networks. In such an embodiment, network260may include the hardware (e.g., modems, routers, switches, load balancers, proxy servers, etc.) and software (e.g., protocol stacks, accounting software, firewall/security software, etc.) necessary to establish a networking link between given client250and the Internet as well as between the Internet and provider network200. It is noted that in some embodiments, clients250may communicate with provider network200using a private network rather than the public Internet. In such a case, clients250may communicate with provider network200entirely through a private network (e.g., a LAN or WAN that may use Internet-based communication protocols but which is not publicly accessible). Services within provider network200(or provider network200itself) may implement one or more service endpoints to receive and process network-based services requests, such as requests to access data pages (or records thereof), in various embodiments. For example, provider network200services may include hardware and/or software to implement a particular endpoint, such that an HTTP-based network-based services request directed to that endpoint is properly received and processed, in one embodiment. In one embodiment, provider network200services may be implemented as a server system to receive network-based services requests from clients250and to forward them to components of a system within database service210, storage service220and/or other virtual computing services240for processing. In some embodiments, provider network200(or the services of provider network200individually) may implement various user management features. For example, provider network200may coordinate the metering and accounting of user usage of network-based services, including storage resources, such as by tracking the identities of requesting clients250, the number and/or frequency of client requests, the size of data tables (or records thereof) stored or retrieved on behalf of a user, overall storage bandwidth used by users or clients250, class of storage requested by users or clients250, or any other measurable user or client usage parameter, in one embodiment. In one embodiment, provider network200may also implement financial accounting and billing systems, or may maintain a database of usage data that may be queried and processed by external systems for reporting and billing of client usage activity. In some embodiments, provider network200may collect, monitor and/or aggregate a variety of storage service system operational metrics, such as metrics reflecting the rates and types of requests received from clients250, bandwidth utilized by such requests, system processing latency for such requests, system component utilization (e.g., network bandwidth and/or storage utilization within the storage service system), rates and types of errors resulting from requests, characteristics of stored and requested data pages or records thereof (e.g., size, data type, etc.), or any other suitable metrics. 
In some embodiments, such metrics may be used by system administrators to tune and maintain system components, while in other embodiments such metrics (or relevant portions of such metrics) may be exposed to clients250to enable such clients to monitor their usage of database service210, storage service220and/or data warehouse service230(or the underlying systems that implement those services). In some embodiments, provider network200may also implement user authentication and access control procedures. For example, for a given network-based services request to access a particular database, provider network200may implement administrative or request processing components that may ascertain whether the client250associated with the request is authorized to access the particular database. Provider network200may determine such authorization by, for example, evaluating an identity, password or other credential against credentials associated with the particular database, or evaluating the requested access to the particular database against an access control list for the particular database. For example, if a client250does not have sufficient credentials to access the particular database, provider network200may reject the corresponding network-based services request, for example by returning a response to the requesting client250indicating an error condition, in one embodiment. Various access control policies may be stored as records or lists of access control information by database service210, storage service220and/or other virtual computing services240, in one embodiment. FIG.3is a block diagram illustrating various components of a database service implementing a query engine type that supports queries across query engine types, according to some embodiments. Database service210may implement one or more different types of database systems (e.g., a database instance) with respective types of query engines for accessing database data as part of the database. In the example database system implemented as part of database service210, a database engine head node310may be implemented for each of several databases, along with a log-structured storage service350(which may or may not be visible to the clients of the database system). Clients of a database may access a database head node310(which may be implemented in or representative of a database instance) via a network utilizing various database access protocols (e.g., Java Database Connectivity (JDBC) or Open Database Connectivity (ODBC)). However, log-structured storage service350, which may be employed by the database system to store data pages of one or more databases (and redo log records and/or other metadata associated therewith) on behalf of clients, and to perform other functions of the database system as described herein, may or may not be network-addressable and accessible to database clients directly, in different embodiments. For example, in some embodiments, log-structured storage service350may perform various storage, access, change logging, recovery, log record manipulation, and/or space management operations in a manner that is invisible to clients of a database engine head node310. 
A database hosted in database service210may include a single database engine head node310that implements a query engine320that receives requests, like request312, which may include queries or other requests such as updates, deletions, etc., from various client programs (e.g., applications) and/or subscribers (users), then parses them, optimizes them, and develops a plan to carry out the associated database operation(s), such as the plan discussed below with regard toFIG.6. Query engine320may return a response314to the request (e.g., results to a query) to a database client, which may include write acknowledgements, requested data pages (or portions thereof), error messages, and/or other responses, as appropriate. As illustrated in this example, database engine head node310may also include a storage service engine330(or client-side driver), which may route read requests and/or redo log records to various storage nodes within log-structured storage service350, receive write acknowledgements from log-structured storage service350, receive requested data pages from log-structured storage service350, and/or return data pages, error messages, or other responses to query engine320(which may, in turn, return them to a database client). In this example, query engine320, or another database system management component implemented at database engine head node310(not illustrated), may manage a data page cache, in which data pages that were recently accessed may be temporarily held. Query engine320may be responsible for providing transactionality and consistency in the database instance of which database engine head node310is a component. For example, this component may be responsible for ensuring the Atomicity, Consistency, and Isolation properties of the database instance and the transactions that are directed to the database instance, such as determining a consistent view of the database applicable for a query, applying undo log records to generate prior versions of tuples of a database from dirty tuples received from storage nodes, as discussed below, or providing undo or other consistency information to another query engine, as discussed below with regard toFIG.5. As illustrated inFIG.3, query engine320may manage an undo log to track the status of various transactions and roll back any locally cached results of transactions that do not commit. FIG.3illustrates various interactions to perform various requests, like request312. For example, a request312that includes a request to write to a page may be parsed and optimized to generate one or more write record requests321, which may be sent to storage service engine330for subsequent routing to log-structured storage service350. In this example, storage service engine330may generate one or more redo log records335corresponding to each write record request321, and may send them to specific ones of the storage nodes360of log-structured storage service350. Log-structured storage service350may return a corresponding write acknowledgement337for each redo log record335(or batch of redo log records) to database engine head node310(specifically to storage service engine330). 
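As a rough illustration of the write path just described, the following minimal Python sketch (hypothetical helper names; storage service engine330and log-structured storage service350are far richer in practice) turns a write request into a redo log record that is routed to storage nodes, with acknowledgements collected:

    # A sketch under stated assumptions, not the service's actual protocol.
    def write_page(storage_nodes, page_id, mutation, next_lsn):
        """Generate a redo log record for the mutation and send it to each
        storage node holding a segment for the page's protection group."""
        redo_record = {"lsn": next_lsn, "page": page_id, "change": mutation}
        acks = 0
        for node in storage_nodes:
            node.append(redo_record)   # assumed durable append on the node
            acks += 1                  # stand-in for a write acknowledgement
        return acks

    class FakeStorageNode:
        def __init__(self):
            self.log = []
        def append(self, record):
            self.log.append(record)

    nodes = [FakeStorageNode() for _ in range(3)]
    assert write_page(nodes, page_id=7, mutation="set x=1", next_lsn=42) == 3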
Storage service engine330may pass these write acknowledgements to query engine320(as write responses323), which may then send corresponding responses (e.g., write acknowledgements) to one or more clients as a response314. In another example, a request that is a query may cause data pages to be read and returned to query engine320for evaluation and processing, or a request to perform query processing at log-structured storage service350may be performed. For example, a query could cause one or more read record requests325, which may be sent to storage service engine330for subsequent routing to log-structured storage service350. In this example, storage service engine330may send these requests to specific ones of the storage nodes360of log-structured storage service350, and log-structured storage service350may return the requested data pages339to database engine head node310(specifically to storage service engine330). Storage service engine330may send the returned data pages to query engine320as return data records327, and query engine320may then evaluate the content of the data pages in order to determine or generate a result of a query sent as a response314. In another example, a request312that is a query may cause computations associated with query processing331to be sent to the storage nodes360for processing (e.g., the processing may be distributed across the storage nodes). As illustrated, results from the processing (e.g., in the form of tuple stream results333) may be provided back to the database engine, in embodiments. For instance, query processing requests331may use the message passing framework of storage service engine330. In some embodiments, all communication may be initialized from the storage service engine330. In some embodiments, communication may not be initialized from the storage node side. In some embodiments, storage service engine330may use a “long poll” mechanism for creating a storage level query processing session (e.g., for performing parallel query processing across multiple storage nodes) for each query processing operation331sent. In some embodiments, the same message framework may be used for receiving periodic progress updates (e.g., heart-beats) from each storage node while the query is being processed (not illustrated inFIG.3). In some embodiments, a storage node360may notify the query engine320when query processing (e.g., on a batch) has been completed, for example, providing a handle that the query engine can use to pull the results from storage nodes360. In some embodiments, the flow control may be implemented on the head node side. In some embodiments, the message format may implement a header containing control metadata, and data (rows/tuples) sent as raw data. If a storage node fails (which may include less-than-complete failures, e.g., a process failure for query processing), the corresponding request (e.g., to process a batch of pages or tuples) may be resubmitted to another storage node that stores the data (e.g., a full segment from the same protection group (PG)). In another example, if a previous storage node fails while transmitting the tuple stream results333back to the head node310, the head node310may keep the results already received, and only transfer the remainder from the new storage node, in some embodiments. In such embodiments, storage nodes360may provide results in a deterministic order. 
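The resume-after-failure behavior described above can be sketched as follows (a minimal Python example with a toy node API; it assumes deterministic result ordering, as noted above, so a replacement node can skip tuples the head node already holds):

    # Hypothetical node API; not the actual tuple stream protocol.
    def stream_tuples(primary, fallback, batch):
        received = []
        try:
            for t in primary.stream(batch, start=0):
                received.append(t)
        except ConnectionError:
            # Deterministic ordering lets the replacement node skip
            # everything the head node already received.
            for t in fallback.stream(batch, start=len(received)):
                received.append(t)
        return received

    class FlakyNode:
        def __init__(self, tuples, fail_after=None):
            self.tuples, self.fail_after = tuples, fail_after
        def stream(self, batch, start):
            for i, t in enumerate(self.tuples[start:], start):
                if self.fail_after is not None and i >= self.fail_after:
                    raise ConnectionError("node failed mid-stream")
                yield t

    primary = FlakyNode([1, 2, 3, 4], fail_after=2)
    fallback = FlakyNode([1, 2, 3, 4])
    assert stream_tuples(primary, fallback, batch=0) == [1, 2, 3, 4]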
In some embodiments, it may be tolerable for data to be processed in any order convenient and/or in parallel. In some embodiments, storage nodes360may spill results to persistent storage if, for example, a memory buffer for results becomes full, while in other embodiments that cannot spill results to persistent storage, processing may pause until further results can fit in storage (or the process may be aborted). In some embodiments, various error and/or data loss messages341may be sent from log-structured storage service350to database engine head node310(specifically to storage service engine330). These messages may be passed from storage service engine330to query engine320as error and/or loss reporting messages329, and then to one or more clients as a response314. In some embodiments, the APIs331-341of log-structured storage service350and the APIs321-329of storage service engine330may expose the functionality of the log-structured storage service350to database engine head node310as if database engine head node310were a client of log-structured storage service350. For example, database engine head node310(through storage service engine330) may write redo log records or request data pages through these APIs to perform (or facilitate the performance of) various operations of the database system implemented by the combination of database engine head node310and log-structured storage service350(e.g., storage, access, change logging, recovery, and/or space management operations). Note that in various embodiments, the API calls and responses between database engine head node310and log-structured storage service350(e.g., APIs331-341) and/or the API calls and responses between storage service engine330and query engine320(e.g., APIs321-329) inFIG.3may be performed over a secure proxy connection (e.g., one managed by a gateway control plane), or may be performed over the public network or, alternatively, over a private channel such as a virtual private network (VPN) connection. These and other APIs to and/or between components of the database systems described herein may be implemented according to different technologies, including, but not limited to, Simple Object Access Protocol (SOAP) technology and Representational state transfer (REST) technology. For example, these APIs may be, but are not necessarily, implemented as SOAP APIs or RESTful APIs. SOAP is a protocol for exchanging information in the context of Web-based services. REST is an architectural style for distributed hypermedia systems. A RESTful API (which may also be referred to as a RESTful web service) is a web service API implemented using HTTP and REST technology. The APIs described herein may in some embodiments be wrapped with client libraries in various languages, including, but not limited to, C, C++, Java, C# and Perl to support integration with database engine head node310and/or log-structured storage service350. In some embodiments, database data for a database of database service210may be organized in various logical volumes, segments, and pages for storage on one or more storage nodes360of log-structured storage service350. For example, in some embodiments, each database may be represented by a logical volume, and each logical volume may be segmented over a collection of storage nodes360. Each segment, which lives on a particular one of the storage nodes, may contain a set of contiguous block addresses, in some embodiments. 
In some embodiments, each segment may store a collection of one or more data pages and a change log (also referred to as a redo log, e.g., a log of redo log records) for each data page that it stores. Storage nodes360may receive redo log records and coalesce them to create new versions of the corresponding data pages and/or additional or replacement log records (e.g., lazily and/or in response to a request for a data page or a database crash). In some embodiments, data pages and/or change logs may be mirrored across multiple storage nodes, according to a variable configuration (which may be specified by the client on whose behalf the database is being maintained in the database system). For example, in different embodiments, one, two, or three copies of the data or change logs may be stored in each of one, two, or three different availability zones or regions, according to a default configuration, an application-specific durability preference, or a client-specified durability preference. In some embodiments, a volume may be a logical concept representing a highly durable unit of storage that a user/client/application of the storage system understands. A volume may be a distributed store that appears to the user/client/application as a single consistent ordered log of write operations to various user pages of a database, in some embodiments. Each write operation may be encoded in a log record (e.g., a redo log record), which may represent a logical, ordered mutation to the contents of a single user page within the volume, in some embodiments. Each log record may include a unique identifier (e.g., a Logical Sequence Number (LSN)), in some embodiments. Each log record may be persisted to one or more synchronous segments in the distributed store that form a Protection Group (PG), to provide high durability and availability for the log record, in some embodiments. A volume may provide an LSN-type read/write interface for a variable-size contiguous range of bytes, in some embodiments. In some embodiments, a volume may consist of multiple extents, each made durable through a protection group. In such embodiments, a volume may represent a unit of storage composed of a mutable contiguous sequence of volume extents. Reads and writes that are directed to a volume may be mapped into corresponding reads and writes to the constituent volume extents. In some embodiments, the size of a volume may be changed by adding or removing volume extents from the end of the volume. In some embodiments, a segment may be a limited-durability unit of storage assigned to a single storage node. A segment may provide a limited best-effort durability (e.g., a persistent, but non-redundant single point of failure that is a storage node) for a specific fixed-size byte range of data, in some embodiments. This data may in some cases be a mirror of user-addressable data, or it may be other data, such as volume metadata or erasure coded bits, in various embodiments. A given segment may live on exactly one storage node, in some embodiments. Within a storage node, multiple segments may live on each storage device (e.g., an SSD), and each segment may be restricted to one SSD (e.g., a segment may not span across multiple SSDs), in some embodiments. In some embodiments, a segment may not be required to occupy a contiguous region on an SSD; rather there may be an allocation map in each SSD describing the areas that are owned by each of the segments. 
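A minimal sketch of the volume, log record, and protection group concepts described above (hypothetical Python classes, not the service's actual data structures) might assign each write a unique LSN and persist it to every segment of a protection group:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class LogRecord:
        lsn: int
        page: int
        mutation: str

    @dataclass
    class Segment:
        records: List[LogRecord] = field(default_factory=list)

    @dataclass
    class ProtectionGroup:
        segments: List[Segment]
        def persist(self, record: LogRecord):
            for seg in self.segments:    # mirror across member segments
                seg.records.append(record)

    class Volume:
        """A volume as a single consistent ordered log of writes."""
        def __init__(self, pg: ProtectionGroup):
            self.pg, self.next_lsn = pg, 1
        def write(self, page: int, mutation: str) -> int:
            rec = LogRecord(self.next_lsn, page, mutation)
            self.pg.persist(rec)
            self.next_lsn += 1
            return rec.lsn

    vol = Volume(ProtectionGroup([Segment() for _ in range(3)]))
    assert vol.write(page=1, mutation="set a=5") == 1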
As noted above, a protection group may consist of multiple segments spread across multiple storage nodes, in some embodiments. In some embodiments, a segment may provide an LSN-type read/write interface for a fixed-size contiguous range of bytes (where the size is defined at creation). In some embodiments, each segment may be identified by a segment UUID (e.g., a universally unique identifier of the segment). In some embodiments, a page may be a block of storage, generally of fixed size. In some embodiments, each page may be a block of storage (e.g., of virtual memory, disk, or other physical memory) of a size defined by the operating system, and may also be referred to herein by the term “data block”. A page may be a set of contiguous sectors, in some embodiments. A page may serve as the unit of allocation in storage devices, as well as the unit in log pages for which there is a header and metadata, in some embodiments. In some embodiments, the term “page” or “storage page” may be a similar block of a size defined by the database configuration, which may typically be a power of 2, such as 4096, 8192, 16384, or 32768 bytes. As discussed above, log-structured storage service350may perform some database system responsibilities, such as the updating of data pages for a database, and in some instances perform some query processing on data. As illustrated inFIG.3, storage node(s)360may implement data page request processing361, query processing363, and data management365to implement various ones of these features with regard to the data pages367and redo log369among other database data in a database volume stored in log-structured storage service350. For example, data management365may perform at least a portion of any or all of the following operations: replication (locally, e.g., within the storage node), coalescing of redo logs to generate data pages, snapshots (e.g., creating, restoration, deletion, etc.), log management (e.g., manipulating log records), crash recovery, and/or space management (e.g., for a segment), and in some instances, the application of undo log records generated to reverse or otherwise undo the effect of a transaction's change to item values in order to provide tuple stream results333or data pages339within a consistent view (e.g., pages with an LSN of an update applied to the page or item less than a logical sequence number indicated as the LSN associated with the consistent view). Each storage node may also have multiple attached storage devices (e.g., SSDs) on which data blocks may be stored on behalf of clients (e.g., users, client applications, and/or database service subscribers), in some embodiments. Data page request processing361may handle requests to return data pages of records from a database volume, and may perform operations to coalesce redo log records or otherwise generate a data page to be returned responsive to a request. Query processing363may handle requests to return values from a database (e.g., tuples) with various query processing operations applied before returning the values (e.g., filtering, aggregating, sorting, etc.). In at least some embodiments, storage nodes360may provide multi-tenant storage so that data stored in part or all of one storage device may be stored for a different database, database user, account, or entity than data stored on the same storage device (or other storage devices) attached to the same storage node. 
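The coalescing behavior described above can be illustrated with a minimal sketch (hypothetical record and page shapes) in which redo log records are applied to a base page only up to the LSN of the consistent view:

    # A sketch of the coalesce step performed by data management 365,
    # under stated assumptions about record and page shapes.
    def coalesce(base_page, redo_records, view_lsn):
        """base_page: dict of item to value; redo_records: iterable of
        (lsn, item, new_value); returns a new page image in the view."""
        page = dict(base_page)
        for lsn, item, new_value in sorted(redo_records):
            if lsn <= view_lsn:        # skip updates newer than the view
                page[item] = new_value
        return page

    base = {"a": 1}
    redo = [(10, "a", 2), (20, "a", 3)]
    assert coalesce(base, redo, view_lsn=15) == {"a": 2}  # LSN 20 excluded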
Various access controls and security mechanisms may be implemented, in some embodiments, to ensure that data is not accessed at a storage node except for authorized requests (e.g., for users authorized to access the database, owners of the database, etc.). In some embodiments, processing user-influenced input (e.g., a query, perhaps in some processed form) against data pages may shift query processing from a single-tenant environment (e.g., a database head node) to a multi-tenant environment (e.g., a storage node). In order to provide additional security, query processing363may be done in a standalone process, with a software “jail” built around it, using a downgraded security context, seccomp, cgroups, and potentially other hostile code execution mitigation techniques, in embodiments, as illustrated in the sketch below. The attack surface may be minimized by using a minimal subset of query processing code, and performing the initial query parsing on the query engine320, in some embodiments. In some embodiments, query processing363should not disrupt regular processing of access requests to read or obtain data pages339or write redo log records335. In some embodiments, a process (e.g., a daemon) for query processing may have a hard limit on its memory and CPU footprint, to guard against resource drain, for example. In embodiments, query processing may be performed in a separate address space in order to provide failure isolation. In this way, a bug in query processing363would not impact regular page request or management operations (e.g., storing redo log records, returning data pages, coalesce operations, etc.), in some embodiments. Such precautions may isolate memory leaks and runaway resource consumption in general. Query processing363at storage nodes360may only process tuples that are known to be safe to process on the storage nodes360(e.g., visible to a database query), and send other tuples directly to the head node without processing, in some embodiments. In embodiments, query processing363may be performed in a streaming fashion (e.g., for efficiency). In some embodiments, materialization of query processing results (e.g., in-memory or other storage) may facilitate blocking query operations, like hash partition, sort, and group aggregation (although group aggregation may be decomposable, so group aggregation operations may not necessarily materialize the entire result). In another example, if the head node is consuming query processing results slowly or unevenly, materialization can be a form of buffering. In yet another example embodiment, materialization on storage nodes can allow storage nodes to complete processing and release or advance garbage collection points in time sooner, without waiting for the head node to consume the entire result. In this way, garbage collection may not be delayed, in some embodiments. In some embodiments, materialization on a storage node may coincide with embodiments that ensure cleanup. In embodiments, materialization on the storage node may be part of the existing volume. In other embodiments, materialization may coincide with creation of a new temporary space for storing query processing results. As discussed above with regard toFIG.1, queries may be performed across different types of query engines. 
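A rough, POSIX-only sketch of this isolation (hypothetical limits and helper names; a production system would additionally use seccomp, cgroups, and a downgraded security context) runs query processing in a separate process with hard memory and CPU caps:

    import multiprocessing
    import resource

    MEM_BYTES = 256 * 1024 * 1024   # hypothetical hard memory cap
    CPU_SECS = 5                    # hypothetical hard CPU cap

    def _sandboxed_entry(q, payload):
        # Hard caps so a runaway query cannot drain the storage node.
        resource.setrlimit(resource.RLIMIT_AS, (MEM_BYTES, MEM_BYTES))
        resource.setrlimit(resource.RLIMIT_CPU, (CPU_SECS, CPU_SECS))
        q.put(sum(payload))         # stand-in for real tuple processing

    def run_query_sandboxed(payload):
        q = multiprocessing.Queue()
        p = multiprocessing.Process(target=_sandboxed_entry, args=(q, payload))
        p.start()
        p.join()
        return q.get() if not q.empty() else None  # None if the jail killed it

    if __name__ == "__main__":
        print(run_query_sandboxed([1, 2, 3]))      # prints 6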
Thus, query engine320of database engine head node310may perform cross-query engine execution316with another query engine, such as a query engine in another service like a data warehouse processing cluster.FIG.4is a block diagram illustrating a data warehouse service that implements a query engine type that supports queries across query engine types, according to some embodiments. Processing cluster410may be a data warehouse service cluster that distributes execution of a query among multiple processing nodes (e.g., to implement an OLAP engine as discussed above with regard toFIG.2). As illustrated in this example, a processing cluster410may include a leader node420and compute nodes430, which may communicate with each other over a network (not illustrated). Leader node420may implement query planning422to generate query plan(s) and instructions424for executing queries on processing cluster410that perform data processing on local data (not illustrated) stored on attached storage and can utilize remote query processing resources for remotely stored data, such as database data stored for another query engine like a database engine head node310inFIG.3. Note that in at least some embodiments, query processing capability may be separated from compute nodes, and thus in some embodiments, additional components may be implemented for processing queries. Additionally, it may be that in some embodiments, no one node in processing cluster410is a leader node as illustrated inFIG.4, but rather different ones of the nodes in processing cluster410may act as a leader node or otherwise direct processing of queries to data stored in processing cluster410. While nodes of processing cluster may be implemented on separate systems or devices, in at least some embodiments, some or all of processing cluster may be implemented as separate virtual nodes or instances on the same underlying hardware system (e.g., on a same server). In at least some embodiments, processing cluster410may be implemented as part of a data warehouse service230, as discussed above with regard toFIG.2. Leader node420may manage communications with clients, which may be external clients (e.g., clients250) or internal clients within provider network200, which may include a database engine head node performing cross-query engine execution316. For example, leader node420may be a server that receives queries from various client programs (e.g., a database engine head node or other applications) and/or subscribers (users), then parses them and develops an execution plan (e.g., query plan(s)) to carry out the associated database operation(s). The query may be directed to data that is stored locally within processing cluster410(e.g., at one or more of compute nodes430), data stored remotely (which may be accessible by external data retrieval service440), and/or both local and external data. Leader node420may also manage the communications among compute nodes430instructed to carry out database operations for data stored in the processing cluster410(or data being processed by processing cluster410). 
For example, node-specific query instructions424may be generated or compiled code that is distributed by leader node420to various ones of the compute nodes430to carry out the steps needed to perform a query, including executing the code to generate intermediate results of the query at individual compute nodes that may be sent back to the leader node420. Leader node420may receive data and query responses or results from compute nodes430in order to determine a final result for a query. A database schema, data format and/or other metadata information for the data stored among the compute nodes, such as the data tables stored in the cluster, may be managed and stored by leader node420. Query planning422may account for remotely stored data by generating node-specific query instructions that include remote operations to be directed by individual compute node(s)430. Processing cluster410may also include compute nodes430. Compute nodes430may, for example, be implemented on servers or other computing devices, such as those described below with regard to computer system2000inFIG.11, and each may include individual query processing “slices” defined, for example, for each core of a server's multi-core processor, including query execution432to execute the instructions424or otherwise perform the portions of the query plan assigned to the compute node. Query execution432may access a certain memory and disk space in order to process a portion of the workload for a query (or other database operation) that is sent to one or more of the compute nodes430. Query execution may access attached storage to perform local operation(s) (not illustrated). For example, query execution432may scan data in attached storage, access indexes, perform joins, semi-joins, aggregations, or any other processing operation assigned to the compute node430(similar operations could be applied to external data). Attached storage for a compute node430may be implemented as one or more of any type of storage devices and/or storage system suitable for storing data accessible to the compute nodes, including, but not limited to: redundant array of inexpensive disks (RAID) devices, disk drives (e.g., hard disk drives or solid state drives) or arrays of disk drives such as Just a Bunch Of Disks (JBOD, used to refer to disks that are not implemented according to RAID), optical storage devices, tape drives, RAM disks, Storage Area Network (SAN), Network Attached Storage (NAS), or combinations thereof. In various embodiments, disks may be formatted to store database tables (e.g., in column oriented data formats or other data formats). Query planning422may also direct the execution of remote data processing operations, by providing remote operations to query execution432. Query execution432may be implemented by a client library, plugin, driver or other component that sends requests for external data434to data retrieval service440. In some embodiments, data retrieval service440may implement a common network endpoint to which requests434are directed, and then may dispatch the requests to respective retrieval nodes450. Query execution432may read, process, or otherwise obtain reformatted external data436from retrieval nodes450, formatted by format converter452to a data format for processing cluster410(e.g., converting data from row oriented to column oriented format). Other operations (such as aggregation operations or filtering operations) may be applied by retrieval nodes450to retrieved data before it is returned436. 
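The slice-level execution and leader-side combination described above can be illustrated with a minimal sketch (hypothetical row and plan shapes): each slice scans and filters local rows and produces a partial aggregate, which leader-node logic then combines:

    def execute_slice(rows, predicate, agg_column):
        """Scan local rows, filter, and produce a partial sum and count
        so the leader node can combine slices into a global average."""
        total, count = 0, 0
        for row in rows:
            if predicate(row):
                total += row[agg_column]
                count += 1
        return {"sum": total, "count": count}

    def leader_combine(partials):
        # Combine intermediate results into a final result for the query.
        s = sum(p["sum"] for p in partials)
        c = sum(p["count"] for p in partials)
        return s / c if c else None

    slice1 = execute_slice([{"v": 4}, {"v": 10}], lambda r: r["v"] > 3, "v")
    slice2 = execute_slice([{"v": 2}, {"v": 6}], lambda r: r["v"] > 3, "v")
    assert leader_combine([slice1, slice2]) == (4 + 10 + 6) / 3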
Compute nodes430may send intermediate results from queries back to leader node420for final result generation (e.g., combining, aggregating, modifying, joining, etc.). Query execution432may retry external data requests434that do not return within a retry threshold. As external data retrieval service440may be stateless, processing operation failures at retrieval node(s)450may not be recovered or taken over by other retrieval nodes450; thus, query execution432may track the success or failure of requested external data434and perform retries when needed. External data retrieval service440may receive requests to retrieve data stored in another storage service in order to provide that data for performing a query within processing cluster410, such as database data stored in log-structured storage service350at various storage nodes360or database snapshots462stored in object storage service460. Retrieval requests may be received from a client, such as requests for external data434, and handled by one or multiple retrieval node(s)450. In some embodiments, data from other databases, tables, warehouses, etc. may be retrieved by external data retrieval service440that is not part of a database stored in storage nodes360or snapshots462but which may be included in a result for cross-query engine execution316. In some embodiments, data warehouse service230may generate query results by performing external data requests434and then storing the final result generated by leader node420as a migrated result set stored in another data store (e.g., not illustrated). Retrieval node(s)450may be implemented as separate computing nodes, servers, or devices, such as computing systems2000inFIG.11, to perform data processing operations on behalf of clients, like compute nodes430. Retrieval node(s)450may implement stateless, in-memory processing to execute retrieval and other data processing operations, in some embodiments. In this way, retrieval node(s)450may have fast data processing rates. Retrieval node(s)450may implement client authentication/identification to determine whether a client has the right to access external data in a storage service. For example, client authentication/identification may evaluate access credentials, such as a username and password, token, or other identity indicator, by attempting to connect with a storage service using the provided access credentials. If the connection attempt is unsuccessful, then the data processing node may send an error indication to the remote data processing client. Retrieval node(s)450may implement query processing422or other features of a query engine which may perform multiple different processing operations in addition to retrieving the data from external sources and support multiple different data formats. For example, query processing422may implement separate tuple scanners for each data format which may be used to perform scan operations that scan data and which may filter or project from the scanned data, search (e.g., using a regular expression) or sort (e.g., using a defined sort order) the scanned data, aggregate values in the scanned data (e.g., count, minimum value, maximum value, and summation), and/or group by or limit results in the scanned data. Requests for external data may include an indication of the data format for data so that retrieval node(s)450may use the corresponding tuple scanner for data. 
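The format-specific tuple scanners described above might be sketched as follows (a minimal Python example with a hypothetical scanner registry), dispatching on the indicated data format and then applying filter and projection operations to the scanned tuples:

    import csv
    import io
    import json

    def scan_csv(raw):
        return list(csv.DictReader(io.StringIO(raw)))

    def scan_jsonlines(raw):
        return [json.loads(line) for line in raw.splitlines() if line.strip()]

    TUPLE_SCANNERS = {"csv": scan_csv, "jsonl": scan_jsonlines}

    def scan(raw, data_format, predicate=lambda t: True, projection=None):
        tuples = TUPLE_SCANNERS[data_format](raw)   # format-specific scanner
        out = [t for t in tuples if predicate(t)]   # filter
        if projection:
            out = [{k: t[k] for k in projection} for t in out]  # project
        return out

    rows = scan("a,b\n1,2\n3,4\n", "csv",
                predicate=lambda t: int(t["a"]) > 1, projection=["b"])
    assert rows == [{"b": "4"}]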
Retrieval node(s)450may, in some embodiments, transform results of operations into a different data format or schema according to a specified output data format in the external data request. In some embodiments, external data may be stored in encrypted or compressed format. Retrieval node(s)450may implement compression engine(s) to decompress data according to a compression technique identified for data, such as lossless compression techniques like run-length encoding, Lempel-Ziv based encoding, or bzip based encoding. Retrieval node(s)450may implement encryption engine(s) to decrypt data according to an encryption technique and/or encryption credential, such as a key, identified for data, such as symmetric key or public-private key encryption techniques. Retrieval node(s)450may implement storage access to format, generate, send and receive requests to access data in storage service350(e.g., a feature or interface similar to storage service engine330inFIG.3). For example, retrieval nodes may generate requests to obtain data according to a programmatic interface for log-structured storage service350at storage nodes360, such as a request for database data442to receive database data444, or object storage service460, such as a request for database snapshot data446and receive database snapshot data448. In some embodiments, other storage access protocols, such as internet small computer systems interface (iSCSI), may be implemented to access data. In various embodiments, retrieval nodes450may obtain database volume mapping information to request data pages or send query processing requests to obtain tuple streams according to the appropriate storage nodes360identified by the mapping data. Cross-query engine execution316may be implemented between database engine head nodes and data warehouse processing clusters discussed above. In order to coordinate the performance of such queries, planning techniques to identify and optimize the operations performed by each query engine (if any) may be implemented.FIG.5is a block diagram illustrating cross-type query engine planning and execution, according to some embodiments. A query engine510for a database engine head node may implement cross-type query engine planner/optimizer520. When a query is received, one or more cross-type optimization rules may be applied to identify portions (or all) of a query that could be performed by a data warehouse processing cluster (e.g., according to the techniques discussed below with regard toFIG.6andFIG.8; a sketch of such rules also appears below). For example, size classifiers may be applied to determine whether a query is likely to run longer than a threshold value. In some embodiments, query features may indicate which type of query engine is capable of supporting those features (e.g., analytics features such as correlated subqueries, or features performed faster/more efficiently by analytics engines, may be handled by an OLAP style query engine, such as a data warehouse processing cluster). Hints or other indicators may be included in the query that identify which type of query engine should handle that portion (or all) of the query, in some embodiments. Query engine planner/optimizer520may be able to switch to a single query engine mode, in some embodiments, to generate a plan for a query (e.g., in the event that a previously generated cross-type query engine plan failed) in order to perform the query. A cross-type query plan522may be provided to local query executor530. 
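A minimal sketch of such cross-type optimization rules (a hypothetical rule set and query representation; real planners would be far richer) might combine explicit hints, feature support, and a size classifier:

    OLAP_ONLY_FEATURES = {"correlated_subquery", "window_function"}

    def classify_engine(query):
        """query: dict with 'hint', 'features' (a set), 'estimated_rows'."""
        if query.get("hint") in ("local", "remote"):
            return query["hint"]          # an explicit hint wins
        if query["features"] & OLAP_ONLY_FEATURES:
            return "remote"               # only the OLAP engine supports these
        if query["estimated_rows"] > 1_000_000:
            return "remote"               # likely a long-running analytic query
        return "local"                    # short, OLTP-style query

    q = {"hint": None, "features": {"correlated_subquery"},
         "estimated_rows": 10}
    assert classify_engine(q) == "remote"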
Local query executor530may interpret or generate instructions to perform the plan, and direct the performance of the plan. For example, for portions of the cross-type plan that are remotely performed, local query executor530may provide a portion that is the remote query plan532(or instructions to implement the remote query plan) to remote query executor540. Local query executor530may also perform instructions to execute any portion of the cross-type query plan that is performed at query engine510, by performing various requests to get database data (e.g., as discussed above with regard to requests between a query engine320, storage service engine330, and log-structured storage service350inFIG.3). Local query executor530may also combine, integrate, or otherwise evaluate remote results534in order to generate a final result514sent in response to query512, in some embodiments. Remote query executor540may implement query translator542to translate the remote query plan532into a query that may be understood by data warehouse processing cluster560(e.g., changing query language, syntax, features, parameters, or hints). In some embodiments, query translator542may instead implement a query planner for the remote query engine (e.g., data warehouse processing cluster560) and generate a query plan for that remote query engine (which the remote query engine may accept instead of a query). Remote query executor540may submit the query552to data warehouse processing cluster560, which may get database data562to perform query552. As discussed in detail below with regard toFIG.7, some queries may be performed using both data retrieved from the data store of query engine510and snapshots of the database which may be consistent with the consistent view identified for the query. To perform query552, data warehouse processing cluster560may obtain various information or metadata554. In some embodiments, various data and/or metadata may be provided as part of the initial request to perform the query552. In some embodiments, remote query executor540may implement an interface and feature to respond to requests for metadata, such as query metadata distributor544. For example, such metadata that may be provided may include log sequence numbers (LSN) indicating the consistent view point of the database for the query, storage/head node connection info, schema, available statistics (e.g., data distribution, min max, etc.), projection and filter. To perform query552, data warehouse processing cluster560may also obtain a consistent view556of the database data from remote query executor540. For example, remote consistency management546may be implemented to provide consistency information, such as undo log records generated when a transaction is performed at the query engine head node in order to undo transactions that are not committed or applied to a database (e.g., because of conflicts with other transactions), or other information that can be used to roll back, remove, or otherwise undo transactions not visible in an identified view for the query (e.g., determined when the query plan is created). Data warehouse processing cluster560may apply the consistency information when retrieved database data needs to be modified in order to be consistent with the view of the database for the query. Results of the query may be returned534to local query executor530. 
In some embodiments, not illustrated, remote query executor540may provide a consistent view indication to log-structured storage service350so that when requests to get database data562are sent from data warehouse processing cluster560, the log-structured storage service350may return data with, for example, undo log records applied when a current value is not within the consistent view for a query. Please note that although the query512and planning features are depicted as implemented in database engine head node query engine510, similar features could be implemented within data warehouse processing cluster560. For example, a query could be sent to data warehouse processing cluster560, which may perform query planning and begin execution, obtaining query metadata554and consistent view data556from query engine510for a head node. In some embodiments, when cross-type query performance is enabled for a database, a network endpoint for both the database engine head node and data warehouse processing cluster may be provided for a client application to use as a target for sending queries (e.g., OLTP queries to the database engine head node and OLAP queries to the data warehouse processing cluster), which may both access the same data set in the same data store (e.g., log-structured storage service350). Different types of query plans may result from optimizing for performing queries across query engine types.FIG.6is a block diagram illustrating possible cross-type query engine plan optimizations, according to some embodiments. An initial query plan610may be generated for a query that assumes only use of the local query engine. Then, cross-type query engine optimization620may be applied to evaluate the initial query plan to determine which portions of a query can be performed by different types of query engines. However, in some scenarios, only a single type of query engine plan may be output from optimization620. For example, local query execution type plan632may only utilize the initial query engine that received the query (e.g., a database engine head node). This may occur because features included in the query may not be supported by the other query engine, or a predicted performance of the query at the local query engine may be better than the predicted performance achieved remotely. Alternatively, a remote query engine type plan634may be generated according to an evaluation that recognizes that performance of the query may be better if only performed on the remote query engine (although results may still be sent back to the local query engine before being passed to a client). A cross-type plan636could also be generated which utilizes both local and remote query engines, and thus includes local plan portions642and a remote plan644(a sketch of this selection appears below). Database service(s)210and data warehouse service(s)230may implement interface features to enable (and disable) performing queries across types of query engines for databases or warehouses hosted within these services. For example, database service210may implement cross-type query processing as a feature of a database instance that is deployed on behalf of a user when creating a new database in the service (or updating an existing database to the new database instance that includes this feature). 
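The three plan shapes ofFIG.6can be illustrated with a minimal sketch (hypothetical cost model and operator names) that assigns each operator to the cheaper supporting engine and emits a local-only, remote-only, or cross-type plan accordingly:

    def optimize_cross_type(operators, supported_remote, cost_local, cost_remote):
        remote = [op for op in operators
                  if op in supported_remote and cost_remote[op] < cost_local[op]]
        local = [op for op in operators if op not in remote]
        if not remote:
            return {"type": "local", "plan": local}     # like plan 632
        if not local:
            return {"type": "remote", "plan": remote}   # like plan 634
        return {"type": "cross", "local": local, "remote": remote}  # like 636

    plan = optimize_cross_type(
        ["index_lookup", "big_aggregate"],
        supported_remote={"big_aggregate"},
        cost_local={"index_lookup": 1, "big_aggregate": 100},
        cost_remote={"big_aggregate": 5},
    )
    assert plan["type"] == "cross" and plan["remote"] == ["big_aggregate"]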
In some embodiments, data warehouse service230may implement an interface to provision a serverless (e.g., fully managed) data warehouse cluster, which may be provided as an endpoint to handle queries to a database hosted by database service210via the cross-type query engine techniques discussed above (e.g., by allowing the data warehouse processing cluster to access the database volumes of a database in database service210). For instance, the provisioned, requested, or deployed endpoint for the data warehouse processing cluster may be provided to a database instance of database service210to allow both the data warehouse processing cluster to access the database volume and to allow the database instance (e.g., via the database engine head node) to perform portions or all of queries sent to the database instance on the data warehouse processing cluster when more optimal for performance. As noted above, some cross-type queries may take advantage of additional copies of a data set stored in another location, such as snapshots taken of a database. In this way, the retrieval and processing of external data at a data warehouse processing cluster can be sped up even further if the retrieval and processing work can be divided amongst additional retrieval nodes to access the data set stored in the other location. A database migrator may be implemented to facilitate the generation and storage of snapshots to support utilizing additional copies of a data set to perform a query across query engine types with a consistent view of the data set.FIG.7is a block diagram illustrating a migrator that moves database data from one storage service to another storage service accessible to support performing queries to a consistent data set across query engine types, according to some embodiments. Migrator710may be implemented as a standalone service in provider network200, or may be implemented as part of a database service210or data warehouse service230. In some embodiments, migrator710may be deployed when cross-type queries are enabled for a database instance or data warehouse cluster. Migrator710may request database data to generate a database snapshot752from log-structured storage service350(e.g., by requesting data pages or tuple streams of database730). The database data may be returned754and migrator710may generate the snapshot. For instance, migrator710may reformat the database snapshot to be optimized for the data warehouse processing cluster (e.g., in column-oriented format). Migrator710may collect statistics or other features of the database snapshot which may be used to perform query planning and execution at data warehouse processing cluster720. Migrator710may store warehouse-formatted snapshots756, as respective database snapshot objects, such as database snapshots740a,740b,740c, and so on, in object storage service460. Migrator710may also provide an indication of available database snapshots758to data warehouse processing cluster720(e.g., by LSN associated with the snapshot) so that data warehouse processing cluster720can develop query plans to use database snapshots. A similar indication could be provided to a database engine head node (not illustrated) for similar query planning purposes. Migrator710may be configured by a user interface (e.g., graphical, command line or programmatic) to generate snapshots according to various rules or criteria that trigger snapshot generation events, as illustrated in the sketch below. 
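Such trigger criteria might be sketched as follows (hypothetical thresholds and class names), with a snapshot generated when either a configured time period elapses or a configured amount of change accumulates:

    import time

    class SnapshotPolicy:
        """A sketch of migrator 710's trigger rules, under assumptions."""
        def __init__(self, max_age_secs=3600, max_changed_bytes=1 << 30):
            self.max_age_secs = max_age_secs
            self.max_changed_bytes = max_changed_bytes
            self.last_snapshot_time = time.time()
            self.changed_bytes = 0

        def record_change(self, nbytes):
            self.changed_bytes += nbytes

        def should_snapshot(self):
            aged = time.time() - self.last_snapshot_time >= self.max_age_secs
            dirty = self.changed_bytes >= self.max_changed_bytes
            return aged or dirty

    policy = SnapshotPolicy(max_age_secs=3600, max_changed_bytes=1024)
    policy.record_change(2048)
    assert policy.should_snapshot()   # the change threshold was exceeded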
For example, when a migrator710is deployed for a database instance with cross-type query performance enabled, the user interface may expose options to a user to define time periods for generating snapshots, or amounts of change to a database that trigger generating a snapshot. The various provider network services discussed inFIGS.2through7provide examples of a system that may perform queries to a consistent data set across query engine types. However, various other types of query engines (e.g., non-relational), data stores (e.g., non-log structured) or other architectures or environments may implement performing queries to a consistent data set across query engine types.FIG.8is a high-level flow chart illustrating methods and techniques for performing queries to a consistent data set across query engine types, according to some embodiments. Various different systems and devices may implement the various methods and techniques described below, either singly or working together. For example, a database engine head node or storage node may implement the various methods. Alternatively, a combination of different systems and devices may implement the various methods. Therefore, the above examples and/or any other systems or devices referenced as performing the illustrated method are not intended to be limiting as to other different components, modules, systems, or configurations of systems and devices. Different types of query engines may be implemented that can access a same data set stored in a same data store. For example, first query engine802and second query engine804may support different query languages, different operations, different underlying data formats, or other differences and yet may also perform queries across the two different query engines. For instance, as indicated at810, a query to a data set stored in a data store may be received at first query engine802, in some embodiments. The query may be formatted according to a query language or features supported by first query engine802or may include or be formatted according to a query language or features supported by second query engine804. As indicated at820, a portion (or all) of the query may be identified to be performed at a different type of query engine, in some embodiments. For example, the different features, operations, or language of the second query engine804may be recognized by a parser, optimizer, or other query engine component. In some embodiments, query hints may identify sections of queries and a corresponding type of query engine to be used instead of the receiving query engine. In some embodiments, cost estimations or predictions may be used to weight alternative query plans, including plans that implement different query engine types, in order to select a plan according to the estimated performances of the plans. As indicated at830, a consistent view of the data set identified for the query may be provided to the different type of query engine, in some embodiments. For example, a consistent view of the database may be assigned according to an arrival time of the query at first query engine802(e.g., represented by a timestamp or LSN). The consistent view may be indicated to second query engine804as part of a request to perform the portion of the query, in some embodiments. 
The consistent view may be provided by sending, copying, or transferring undo or other information used to generate versions of data in the data set consistent with the consistent view of the database (e.g., a value of the data as of the same time as the consistent view), in some embodiments. As indicated at840, the consistent view of the data set may be applied when evaluating data obtained from the data set at the data store as part of performing the portion of the query, in some embodiments. For example, as data is obtained that is not consistent with the consistent view, then the consistency information may be used to generate a value of the data that is consistent with the consistent view. A result of the portion of the query may be returned to the query engine, as indicated at850, in some embodiments. A result of the query based on the received result of the portion of the query may be returned, in some embodiments. For example, the result may be used to join or filter data obtained by first query engine802to produce a final result. FIG.9is a high-level flow chart illustrating methods and techniques for generating a query plan for performing queries to a consistent data set across query engine types, according to some embodiments. As indicated at910, an initial local query engine plan may be generated, in some embodiments, for a query received at a query engine. For example, various parsing, analyzing, and optimizing rules may be applied to generate the initial local query plan as if no other query engine were available to perform the query. As indicated at920, cross-type optimization rules may be applied to evaluate the initial local query engine plan, in some embodiments. In this way, portion(s) of the query plan may be identified for a remote query engine instead of the local query engine, as indicated at930. Cross-type optimization rules can be dependent upon the supported features and characteristics of the local and remote query engines under consideration. For example, short queries (e.g., identifying a small number of records or individual records in a data set) may be identified for local query processing if the local query engine is an OLTP type of query engine, whereas queries that involve a large number of records may be OLAP queries and beneficially performed on a remote query engine that is an OLAP type of query engine. Cross-type optimization rules may include supported languages, features, operations, hints, or other information specific to each query engine type. Estimation techniques for each query engine type to perform plan analysis may be implemented, including machine-learning-based performance estimation models that can take as input the features of the initial query plan and classify portions of the query plan for the different types of query engines according to a machine learning model (e.g., a classification technique that utilizes a feature-vector-based comparison of a feature vector generated for the query). Selection between features supported by both query engine types may be handled according to performance estimates or models for the features maintained for each query engine type (e.g., in order to select the more performant query engine type). Query hints, database statistics, or other information that can influence the prediction or selection of operations in a query plan, and thus the query engine type that may perform them, may also be considered by cross-type optimization rules.
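One simple reading of such a rule (the row threshold and plan representation below are illustrative assumptions, not prescribed by this description) is a cardinality-based split between OLTP-style and OLAP-style work:

# Illustrative cross-type optimization rule: keep short, record-level work
# on the local OLTP-type engine and route large scans to the remote
# OLAP-type engine. Threshold and plan shape are assumptions for this sketch.

OLAP_ROW_THRESHOLD = 100_000

def classify_plan_nodes(plan_nodes):
    # plan_nodes: iterable of (node_id, estimated_rows) pairs
    local, remote = [], []
    for node_id, estimated_rows in plan_nodes:
        if estimated_rows >= OLAP_ROW_THRESHOLD:
            remote.append(node_id)   # large scan: candidate for remote engine
        else:
            local.append(node_id)    # short lookup: keep on local engine
    return local, remote

In practice such a rule would sit alongside the hint-, feature-, and model-based estimates described above rather than replace them.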
If no portions of the query are identified for remote performance, then the initial local query engine plan may be performed, as indicated at940. However, if portions of the query plan are identified for a remote query engine, then an updated query plan that includes a plan for the remote query engine may be generated, as indicated at950. For example, the plan may be a query in a language or format supported by the remote query engine (which the remote query engine can parse and optimize to generate its own query plan). The remote query plan portion may include operators or instructions understandable by the remote query engine (e.g., similar to an output of a planner/optimizer at the remote query engine) to begin performance of the identified portion. The plan may include operations to provide additional metadata and/or consistency information to the remote query engine, as discussed above with regard toFIG.5, in some embodiments. Then, as indicated at960, the updated query plan may be performed by the local query engine, which may include directing various operations and requests to the remote query engine, combining or forwarding results from the remote query engine, among other query plan operations, in some embodiments. FIG.10is a high-level flow chart illustrating methods and techniques for applying a consistent view of a data set to perform queries to a consistent data set across query engine types, according to some embodiments. As indicated at1010, a query may be received from another type of query engine, in some embodiments. The query may be represented according to a query language, protocol, API, or other format supported by the query engine, including a query plan. A determination may be made as to whether metadata is needed to perform the query, as indicated at1020, in some embodiments. For example, logical sequence numbers (LSNs) indicating the consistent view point of the database for the query, storage/head node connection info, schema, and available statistics, among other metadata, may be provided. If one or more portions of metadata needed (or desirable) to generate a plan (or execute a plan) to perform the query are identified, then, as indicated at1030, the metadata may be obtained from the other query engine, in some embodiments. For example, an API request or other interface may be used to request and receive specific metadata (or all available metadata) from the other query engine. As indicated at1040, a plan to perform the query may be generated, in some embodiments. If a plan was received as the query, then a physical plan (e.g., identifying which storage nodes, data pages or blocks, etc.) may be generated. If the query was received as a query statement, then a logical plan (e.g., including various query plan operations, such as scans, joins, etc.) and then a physical plan may be generated. As indicated at1050, database data may be obtained from a same data store as accessible to the other query engine, in various embodiments. For example, the data retrieval nodes of data retrieval service450as discussed above inFIG.4may be used, or other types of data reading or scanning techniques may be implemented. In addition to the data from the same data store, other copies of the data set may be used (e.g., snapshots stored in another data store) to supplement or reduce time to scan the data for performing the query, in some embodiments. As indicated at1060, a check may be made as to whether the obtained data is within the consistent view of the data set for the query, in some embodiments.
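Steps 1010 through 1050 might be sketched as below (the metadata keys, request API, and plan representation are hypothetical); the consistency check at 1060 is elaborated in the next paragraph:

from dataclasses import dataclass, field

# Sketch of FIG. 10 steps 1010-1050 at the receiving query engine; the
# metadata-request API and plan representation are illustrative assumptions.

@dataclass
class DelegatedQuery:
    text: str
    metadata: dict = field(default_factory=dict)

def perform_delegated_query(query, other_engine, storage):
    # 1020/1030: obtain any metadata needed to plan (LSN, schema, statistics)
    missing = [key for key in ("consistent_lsn", "schema", "statistics")
               if key not in query.metadata]
    if missing:
        query.metadata.update(other_engine.get_metadata(missing))
    # 1040: generate a plan, here reduced to the set of pages to scan
    target_pages = plan_pages_for(query)
    # 1050: obtain database data from the same data store the other engine uses
    return [storage.read_page(page_id) for page_id in target_pages]

def plan_pages_for(query):
    # stand-in for real logical/physical planning
    return query.metadata.get("statistics", {}).get("pages", [])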
For example, an LSN or timestamp value associated with the consistent view may be compared with an LSN or timestamp associated with a data page (or a value within the data page). If the LSN/timestamp for the page is later than the LSN/timestamp value for the consistent view, then the data is not consistent. As indicated at1070, consistency information may be obtained from the other query engine to update the database to the consistent view, in some embodiments. For example, undo log records for the data page (or value within the data page) may be returned to the query engine. Performance of the query may continue, as indicated by the positive exit from1080, until no more data remains to be obtained. The query engine may perform various evaluations and operations as specified by the query and return a result for the query determined from the database data (including the updated database data), as indicated at1090, in various embodiments. The methods described herein may in various embodiments be implemented by any combination of hardware and software. For example, in one embodiment, the methods may be implemented by a computer system (e.g., a computer system as inFIG.11) that includes one or more processors executing program instructions stored on a computer-readable storage medium coupled to the processors. The program instructions may implement the functionality described herein (e.g., the functionality of various servers and other components that implement the database services/systems and/or storage services/systems described herein). The various methods as illustrated in the figures and described herein represent example embodiments of methods. The order of any method may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. FIG.11is a block diagram illustrating a computer system that may implement at least a portion of the systems described herein, according to various embodiments. For example, computer system2000may implement a database engine head node of a database tier, or one of a plurality of storage nodes of a separate distributed storage system that stores databases and associated metadata on behalf of clients of the database tier, in different embodiments. Computer system2000may be any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop or notebook computer, mainframe computer system, handheld computer, workstation, network computer, a consumer device, application server, storage device, telephone, mobile telephone, or in general any type of computing device. Computer system2000includes one or more processors2010(any of which may include multiple cores, which may be single- or multi-threaded) coupled to a system memory2020via an input/output (I/O) interface2030. Computer system2000further includes a network interface2040coupled to I/O interface2030. In various embodiments, computer system2000may be a uniprocessor system including one processor2010, or a multiprocessor system including several processors2010(e.g., two, four, eight, or another suitable number). Processors2010may be any suitable processors capable of executing instructions. For example, in various embodiments, processors2010may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors2010may commonly, but not necessarily, implement the same ISA.
The computer system2000also includes one or more network communication devices (e.g., network interface2040) for communicating with other systems and/or components over a communications network (e.g., Internet, LAN, etc.). For example, a client application executing on system2000may use network interface2040to communicate with a server application executing on a single server or on a cluster of servers that implement one or more of the components of the database systems described herein. In another example, an instance of a server application executing on computer system2000may use network interface2040to communicate with other instances of the server application (or another server application) that may be implemented on other computer systems (e.g., computer systems2090). In the illustrated embodiment, computer system2000also includes one or more persistent storage devices2060and/or one or more I/O devices2080. In various embodiments, persistent storage devices2060may correspond to disk drives, tape drives, solid state memory, other mass storage devices, or any other persistent storage device. Computer system2000(or a distributed application or operating system operating thereon) may store instructions and/or data in persistent storage devices2060, as desired, and may retrieve the stored instructions and/or data as needed. For example, in some embodiments, computer system2000may host a storage node, and persistent storage2060may include the SSDs attached to that server node. Computer system2000includes one or more system memories2020that may store instructions and data accessible by processor(s)2010. In various embodiments, system memories2020may be implemented using any suitable memory technology (e.g., one or more of cache, static random-access memory (SRAM), DRAM, RDRAM, EDO RAM, DDR RAM, synchronous dynamic RAM (SDRAM), Rambus RAM, EEPROM, non-volatile/Flash-type memory, or any other type of memory). System memory2020may contain program instructions2025that are executable by processor(s)2010to implement the methods and techniques described herein for performing queries to a consistent data set across query engine types. In various embodiments, program instructions2025may be encoded in platform native binary, any interpreted language such as Java™ byte-code, or in any other language such as C/C++, Java™, etc., or in any combination thereof. For example, in the illustrated embodiment, program instructions2025include program instructions executable to implement the functionality of a database engine head node, nodes of a data warehouse processing cluster, a migrator, or storage nodes of a storage service, in different embodiments. In some embodiments, program instructions2025may implement multiple separate clients, server nodes, and/or other components. In some embodiments, program instructions2025may include instructions executable to implement an operating system (not shown), which may be any of various operating systems, such as UNIX, LINUX, Solaris™, MacOS™, Windows™, etc. Any or all of program instructions2025may be provided as a computer program product, or software, that may include a non-transitory computer-readable storage medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to various embodiments.
A non-transitory computer-readable storage medium may include any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). Generally speaking, a non-transitory computer-accessible medium may include computer-readable storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM coupled to computer system2000via I/O interface2030. A non-transitory computer-readable storage medium may also include any volatile or non-volatile media such as RAM (e.g., SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computer system2000as system memory2020or another type of memory. In other embodiments, program instructions may be communicated using optical, acoustical, or other forms of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.) conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface2040. In some embodiments, system memory2020may include data store2045, which may be implemented as described herein. For example, the information described herein as being stored by the database tier (e.g., on a database engine head node), such as a transaction log, an undo log, cached page data, or other information used in performing the functions of the database tiers described herein may be stored in data store2045or in another portion of system memory2020on one or more nodes, in persistent storage2060, and/or on one or more remote storage devices2070, at different times and in various embodiments. Similarly, the information described herein as being stored by the storage tier (e.g., redo log records, coalesced data pages, and/or other information used in performing the functions of the distributed storage systems described herein) may be stored in data store2045or in another portion of system memory2020on one or more nodes, in persistent storage2060, and/or on one or more remote storage devices2070, at different times and in various embodiments. In general, system memory2020(e.g., data store2045within system memory2020), persistent storage2060, and/or remote storage2070may store data blocks, replicas of data blocks, metadata associated with data blocks and/or their state, database configuration information, and/or any other information usable in implementing the methods and techniques described herein. In one embodiment, I/O interface2030may coordinate I/O traffic between processor2010, system memory2020, and any peripheral devices in the system, including through network interface2040or other peripheral interfaces. In some embodiments, I/O interface2030may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory2020) into a format suitable for use by another component (e.g., processor2010). In some embodiments, I/O interface2030may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface2030may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments, some or all of the functionality of I/O interface2030, such as an interface to system memory2020, may be incorporated directly into processor2010.
Network interface2040may allow data to be exchanged between computer system2000and other devices attached to a network, such as other computer systems2090(which may implement one or more storage system server nodes, database engine head nodes, and/or clients of the database systems described herein), for example. In addition, network interface2040may allow communication between computer system2000and various I/O devices2050and/or remote storage2070. Input/output devices2050may, in some embodiments, include one or more display terminals, keyboards, keypads, touchpads, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or retrieving data by one or more computer systems2000. Multiple input/output devices2050may be present in computer system2000or may be distributed on various nodes of a distributed system that includes computer system2000. In some embodiments, similar input/output devices may be separate from computer system2000and may interact with one or more nodes of a distributed system that includes computer system2000through a wired or wireless connection, such as over network interface2040. Network interface2040may commonly support one or more wireless networking protocols (e.g., Wi-Fi/IEEE 802.11, or another wireless networking standard). However, in various embodiments, network interface2040may support communication via any suitable wired or wireless general data networks, such as other types of Ethernet networks, for example. Additionally, network interface2040may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol. In various embodiments, computer system2000may include more, fewer, or different components than those illustrated inFIG.11(e.g., displays, video cards, audio cards, peripheral devices, other network interfaces such as an ATM interface, an Ethernet interface, a Frame Relay interface, etc.). It is noted that any of the distributed system embodiments described herein, or any of their components, may be implemented as one or more web services. For example, a database engine head node within the database tier of a database system may present database services and/or other types of data storage services that employ the distributed storage systems described herein to clients as web services. In some embodiments, a web service may be implemented by a software and/or hardware system designed to support interoperable machine-to-machine interaction over a network. A web service may have an interface described in a machine-processable format, such as the Web Services Description Language (WSDL). Other systems may interact with the web service in a manner prescribed by the description of the web service's interface. For example, the web service may define various operations that other systems may invoke, and may define a particular application programming interface (API) to which other systems may be expected to conform when requesting the various operations. In various embodiments, a web service may be requested or invoked through the use of a message that includes parameters and/or data associated with the web services request. Such a message may be formatted according to a particular markup language such as Extensible Markup Language (XML), and/or may be encapsulated using a protocol such as Simple Object Access Protocol (SOAP).
To perform a web services request, a web services client may assemble a message including the request and convey the message to an addressable endpoint (e.g., a Uniform Resource Locator (URL)) corresponding to the web service, using an Internet-based application layer transfer protocol such as Hypertext Transfer Protocol (HTTP). In some embodiments, web services may be implemented using Representational State Transfer (“RESTful”) techniques rather than message-based techniques. For example, a web service implemented according to a RESTful technique may be invoked through parameters included within an HTTP method such as PUT, GET, or DELETE, rather than encapsulated within a SOAP message. The various methods as illustrated in the figures and described herein represent example embodiments of methods. The methods may be implemented manually, in software, in hardware, or in a combination thereof. The order of any method may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Although the embodiments above have been described in considerable detail, numerous variations and modifications may be made as would become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such modifications and changes and, accordingly, the above description to be regarded in an illustrative rather than a restrictive sense.
93,900
11860870
DETAILED DESCRIPTION The following discussion omits or only briefly describes conventional features of the data processing environment, which are apparent to those skilled in the art. It is noted that various embodiments are described in detail with reference to the drawings, in which like reference numerals represent like drawing elements throughout the figures. Reference to various embodiments does not limit the scope of the claims attached hereto. Additionally, any examples set forth in this specification are intended to be non-limiting and merely set forth some of the many possible embodiments for the appended claims. Further, particular features described herein can be used in combination with other described features in each of the various possible combinations and permutations. The objectives and advantages of the claimed subject matter will become more apparent from the following detailed description of these embodiments in connection with the accompanying drawings. Unless otherwise specifically defined herein, all terms are to be given their broadest possible interpretation including meanings implied from the specification as well as meanings understood by those skilled in the art and/or as defined in dictionaries, treatises, etc. It is also noted that, as used in the specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless otherwise specified, and that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, aspects, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, aspects, steps, operations, elements, components, and/or groups thereof. Moreover, the terms “couple,” “coupled,” “operatively coupled,” “operatively connected,” and the like should be broadly understood to refer to connecting devices or components together either mechanically, electrically, wired, wirelessly, or otherwise, such that the connection allows the pertinent devices or components to operate (e.g., communicate) with each other as intended by virtue of that relationship. Embodiments of the disclosure relate generally to database systems, and more particularly, to job optimization involving effective data retrieval across multiple data sources via an externalized query pattern. Embodiments that optimize jobs via externalized query patterns are described below with reference to the figures. FIG.1is a functional block diagram of a data processing environment100.FIG.1provides only an illustration of one implementation and does not imply any limitations with regard to the environments in which different embodiments may be implemented. Many modifications of the depicted environment may be made by those skilled in the art without departing from the scope of the claims. In one or more cases, the data processing environment100includes a server104, which operates a query optimization system102(hereinafter “system102”), a data storage repository108, and one or more computing devices, such as computing device110and computing device112, coupled over a network106. The server104, system102, data storage repository108, and computing devices110and112can each be any suitable computing device that includes any hardware or hardware and software combination for processing and handling information, and transmitting and receiving data among the server104, system102, data storage repository108, and computing devices110and112.
The server104, system102, data storage repository108, and computing devices110and112can each include one or more processors, one or more field-programmable gate arrays (FPGAs), one or more application-specific integrated circuits (ASICs), one or more state machines, digital circuitry, and any other suitable circuitry capable of performing the operations of process300. The network106interconnects the server104, the data storage repository108, and one or both of the devices110and112. In general, the network106can be any combination of connections and protocols capable of supporting communication between the server104, the data storage repository108, one or both of the computing devices110and112, and the system102. For example, the network106may be a WiFi® network, a cellular network, a Bluetooth® network, a satellite network, a wireless local area network (LAN), a network utilizing radio-frequency (RF) communication protocols, a Near Field Communication (NFC) network, a wireless Metropolitan Area Network (MAN) connecting multiple wireless LANs, a wide area network (WAN), or any other suitable network. In one or more cases, the network106may include wire cables, wireless communication links, fiber optic cables, routers, switches, firewalls, or any combination that can include wired, wireless, or fiber optic connections. In one or more cases, the server104hosts the system102. In one or more cases, the server104represents a computing system utilizing clusters of computing nodes and components (e.g., database server computers, application server computers, etc.) that act as a single pool of seamless resources, such as in a cloud computing environment, when accessed within data processing environment100. In other cases, the server104can be a data center, which includes a collection of networks and servers, such as virtual servers and applications deployed on virtual servers, providing an external party access to the system102. In some other cases, the server104may be a web server, a blade server, a mobile computing device, a laptop computer, a tablet computer, a netbook computer, a personal computer (PC), a desktop computer, or any programmable electronic device or computing system capable of receiving and sending data, via the network106, and performing computer-readable program instructions. In one or more cases, the data storage repository108may represent virtual instances operating on a computing system utilizing clusters of computing nodes and components (e.g., database server computers, application server computers, etc.) that act as a single pool of seamless resources when accessed within data processing environment100. In one or more other cases, the data storage repository108may be one of a web server, a mobile computing device, a laptop computer, a tablet computer, a netbook computer, a personal computer (PC), a desktop computer, or any programmable electronic device or computing system capable of receiving, storing, and sending data, and performing computer-readable program instructions, capable of communicating with the server104and computing devices110and112, via network106. In one or more cases, the data storage repository108may be a storage device that is remote from the server104. In one or more other cases, the data storage repository108may be a local storage device on the server104; for example, the data storage repository108may be local to the one or more computing nodes. In one or more cases, computing devices110and112are clients to the server104.
The computing devices110and112may be, for example, a desktop computer, a laptop computer, a tablet computer, a personal digital assistant (PDA), a smart phone, a thin client, a digital assistant, or any other electronic device or computing system capable of communicating with server104through network106. For example, computing device110may be a desktop computer capable of connecting to the network106to send a query request to a processing engine210of the system102. In one or more cases, one or both of the computing devices110and112may be any suitable type of mobile device capable of running mobile applications, including smart phones, tablets, slates, or any type of device that runs a mobile operating system. It is noted that data processing environment100includes computing devices110and112capable of interacting with system102, but it should be understood that any number of computing devices may interact with system102in a same or similar manner as computing devices110and112. In one or more cases, one or both of the computing devices110and112includes a user interface for providing an end user with the capability to interact with the system102. For example, an end user of the computing device110may access the system102through the user interface to send a query request to the system102. A user interface refers to the information (such as graphic, text, and sound) a program presents to a user and the control sequences the user employs to control the program. The user interface can be a graphical user interface (GUI). A GUI may allow users to interact with electronic devices, such as a keyboard and mouse, through graphical icons and visual indicators, such as secondary notations, as opposed to text-based interfaces, typed command labels, or text navigation. FIG.2is a functional block diagram illustrating components of the data processing environment100ofFIG.1. In one or more cases, the data storage repository108includes a holistic view, i.e., full data sets, of data related to the system102. For example, the data storage repository108includes data corresponding to descriptive information of items offered for sale on an e-commerce website. For example, the descriptive information of an item may include, for example, but not limited to, a title of the item, a brand of the item, descriptive phrases of the item, size of the item, color of the item, usage instructions for the item, item ingredients, and the like. In another example, the data storage repository108includes data corresponding to a history of items purchased by a customer, e.g., historical transaction data indicating when and how often customers purchased an item, and/or a history of item interactions by customers indicating how many times customers interacted with the item on the e-commerce website, e.g., by viewing the item, placing the item in the customer's online shopping cart, and other like interactions. In another example, the data storage repository108includes data corresponding to information regarding a status of one or more items. The status information may include, for example, but not limited to, a list of items within a certain store or a group of stores, a list of items that are ready for delivery, a list of items that are not ready for delivery, a list of items that qualify for special shipping (e.g., one-day shipping), and other like information regarding the status of an item. In yet another example, the data storage repository108includes data corresponding to an item setup orchestrator (ISO).
In yet another example, the data storage repository108includes data corresponding to offers related to items, e.g., a current or past sale price for an item. It is noted that a query submitted by a user may include a request for information based on a time period, one of the types of data described herein, and/or a combination of the types of data described herein. In one or more cases, the system102includes the processing engine210, a search engine218, a distributed database management system220, a distributed streaming engine224, an indexing engine222, a conduit engine226, a file system228, and a bedrock engine238. In one or more examples, one or more of the processing engine210, the search engine218, the distributed database management system220, the distributed streaming engine224, the indexing engine222, the conduit engine226, the file system228, and the bedrock engine238may be implemented in hardware. In one or more examples, one or more of the processing engine210, the search engine218, the distributed database management system220, the distributed streaming engine224, the indexing engine222, the conduit engine226, the file system228, and the bedrock engine238may be implemented as one or more executable programs maintained in a tangible, non-transitory memory, such as instruction memory407ofFIG.4, which may be executed by one or more processors, such as processor401ofFIG.4. In one or more cases, the processing engine210may be a distributed data processing engine that runs on one computer node or a cluster of computer nodes. The processing engine210may be configured to perform batch processing, streaming, distributed task dispatching, and scheduling; to provide responses to queries and utilize machine learning; and/or to perform input/output functionalities. For example, the processing engine210may be an Apache Spark Core™ engine or other like engines. In one or more cases, the processing engine210may include a cluster of computing nodes, such as a master computing node212, a gateway computing node216, and one or more worker computing nodes, such as worker computing node214a, worker computing node214b, and worker computing node214c. The gateway computing node216may be configured to receive a query, e.g., query 1, from a computing device, such as computing device110. The gateway computing node216may prepare the query request as a job, for example, but not limited to, a Spark™ job. The gateway computing node216may provide the job to the master computing node212for processing. The gateway computing node216may be, for example, a Chroniton™. In one or more cases, the master computing node212analyzes the query request and creates a job (e.g., threepl-iml-feed for query 1) based on the query request. Having created the job, the master computing node212determines a number of processing cores and a size of memory needed to complete the job. In one or more cases, the number of processing cores and memory size may be predetermined for a corresponding job. For instance, when the master computing node212creates job threepl-iml-feed, the master computing node212may retrieve the number of processing cores and memory size from a lookup table that includes the number of processing cores and memory size for a corresponding job. Based on the number of processing cores and a size of the memory, the master computing node212allocates worker computing nodes with enough processing cores and memory size to complete the job.
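A minimal sketch of this lookup-and-allocate step follows; the table values echo the core and memory figures used in the examples in this description, while the worker-node representation and function names are hypothetical:

# Sketch of the master node's sizing lookup and worker allocation. The table
# values echo the examples in this description; the node API is hypothetical.

JOB_RESOURCES = {
    "threepl-iml-feed": {"cores": 30, "memory_gb": 120},
    "grouping-cp-feed": {"cores": 80, "memory_gb": 320},
}

def allocate_workers(job_name, worker_pool):
    need = JOB_RESOURCES[job_name]
    chosen, cores, memory_gb = [], 0, 0
    for worker in worker_pool:
        if cores >= need["cores"] and memory_gb >= need["memory_gb"]:
            break  # the job's requirements are already met
        chosen.append(worker)
        cores += worker["cores"]
        memory_gb += worker["memory_gb"]
    if cores < need["cores"] or memory_gb < need["memory_gb"]:
        raise RuntimeError("not enough capacity for " + job_name)
    return chosen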
For example, the master computing node212may consume worker computing nodes214aand214bfrom the computing cluster, in which worker computing nodes214aand214bare capable of providing thirty (30) processing cores with a memory size of 120 GB to process the threepl-iml-feed job. Two worker computing nodes are described herein as completing the example job; however, one worker computing node or any number of worker computing nodes may be used to complete a job provided by the master computing node212. To process the job, the processing engine210, via one or more of the worker nodes214a,214b, and214c, may read and write data to the search engine218, the distributed database management system220, and/or the file system228as discussed herein. In one or more cases, the search engine218may be a scalable and fault-tolerant search engine. The search engine218provides distributed indexing and searching of large-scale data, such as text-centric data. For instance, the search engine218may be, for example, Apache Solr™, Elasticsearch™, or the like. In one or more cases, the search engine218may receive the job from the processing engine210. Having received the job from the processing engine210, the search engine218determines whether the job corresponds to an indexed identifier in the search engine218. In one or more cases, the search engine218may search a lookup table to determine whether the job corresponds to an indexed identifier within the lookup table. The indexed identifier may be any number, character, sequence of numbers, sequence of characters, or sequence of a combination of numbers and characters to identify or refer to a query and/or a job corresponding to the query that is indexed within the search engine218. For the cases in which the search engine218determines that an indexed identifier corresponds to the query and/or respective job, the search engine218provides the determined indexed identifier and corresponding attributes to the processing engine210. In one or more cases, the distributed database management system220may be a scalable peer-to-peer distributed system of a cluster of computing nodes configured to handle large volumes of data (e.g., unstructured, structured, and semi-structured data) within the data storage repository108. In one or more cases, the distributed database management system220may be a NoSQL database management system, for example, but not limited to, Apache Cassandra™. One or more computing nodes within the peer-to-peer distributed system can accept a read or write request. For example, a computing node of the distributed database management system220may receive, from the processing engine210, a read request for an example job that does not correspond to an indexed identifier within the search engine218. Having received the read request for the job, the distributed database management system220may access the conduit engine226to retrieve the requested information for the job. The conduit engine226may write the requested information to the distributed database management system220. The distributed database management system220provides the requested information, e.g., one or more attributes of the corresponding job, to the processing engine210. In one or more cases, the conduit engine226may provide access to full data sets within the data storage repository108.
The conduit engine226may include a conduit that channels messages from the data storage repository108to a singular destination, such as the distributed database management system220. Having received a read request from the distributed database management system220, the conduit engine226submits a read request to the distributed streaming engine224for the requested job. The conduit engine226may be, for example, but not limited to, an Uber Conduit™. In one or more cases, the conduit engine226may provide access to data sets within the data storage repository108, via the distributed streaming engine224. In one or more cases, the distributed streaming engine224is a distributed streaming platform, which is configured to publish and subscribe to streams of records. The distributed streaming engine224may be, for example, but not limited to, Apache Kafka™. The distributed streaming engine224may be configured to publish and subscribe to data within the data storage repository108. For example, the distributed streaming engine224can define and subscribe to one or more topics of data. Topics may include, for example, but not limited to, products202, offers204, ISO206, and one or more other topics208that the distributed streaming engine224can define and subscribe to. The data storage repository108may transfer records onto the corresponding defined topic. A record may include one or more attributes describing the corresponding data. For instance, attributes of a record may include key attributes, value attributes, timestamp attributes, and header attributes. The value attributes may be provided in, for example, but not limited to, plain text format or JavaScript Object Notation (JSON) format. Having received the read request, the distributed streaming engine224may process the records corresponding to the topic of the read request and provide the records to the conduit engine226. The conduit engine226may write the records and the one or more attributes describing the corresponding data of the records to the distributed database management system220. In one or more cases, the indexing engine222is configured to scan the topics, defined by the distributed streaming engine224, for records corresponding to responses to query requests. For example, the indexing engine222may be configured to scan the topics for records corresponding to the most common query requests to the system102. In one or more cases, the most common query requests may be those requests that are frequently submitted to the system102. Having found one or more records in the corresponding topics, the indexing engine222writes the one or more records and the one or more corresponding attributes to the search engine218. In one or more cases, the search engine218indexes the records and the one or more corresponding attributes as responses to a corresponding request. In one or more cases, the file system228may be configured to receive and store responses to query requests from the processing engine210. For example, the file system228may store the results from one or more jobs executed by the processing engine210. In one or more cases, the file system228may include one or more storage systems, for example, but not limited to, a Hadoop™ distributed file system (HDFS)230, Google Storage™ (GS)232, distributed object storage234(e.g., OpenStack Swift™), Azure™ file storage236, and other like storage systems. The file system228may receive requests from one or more external or internal data querying tools.
In one or more cases, the file system228allows an external user to submit data queries to system102. In one or more cases, the file system228receives responses to the queries from the system102, without allowing the external user access to one or more other components of the system102, for example, but not limited to, the processing engine210, the search engine218, the distributed database management system220, the distributed streaming engine224, the indexing engine222, and the conduit engine226. In one or more cases, an internal data querying tool may include a Bedrock™ engine238. The Bedrock™ engine238may be a downstream engine that is internal to the system102. The Bedrock™ engine238is configured to receive and process a request by uploading data stored in the file system228to one or more servers that are external to the system102. In one or more cases, external data querying tools may include, for example, but not limited to, a distributed Structured Query Language (SQL) query engine240, a web-based notebook242, and the like. The distributed SQL query engine240and web-based notebook242are each configured to provide interactive analytical querying on data stored within the file system228. For instance, the distributed SQL query engine240may provide an end user the ability to submit a query to the distributed SQL query engine240. The distributed SQL query engine240queries across HDFS230, GS232, distributed object storage234, and Azure file storage236, and returns the combined data from one or more of these storage systems as a response to the query. In one or more cases, the distributed SQL query engine240may operate on a distributed cluster of computer nodes, in which the cluster of computer nodes scales in size based on the submitted query. The distributed SQL query engine240may be, for example, but not limited to, Presto™. In another instance, the web-based notebook242is a browser-based notebook that may provide the end user the ability to submit a query to the web-based notebook242, which in turn searches the file system228for responses to the query. The web-based notebook242may be, for example, but not limited to, Apache Zeppelin™. In one or more cases, the distributed SQL query engine240and the web-based notebook242may be implemented in hardware. In one or more other cases, the distributed SQL query engine240and the web-based notebook242may be implemented as an executable program maintained in a tangible, non-transitory memory, which may be executed by one or more processors of an external user's computing device. FIG.3is a flowchart illustrating a process300of data storage and querying optimization. A query request is received (302), preferably by the processing engine210. In one or more cases, the gateway computing node216of the processing engine210receives the query request from a computing device, such as computing device112. For example, a user from a grouping team may submit a query (e.g., query 2), via the computing device112, to the processing engine210. In another example, another user from a LIMO team may submit a query (e.g., query 1) to the processing engine210. Having received the query request, the master computing node212creates a job. For example, the master computing node212may create job grouping-cp-feed for the query 2 request submitted by the grouping team. In another example, the master computing node212creates job threepl-iml-feed for the query 1 request submitted by the LIMO team.
Computing power is allocated to process the received request (304), preferably by the processing engine210. In one or more cases, the master computing node212determines a number of processing cores and a size of memory needed to complete the job. For example, the master computing node212determines that 80 cores with a memory size of 320 GB are needed to complete job grouping-cp-feed for query 2. In another example, the master computing node212determines that 30 cores with a memory size of 120 GB are needed to complete job threepl-iml-feed for query 1. Based on the number of processing cores and a size of the memory, the master computing node212allocates worker computing nodes with enough processing cores and memory size to complete the job. For example, the master computing node212allocates two worker nodes to the job threepl-iml-feed and three worker nodes to the job grouping-cp-feed. A determination is made (306), preferably by the search engine218, as to whether a job for the received query request corresponds to an indexed identifier. In one or more cases, the worker nodes may submit a read request to the search engine218to determine whether an indexed identifier corresponds to the received query request. In one or more cases, the search engine218may search a lookup table to determine whether the job corresponds to an indexed identifier within the lookup table. For the cases in which the search engine218determines that the job for the received request corresponds to an indexed identifier (306: YES), the search engine218retrieves one or more attributes corresponding to the indexed identifier (308). For example, the search engine218may determine that the threepl-iml-feed job for query 1 has a corresponding indexed identifier in the search engine218. The search engine218provides the one or more attributes of the corresponding indexed identifier to the processing engine210as a response to the received query request and completes the job. In an example, the system102may process the threepl-iml-feed job within two hours. If, in another example, the system102were unable to find a corresponding indexed identifier for the threepl-iml-feed job in the search engine218, the system102may determine the one or more attributes of the threepl-iml-feed job in a manner as described herein; however, as opposed to taking two hours with 30 cores and a memory size of 120 GB, as in the indexed case, the system102may process the threepl-iml-feed job in five hours with 60 cores and 240 GB of memory. For the cases in which the search engine218determines that the job for the received request does not correspond to an indexed identifier (306: NO), the search engine218sends a notification to the processing engine210that there is not an indexed identifier in the search engine218that corresponds to the query request. For example, the search engine218may determine that the grouping-cp-feed job for query 2 does not have a corresponding indexed identifier in the search engine218. Having received the notification, attributes corresponding to the received query request are determined (310), preferably by the distributed database management system220. In one or more cases, a computing node of the distributed database management system220may receive, from the processing engine210, a read request for the job that did not correspond to an indexed identifier within the search engine218.
The distributed database management system220accesses the conduit engine226to retrieve the requested information for the job. The conduit engine226submits a read request to the distributed streaming engine224for the requested job. The distributed streaming engine224may process one or more records of data within the data storage repository108as described herein. For instance, the distributed streaming engine224may process the records corresponding to the topic of the read request and provide the records to the conduit engine226. The conduit engine226may write the records and the one or more attributes describing the corresponding data of the records to the distributed database management system220. Having received the one or more attributes, the distributed database management system220provides the attributes to the processing engine210. In an example, the system102may process the grouping-cp-feed job within six hours. If, in another example, the grouping-cp-feed job did have a corresponding indexed identifier in the search engine218, the system102may determine the one or more attributes of the grouping-cp-feed job in a manner as described herein; however, as opposed to taking six hours with 80 cores and a memory size of 320 GB, as in the non-indexed case, the system102may process the grouping-cp-feed job in 30 minutes with 50 cores and 200 GB of memory. In one or more cases, the processing engine210provides a response to the received query request (312). In one or more cases, the processing engine210provides a response to the received query request by providing the one or more attributes corresponding to the indexed identifier to the file system228. The file system228may receive and store the response to the query request in the one or more storage systems of the file system228, for example, but not limited to, HDFS230, GS232, distributed object storage234, and Azure™ file storage236. The file system228may receive and process requests from one or more external or internal data querying tools, as described herein.
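Putting steps 306 through 312 together, the control flow of process300might be sketched as follows (all interfaces here are hypothetical; only the branching follows the description above):

# Sketch of the branch at step 306 of process 300: serve the job from the
# search engine's index when possible, otherwise fall back to the distributed
# database management system fed by the conduit engine.

def resolve_job(job_id, search_engine, db_mgmt_system, file_system):
    attrs = search_engine.lookup(job_id)            # 306: indexed identifier?
    if attrs is None:                               # 306: NO -> slower path (310)
        attrs = db_mgmt_system.read_attributes(job_id)
    file_system.store(job_id, attrs)                # 312: store response for
    return attrs                                    # downstream querying tools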
Instruction memory407can store instructions that can be accessed (e.g., read) and executed by processors401. For example, instruction memory407can be a non-transitory, computer-readable storage medium such as a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), flash memory, a removable disk, CD-ROM, any non-volatile memory, or any other suitable memory. Processors401can store data to, and read data from, working memory402. For example, processors401can store a working set of instructions to working memory402, such as instructions loaded from instruction memory407. Processors401can also use working memory402to store data created during the operation of system102. Working memory402can be a random access memory (RAM) such as a static random access memory (SRAM) or dynamic random access memory (DRAM), or any other suitable memory. Input-output devices403can include any suitable device that allows for data input or output. For example, input-output devices403can include one or more of a keyboard, a touchpad, a mouse, a stylus, a touchscreen, a physical button, a speaker, a microphone, or any other suitable input or output device. Communication port(s)409can include, for example, a serial port such as a universal asynchronous receiver/transmitter (UART) connection, a Universal Serial Bus (USB) connection, or any other suitable communication port or connection. In some examples, communication port(s)409allow for the programming of executable instructions in instruction memory407. In some examples, communication port(s)409allow for the transfer (e.g., uploading or downloading) of data, such as transaction data. Display406can display user interface405. User interface405can enable user interaction with, for example, computing device112or118. For example, user interface405can be a user interface for an application of a retailer that allows a customer to purchase one or more items from the retailer. In some examples, a user can interact with user interface405by engaging input-output devices403. In some examples, display406can be a touchscreen, in which case the touchscreen displays the user interface405. Transceiver404allows for communication with a network, such as the network106ofFIG.1. For example, if network106ofFIG.1is a cellular network, transceiver404is configured to allow communications with the cellular network. In some examples, transceiver404is selected based on the type of network106in which system102will be operating. Processor(s)401is operable to receive data from, or send data to, a network, such as network106ofFIG.1, via transceiver404. Although the embodiments discussed herein are described with reference to the figures, it will be appreciated that many other ways of performing the acts associated with the embodiments can be used. For example, the order of some operations may be changed, and some of the operations described may be optional. In addition, the embodiments described herein can be at least partially implemented in the form of computer-implemented processes and apparatus. The disclosed embodiments may also be at least partially implemented in the form of tangible, non-transitory machine-readable storage media encoded with computer program code. For example, the processes described herein can be implemented in hardware, in executable instructions executed by a processor (e.g., software), or a combination of the two.
The media may include, for example, RAMs, ROMs, CD-ROMs, DVD-ROMs, BD-ROMs, hard disk drives, flash memories, or any other non-transitory machine-readable storage medium. When the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the embodiments. The embodiments may also be at least partially implemented in the form of a computer into which computer program code is loaded or executed, such that the computer becomes a special purpose computer for practicing the embodiments. When implemented on a general-purpose processor, the computer program code segments configure the processor to create specific logic circuits. The embodiments may alternatively be at least partially implemented in application specific integrated circuits for performing the embodiments. The foregoing is provided for purposes of illustrating, explaining, and describing embodiments of this disclosure. Modifications and adaptations to the embodiments will be apparent to those skilled in the art and may be made without departing from the scope or spirit of the disclosure.
35,850
11860871
The figures depict various embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein. DETAILED DESCRIPTION Embodiments allow applications to execute database queries by specifying a name or an identifier for the database queries in the application. An online system stores a mapping from database query identifiers to database queries. An API request is received by the online system that identifies the database query by its query identifier. Multiple versions of a database query may be stored, each associated with a version identifier. The API request may specify a version identifier along with a database query identifier to refer to a specific version of the database query. According to an embodiment, the system exposes an end point for versioned and parameterized database queries. Applications may access the end point by sending API requests, for example, REST API requests. For example, the database query may be exposed using a URL and applications may execute a database query by specifying the URL for the database query. The URL identifies the name of the database query, a version of the database query, and may provide parameters for the database query if the database query is parameterized. Applications that use the database queries do not include the query definition but only identify the database query using a query name and a version identifier. A version of a database query is treated as immutable by the system. Accordingly, once a version of the database query is created, the query definition for that version is not modified by the system. If the database query is modified, a new version of the database query is created. A version identifier for the new version is generated. A user can provide a user defined tag to identify the version instead of using the automatically generated version identifier. Versioning of the database queries simplifies development of database queries as well as deployment of database queries for applications. The system facilitates continuous integration (CI) and continuous delivery (CD) of database queries by enabling deployment of changes to database queries to applications with minimum downtime of the applications and with minimal changes to the application code. Furthermore, the database queries may be optimized for execution before being deployed. The query definition may be updated by the online system without affecting the applications that request execution of the database queries. As a result, queries can be upgraded and tested using applications in a testing or staging environment before the upgraded query is made available to applications in a production environment. Furthermore, such upgrades may be made without modifying the application code. This allows continuous delivery of database query upgrades to applications without requiring modifications to the applications. Furthermore, applications are able to execute the database queries without requiring a client-side library that is used for handling database query requests. This is so because the client application can execute a database query by simply sending a request that identifies the query, for example, an HTTP (hypertext transfer protocol) request that specifies the query identifier.
The client application does not include the query definition. As a result, invocation of database queries is simplified for the client applications. Embodiments store a query definition as a block of code specified using a database query language that may be referred to herein as a query lambda. A query lambda may be executed by client applications using REST APIs (REpresentational State Transfer application programming interfaces). Query lambdas allow client applications to query data without needing any special client-side software and without needing to establish and manage database sessions or connections. A client application can simply hit a REST endpoint to execute a database query. Embodiments allow query lambdas to be created and updated using a console or by using the REST API directly. A query lambda is tied to a specific query text and parameter set. The system allows developers to set default values for query parameters. Alternatively, one or more query parameters may be specified as mandatory for each execution. The system maintains a version history for a query lambda, thereby allowing easier development of query lambdas. Any update to a query lambda automatically creates a new version, which allows developers to build and test changes without affecting production queries. Furthermore, the system tracks and provides execution metrics for each query lambda, for example, time of last execution, associated user ID, time of last execution error, and associated error messages. As a result, embodiments improve upon existing technology of building applications that request execution of database queries to access data stored in databases. Conventional designs of applications embed database queries in the application code, thereby requiring the application code to be modified when there are changes to database queries. Accordingly, the application may have to be recompiled and reinstalled. In contrast, the use of query lambdas allows database queries to be modified without requiring changes to the application if the interface of the database queries is not modified. As a result, there is less maintenance overhead when handling applications. For example, a client application may be deployed in hundreds of thousands of client devices. Conventional techniques require all hundreds of thousands of installations of the client applications to be upgraded when the database queries are upgraded. The disclosed techniques allow the client applications to continue to be used without modifications when a database query invoked by the client application is upgraded. As a result, the disclosed techniques simplify continuous delivery of database query upgrades. Furthermore, conventional architecture/design of applications requires a client-side library on the client device that is linked with the application to process client-side instructions. In contrast, the disclosed techniques allow the application to execute without requiring a client-side library, thereby reducing the maintenance overhead of the applications, for example, by making upgrades of applications simpler. Furthermore, the footprint of the client application is reduced due to use of fewer client-side libraries. Applications written in any programming language or system are able to execute the database queries using the disclosed system, so long as the application can execute HTTP requests.
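To make the preceding point concrete, the following is a minimal sketch, in Python, of a client application executing a named, versioned database query over plain HTTP using only the standard library. The host name, URL path layout, query name, and parameter names are hypothetical stand-ins chosen for illustration, not the actual interface of the online system100.

    import json
    import urllib.request

    # Hypothetical REST endpoint for version "v2" of a query named "recent_orders".
    # The client knows only the query name and version, never the query definition.
    URL = "https://api.example.com/v1/queries/recent_orders/versions/v2"

    # Parameters for the parameterized query; the names are illustrative only.
    payload = json.dumps({"parameters": {"customer_id": 42, "limit": 10}}).encode("utf-8")

    request = urllib.request.Request(
        URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

    # No client-side database library, session, or connection management is
    # needed; the standard library's HTTP support suffices to run the query.
    with urllib.request.urlopen(request) as response:
        for row in json.load(response).get("results", []):
            print(row)

Because the request is ordinary HTTP, the same invocation can be written in any programming language that has an HTTP client, which is the language independence described above.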
This simplifies the process of developing and testing the database queries since the database queries are decoupled from the applications and can be developed and tested independently of the applications and reused across multiple applications that may be written in different programming languages. It is possible to execute a new client application written in a new language, even if the programming language was never tested with the database query. Furthermore, conventional techniques that embed database queries in the application are vulnerable to security attacks from malicious users, for example, SQL injection attacks. A malicious attacker can modify the client application to change the database query to retrieve more data than was originally intended. For example, if a malicious actor modifies a database query "select address from table EMPLOYEE where employee_id=?" to a new database query "select * from table EMPLOYEE" and executes the application, the modified database query may expose all columns of all rows of the table EMPLOYEE. The disclosed techniques decouple database queries from the application, thereby making the applications impervious to SQL injection attacks or other types of attacks that exploit vulnerabilities in database queries. Since there is no query definition stored in a client application, a malicious actor is unable to modify the client application to gain unauthorized access to the data stored in a database. As a result, the disclosed techniques improve the security of applications. Furthermore, conventional architectures of applications that embed database queries in the application make it difficult to optimize queries since the result must be returned within a short time, for example, within a few milliseconds. In contrast, the disclosed techniques allow the database queries to be optimized before being deployed. New query versions are created and stored and optimized before being deployed to applications. This improves the execution performance of the database queries and also the performance of the applications invoking the database queries. System Environment FIG.1is a block diagram of a system environment105in which an online system operates, in accordance with an embodiment. The system environment105comprises an online system100, one or more client devices110, and a network170. The system environment105may include multiple client devices110. Other embodiments may have more or fewer systems within the system environment105. Functionality indicated as being performed by a particular system or a module within a system may be performed by a different system or by a different module than that indicated herein. FIG.1and the other figures use like reference numerals to identify like elements. A letter after a reference numeral, such as "110A," indicates that the text refers specifically to the element having that particular reference numeral. A reference numeral in the text without a following letter, such as "110," refers to any or all of the elements in the figures bearing that reference numeral (e.g., "110" in the text refers to reference numerals "110a" and/or "110n" in the figures). The online system100includes a query store155that stores database queries. The database queries may be specified using a query language, for example, SQL (structured query language) but are not limited to SQL syntax. The query store155maps query identifiers125to database queries135.
For example, query identifier125arepresents database query135a, query identifier125brepresents database query135b, and query identifier125crepresents database query135c. A client application115running on a client device identifies a database query using the query identifier125. For example, client application115aidentifies the database query135ausing the query identifier125a, and client application115bidentifies the database query135busing the query identifier125b. Accordingly, the client applications115do not embed the query definition in the client application code. The client application code includes the query identifier125. As a result, the query definition135may be modified without requiring changes to the client application. A client application115can refer to a modified database query by using the appropriate query identifier125. The query identifier may be obtained by the client application using a configuration file or a system parameter, thereby making the client application code independent of the database query definition. This allows client applications to be upgraded to a new query definition without having to make significant changes to the client application. If the client application is designed appropriately, the client application may be upgraded without requiring any changes to the client application code. For example, the client application may be configured so that it executes the latest version of a database query. The database query may be upgraded to a new version and the client application automatically starts executing the new version of the database query. In some embodiments, the query optimization module260generates an execution plan for a version of the database query and stores the execution plan in association with the version identifier, for example, as part of the record storing information describing the query version. In response to receiving an API request from an application, for example, a REST API request specifying the query name and version identifier, the online system100accesses the execution plan of the query and executes it. This allows the online system to reuse the execution plan every time a request to execute that version of the database query is received. This makes execution of the database query more efficient compared to a process that generates the execution plan after receiving the database query from the application since there is no overhead of generating the execution plan. Furthermore, the execution plan can be optimized before the database query is made available for use by client applications. For example, the system may collect up-to-date statistics of tables processed by the database query so that a cost based optimizer can generate an optimized execution plan. Furthermore, the system may analyze the database query to determine if any indexes will speed up the execution of the database query. The system generates the required indexes and generates an optimized execution plan that uses the generated indexes. In an embodiment, the online system100allows client applications to invoke database queries using REST APIs. Accordingly, the client device executes the database query by sending a request, for example, an HTTP request that specifies the query identifier as an argument. A database query has a URL (uniform resource locator), for example, an HTTP address that can be used by an application to execute the query. Accordingly, a client application can execute the query by sending a request to the URL of the query.
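As a rough server-side counterpart to the mapping just described, the following Python sketch models a query store in which each (query name, version identifier) pair maps to an immutable query definition and, optionally, a pre-generated execution plan that can be reused on every request. The record shape and the string form of the plan are assumptions for illustration, not the actual structures of the query store155.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)  # frozen: a stored version is treated as immutable
    class QueryVersion:
        definition: str                # the query text, e.g., SQL
        execution_plan: Optional[str]  # plan generated and optimized before deployment

    # Maps (query name, version identifier) to a stored version.
    query_store = {
        ("recent_orders", "v1"): QueryVersion(
            "select * from orders where customer_id = :customer_id",
            execution_plan=None),
        ("recent_orders", "v2"): QueryVersion(
            "select id, total from orders where customer_id = :customer_id "
            "order by created_at desc limit :limit",
            execution_plan="IndexScan(orders.customer_id) -> Sort -> Limit"),
    }

    def lookup(name: str, version: str) -> QueryVersion:
        # Reusing the stored execution plan avoids planning the query again
        # each time a request to execute this version is received.
        return query_store[(name, version)]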
Furthermore, sending the HTTP request does not require a client-side database library. Any client application that can execute an HTTP request, for example, an internet browser, is able to execute the database query remotely. The online system100executes the database query and returns the result as a response to the request sent by the client application. This allows client applications to query data without needing any special client-side library and without needing to establish and manage database sessions or connections. In an embodiment, the online system100provides a client interface to allow users to update the database queries. A database query has a specific query text and parameter set, and developers can set default values for query parameters or make them mandatory for each execution. In an embodiment, the data processed by the database queries is stored in a relational database but could be another type of database, for example, a key-value store. A system based on a key-value store that may be used to store data is described in U.S. patent application Ser. No. 16/160,477, filed on Oct. 15, 2018, which is hereby incorporated by reference in its entirety. A client device110is a computing device such as a personal computer (PC), a desktop computer, a laptop computer, a notebook, or a tablet PC. The client device110can also be a personal digital assistant (PDA), mobile telephone, smartphone, wearable device, etc. The client device110can also be a server or workstation within an enterprise datacenter. The client device executes a client application115for interacting with the online system100, for example, a browser. AlthoughFIG.1shows two client devices, the system environment105can include many more client devices110. The network170enables communications between various systems within the system environment105, for example, communications between the client device110and the online system100, communications between the third party system130and the cloud storage system120, and so on. In one embodiment, the network uses standard communications technologies and/or protocols. The data exchanged over the network can be represented using technologies and/or formats including HTML, XML, JSON, and so on. System Architecture FIG.2shows the system architecture of an online system, in accordance with an embodiment. The online system100comprises an API server210, a query engine220, a query store155, a data store230, a query versioning module240, and a query optimization module260. Other embodiments of the online system100may include more or fewer modules. The API server210receives requests to execute database queries and executes them. In an embodiment, the API server exposes a REST end point for a database query. The API server210receives REST API requests directed to the REST end points and executes them. For example, the online system100associates each database query with a URL and the API server receives a request sent by a client application via the URL. The URL corresponding to a database query includes a query identifier for uniquely identifying the database query. For example, the URL may specify the query name and query version for identifying the database query. The API server210receives the API request, identifies the database query identified by the API request, and invokes the query engine220for executing the database query. If the database query is parameterized, the API request for executing the database query specifies the query parameters.
The API server210extracts the query parameters from the API request, for example, by parsing the URL if the API request is sent via a URL. The API server210provides the parameters to the query engine220for executing the database query. The API server210receives the result of the query and provides the results to the application that sent the API request. The API server210may receive requests from client applications that may be written in different programming languages, so long as the application is configured to access URLs using HTTP protocols. Accordingly, the REST API interface provided by the API server allows client applications to access the database queries independently of the programming language used to write the application. For example, the API server210may receive and process a request from a client application written in programming language P1to execute a database query Q1. Next, the API server210may receive and process a request from a client application written in programming language P2to execute a database query Q1or any other database query Q2. The query engine220receives requests to execute database queries and executes them. In an embodiment, the query engine receives query identifiers for identifying the queries, for example, a query name and a query version. The query engine220may also receive parameters of the database query. The query engine220accesses the query definition for the identified database query from the query store155and executes the database query. The query store155stores query definitions for database queries. In an embodiment, the query store155maps information identifying a query to query definitions. For example, the query store may store a query name, a query version, and a query definition for database queries. The query versioning module240manages versions of queries. A database query identified by a query name may have multiple query versions. In an embodiment, every time a query is modified, the query versioning module240generates a new query version identifier. The query versioning module240stores the new version of the query in the query store155. The query console manager250configures and presents a user interface to users, for example, a console for allowing users such as developers to build queries, modify queries, execute queries, and so on. A developer may interact with the online system via a user interface or programmatically using the REST APIs. The user may provide query definitions, user defined version tags, and so on, using the console. The query optimization module260analyzes the performance of a version of the database query and performs actions for optimizing the database query. The query optimization module260generates statistics used for optimization of the database query. The query optimization module260uses the statistics to generate an optimized execution plan for the database query, for example, using cost based optimization techniques. The query optimization module260may make recommendations to a developer or database administrator for optimizing the database query. Alternatively, the query optimization module260may automatically take appropriate actions. For example, the query optimization module260may determine that the query performance may be improved by adding one or more indexes to a table processed by the query. The query optimization module260makes a recommendation to a developer to create the indexes.
Alternatively, the query optimization module260executes database commands that create the required indexes to improve the query performance. Processes FIG.3is a flowchart of the process of development of a database query, in accordance with an embodiment. The steps shown in this process can be executed in an order different from those shown in the flowcharts. Furthermore, various steps can be executed by modules other than those indicated herein. The online system100receives and stores310a database query, for example, in the query store. The online system100may receive a query definition for a new database query. The query versioning module240generates an initial version identifier for the database query. The online system100also receives a query name for the database query. The online system100stores a record in the query store with information describing the query including the query name, the initial query version, and the query definition. The online system100repeats the following steps320,330,340,350, and360for each modification to the database query that may be received by the online system. The modifications to the database query may be performed by a developer using a console of the online system100. Each version of the database query is immutable. Accordingly, whenever a query is modified, it is stored as a new version. The online system100does not allow modifications to an existing version of the database query. The online system100receives320a request to update the database query. The request to update the database query provides a new query definition for the database query. The query versioning module240generates330a version identifier for a new version of the database query. The version identifier is an automatically generated value that uniquely identifies the version of the query. The query name and the query version identifier uniquely identify the query definition that is received from the user. For example, the query versioning module240may maintain a counter and increment the counter for each new version of the query to generate the query version identifier. The online system100provides340the automatically generated query version identifier of the database query to the client device or application that provided the query definition. The online system100receives350a user defined tag for the version. The user defined tag is provided by the user, for example, a developer, to be able to identify the query version using a user-friendly identifier. For example, the user defined tag may describe a feature of the new version of the query. The online system100stores360a record in a metadata table describing the query version. The record may store the query name, the automatically generated version identifier, the user defined version tag, and query text representing the query definition. FIG.4shows a flowchart of the process for executing queries, in accordance with an embodiment. The steps shown in this process can be executed in an order different from those shown in the flowcharts. Furthermore, various steps can be executed by modules other than those indicated herein. The online system100receives410a request to execute a database query from a client device. The request may be received via a REST API. The request specifies a name of a database query, a version identifier of the database query, and one or more parameters for the database query. The version identifier may be the automatically generated version identifier or a user defined tag.
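Before turning to the retrieval step, the following Python sketch illustrates one way the versioning scheme described above could be realized: each modification appends an immutable record whose identifier comes from a counter, and a request may name a version either by that generated identifier or by its user defined tag. The record layout, counter scheme, and function names are assumptions for illustration, not the actual implementation of the online system100.

    import itertools
    from typing import Optional

    _version_counter = itertools.count(1)  # one possible auto-identifier scheme
    _metadata = []  # records: query name, version id, user defined tag, query text

    def create_version(name: str, definition: str, tag: Optional[str] = None) -> str:
        # Versions are immutable: an update never rewrites an existing record,
        # it always appends a new one with a freshly generated identifier.
        version_id = "v{}".format(next(_version_counter))
        _metadata.append({"name": name, "version": version_id,
                          "tag": tag, "definition": definition})
        return version_id

    def resolve(name: str, id_or_tag: str) -> dict:
        # A request may identify a version by its generated identifier or its tag.
        for record in _metadata:
            if record["name"] == name and id_or_tag in (record["version"], record["tag"]):
                return record
        raise KeyError("no version {!r} of query {!r}".format(id_or_tag, name))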
The query engine220retrieves420the query definition of the database query that matches the name and the version identifier from the query store155. In an embodiment, the query engine220retrieves a stored previously optimized execution plan for executing the database query. The query engine executes430the database query by providing the one or more parameters as input. The query engine provides440the result of execution of the query to the client device that sent the API request. FIG.5shows a flowchart of the process for continuous delivery of database queries for applications, in accordance with an embodiment. The steps shown in this process can be executed in an order different from those shown in the flowcharts. Furthermore, various steps can be executed by modules other than those indicated herein. API requests mentioned in the following description ofFIG.5may be REST API requests or another type of API request that specifies the query name, query version, and any parameters of the query. The online system100stores510, in the query store155, a version V1of a database query identified by a name Q1. The online system may repeat the steps520and530multiple times. The API server210receives520an API request from a production version of an application. The API request may be a REST API request received via a REST end point. The API request specifies (1) the query name Q1, (2) the query version V1, and (3) one or more parameters of the database query. The query version may be specified using the automatically generated version identifiers or using a user defined version tag. The query engine220executes the version V1of the database query Q1in response to the received request and provides the result to the client device or application that sent the API request. The online system100receives540a request to modify the query. The request provides a modified query definition of query Q1. The query versioning module240generates a new version V2of the database query Q1and stores550the modified database query as version V2of query Q1in the query store155. The online system performs560testing and validation of the version V2of the database query Q1via API requests received from a staging version of the application. The testing and validation may comprise executing the database query with various parameter values, executing the database query with different data stored in the tables being queried, and so on. The testing of the database query may be performed using client devices configured for performing testing that run a test instance of the application. The database query may be tested using different applications configured to execute the same database query using REST APIs. The database query may be tested under different load conditions, for example, the same database query may be sent multiple times or a set of different database queries including the version V2of database query Q1may be executed on the database. The API request for testing the version V2of the database query Q1identifies the database query by specifying the database query name Q1and version V2. Responsive to the version V2passing the test/validation criteria of the staging version of the application, the online system makes the version V2of the database query Q1available for production applications. The criteria for passing the test may comprise comparing the test results of execution of the queries with predetermined results.
If the results of executing the query match the predetermined results, the database query is determined to have passed the test criteria. If one or more results of execution of the query do not match the predetermined results, a failure to pass the tests may be returned. A developer may investigate the database query text to determine the cause of the failure. The developer may modify the query text if necessary. The process of testing is continued until the database query passes the test criteria. The online system further repeats steps570and580. The API server210receives570an API request from the production version of an application. The API request specifies (1) the query name Q1, (2) the query version V2. The query engine220executes580the version V2of the database query Q1in response to the received request from the production version of the application and provides the result to the requestor. Accordingly, the production version is migrated from version V1of query Q1to version V2of query Q1by changing the API request to the online system. In an embodiment, the system optimizes the query version V2before making it available to a production system for execution. For example, the system may generate statistics describing the tables processed by the version V2of the database query. The system generates an execution plan optimized based on the generated statistics for the version V2of the database query. In an embodiment, the system generates an index of a table processed by the version V2of the database query and generates a query execution plan using the generated index for the version V2of the database query. Architecture of Computer FIG.6is a high-level block diagram illustrating an example of a computer600for use as one or more of the entities illustrated inFIG.1, according to one embodiment. Illustrated are at least one processor602coupled to a memory controller hub620, which is also coupled to an input/output (I/O) controller hub622. A memory606and a graphics adapter612are coupled to the memory controller hub620, and a display device618is coupled to the graphics adapter612. A storage device608, keyboard610, pointing device614, and network adapter616are coupled to the I/O controller hub622. The storage device may represent a network-attached disk, local and remote RAID, or a SAN (storage area network). Other embodiments of the computer600have different architectures. For example, the memory is directly coupled to the processor in some embodiments, and there are multiple different levels of memory coupled to different components in other embodiments. Some embodiments also include multiple processors that are coupled to each other directly or via a memory controller hub. The storage device608includes one or more non-transitory computer-readable storage media such as one or more hard drives, compact disk read-only memory (CD-ROM), DVD, or one or more solid-state memory devices. The memory holds instructions and data used by the processor602. The pointing device614is used in combination with the keyboard to input data into the computer600. The graphics adapter612displays images and other information on the display device618. In some embodiments, the display device includes a touch screen capability for receiving user input and selections. One or more network adapters616couple the computer600to a network.
Some embodiments of the computer have different and/or other components than those shown inFIG.6. For example, the database system can be comprised of one or more servers that lack a display device, keyboard, pointing device, and other components, while a client device acting as a requester can be a server, a workstation, a notebook or desktop computer, a tablet computer, an embedded device, or a handheld device or mobile phone, or another type of computing device. The requester to the database system also can be another process or program on the same computer on which the database system operates. The computer600is adapted to execute computer program modules for providing functionality described herein. As used herein, the term “module” refers to computer program instructions and/or other logic used to provide the specified functionality. Thus, a module can be implemented in hardware, firmware, and/or software. In one embodiment, program modules formed of executable computer program instructions are stored on the storage device, loaded into the memory, and executed by the processor. Additional Considerations The foregoing description of the embodiments of the invention has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure. Some portions of this description describe the embodiments of the invention in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times, to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof. Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described. Embodiments of the invention may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a tangible computer readable storage medium or any type of media suitable for storing electronic instructions, and coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability. 
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention.
35,316
11860873
DETAILED DESCRIPTION Exemplary methods, apparatus, and products for dashboard loading using a filtering query from a cloud-based data warehouse cache in accordance with the present invention are described with reference to the accompanying drawings, beginning withFIG.1.FIG.1sets forth a block diagram of automated computing machinery comprising an exemplary data access computing system (152) configured for dashboard loading using a filtering query from a cloud-based data warehouse cache according to embodiments of the present invention. The data access computing system (152) ofFIG.1includes at least one computer processor (156) or 'CPU' as well as random access memory (168) ('RAM') which is connected through a high speed memory bus (166) and bus adapter (158) to processor (156) and to other components of the data access computing system (152). Stored in RAM (168) is an operating system (154). Operating systems useful in computers configured for dashboard loading using a filtering query from a cloud-based data warehouse cache according to embodiments of the present invention include UNIX™, Linux™, Microsoft Windows™, AIX™, IBM's i OS™, and others as will occur to those of skill in the art. The operating system (154) in the example ofFIG.1is shown in RAM (168), but many components of such software typically are stored in non-volatile memory also, such as, for example, on data storage (170), such as a disk drive. Also stored in RAM is the dashboard module (126), a module for dashboard loading using a filtering query from a cloud-based data warehouse cache according to embodiments of the present invention. The data access computing system (152) ofFIG.1includes disk drive adapter (172) coupled through expansion bus (160) and bus adapter (158) to processor (156) and other components of the data access computing system (152). Disk drive adapter (172) connects non-volatile data storage to the data access computing system (152) in the form of data storage (170). Disk drive adapters useful in computers configured for dashboard loading using a filtering query from a cloud-based data warehouse cache according to embodiments of the present invention include Integrated Drive Electronics ('IDE') adapters, Small Computer System Interface ('SCSI') adapters, and others as will occur to those of skill in the art. Non-volatile computer memory also may be implemented as an optical disk drive, electrically erasable programmable read-only memory (so-called 'EEPROM' or 'Flash' memory), RAM drives, and so on, as will occur to those of skill in the art. The example data access computing system (152) ofFIG.1includes one or more input/output ('I/O') adapters (178). I/O adapters implement user-oriented input/output through, for example, software drivers and computer hardware for controlling output to display devices such as computer display screens, as well as user input from user input devices (181) such as keyboards and mice. The example data access computing system (152) ofFIG.1includes a video adapter (209), which is an example of an I/O adapter specially designed for graphic output to a display device (180) such as a display screen or computer monitor. Video adapter (209) is connected to processor (156) through a high speed video bus (164), bus adapter (158), and the front side bus (162), which is also a high speed bus.
The exemplary data access computing system (152) ofFIG.1includes a communications adapter (167) for data communications with other computers and for data communications with a data communications network. Such data communications may be carried out serially through RS-232 connections, through external buses such as a Universal Serial Bus ('USB'), through data communications networks such as IP data communications networks, and in other ways as will occur to those of skill in the art. Communications adapters implement the hardware level of data communications through which one computer sends data communications to another computer, directly or through a data communications network. Examples of communications adapters useful in computers configured for dashboard loading using a filtering query from a cloud-based data warehouse cache according to embodiments of the present invention include modems for wired dial-up communications, Ethernet (IEEE 802.3) adapters for wired data communications, and 802.11 adapters for wireless data communications. The communications adapter (167) is communicatively coupled to a wide area network (190) that also includes a cloud-based data warehouse (192) and a client computing system (194). The cloud-based data warehouse (192) is a computing system or group of computing systems that hosts a database for access over the wide area network (190). The client computing system (194) is a computing system that accesses the database via the data access computing system (152). FIG.2shows an exemplary block diagram of a system for dashboard loading using a filtering query from a cloud-based data warehouse cache according to embodiments of the present invention. As shown inFIG.2, the system includes a data access computing system (152), a cloud-based data warehouse (192), and a client computing system (194). The data access computing system (152) includes a dashboard module (126) and a cache state (204) data structure. The cloud-based data warehouse (192) includes a database (206) and a cache (208). The client computing system (194) includes a client application (202). The client application (202) may include a web browser, dedicated software application, mobile application, or other application to access the data access computing system (152) using the client computing system (194). The database (206) is a collection of data stored in the cloud-based data warehouse (192) and management systems for the data. The management systems may receive database queries, such as structured query language (SQL) queries, and respond to queries with a data set. The cache (208) is a portion of memory that stores data for fast retrieval. For example, the cache (208) may store data sets generated in response to a query to the database (206). The dashboard module (126) is configured to provide a user accessing the data access computing system (152) (e.g., via the client application (202)) with a dashboard user interface. The dashboard may include one or more visualizations of data stored in the database (206), such as graphs, charts, tables, etc. Accordingly, to generate a dashboard for a given user, the dashboard module (126) may execute one or more predefined queries for submission to the database (206) and generate the visualizations based on a result of the one or more queries.
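As a simple illustration of that flow, the Python sketch below models a dashboard as a list of predefined queries, each paired with the visualization it feeds; loading the dashboard runs every query and binds each result set to its visualization. The dashboard contents and the run_query stand-in are hypothetical, chosen only to show the shape of the mechanism.

    # Each visualization on a dashboard is backed by a predefined query.
    DASHBOARD = [
        {"visualization": "sales_by_region_chart",
         "query": "select region, sum(total) from sales group by region"},
        {"visualization": "top_products_table",
         "query": "select product, count(*) from sales group by product limit 10"},
    ]

    def run_query(sql: str) -> list:
        # Stand-in for submitting a query to the database (206) and
        # returning the resulting rows.
        raise NotImplementedError

    def load_dashboard() -> dict:
        # Loading a dashboard executes every predefined query and maps
        # each visualization to the result set it is generated from.
        return {item["visualization"]: run_query(item["query"]) for item in DASHBOARD}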
A given dashboard may be associated with particular user accounts (e.g., each user is associated with a corresponding dashboard), particular roles or user groups (e.g., each user in the role or group is associated with a same dashboard), etc. Thus, when a client application (202) associated with a particular user accesses the data access computing system (152), the dashboard module (126) may submit the one or more queries to the database (206) required to generate the dashboard visualizations. As the queries used to generate a given dashboard may remain unchanged for extended periods of time, and as multiple users may access the same dashboard (e.g., due to belonging to the same role, company, group, etc.), the dashboard module (126) may repeatedly require data based on the same queries to generate the same dashboard. If each loading of a dashboard required the same queries to be submitted to the database (206), both the data access computing system (152) and the cloud-based data warehouse (192) would experience a significant computational and network traffic burden. Moreover, the results of these queries may remain unchanged for some period of time, requiring computational resources to be used to generate duplicate results. To address these shortcomings, the dashboard module (126) determines whether a result for a query (e.g., a query used to generate a dashboard) is stored in the cache (208) of the cloud-based data warehouse (192). For example, if the user has recently accessed a given dashboard, or another user has recently accessed the same given dashboard, the results of a query used to generate a dashboard visualization may still be stored in the cache (208). To determine if the result of a query is stored in the cache (208), the dashboard module (126) maintains a cache state (204) data structure. The cache state (204) data structure may be embodied as a database or other data structure as can be appreciated. The cache state (204) data structure may indicate which queries have results stored in the cache (208). The cache state (204) data structure may also include a timestamp indicating when the results for the query were generated. Each entry in the cache state (204) data structure may be indexed by a digest of a query. The digest may include an MD5 hash or other hash of the query. The digest may also be based on a normalized form of the query. In other words, generating a digest for a query may include normalizing the query and applying a digest function to the normalized query. Determining whether a result for a query is stored in the cache (208) may include determining if an entry for the query is stored in the cache state (204). For example, determining whether a result for the query is stored in the cache (208) may include generating a digest for the query and determining if an entry corresponding to the digest is stored in the cache state (204). Where an entry is stored in the cache state (204), the dashboard module (126) may send a request for the result from the cache (208). The dashboard module (126) may determine a location in the cache (208) for the result and then request, from the cloud-based data warehouse (192), the results stored in the determined location. For example, the entry in the cache state (204) may indicate a location in the cache (208) for the results. As another example, the entry in the cache state (204) may include a query identifier.
The query identifier may include a unique identifier generated when the query was submitted to the database (206) and the result for the query generated by the cloud-based data warehouse (192). Another data structure associating query identifiers with locations in the cache (208) may then be queried to determine a location in the cache (208) for the result. A request indicating the location in the cache (208) may then be submitted to the cloud-based data warehouse (192). As a further example, where the cache state (204) entry includes a query identifier, the query identifier may be included in a request to the cloud-based data warehouse (192). The cloud-based data warehouse (192) may maintain a data structure associating query identifiers with locations in the cache. The cloud-based data warehouse (192) may then load the result from the cache (208) based on the data structure and the query identifier from the request. The cloud-based data warehouse (192) may then send the result to the data access computing system (152). After receiving a result from the cache (208) of the cloud-based data warehouse, the dashboard module (126) may provide, based on the result, one or more dashboard visualizations. As the dashboard visualizations are provided based on cached data, the dashboard is generated faster, providing for an improved user experience. Moreover, the cloud-based data warehouse (192) experiences a reduced computational burden by not having to process queries for each loading of a dashboard. The results of queries stored in the cache (208) are associated with an age. The age of a result is the amount of time since the result was generated or first received by the data access computing system (152) (e.g., in response to submitting a query to the database (206)). The dashboard module (126) may determine that an age associated with the result exceeds a threshold (e.g., one hour). For example, an entry in the cache state (204) may indicate a time at which the result was generated or first received. Where the age exceeds the threshold, the dashboard module (126) may submit the query to the database (206) for results to the query. The dashboard module (126) may submit the query concurrently with or after requesting the result from the cache (208). After receiving results to the query from the database (206), the dashboard module (126) may update or refresh the one or more dashboard visualizations based on the updated results. Thus, the one or more dashboard visualizations are initially provided based on the cached results, and then updated based on the newly generated results. The dashboard module (126) may update an entry in the cache state (204) corresponding to the query to reflect the age of the updated results. The dashboard module (126) may also compare the age of the result to another threshold (e.g., 24 hours). Where the age of the cached result exceeds the other threshold, the dashboard module (126) may send the query to the database (206) for the results instead of requesting the results from the cache (208). Thus, when cached results are older than a certain age, the results are regenerated instead of being accessed from the cache. The threshold to determine whether or not to request the results from the cache (208) may be used instead of or in combination with a threshold to determine whether to submit the query to the database (206) in addition to loading the results from the cache (208).
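The following Python sketch shows one way the two thresholds could combine into a single decision; the one-hour and 24-hour values mirror the examples used in this description, and the function name is illustrative rather than part of any actual implementation.

    REFRESH_AFTER_SECONDS = 60 * 60       # e.g., one hour
    DISCARD_AFTER_SECONDS = 24 * 60 * 60  # e.g., 24 hours

    def plan_fetch(age_seconds):
        """Return (use_cached_result, resubmit_query_to_database)."""
        if age_seconds < REFRESH_AFTER_SECONDS:
            # Fresh enough: serve the visualizations from the cache alone.
            return True, False
        if age_seconds < DISCARD_AFTER_SECONDS:
            # Serve the cached result immediately, but also resubmit the
            # query so the visualizations can be refreshed with new results.
            return True, True
        # Too stale: bypass the cache and regenerate the result.
        return False, True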
For example, where the results in the cache (208) are less than an hour old, the results may only be loaded from the cache (208) for use in the dashboard visualizations. Where the results in the cache (208) are older than an hour but less than 24 hours old, the results may be loaded from the cache (208) for initial use in the dashboard visualizations, but the queries are also submitted to the database (206) for updating the dashboard visualizations. Where the results in the cache (208) are older than 24 hours, the query is submitted to the database (206) and the cached results are not used in presenting the dashboard visualizations. FIG.3shows an exemplary user interface for dashboard loading using a filtering query from a cloud-based data warehouse cache according to embodiments of the present invention. Shown is a graphical user interface (GUI) (300). The GUI (300) is a user interface that presents a data set and graphical elements to a user and receives user input from the user. The GUI (300) may be presented, in part, by the dashboard module (126) and displayed on a client computing system (194) (e.g., on a system display or mobile touchscreen). The GUI (300) may be encoded by an Internet application hosted on the data access computing system (152) for rendering by the client application (202) of the client computing system (194). The GUI (300) presents, in part, worksheets to a user. A worksheet is a presentation of a data set from a database. A referencing worksheet is a worksheet that is linked from another worksheet (referred to as a data source worksheet). The referencing worksheet inherits the data set presented in the data source worksheet (i.e., data not excluded from presentation). The referencing worksheet may also inherit the results of formulas applied to other data but not the formulas themselves. The referencing worksheet may be limited to the data set presented or otherwise made available in the data source worksheet (unless the user generating the referencing worksheet has access to excluded data in the database). A referencing worksheet may be linked from any number of data sources, including multiple data source worksheets. The exemplary GUI (300) includes a spreadsheet structure (302) and a list structure (304). The spreadsheet structure (302) includes a data set (shown as empty rows) with six columns (column A (308A), column B (308B), column C (308C), column D (308D), column E (308E), column F (308F)). The spreadsheet structure (302) is a graphical element and organizing mechanism for the data set. The spreadsheet structure (302) displays the data within the data set as rows of data organized by columns (column A (308A), column B (308B), column C (308C), column D (308D), column E (308E), column F (308F)). The columns delineate different categories of the data in each row of the data set. The columns may also be calculations using other columns in the data set. The list structure (304) is a graphical element used to define and organize the hierarchical relationships between the columns (column A (308A), column B (308B), column C (308C), column D (308D), column E (308E), column F (308F)) of the data set. The term "hierarchical relationship" refers to subordinate and superior groupings of columns. For example, a database may include rows for an address book, and columns for state, county, city, and street. A data set from the database may be grouped first by state, then by county, and then by city.
Accordingly, the state column would be at the highest level in the hierarchical relationship, the county column would be at the second level in the hierarchical relationship, and the city column would be at the lowest level in the hierarchical relationship. The list structure (304) presents a dimensional hierarchy to the user. Specifically, the list structure (304) presents levels arranged hierarchically across at least one dimension. Each level within the list structure (304) is a position within a hierarchical relationship between columns (column A (308A), column B (308B), column C (308C), column D (308D), column E (308E), column F (308F)). The keys within the list structure (304) identify the one or more columns that are the participants in the hierarchical relationship. Each level may have more than one key. One of the levels in the list structure (304) may be a base level. Columns selected for the base level provide data at the finest granularity. One of the levels in the list structure (304) may be a totals or root level. Columns selected for the totals level provide data at the highest granular level. For example, the totals level may include a field that calculates the sum of each row within a single column of the entire data set (i.e., not partitioned by any other column). The GUI (300) may enable a user to drag and drop columns (column A (308A), column B (308B), column C (308C), column D (308D), column E (308E), column F (308F)) into the list structure (304). The order of the list structure (304) may specify the hierarchy of the columns relative to one another. A user may be able to drag and drop the columns in the list structure (304) at any time to redefine the hierarchical relationship between columns. The hierarchical relationship defined using the columns selected as keys in the list structure (304) may be utilized in charts such that drilling down (e.g., double-clicking on a bar) enables a new chart to be generated based on a level lower in the hierarchy. FIG.4shows an exemplary user interface for dashboard loading using a filtering query from a cloud-based data warehouse cache according to embodiments of the present invention. Shown is a graphical user interface (GUI) (400). The GUI (400) may be presented, in part, by the dashboard module (126) and displayed on a client computing system (194) (e.g., on a system display or mobile touchscreen). The GUI (400) may be encoded by an Internet application hosted on the data access computing system (152) for rendering by the client application (202) of the client computing system (194). The GUI (400) includes a plurality of dashboard visualizations (402). Each of the dashboard visualizations (402) is a visual representation of results of queries submitted to the database (206) by the dashboard module (126). The dashboard visualizations (402) may be based on results to queries submitted to the database (206) in response to loading the GUI (400). The dashboard visualizations (402) may also be based on cached results to queries submitted to the database (206) in response to loading another instance of the GUI (400) (e.g., loaded by another user or entity). For further explanation,FIG.5sets forth a flow chart illustrating an exemplary method for dashboard loading from a cloud-based data warehouse cache according to embodiments of the present invention that includes determining (502) (e.g., by a dashboard module (126)) that a result for a first query is stored in a cache (208) of a cloud-based data warehouse (192).
Determining (502) that a result (504) of a first query is stored in the cache (208) of the cloud-based data warehouse (192) may include accessing a data structure (e.g., a cache state (204)) indicating one or more queries with results stored in the cache (208). For example, each entry in the data structure may indicate a time at which a corresponding result was generated or received by the dashboard module (126). Each entry may include a query identifier generated when the corresponding query was submitted to a database (206) to generate the results. Each entry may be indexed by a digest or other identifier for a corresponding query. Determining (502) that the result (504) of the first query is stored in the cache (208) of the cloud-based data warehouse (192) may include determining that an entry corresponding to the first query is stored in the data structure. The method ofFIG.5also includes sending (506) (e.g., by the dashboard module (126)), in response to the result being stored in the cache (208), to the cloud-based data warehouse (192), a request (508) for the result from the cache (208). Sending (506) the request (508) may include calling an Application Program Interface (API) exposed by the cloud-based data warehouse (192) facilitating access to cached results. Thus, an API call or function may be used to access cached data that is different from an API call or function used to submit queries to a database (206) in the cloud-based data warehouse (192). The request (508) may indicate a particular location (e.g., address) in the cache (208) for retrieving the result (504). The request (508) may also indicate a query identifier for the first query. For example, entries of a data structure maintained by the dashboard module (126) indicating results that are stored in the cache (208) may include a query identifier. The query identifier may be accessed from the data structure and sent to the cloud-based data warehouse (192), which may maintain a data structure associating query identifiers and locations in cache (208) for corresponding results. Thus, the cloud-based data warehouse (192) can access, from the cache (208), the results for inclusion in a response to the request (508). The method ofFIG.5also includes providing (510) (e.g., by the dashboard module (126)), based on the result (504) for the first query, one or more dashboard visualizations. The one or more dashboard visualizations may include graphs, tables, charts, etc. As the one or more dashboard visualizations are based on cached data, the one or more dashboards may be generated and/or rendered faster than if the cloud-based data warehouse (192) had to fully process the first query. For further explanation,FIG.6sets forth a flow chart illustrating an exemplary method for dashboard loading from a cloud-based data warehouse cache according to embodiments of the present invention that includes determining (502) (e.g., by a dashboard module (126)) that a result for a first query is stored in a cache (208) of a cloud-based data warehouse (192); sending (506) (e.g., by the dashboard module (126)), in response to the result being stored in the cache (208), to the cloud-based data warehouse (192), a request (508) for the result from the cache (208); and providing (510) (e.g., by the dashboard module (126)), based on the result (504) for the first query, one or more dashboard visualizations. 
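The determining (502) and sending (506) steps can be illustrated with a short sketch. The following Python fragment is a minimal illustration under assumed names only — CACHE_STATE, digest_for, get_cached_result, submit_query, and query_id are hypothetical and do not correspond to any actual warehouse API:

```python
import hashlib
import time

# Hypothetical cache state (204): maps a query digest to an entry recording
# when the cached result was generated or received, and the query identifier
# returned when the query was first submitted to the database (206).
CACHE_STATE = {}  # digest -> {"timestamp": float, "query_id": str}

def digest_for(query):
    """Digest used to index the cache state for a query."""
    normalized = " ".join(query.lower().split())
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

def load_dashboard(query, warehouse, render):
    """Determine (502) whether a cached result exists, send (506) a request
    for it if so, and provide (510) visualizations from the result."""
    entry = CACHE_STATE.get(digest_for(query))
    if entry is not None:
        # A cache-access call, distinct from ordinary query submission.
        result = warehouse.get_cached_result(entry["query_id"])
    else:
        result = warehouse.submit_query(query)
        CACHE_STATE[digest_for(query)] = {
            "timestamp": time.time(), "query_id": result.query_id}
    render(result)
```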
The method ofFIG.6differs fromFIG.5in that determining (502) that a result for a first query is stored in a cache (208) of a cloud-based data warehouse (192) includes generating (602) (e.g., by the dashboard module (126)) a digest for the first query. Generating (602) a digest for the first query may include applying an MD5 function, hash function, or other function to the first query to generate the digest. Generating (602) the digest for the first query may also include normalizing the first query and applying a function to the normalized first query. Determining (502) that a result for a first query is stored in a cache (208) of a cloud-based data warehouse (192) also includes identifying (604), based on the digest, an entry in a data structure tracking a state of the cache (208) of the cloud-based data warehouse (192). The data structure may include a cache state (204) data structure. The data structure may be encoded as a database, a table, or other data structure as can be appreciated. Each entry in the data structure may be indexed by a digest. Each entry in the data structure may include a timestamp indicating when results for the corresponding query were generated or received. Each entry in the data structure may also include a query identifier generated when the query was submitted to a database (206). Thus, identifying (604) an entry in the data structure corresponding to a digest of the first query indicates that results for the first query are stored in the cache (208) of the cloud-based data warehouse (192). For further explanation,FIG.7sets forth a flow chart illustrating an exemplary method for dashboard loading from a cloud-based data warehouse cache according to embodiments of the present invention that includes determining (502) (e.g., by a dashboard module (126)) that a result for a first query is stored in a cache (208) of a cloud-based data warehouse (192) by generating (602) (e.g., by the dashboard module (126)) a digest for the first query and identifying (604), based on the digest, an entry in a data structure tracking a state of the cache (208) of the cloud-based data warehouse (192); sending (506) (e.g., by the dashboard module (126)), in response to the result being stored in the cache (208), to the cloud-based data warehouse (192), a request (508) for the result from the cache (208); and providing (510) (e.g., by the dashboard module (126)), based on the result (504) for the first query, one or more dashboard visualizations. The method ofFIG.7differs fromFIG.6in that the method ofFIG.7also includes determining (702), based on the entry, a location in the cache (208) of the result. For example, the entry may indicate a location in cache (208) for the result. The location may then be included in the request (508). As another example, the entry may indicate a query identifier generated when an instance of the first query was submitted to a database (206) to generate the cached results. The query identifier may then be used to access another data structure associating query identifiers and locations in cache (208). Where the other data structure is implemented in the data access computing system (152), the dashboard module (126) may access the identified location in cache based on the query identifier. 
Where the other data structure associating query identifiers and locations in cache (208) is implemented by the cloud-based data warehouse (192), the query identifier may be included in the request (508) such that the cloud-based data warehouse (192) may determine the location of the result in cache (208) based on the query identifier. For further explanation,FIG.8sets forth a flow chart illustrating an exemplary method for dashboard loading from a cloud-based data warehouse cache according to embodiments of the present invention that includes determining (502) (e.g., by a dashboard module (126)) that a result for a first query is stored in a cache (208) of a cloud-based data warehouse (192); sending (506) (e.g., by the dashboard module (126)), in response to the result being stored in the cache (208), to the cloud-based data warehouse (192), a request (508) for the result from the cache (208); and providing (510) (e.g., by the dashboard module (126)), based on the result (504) for the first query, one or more dashboard visualizations. The method ofFIG.8differs fromFIG.5in that the method ofFIG.8also includes determining (802) (e.g., by the dashboard module (126)) that an age associated with the result exceeds a threshold (e.g., one hour). The age of a given result is the time since the result was generated or received (e.g., in response to submission of a query to the database (206) of the cloud-based data warehouse (192)). For example, when results from submission of a query are received, the results may include a query identifier. A timestamp for the results may be included with the results or generated in response to receiving the results. A data structure (e.g., a cache state (204)) entry corresponding to the query may be created or updated to indicate the timestamp. Thus, the age associated with a result may be determined by accessing the data structure entry associated with the query corresponding to the result. The method ofFIG.8also includes querying (804) (e.g., by the dashboard module (126)) the cloud-based data warehouse (192) with the first query (806). For example, the first query (806) may be submitted to the database (206) of the cloud-based data warehouse (192) for processing. The method ofFIG.8also includes receiving (808), in response to the first query (806), another result (810) for the first query (806). The other results (810) correspond to a more recent execution of the first query (806) when compared to the cached results (504). The method ofFIG.8also includes updating (812) the one or more dashboard visualizations based on the other result (810) for the first query (806). Thus, the one or more dashboard visualizations are initially provided using cached results (504) to allow for a fast loading and presentation of the dashboard. As the age of the cached results (504) exceeds the threshold, the dashboard visualizations are subsequently updated with more recent results (810) based on a new execution of the first query (806). After receiving (808) the results (810), the dashboard module (126) may update a data structure (e.g., cache state (204) data structure) entry corresponding to the first query (806) to indicate a time at which the other result (810) was generated or received. The dashboard module (126) may update the data structure entry corresponding to the first query (806) to indicate a query identifier or location in cache (208) indicated in the other result (810) to facilitate subsequent loading of the other result (810) from cache (208). 
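A minimal sketch of the FIG. 8 flow follows, assuming the same hypothetical warehouse interface and cache-state entries as in the earlier sketch; the names and the one-hour constant are illustrative examples, not the claimed method:

```python
import time

ONE_HOUR = 3600  # example threshold drawn from the description

def load_with_refresh(query, entry, warehouse, render):
    """Render cached results immediately, then re-run the query and update
    the dashboard when the cached results are too old."""
    cached = warehouse.get_cached_result(entry["query_id"])
    render(cached)                                    # fast initial load
    if time.time() - entry["timestamp"] > ONE_HOUR:   # determining (802)
        fresh = warehouse.submit_query(query)         # querying (804), receiving (808)
        render(fresh)                                 # updating (812)
        # Update the cache-state entry so later loads find the newer result.
        entry["timestamp"] = time.time()
        entry["query_id"] = fresh.query_id
```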
For further explanation,FIG.9sets forth a flow chart illustrating an exemplary method for dashboard loading from a cloud-based data warehouse cache according to embodiments of the present invention that includes determining (502) (e.g., by a dashboard module (126)) that a result for a first query is stored in a cache (208) of a cloud-based data warehouse (192); sending (506) (e.g., by the dashboard module (126)), in response to the result being stored in the cache (208), to the cloud-based data warehouse (192), a request (508) for the result from the cache (208); and providing (510) (e.g., by the dashboard module (126)), based on the result (504) for the first query, one or more dashboard visualizations. The method ofFIG.9differs fromFIG.5in that the method ofFIG.9also includes determining (902) (e.g., by the dashboard module (126)) that a result for a second query (906) is not stored in the cache (208) of the cloud-based data warehouse (192). For example, a digest for the second query (906) may be generated and used to access a data structure (e.g., a cache state (204) data structure) indicating which queries have results stored in cache (208). A lack of an entry in the data structure corresponding to the second query (906) would then indicate that results for the second query (906) are not stored in cache (208). The method ofFIG.9also includes querying (904) the cloud-based data warehouse (192) with the second query (906). For example, the dashboard module (126) may submit the second query (906) to a database (206) of the cloud-based data warehouse (192) for processing. The method ofFIG.9also includes receiving (908), in response to the second query (906), the result (910) for the second query (906). For further explanation,FIG.10sets forth a flow chart illustrating an exemplary method for dashboard loading from a cloud-based data warehouse cache according to embodiments of the present invention that includes determining (502) (e.g., by a dashboard module (126)) that a result for a first query is stored in a cache (208) of a cloud-based data warehouse (192); sending (506) (e.g., by the dashboard module (126)), in response to the result being stored in the cache (208), to the cloud-based data warehouse (192), a request (508) for the result from the cache (208); providing (510) (e.g., by the dashboard module (126)), based on the result (504) for the first query, one or more dashboard visualizations; determining (902) (e.g., by the dashboard module (126)) that a result for a second query (906) is not stored in the cache (208) of the cloud-based data warehouse (192); querying (904) the cloud-based data warehouse (192) with the second query (906); and receiving (908), in response to the second query (906), the result (910) for the second query (906). The method ofFIG.10differs fromFIG.9in that the method ofFIG.10also includes storing (1002) an indication that the result (910) for the second query (906) is stored in the cache of the cloud-based data warehouse (192). For example, a new entry for the second query (906) may be added to a data structure (e.g., a cache state (204) data structure) indicating which queries have results stored in cache (208). The new entry may include a digest for the second query (906). The new entry may include a timestamp associated with the results (910) for the second query (906). The new entry may also include a query identifier (e.g., generated by the cloud-based data warehouse (192) and included with the result (910)). 
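The cache-miss path of FIGS. 9 and 10 might be sketched as follows; the warehouse interface, digest function, and entry layout are again assumptions rather than an actual implementation:

```python
import time

def run_uncached_query(query, warehouse, cache_state, digest_for):
    """Submit a query whose result is not cached and record an indication
    that its result is now cached at the warehouse."""
    key = digest_for(query)
    if key not in cache_state:                  # determining (902)
        result = warehouse.submit_query(query)  # querying (904), receiving (908)
        # Storing (1002): a new entry indexed by digest, with a timestamp
        # and the query identifier returned with the result.
        cache_state[key] = {"timestamp": time.time(),
                            "query_id": result.query_id}
        return result
```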
For further explanation,FIG.11sets forth a flow chart illustrating an exemplary method for dashboard loading from a cloud-based data warehouse cache according to embodiments of the present invention that includes determining (502) (e.g., by a dashboard module (126)) that a result for a first query is stored in a cache (208) of a cloud-based data warehouse (192); sending (506) (e.g., by the dashboard module (126)), in response to the result being stored in the cache (208), to the cloud-based data warehouse (192), a request (508) for the result from the cache (208); and providing (510) (e.g., by the dashboard module (126)), based on the result (504) for the first query, one or more dashboard visualizations. The method ofFIG.11differs fromFIG.5in that the method ofFIG.11also includes determining (1102) (e.g., by the dashboard module (126)) that a result for a second query (1104) is stored in the cache (208) of the cloud-based data warehouse (192) and associated with an age exceeding a threshold (e.g., 24 hours). For example, a digest for the second query (1104) may be generated and used to access a data structure (e.g., a cache state (204) data structure) indicating which queries have results stored in cache (208). The entry may include a timestamp used to determine the age of the cached results. The method ofFIG.11also includes querying (1106) the cloud-based data warehouse (192) with the second query (1104). For example, the dashboard module (126) may submit the second query (1104) to a database (206) of the cloud-based data warehouse (192) for processing. The method ofFIG.11also includes receiving (1108), in response to the second query (1104), another result (1110) for the second query (1104). Thus, although cached results for the second query (1104) exist, new results (1110) are generated due to the age of the cached results exceeding a threshold. For further explanation,FIG.12sets forth a flow chart illustrating an exemplary method for dashboard loading using a filtering query from a cloud-based data warehouse cache according to embodiments of the present invention. A filtering query is a type of query (as described above) in which matching row values are filtered in, or metadata (e.g., number of occurrences) about the values is returned, from within the targeted database table or portion of a table. A filtering query may be a request for rows with a column value matching a particular criterion. The criterion may, for example, be an absolute value (e.g., name is equal to John) or a range (e.g., name starts with “J”). A filtering query may be a request for a frequency calculation (e.g., number of name values starting with “J”) or distribution (e.g., number of name values starting with “A” through “J” compared to the number of name values starting with “K” through “Z”). In response to a filtering query, the cloud-based data warehouse provides a filtered result. A filtered result is a result with matching rows filtered in, or metadata about the row values, from within the targeted database table or portion of a table. A filtering query may be the query in any of the methods described above. Similarly, a filtered result may be the result in any of the methods described above. A filtering query may incur a relatively large computational cost compared to less complicated queries. Therefore, the dashboard module and other systems may benefit more from storing the filtered results in the cache. 
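For illustration only, filtering queries of these kinds might look like the following SQL, embedded here as Python strings; the people table and name column are hypothetical examples, not part of the described system:

```python
# Absolute-value criterion: rows whose name equals a particular value.
absolute_query = "SELECT * FROM people WHERE name = 'John'"

# Range criterion: rows whose name starts with 'J'.
range_query = "SELECT * FROM people WHERE name LIKE 'J%'"

# Frequency calculation: the number of name values starting with 'J'.
frequency_query = "SELECT COUNT(*) FROM people WHERE name LIKE 'J%'"
```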
The method ofFIG.12includes determining (1202) that a filtered result (1204) for a first filtering query is stored in a cache of a cloud-based data warehouse (192). As discussed above, determining (1202) that a filtered result (1204) for a first filtering query is stored in a cache of a cloud-based data warehouse (192) may include generating a digest for the first filtering query; and identifying, based on the digest, an entry in a data structure tracking a state of the cache of the cloud-based data warehouse. The method ofFIG.12further includes sending (1206), in response to the filtered result (1204) being stored in the cache, to the cloud-based data warehouse (192), a request (1208) for the filtered result (1204) from the cache; and providing (1210), based on the filtered result (1204) for the first filtering query, one or more dashboard visualizations. As discussed above, the dashboard visualization(s) may be presented on a client application of the client computing system (194). The first filtering query may be a request for rows matching a criterion, and the one or more dashboard visualizations may be a dashboard visualization of the rows matching the criterion. As described above, the criterion may be a description of a group of rows with a particular column value or a column value within a range of values. The filtered result may be presented as a dashboard visualization of the group of rows with a particular column value or a column value within a range of values. Such dashboard visualizations may be any of the dashboard visualizations described above. The first filtering query may include a request for a frequency distribution, and the one or more dashboard visualizations may include a histogram of the distribution of values. A frequency distribution is a description of the number of times a value or range of values occurs. A frequency distribution request may include a bucket size or bucket number. A bucket is one of a collection of ranges in which to sort row values. A frequency distribution may require information such as a bucket size or bucket number. Specifically, a frequency distribution may require a particular description of the first and last value for each bucket or the number of buckets/ranges into which each value is to be sorted. As discussed above, the method ofFIG.12may further include determining, based on the entry, a location in the cache of the filtered result. Also as discussed above, the method ofFIG.12may further include determining that an age associated with the filtered result exceeds a threshold; querying the cloud-based data warehouse with the first filtering query; receiving, in response to the first filtering query, another filtered result for the first filtering query; and updating the one or more dashboard visualizations based on the other filtered result for the first filtering query. Also as discussed above, the method ofFIG.12may further include determining that a filtered result for a second filtering query is not stored in the cache of the cloud-based data warehouse; querying the cloud-based data warehouse with the second filtering query; receiving, in response to the second filtering query, the filtered result for the second filtering query; and storing an indication that the filtered result for the second filtering query is stored in the cache of the cloud-based data warehouse. 
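A frequency distribution of the sort described above might be computed as in this sketch; the function and its parameters are illustrative assumptions standing in for the bucket number and the first and last value of each bucket:

```python
def frequency_distribution(values, bucket_count, low, high):
    """Sort numeric row values into equal-width buckets; bucket_count and
    the low/high bounds play the role of the bucket number and the first
    and last value for each bucket described above."""
    width = (high - low) / bucket_count
    counts = [0] * bucket_count
    for v in values:
        if low <= v < high:
            counts[int((v - low) // width)] += 1
        elif v == high:
            counts[-1] += 1  # fold the top boundary into the last bucket
    return counts

# For example, frequency_distribution([1, 2, 2, 9], 3, 0, 9) buckets the
# values into [0, 3), [3, 6), [6, 9], yielding [3, 0, 1].
```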
Also as discussed above, the method ofFIG.12may further include determining that a filtered result for a second filtering query is stored in the cache of the cloud-based data warehouse and associated with an age exceeding a threshold; querying the cloud-based data warehouse with the second filtering query; and receiving, in response to the second filtering query, another filtered result for the second filtering query. In view of the explanations set forth above, readers will recognize that the benefits of dashboard loading using a filtering query from a cloud-based data warehouse cache according to embodiments of the present invention include:
Improving the operation of a computing system by providing for accelerated dashboard loading using cached query results.
Improving the operation of a computing system by reducing the computational burden on cloud-based data warehouses through the use of cached query results.
Exemplary embodiments of the present invention are described largely in the context of a fully functional computer system for dashboard loading using a filtering query from a cloud-based data warehouse cache. Readers of skill in the art will recognize, however, that the present invention also may be embodied in a computer program product disposed upon computer readable storage media for use with any suitable data processing system. Such computer readable storage media may be any storage medium for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of such media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method of the invention as embodied in a computer program product. Persons skilled in the art will recognize also that, although some of the exemplary embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present invention. The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. 
A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. 
It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. It will be understood from the foregoing description that modifications and changes may be made in various embodiments of the present invention without departing from its true spirit. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present invention is limited only by the language of the following claims.
DETAILED DESCRIPTION
Embodiments are described herein according to the following outline:
1.0. General Overview
2.0. Overview of Data Intake and Query Systems
3.0. General Overview
3.1 Host Devices
3.2 Client Devices
3.3. Client Device Applications
3.4. Data Server System
3.5. Cloud-Based System Overview
3.6. Searching Externally-Archived Data
3.7. Data Ingestion
3.7.1. Input
3.7.2. Parsing
3.7.3. Indexing
3.8. Query Processing
3.9. Pipelined Search Language
3.10. Field Extraction
3.11. Example Search Screen
3.12. Data Models
3.13. Acceleration Technique
3.13.1. Aggregation Technique
3.13.2. Keyword Index
3.13.3. High Performance Analytics Store
3.13.4. Extracting Event Data Using Posting
3.13.5. Accelerating Report Generation
3.14. Security Features
4.0. Data Intake and Fabric System Architecture
4.1. Worker Nodes
4.1.1. Serialization/Deserialization
4.2. Search Process Master
4.2.1 Workload Catalog
4.2.2 Node Monitor
4.2.3 Dataset Compensation
4.3. Query Coordinator
4.3.1. Query Processing
4.3.2. Query Execution and Node Control
4.3.3. Result Processing
4.4 Query Acceleration Data Store
5.0. Query Data Flow
6.0. Query Coordinator Flow
7.0. Query Processing Flow
8.0. Common Storage Architecture
9.0. Ingested Data Buffer Architecture
10.0 Combining Datasets
10.1 Multi-Partition Determination
10.2 Multi-Partition Operation
11.0. Hardware Embodiment
12.0. Terminology
In this description, references to “an embodiment,” “one embodiment,” or the like, mean that the particular feature, function, structure or characteristic being described is included in at least one embodiment of the technique introduced herein. Occurrences of such phrases in this specification do not necessarily all refer to the same embodiment. On the other hand, the embodiments referred to are also not necessarily mutually exclusive. A data intake and query system can index and store data in data stores of indexers, and can receive search queries causing a search of the indexers to obtain search results. The data intake and query system typically has search, extraction, execution, and analytics capabilities that may be limited in scope to the data stores of the indexers (“internal data stores”). Hence, a seamless and comprehensive search and analysis that includes diverse data types from external data sources, common storage (may also be referred to as global data storage or global data stores), ingested data buffers, query acceleration data stores, etc. may be difficult. Thus, the capabilities of some data intake and query systems remain isolated from a variety of data sources that could improve search results to provide new insights. Furthermore, the processing flow of some data intake and query systems is unidirectional in that data is obtained from a data source, processed, and then communicated to a search head or client without the ability to route data to different destinations. The disclosed embodiments overcome these drawbacks by extending the search and analytics capabilities of a data intake and query system to include diverse data types stored in diverse data systems internal to or external from the data intake and query system. As a result, an analyst can use the data intake and query system to search and analyze data from a wide variety of dataset sources, including enterprise systems and open source technologies of a big data ecosystem. The term “big data” refers to large data sets that may be analyzed computationally to reveal patterns, trends, and associations, in some cases, relating to human behavior and interactions. 
In particular, introduced herein is a data intake and query system that has the ability to execute big data analytics seamlessly and can scale across diverse data sources to enable processing large volumes of diverse data from diverse data systems. A “data source” can include a “data system,” which may refer to a system that can process and/or store data. A “data storage system” may refer to a storage system that can store data such as unstructured, semi-structured, or structured data. Accordingly, a data source can include a data system that includes a data storage system. The system can improve search and analytics capabilities of previous systems by employing a search process master and query coordinators combined with a scalable network of distributed nodes communicatively coupled to diverse data systems. The network of distributed nodes can act as agents of the data intake and query system to collect and process data of distributed data systems, and the search process master and coordinators can provide the processed data to the search head as search results. For example, the data intake and query system can respond to a query by executing search operations on various internal and external data sources to obtain partial search results that are harmonized and presented as search results of the query. As such, the data intake and query system can offload search and analytics operations to the distributed nodes. Hence, the system enables search and analytics capabilities that can extend beyond the data stored on indexers to include external data systems, common storage, query acceleration data stores, ingested data buffers, etc. The system can provide big data open stack integration to act as a big data pipeline that extends the search and analytics capabilities of a system over numerous and diverse data sources. For example, the system can extend the data execution scope of the data intake and query system to include data residing in external data systems such as MySQL, PostgreSQL, and Oracle databases; NoSQL data stores like Cassandra and MongoDB; cloud storage like Amazon S3 and the Hadoop distributed file system (HDFS); common storage; ingested data buffers; etc. Thus, the system can execute search and analytics operations for all possible combinations of data types stored in various data sources. The distributed processing of the system enables scalability to include any number of distributed data systems. As such, queries received by the data intake and query system can be propagated to the network of distributed nodes to extend the search and analytics capabilities of the data intake and query system over different data sources. In this context, the network of distributed nodes can act as an extension of the local data intake and query system's data processing pipeline to facilitate scalable analytics across the diverse data systems. Accordingly, the system can extend and transform the data intake and query system to include data resources into a data fabric platform that can leverage computing assets from anywhere and access and execute on data regardless of type or origin. The disclosed embodiments include services such as new search capabilities, visualization tools, and other services that are seamlessly integrated into the DFS system. For example, the disclosed techniques include new search services performed on internal data stores, external data stores, or a combination of both. 
The search operations can provide ordered or unordered search results, or search results derived from data of diverse data systems, which can be visualized to provide new and useful insights about the data contained in a big data ecosystem. Various other features of the DFS system introduced here will become apparent from the description that follows. First, however, it is useful to consider an example of an environment and system in which the techniques can be employed, as will now be described.
1.0. General Overview
The embodiments disclosed herein generally refer to an environment that includes a data intake and query system including a data fabric service system architecture (“DFS system”), services, a network of distributed nodes, and distributed data systems, all interconnected over one or more networks. However, embodiments of the disclosed environment can include many computing components including software, servers, routers, client devices, and host devices that are not specifically described herein. As used herein, a “node” can refer to one or more devices and/or software running on devices that enable the devices to execute a task of the system. For example, a node can include devices running software that enable the device to execute a portion of a query. FIG.1Ais a high-level system diagram of an environment10in which an embodiment may be implemented. The environment10includes distributed external data systems12-1and12-2(also referred to collectively and individually as external data system(s)12). The external data systems12are communicatively coupled (e.g., via a LAN, WAN, etc.) to worker nodes14-1and14-2of a data intake and query system16, respectively (also referred to collectively and individually as worker node(s)14). The environment10can also include a client device22and applications running on the client device22. An example includes a personal computer, laptop, tablet, phone, or other computing device running a network browser application that enables a user of the client device22to access any of the data systems. The data intake and query system16and the external data systems12can each store data obtained from various data sources. For example, the data intake and query system16can store data in internal data stores20(also referred to as an internal storage system), and the external data systems12can store data in respective external data stores22(also referred to as external storage systems). However, the data intake and query system16and external data systems12may process and store data differently. For example, as explained in greater detail below, the data intake and query system16may store minimally processed or unprocessed data (“raw data”) in the internal data stores20, which can be implemented as local data stores20-1, common storage20-2, or query acceleration data stores20-3. In contrast, the external data systems12may store pre-processed data rather than raw data. Hence, the data intake and query system16and the external data systems12can operate independently of each other in a big data ecosystem. The worker nodes14can act as agents of the data intake and query system16to process data collected from the internal data stores20and the external data stores22. The worker nodes14may reside on one or more computing devices such as servers communicatively coupled to the external data systems12. Other components of the data intake and query system16can finalize the results before returning the results to the client device22. 
As such, the worker nodes14can extend the search and analytics capabilities of the data intake and query system16to act on diverse data systems. The external data systems12may include one or more computing devices that can store structured, semi-structured, or unstructured data. Each external data system12can generate and/or collect generated data, and store the generated data in its respective external data stores22. For example, the external data system12-1may include a server running a MySQL database that stores structured data objects such as time-stamped events, and the external data system12-2may be a server of cloud computing services such as Amazon Web Services (AWS) that can provide different data types ranging from unstructured (e.g., S3) to structured (e.g., Redshift). The internal data stores20are said to be internal because the data stored thereon has been processed or passed through the data intake and query system16in some form. Conversely, the external data systems12are said to be external to the data intake and query system16because the data stored at the external data stores22has not necessarily been processed or passed through the data intake and query system16. In other words, the data intake and query system16may have no control or influence over how data is processed, controlled, or managed by the external data systems12. The external data systems12can process data, perform requests received from other computing systems, and perform numerous other computational tasks independent of each other and independent of the data intake and query system16. For example, the external data system12-1may be a server that can process data locally that reflects correlations among the stored data. The external data systems12may generate and/or store ever-increasing volumes of data without any interaction with the data intake and query system16. As such, each of the external data systems12may act independently to control, manage, and process the data they contain. Data stored in the internal data stores20and external data stores22may be related. For example, an online transaction could generate various forms of data stored in disparate locations and in various formats. The generated data may include payment information, customer information, and information about suppliers, retailers, and the like. Other examples of data generated in a big data ecosystem include application program data, system logs, network packet data, error logs, stack traces, and performance data. The data can also include diagnostic information and many other types of data that can be analyzed to perform local actions, diagnose performance problems, monitor interactions, and derive other insights. The volume of generated data can grow at very high rates as the number of transactions and diverse data systems grows. A portion of this large volume of data could be processed and stored by the data intake and query system16while other portions could be stored in any of the external data systems12. In an effort to reduce the vast amounts of raw data generated in a big data ecosystem, some of the external data systems12may pre-process the raw data based on anticipated data analysis needs, store the pre-processed data, discard some or all of the remaining raw data, or store it in a different location that the data intake and query system16does not have access to. 
However, discarding or not making the massive amounts of raw data available can result in the loss of valuable insights that could have been obtained by searching all of the raw data. In contrast, the data intake and query system16can address some of these challenges by collecting and storing raw data as structured “events,” as will be described in greater detail below. In some embodiments, an event includes a portion of raw data and is associated with a specific point in time. For example, events may be derived from “time series data,” where the time series data comprises a sequence of data points (e.g., performance measurements from a computer system) that are associated with successive points in time. In some embodiments, the external data systems12can store raw data as events that are indexed by timestamps but are also associated with predetermined data items. This structure is essentially a modification of conventional database systems that require predetermining data items for subsequent searches. These systems can be modified to retain the remaining raw data for subsequent re-processing for other predetermined data items. Specifically, the raw data can be divided into segments and indexed by timestamps. The predetermined data items can be associated with the events indexed by timestamps. The events can be searched only for the predetermined data items during search time; the events can be re-processed later in time to re-index the raw data, and generate events with new predetermined data items. As such, the data systems of the system10can store related data as a variety of pre-processed data and raw data in a variety of structures. A number of tools are available to search and analyze data contained in these diverse data systems. As such, an analyst can use a tool to search a database of the external data system12-1. A different tool could be used to search a cloud services application of the external data system12-2. Yet another different tool could be used to search the internal data stores20. Moreover, different tools can perform analytics of data stored in proprietary or open source data stores. However, existing tools cannot obtain valuable insights from data contained in a combination of the data intake and query system16and/or any of the external data systems12. Examples of these valuable insights may include correlations between the structured data of the external data stores22and raw data of the internal data stores20. The disclosed techniques can extend the search, extraction, execution, and analytics capabilities of data intake and query systems to seamlessly search and analyze diverse data of diverse data systems in a big data ecosystem. The disclosed techniques can transform a big data ecosystem into a big data pipeline between external data systems and a data intake and query system, to enable seamless search and analytics operations on a variety of data sources, which can lead to new insights that were not previously available. Hence, the disclosed techniques include a data intake and query system16extended to search external data systems into a data fabric platform that can leverage computing assets from anywhere and access and execute on data regardless of type and origin. In addition, the data intake and query system16facilitates implementation of both iterative searches, to read datasets multiple times in a loop, and interactive or exploratory data analysis (e.g., for repeated database-style querying of data).
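As a rough illustration of this structure (the class and function names here are hypothetical, not the system's actual representation), events can be modeled as timestamped segments of retained raw data that can be re-indexed later with new predetermined data items:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """A segment of raw data indexed by a timestamp, together with any
    predetermined data items associated with it so far."""
    timestamp: float
    raw: str
    data_items: dict = field(default_factory=dict)

def reindex(events, extractors):
    """Re-process retained raw data to associate new predetermined data
    items with the existing timestamp-indexed events."""
    for event in events:
        for name, extract in extractors.items():
            event.data_items[name] = extract(event.raw)
```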
2.0. Overview of Data Intake and Query Systems
As indicated above, modern data centers and other computing environments can comprise anywhere from a few host computer systems to thousands of systems configured to process data, service requests from remote clients, and perform numerous other computational tasks. During operation, various components within these computing environments often generate significant volumes of machine data. Machine data is any data produced by a machine or component in an information technology (IT) environment that reflects activity in the IT environment. For example, machine data can be raw machine data that is generated by various components in IT environments, such as servers, sensors, routers, mobile devices, Internet of Things (IoT) devices, etc. Machine data can include system logs, network packet data, sensor data, application program data, error logs, stack traces, system performance data, etc. In general, machine data can also include performance data, diagnostic information, and many other types of data that can be analyzed to diagnose performance problems, monitor user interactions, and derive other insights. A number of tools are available to analyze machine data. In order to reduce the size of the potentially vast amount of machine data that may be generated, many of these tools typically pre-process the data based on anticipated data-analysis needs. For example, pre-specified data items may be extracted from the machine data and stored in a database to facilitate efficient retrieval and analysis of those data items at search time. However, the rest of the machine data typically is not saved and is discarded during pre-processing. As storage capacity becomes progressively cheaper and more plentiful, there are fewer incentives to discard these portions of machine data and many reasons to retain more of the data. This plentiful storage capacity is presently making it feasible to store massive quantities of minimally processed machine data for later retrieval and analysis. In general, storing minimally processed machine data and performing analysis operations at search time can provide greater flexibility because it enables an analyst to search all of the machine data, instead of searching only a pre-specified set of data items. This may enable an analyst to investigate different aspects of the machine data that previously were unavailable for analysis. However, analyzing and searching massive quantities of machine data presents a number of challenges. For example, a data center, servers, or network appliances may generate many different types and formats of machine data (e.g., system logs, network packet data (e.g., wire data, etc.), sensor data, application program data, error logs, stack traces, system performance data, operating system data, virtualization data, etc.) from thousands of different components, which can collectively be very time-consuming to analyze. In another example, mobile devices may generate large amounts of information relating to data accesses, application performance, operating system performance, network performance, etc. There can be millions of mobile devices that report these types of information. These challenges can be addressed by using an event-based data intake and query system, such as the SPLUNK® ENTERPRISE system developed by Splunk Inc. of San Francisco, California. 
The SPLUNK® ENTERPRISE system is the leading platform for providing real-time operational intelligence that enables organizations to collect, index, and search machine data from various websites, applications, servers, networks, and mobile devices that power their businesses. The data intake and query system is particularly useful for analyzing data which is commonly found in system log files, network data, and other data input sources. Although many of the techniques described herein are explained with reference to a data intake and query system similar to the SPLUNK® ENTERPRISE system, these techniques are also applicable to other types of data systems. In the data intake and query system, machine data are collected and stored as “events”. An event comprises a portion of machine data and is associated with a specific point in time. The portion of machine data may reflect activity in an IT environment and may be produced by a component of that IT environment, where the events may be searched to provide insight into the IT environment, thereby improving the performance of components in the IT environment. Events may be derived from “time series data,” where the time series data comprises a sequence of data points (e.g., performance measurements from a computer system, etc.) that are associated with successive points in time. In general, each event has a portion of machine data that is associated with a timestamp that is derived from the portion of machine data in the event. A timestamp of an event may be determined through interpolation between temporally proximate events having known timestamps or may be determined based on other configurable rules for associating timestamps with events. In some instances, machine data can have a predefined format, where data items with specific data formats are stored at predefined locations in the data. For example, the machine data may include data associated with fields in a database table. In other instances, machine data may not have a predefined format (e.g., may not be at fixed, predefined locations), but may have repeatable (e.g., non-random) patterns. This means that some machine data can comprise various data items of different data types that may be stored at different locations within the data. For example, when the data source is an operating system log, an event can include one or more lines from the operating system log containing machine data that includes different types of performance and diagnostic information associated with a specific point in time (e.g., a timestamp). Examples of components which may generate machine data from which events can be derived include, but are not limited to, web servers, application servers, databases, firewalls, routers, operating systems, and software applications that execute on computer systems, mobile devices, sensors, Internet of Things (IoT) devices, etc. The machine data generated by such data sources can include, for example and without limitation, server log files, activity log files, configuration files, messages, network packet data, performance measurements, sensor measurements, etc. The data intake and query system uses a flexible schema to specify how to extract information from events. A flexible schema may be developed and redefined as needed. Note that a flexible schema may be applied to events “on the fly,” when it is needed (e.g., at search time, index time, ingestion time, etc.). 
When the schema is not applied to events until search time, the schema may be referred to as a “late-binding schema.” During operation, the data intake and query system receives machine data from any type and number of sources (e.g., one or more system logs, streams of network packet data, sensor data, application program data, error logs, stack traces, system performance data, etc.). The system parses the machine data to produce events each having a portion of machine data associated with a timestamp. The system stores the events in a data store. The system enables users to run queries against the stored events to, for example, retrieve events that meet criteria specified in a query, such as criteria indicating certain keywords or having specific values in defined fields. As used herein, the term “field” refers to a location in the machine data of an event containing one or more values for a specific data item. A field may be referenced by a field name associated with the field. As will be described in more detail herein, a field is defined by an extraction rule (e.g., a regular expression) that derives one or more values or a sub-portion of text from the portion of machine data in each event to produce a value for the field for that event. The set of values produced are semantically-related (such as IP address), even though the machine data in each event may be in different formats (e.g., semantically-related values may be in different positions in the events derived from different sources). As described above, the system stores the events in a data store. The events stored in the data store are field-searchable, where field-searchable herein refers to the ability to search the machine data (e.g., the raw machine data) of an event based on a field specified in search criteria. For example, a search having criteria that specifies a field name “UserID” may cause the system to field-search the machine data of events to identify events that have the field name “UserID.” In another example, a search having criteria that specifies a field name “UserID” with a corresponding field value “12345” may cause the system to field-search the machine data of events to identify events having that field-value pair (e.g., field name “UserID” with a corresponding field value of “12345”). Events are field-searchable using one or more configuration files associated with the events. Each configuration file includes one or more field names, where each field name is associated with a corresponding extraction rule and a set of events to which that extraction rule applies. The set of events to which an extraction rule applies may be identified by metadata associated with the set of events. For example, an extraction rule may apply to a set of events that are each associated with a particular host, source, or source type. When events are to be searched based on a particular field name specified in a search, the system uses one or more configuration files to determine whether there is an extraction rule for that particular field name that applies to each event that falls within the criteria of the search. If so, the event is considered as part of the search results (and additional processing may be performed on that event based on criteria specified in the search). If not, the next event is similarly analyzed, and so on. As noted above, the data intake and query system utilizes a late-binding schema while performing queries on events. 
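As a rough sketch of this search-time behavior (the configuration layout and event representation here are assumptions for illustration, not the system's actual formats), a configuration entry associates a field name with an extraction rule and the set of events to which the rule applies:

```python
import re

# A hypothetical configuration entry: a field name, its extraction rule
# (a regex rule here), and metadata identifying the set of events the
# rule applies to.
CONFIG = [
    {"field": "UserID",
     "regex": re.compile(r"UserID=(\w+)"),
     "applies_to": {"sourcetype": "app_log"}},
]

def field_search(events, field_name, wanted_value):
    """Apply the extraction rule for field_name to each in-scope event at
    search time and keep the events whose extracted value matches."""
    hits = []
    for event in events:
        for rule in CONFIG:
            if rule["field"] != field_name:
                continue
            if any(event["meta"].get(k) != v
                   for k, v in rule["applies_to"].items()):
                continue  # the extraction rule does not apply to this event
            match = rule["regex"].search(event["raw"])
            if match and match.group(1) == wanted_value:
                hits.append(event)
    return hits

# field_search(events, "UserID", "12345") keeps events whose raw machine
# data yields the field-value pair UserID=12345.
```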
One aspect of a late-binding schema is applying extraction rules to events to extract values for specific fields during search time. More specifically, the extraction rule for a field can include one or more instructions that specify how to extract a value for the field from an event. An extraction rule can generally include any type of instruction for extracting values from events. In some cases, an extraction rule comprises a regular expression, where a sequence of characters forms a search pattern. An extraction rule comprising a regular expression is referred to herein as a regex rule. The system applies a regex rule to an event to extract values for a field associated with the regex rule, where the values are extracted by searching the event for the sequence of characters defined in the regex rule. In the data intake and query system, a field extractor may be configured to automatically generate extraction rules for certain fields in the events when the events are being created, indexed, or stored, or possibly at a later time. Alternatively, a user may manually define extraction rules for fields using a variety of techniques. In contrast to a conventional schema for a database system, a late-binding schema is not defined at data ingestion time. Instead, the late-binding schema can be developed on an ongoing basis until the time a query is actually executed. This means that extraction rules for the fields specified in a query may be provided in the query itself, or may be located during execution of the query. Hence, as a user learns more about the data in the events, the user can continue to refine the late-binding schema by adding new fields, deleting fields, or modifying the field extraction rules for use the next time the schema is used by the system. Because the data intake and query system maintains the underlying machine data and uses a late-binding schema for searching the machine data, it enables a user to continue investigating and learning valuable insights about the machine data. In some embodiments, a common field name may be used to reference two or more fields containing equivalent and/or similar data items, even though the fields may be associated with different types of events that possibly have different data formats and different extraction rules. By enabling a common field name to be used to identify equivalent and/or similar fields from different types of events generated by disparate data sources, the system facilitates use of a “common information model” (CIM) across the disparate data sources (further discussed with respect toFIG.7A). 3.0. General Overview FIG.1Bis a block diagram of an example networked computer environment100, in accordance with example embodiments. Those skilled in the art would understand thatFIG.1Brepresents one example of a networked computer system, and other embodiments, such as the embodiment illustrated inFIG.1A, may use different arrangements. The networked computer system100comprises one or more computing devices. These one or more computing devices comprise any combination of hardware and software configured to implement the various logical components described herein.
For example, the one or more computing devices may include one or more memories that store instructions for implementing the various components described herein, one or more hardware processors configured to execute the instructions stored in the one or more memories, and various data repositories in the one or more memories for storing data structures utilized and manipulated by the various components. In some embodiments, one or more client devices102are coupled to one or more host devices106and a data intake and query system108via one or more networks104. Networks104broadly represent one or more LANs, WANs, cellular networks (e.g., LTE, HSPA, 3G, and other cellular technologies), and/or networks using any of wired, wireless, terrestrial microwave, or satellite links, and may include the public Internet. 3.1. Host Devices In the illustrated embodiment, a system100includes one or more host devices106. Host devices106may broadly include any number of computers, virtual machine instances, and/or data centers that are configured to host or execute one or more instances of host applications114. In general, a host device106may be involved, directly or indirectly, in processing requests received from client devices102. Each host device106may comprise, for example, one or more of a network device, a web server, an application server, a database server, etc. A collection of host devices106may be configured to implement a network-based service. For example, a provider of a network-based service may configure one or more host devices106and host applications114(e.g., one or more web servers, application servers, database servers, etc.) to collectively implement the network-based application. In general, client devices102communicate with one or more host applications114to exchange information. The communication between a client device102and a host application114may, for example, be based on the Hypertext Transfer Protocol (HTTP) or any other network protocol. Content delivered from the host application114to a client device102may include, for example, HTML documents, media content, etc. The communication between a client device102and host application114may include sending various requests and receiving data packets. For example, in general, a client device102or application running on a client device may initiate communication with a host application114by making a request for a specific resource (e.g., based on an HTTP request), and the application server may respond with the requested content stored in one or more response packets. In the illustrated embodiment, one or more of host applications114may generate various types of performance data during operation, including event logs, network data, sensor data, and other types of machine data. For example, a host application114comprising a web server may generate one or more web server logs in which details of interactions between the web server and any number of client devices102are recorded. As another example, a host device106comprising a router may generate one or more router logs that record information related to network traffic managed by the router. As yet another example, a host application114comprising a database server may generate one or more logs that record information related to requests sent from other host applications114(e.g., web servers or application servers) for data managed by the database server.
3.2. Client Devices Client devices102represent any computing device capable of interacting with one or more host devices106via a network104. Examples of client devices102may include, without limitation, smart phones, tablet computers, handheld computers, wearable devices, laptop computers, desktop computers, servers, portable media players, gaming devices, and so forth. In general, a client device102can provide access to different content, for instance, content provided by one or more host devices106, etc. Each client device102may comprise one or more client applications110, described in more detail in a separate section hereinafter. 3.3. Client Device Applications In some embodiments, each client device102may host or execute one or more client applications110that are capable of interacting with one or more host devices106via one or more networks104. For instance, a client application110may be or comprise a web browser that a user may use to navigate to one or more websites or other resources provided by one or more host devices106. As another example, a client application110may comprise a mobile application or “app.” For example, an operator of a network-based service hosted by one or more host devices106may make available one or more mobile apps that enable users of client devices102to access various resources of the network-based service. As yet another example, client applications110may include background processes that perform various operations without direct interaction from a user. A client application110may include a “plug-in” or “extension” to another application, such as a web browser plug-in or extension. In some embodiments, a client application110may include a monitoring component112. At a high level, the monitoring component112comprises a software component or other logic that facilitates generating performance data related to a client device's operating state, including monitoring network traffic sent and received from the client device and collecting other device and/or application-specific information. Monitoring component112may be an integrated component of a client application110, a plug-in, an extension, or any other type of add-on component. Monitoring component112may also be a stand-alone process. In some embodiments, a monitoring component112may be created when a client application110is developed, for example, by an application developer using a software development kit (SDK). The SDK may include custom monitoring code that can be incorporated into the code implementing a client application110. When the code is converted to an executable application, the custom code implementing the monitoring functionality can become part of the application itself. In some embodiments, an SDK or other code for implementing the monitoring functionality may be offered by a provider of a data intake and query system, such as a system108. In such cases, the provider of the system108can implement the custom code so that performance data generated by the monitoring functionality is sent to the system108to facilitate analysis of the performance data by a developer of the client application or other users. In some embodiments, the custom monitoring code may be incorporated into the code of a client application110in a number of different ways, such as the insertion of one or more lines in the client application code that call or otherwise invoke the monitoring component112.
As such, a developer of a client application110can add one or more lines of code into the client application110to trigger the monitoring component112at desired points during execution of the application. Code that triggers the monitoring component may be referred to as a monitor trigger. For instance, a monitor trigger may be included at or near the beginning of the executable code of the client application110such that the monitoring component112is initiated or triggered as the application is launched, or included at other points in the code that correspond to various actions of the client application, such as sending a network request or displaying a particular interface. In some embodiments, the monitoring component112may monitor one or more aspects of network traffic sent and/or received by a client application110. For example, the monitoring component112may be configured to monitor data packets transmitted to and/or from one or more host applications114. Incoming and/or outgoing data packets can be read or examined to identify network data contained within the packets, for example, and other aspects of data packets can be analyzed to determine a number of network performance statistics. Monitoring network traffic may enable the gathering of information particular to the network performance associated with a client application110or set of applications. In some embodiments, network performance data refers to any type of data that indicates information about the network and/or network performance. Network performance data may include, for instance, a URL requested, a connection type (e.g., HTTP, HTTPS, etc.), a connection start time, a connection end time, an HTTP status code, request length, response length, request headers, response headers, connection status (e.g., completion, response time(s), failure, etc.), and the like. Upon obtaining network performance data indicating performance of the network, the network performance data can be transmitted to a data intake and query system108for analysis. Upon developing a client application110that incorporates a monitoring component112, the client application110can be distributed to client devices102. Applications generally can be distributed to client devices102in any manner, or they can be pre-loaded. In some cases, the application may be distributed to a client device102via an application marketplace or other application distribution system. For instance, an application marketplace or other application distribution system might distribute the application to a client device based on a request from the client device to download the application. Examples of functionality that enables monitoring performance of a client device are described in U.S. patent application Ser. No. 14/524,748, entitled “UTILIZING PACKET HEADERS TO MONITOR NETWORK TRAFFIC IN ASSOCIATION WITH A CLIENT DEVICE”, filed on 27 Oct. 2014, and which is hereby incorporated by reference in its entirety for all purposes. In some embodiments, the monitoring component112may also monitor and collect performance data related to one or more aspects of the operational state of a client application110and/or client device102. For example, a monitoring component112may be configured to collect device performance information by monitoring one or more client device operations, or by making calls to an operating system and/or one or more other applications executing on a client device102for performance information.
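As a non-limiting illustration of the network performance monitoring described above, the following Python sketch assembles a network performance data record for a single request; the field names, the example URL, and the record layout are hypothetical, and the sketch is not a description of any particular monitoring component112implementation.

import time
import urllib.request

def monitored_request(url):
    # Assemble a network performance data record of the kind described
    # above; field names here are illustrative only.
    start = time.time()
    status, length, outcome = None, 0, "failure"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            status = resp.status
            length = len(resp.read())
            outcome = "completion"
    except OSError:
        pass
    return {"url": url,
            "connectionType": "HTTPS" if url.startswith("https") else "HTTP",
            "connectionStart": start,
            "connectionEnd": time.time(),
            "httpStatus": status,
            "responseLength": length,
            "connectionStatus": outcome}

record = monitored_request("https://www.example.com/")
print(record)
# The record could then be transmitted to a data intake and query system
# for analysis, e.g., as the body of an HTTP POST to a collection endpoint.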
Device performance information may include, for instance, a current wireless signal strength of the device, a current connection type and network carrier, current memory performance information, a geographic location of the device, a device orientation, and any other information related to the operational state of the client device. In some embodiments, the monitoring component112may also monitor and collect other device profile information including, for example, a type of client device, a manufacturer, and model of the device, versions of various software applications installed on the device, and so forth. In general, a monitoring component112may be configured to generate performance data in response to a monitor trigger in the code of a client application110or other triggering application event, as described above, and to store the performance data in one or more data records. Each data record, for example, may include a collection of field-value pairs, each field-value pair storing a particular item of performance data in association with a field for the item. For example, a data record generated by a monitoring component112may include a “networkLatency” field (not shown in the Figure) in which a value is stored. This field indicates a network latency measurement associated with one or more network requests. The data record may include a “state” field to store a value indicating a state of a network connection, and so forth for any number of aspects of collected performance data. 3.4. Data Server System FIG.2is a block diagram of an example data intake and query system108, in accordance with example embodiments. System108includes one or more forwarders204that receive data from a variety of input data sources202, and one or more indexers206that process and store the data in one or more data stores208. These forwarders204and indexers206can comprise separate computer systems, or may alternatively comprise separate processes executing on one or more computer systems. Each data source202broadly represents a distinct source of data that can be consumed by system108. Examples of data sources202include, without limitation, data files, directories of files, data sent over a network, event logs, registries, etc. During operation, the forwarders204identify which indexers206receive data collected from a data source202and forward the data to the appropriate indexers. Forwarders204can also perform operations on the data before forwarding, including removing extraneous data, detecting timestamps in the data, parsing data, indexing data, routing data based on criteria relating to the data being routed, and/or performing other data transformations. In some embodiments, a forwarder204may comprise a service accessible to client devices102and host devices106via a network104. For example, one type of forwarder204may be capable of consuming vast amounts of real-time data from a potentially large number of client devices102and/or host devices106. The forwarder204may, for example, comprise a computing device which implements multiple data pipelines or “queues” to handle forwarding of network data to indexers206. A forwarder204may also perform many of the functions that are performed by an indexer. For example, a forwarder204may perform keyword extractions on raw data or parse raw data to create events. A forwarder204may generate time stamps for events. Additionally or alternatively, a forwarder204may perform routing of events to indexers206.
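One way to picture the forwarder behavior described above is the following sketch, in which a block of raw data is annotated with metadata and routed to an indexer queue according to a routing criterion; the routing table, queue names, and sample data are hypothetical.

from collections import defaultdict

# Stand-ins for connections to indexers206.
INDEXER_QUEUES = defaultdict(list)

# Routing criterion: choose an indexer by source type.
ROUTES = {"access_log": "indexer-1", "error_log": "indexer-2"}

def forward(raw_block, host, source, sourcetype):
    # Annotate the block with metadata and enqueue it for the indexer
    # selected by the routing criterion.
    annotated = {"host": host, "source": source,
                 "sourcetype": sourcetype, "data": raw_block}
    INDEXER_QUEUES[ROUTES.get(sourcetype, "indexer-1")].append(annotated)

forward('10.0.1.2 - eva [10/Oct/2000:13:55:36] "GET /a.gif" 200 2326',
        host="www1", source="/var/log/access.log", sourcetype="access_log")
print(dict(INDEXER_QUEUES))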
Data store208may contain events derived from machine data from a variety of sources all pertaining to the same component in an IT environment, and this data may be produced by the machine in question or by other components in the IT environment. 3.5. Cloud-Based System Overview The example data intake and query system108described in reference toFIG.2comprises several system components, including one or more forwarders, indexers, and search heads. In some environments, a user of a data intake and query system108may install and configure, on computing devices owned and operated by the user, one or more software applications that implement some or all of these system components. For example, a user may install a software application on server computers owned by the user and configure each server to operate as one or more of a forwarder, an indexer, a search head, etc. This arrangement generally may be referred to as an “on-premises” solution. That is, the system108is installed and operates on computing devices directly controlled by the user of the system. Some users may prefer an on-premises solution because it may provide a greater level of control over the configuration of certain aspects of the system (e.g., security, privacy, standards, controls, etc.). However, other users may instead prefer an arrangement in which the user is not directly responsible for providing and managing the computing devices upon which various components of system108operate. In one embodiment, to provide an alternative to an entirely on-premises environment for system108, one or more of the components of a data intake and query system instead may be provided as a cloud-based service. In this context, a cloud-based service refers to a service hosted by one or more computing resources that are accessible to end users over a network, for example, by using a web browser or other application on a client device to interface with the remote computing resources. For example, a service provider may provide a cloud-based data intake and query system by managing computing resources configured to implement various aspects of the system (e.g., forwarders, indexers, search heads, etc.) and by providing access to the system to end users via a network. Typically, a user may pay a subscription or other fee to use such a service. Each subscribing user of the cloud-based service may be provided with an account that enables the user to configure a customized cloud-based system based on the user's preferences. FIG.3illustrates a block diagram of an example cloud-based data intake and query system. Similar to the system ofFIG.2, the networked computer system300includes input data sources202and forwarders204. These input data sources and forwarders may be in a subscriber's private computing environment. Alternatively, they might be directly managed by the service provider as part of the cloud service. In the example system300, one or more forwarders204and client devices302are coupled to a cloud-based data intake and query system306via one or more networks304. Network304broadly represents one or more LANs, WANs, cellular networks, intranetworks, internetworks, etc., using any of wired, wireless, terrestrial microwave, satellite links, etc., and may include the public Internet, and is used by client devices302and forwarders204to access the system306. Similar to the system ofFIG.2, each of the forwarders204may be configured to receive data from an input source and to forward the data to other components of the system306for further processing.
In some embodiments, a cloud-based data intake and query system306may comprise a plurality of system instances308. In general, each system instance308may include one or more computing resources managed by a provider of the cloud-based system306and made available to a particular subscriber. The computing resources comprising a system instance308may, for example, include one or more servers or other devices configured to implement one or more forwarders, indexers, search heads, and other components of a data intake and query system, similar to system108. As indicated above, a subscriber may use a web browser or other application of a client device302to access a web portal or other interface that enables the subscriber to configure an instance308. Providing a data intake and query system as described in reference to system108as a cloud-based service presents a number of challenges. Each of the components of a system108(e.g., forwarders, indexers, and search heads) may at times refer to various configuration files stored locally at each component. These configuration files typically may involve some level of user configuration to accommodate particular types of data a user desires to analyze and to account for other user preferences. However, in a cloud-based service context, users typically may not have direct access to the underlying computing resources implementing the various system components (e.g., the computing resources comprising each system instance308) and may desire to make such configurations indirectly, for example, using one or more web-based interfaces. Thus, the techniques and systems described herein for providing user interfaces that enable a user to configure source type definitions are applicable to both on-premises and cloud-based service contexts, or some combination thereof (e.g., a hybrid system where both an on-premises environment, such as SPLUNK® ENTERPRISE, and a cloud-based environment, such as SPLUNK CLOUD™, are centrally visible). 3.6. Searching Externally-Archived Data FIG.4shows a block diagram of an example of a data intake and query system108that provides transparent search facilities for data systems that are external to the data intake and query system. Such facilities are available in the Splunk® Analytics for Hadoop® system provided by Splunk Inc. of San Francisco, California. Splunk® Analytics for Hadoop® represents an analytics platform that enables business and IT teams to rapidly explore, analyze, and visualize data in Hadoop® and NoSQL data stores. The search head210of the data intake and query system receives search requests from one or more client devices404over network connections420. As discussed above, the data intake and query system108may reside in an enterprise location, in the cloud, etc.FIG.4illustrates that multiple client devices404a,404b, . . . ,404nmay communicate with the data intake and query system108. The client devices404may communicate with the data intake and query system using a variety of connections. For example, one client device inFIG.4is illustrated as communicating over an Internet (Web) protocol, another client device is illustrated as communicating via a command line interface, and another client device is illustrated as communicating via a software developer kit (SDK). The search head210analyzes the received search request to identify request parameters.
If a search request received from one of the client devices404references an index maintained by the data intake and query system, then the search head210connects to one or more indexers206of the data intake and query system for the index referenced in the request parameters. That is, if the request parameters of the search request reference an index, then the search head accesses the data in the index via the indexer. The data intake and query system108may include one or more indexers206, depending on system access resources and requirements. As described further below, the indexers206retrieve data from their respective local data stores208as specified in the search request. The indexers and their respective data stores can comprise one or more storage devices and typically reside on the same system, though they may be connected via a local network connection. If the request parameters of the received search request reference an external data collection, which is not accessible to the indexers206or under the management of the data intake and query system, then the search head210can access the external data collection through an External Result Provider (ERP) process410. An external data collection may be referred to as a “virtual index” (plural, “virtual indices”). An ERP process provides an interface through which the search head210may access virtual indices. Thus, a search reference to an index of the system relates to a locally stored and managed data collection. In contrast, a search reference to a virtual index relates to an externally stored and managed data collection, which the search head may access through one or more ERP processes410,412.FIG.4shows two ERP processes410,412that connect to respective remote (external) virtual indices, which are indicated as a Hadoop or another system414(e.g., Amazon S3, Amazon EMR, other Hadoop® Compatible File Systems (HCFS), etc.) and a relational database management system (RDBMS)416. Other virtual indices may include other file organizations and protocols, such as Structured Query Language (SQL) and the like. The ellipses between the ERP processes410,412indicate optional additional ERP processes of the data intake and query system108. An ERP process may be a computer process that is initiated or spawned by the search head210and is executed by the data intake and query system108. Alternatively or additionally, an ERP process may be a process spawned by the search head210on the same or different host system as the search head210resides. The search head210may spawn a single ERP process in response to multiple virtual indices referenced in a search request, or the search head may spawn different ERP processes for different virtual indices. Generally, virtual indices that share common data configurations or protocols may share ERP processes. For example, all search query references to a Hadoop file system may be processed by the same ERP process, if the ERP process is suitably configured. Likewise, all search query references to a SQL database may be processed by the same ERP process. In addition, the search head may provide a common ERP process for common external data source types (e.g., a common vendor may utilize a common ERP process, even if the vendor includes different data storage system types, such as Hadoop and SQL). Common indexing schemes also may be handled by common ERP processes, such as flat text files or Weblog files.
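A simple pooling policy along these lines may be sketched as follows; the family names and virtual index identifiers are hypothetical, and the sketch is not a description of how the search head210actually spawns ERP processes410,412.

# Hypothetical mapping of virtual indices to external result provider
# families (e.g., all Hadoop references share a family).
FAMILIES = {"hadoop_prod": "hadoop",
            "hadoop_dev": "hadoop",
            "orders_db": "sql"}

def assign_erp_processes(referenced_indices):
    # Model one ERP process per family, so that virtual indices sharing
    # a common data configuration or protocol share an ERP process.
    pool = {}
    for index in referenced_indices:
        pool.setdefault(FAMILIES[index], []).append(index)
    return pool

for family, indices in assign_erp_processes(
        ["hadoop_prod", "hadoop_dev", "orders_db"]).items():
    print(family, "->", indices)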
The search head210determines the number of ERP processes to be initiated via the use of configuration parameters that are included in a search request message. Generally, there is a one-to-many relationship between an external results provider “family” and ERP processes. There is also a one-to-many relationship between an ERP process and corresponding virtual indices that are referred to in a search request. For example, using RDBMS, assume two independent instances of such a system by one vendor, such as one RDBMS for production and another RDBMS used for development. In such a situation, it is likely preferable (but optional) to use two ERP processes to maintain the independent operation as between production and development data. Both of the ERPs, however, will belong to the same family, because the two RDBMS system types are from the same vendor. The ERP processes410,412receive a search request from the search head210. The search head may optimize the received search request for execution at the respective external virtual index. Alternatively, the ERP process may receive a search request as a result of analysis performed by the search head or by a different system process. The ERP processes410,412can communicate with the search head210via conventional input/output routines (e.g., standard in/standard out, etc.). In this way, the ERP process receives the search request from a client device such that the search request may be efficiently executed at the corresponding external virtual index. The ERP processes410,412may be implemented as a process of the data intake and query system. Each ERP process may be provided by the data intake and query system, or may be provided by process or application providers who are independent of the data intake and query system. Each respective ERP process may include an interface application installed at a computer of the external result provider that ensures proper communication between the search support system and the external result provider. The ERP processes410,412generate appropriate search requests in the protocol and syntax of the respective virtual indices414,416, each of which corresponds to the search request received by the search head210. Upon receiving search results from their corresponding virtual indices, the respective ERP process passes the result to the search head210, which may return or display the results or a processed set of results based on the returned results to the respective client device. Client devices404may communicate with the data intake and query system108through a network interface420, e.g., one or more LANs, WANs, cellular networks, intranetworks, and/or internetworks using any of wired, wireless, terrestrial microwave, satellite links, etc., and may include the public Internet. The analytics platform utilizing the External Result Provider process is described in more detail in U.S. Pat. No. 8,738,629, entitled “EXTERNAL RESULT PROVIDED PROCESS FOR RETRIEVING DATA STORED USING A DIFFERENT CONFIGURATION OR PROTOCOL”, issued on 27 May 2014, U.S. Pat. No. 8,738,587, entitled “PROCESSING A SYSTEM SEARCH REQUEST BY RETRIEVING RESULTS FROM BOTH A NATIVE INDEX AND A VIRTUAL INDEX”, issued on 27 May 2014, U.S. patent application Ser. No. 14/266,832, entitled “PROCESSING A SYSTEM SEARCH REQUEST ACROSS DISPARATE DATA COLLECTION SYSTEMS”, filed on 1 May 2014, and U.S. Pat. No. 9,514,189, entitled “PROCESSING A SYSTEM SEARCH REQUEST INCLUDING EXTERNAL DATA SOURCES”, issued on 6 Dec.
2016, each of which is hereby incorporated by reference in its entirety for all purposes. 3.6.1. ERP Process Features The ERP processes described above may include two operation modes: a streaming mode and a reporting mode. The ERP processes can operate in streaming mode only, in reporting mode only, or in both modes simultaneously. Operating in both modes simultaneously is referred to as mixed mode operation. In a mixed mode operation, the ERP at some point can stop providing the search head with streaming results and only provide reporting results thereafter, or the search head at some point may start ignoring streaming results it has been using and only use reporting results thereafter. The streaming mode returns search results in real time, with minimal processing, in response to the search request. The reporting mode provides results of a search request with processing of the search results prior to providing them to the requesting search head, which in turn provides results to the requesting client device. ERP operation with such multiple modes provides greater performance flexibility with regard to report time, search latency, and resource utilization. In a mixed mode operation, both streaming mode and reporting mode are operating simultaneously. The streaming mode results (e.g., the machine data obtained from the external data source) are provided to the search head, which can then process the results data (e.g., break the machine data into events, timestamp it, filter it, etc.) and integrate the results data with the results data from other external data sources, and/or from data stores of the search head. The search head performs such processing and can immediately start returning interim (streaming mode) results to the user at the requesting client device; simultaneously, the search head is waiting for the ERP process to process the data it is retrieving from the external data source as a result of the concurrently executing reporting mode. In some instances, the ERP process initially operates in a mixed mode, such that the streaming mode operates to enable the ERP quickly to return interim results (e.g., some of the machine data or unprocessed data necessary to respond to a search request) to the search head, enabling the search head to process the interim results and begin providing to the client or search requester interim results that are responsive to the query. Meanwhile, in this mixed mode, the ERP also operates concurrently in reporting mode, processing portions of machine data in a manner responsive to the search query. Upon determining that it has results from the reporting mode available to return to the search head, the ERP may halt processing in the mixed mode at that time (or some later time) by stopping the return of data in streaming mode to the search head and switching to reporting mode only. The ERP at this point starts sending interim results in reporting mode to the search head, which in turn may then present this processed data responsive to the search request to the client or search requester. Typically the search head switches from using results from the ERP's streaming mode of operation to results from the ERP's reporting mode of operation when the higher bandwidth results from the reporting mode outstrip the amount of data processed by the search head in the streaming mode of ERP operation. A reporting mode may have a higher bandwidth because the ERP does not have to spend time transferring data to the search head for processing all the machine data.
In addition, the ERP may optionally direct another processor to do the processing. The streaming mode of operation does not need to be stopped to gain the higher bandwidth benefits of a reporting mode; the search head could simply stop using the streaming mode results—and start using the reporting mode results—when the bandwidth of the reporting mode has caught up with or exceeded the amount of bandwidth provided by the streaming mode. Thus, a variety of triggers and ways to accomplish a search head's switch from using streaming mode results to using reporting mode results may be appreciated by one skilled in the art. The reporting mode can involve the ERP process (or an external system) performing event breaking, time stamping, filtering of events to match the search query request, and calculating statistics on the results. The user can request particular types of data, such as if the search query itself involves types of events, or the search request may ask for statistics on data, such as on events that meet the search request. In either case, the search head understands the query language used in the received query request, which may be a proprietary language. One exemplary query language is Splunk Processing Language (SPL) developed by the assignee of the application, Splunk Inc. The search head typically understands how to use that language to obtain data from the indexers, which store data in a format used by the SPLUNK® Enterprise system. The ERP processes support the search head, as the search head is not ordinarily configured to understand the format in which data is stored in external data sources such as Hadoop or SQL data systems. Rather, the ERP process performs that translation from the query submitted in the search support system's native format (e.g., SPL if SPLUNK® ENTERPRISE is used as the search support system) to a search query request format that will be accepted by the corresponding external data system. The external data system typically stores data in a different format from that of the search support system's native index format, and it utilizes a different query language (e.g., SQL or MapReduce, rather than SPL or the like). As noted, the ERP process can operate in the streaming mode alone. After the ERP process has performed the translation of the query request and received raw results from the streaming mode, the search head can integrate the returned data with any data obtained from local data sources (e.g., native to the search support system), other external data sources, and other ERP processes (if such operations were required to satisfy the terms of the search query). An advantage of mixed mode operation is that, in addition to streaming mode, the ERP process is also executing concurrently in reporting mode. Thus, the ERP process (rather than the search head) is processing query results (e.g., performing event breaking, timestamping, filtering, possibly calculating statistics if required to be responsive to the search query request, etc.). It should be apparent to those skilled in the art that additional time is needed for the ERP process to perform the processing in such a configuration. Therefore, the streaming mode will allow the search head to start returning interim results to the user at the client device before the ERP process can complete sufficient processing to start returning any search results. 
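As a non-limiting illustration, the following simplified Python sketch shows a search head-style consumer switching from streaming mode results to reporting mode results; the chunk contents and the fixed switchover point are hypothetical stand-ins for the switchover triggers described herein.

def mixed_mode(streaming_chunks, reporting_chunks, reporting_ready_after):
    # Return interim streaming results until the reporting mode starts
    # producing meaningful results, then reporting results only.
    results = []
    for i, chunk in enumerate(streaming_chunks):
        if i >= reporting_ready_after:
            break  # switchover: stop using streaming mode results
        results.append(("streaming", chunk))
    results.extend(("reporting", chunk) for chunk in reporting_chunks)
    return results

# Streaming yields raw events with minimal processing; reporting yields
# processed results (e.g., statistics) after a delay of two intervals.
for mode, payload in mixed_mode(streaming_chunks=[["raw1"], ["raw2"], ["raw3"]],
                                reporting_chunks=[{"count": 3}],
                                reporting_ready_after=2):
    print(mode, payload)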
The switchover between streaming and reporting mode happens when the ERP process determines that the switchover is appropriate, such as when the ERP process determines it can begin returning meaningful results from its reporting mode. The operation described above illustrates the source of operational latency: streaming mode has low latency (immediate results) and usually has relatively low bandwidth (fewer results can be returned per unit of time). In contrast, the concurrently running reporting mode has relatively high latency (it has to perform a lot more processing before returning any results) and usually has relatively high bandwidth (more results can be processed per unit of time). For example, when the ERP process does begin returning report results, it returns more processed results than in the streaming mode, because, e.g., statistics only need to be calculated to be responsive to the search request. That is, the ERP process doesn't have to take time to first return machine data to the search head. As noted, the ERP process could be configured to operate in streaming mode alone and return just the machine data for the search head to process in a way that is responsive to the search request. Alternatively, the ERP process can be configured to operate in the reporting mode only. Also, the ERP process can be configured to operate in streaming mode and reporting mode concurrently, as described, with the ERP process stopping the transmission of streaming results to the search head when the concurrently running reporting mode has caught up and started providing results. The reporting mode does not require the processing of all machine data that is responsive to the search query request before the ERP process starts returning results; rather, the reporting mode usually performs processing of chunks of events and returns the processing results to the search head for each chunk. For example, an ERP process can be configured to merely return the contents of a search result file verbatim, with little or no processing of results. That way, the search head performs all processing (such as parsing byte streams into events, filtering, etc.). The ERP process can be configured to perform additional intelligence, such as analyzing the search request and handling all the computation that a native search indexer process would otherwise perform. In this way, the configured ERP process provides greater flexibility in features while operating according to desired preferences, such as response latency and resource requirements. 3.7. Data Ingestion FIG.5Ais a flow chart of an example method that illustrates how indexers process, index, and store data received from forwarders, in accordance with example embodiments. The data flow illustrated inFIG.5Ais provided for illustrative purposes only; those skilled in the art would understand that one or more of the steps of the processes illustrated inFIG.5Amay be removed or that the ordering of the steps may be changed. Furthermore, for the purposes of illustrating a clear example, one or more particular system components are described in the context of performing various operations during each of the data flow stages. For example, a forwarder is described as receiving and processing machine data during an input phase; an indexer is described as parsing and indexing machine data during parsing and indexing phases; and a search head is described as performing a search query during a search phase. 
However, other system arrangements and distributions of the processing steps across system components may be used. 3.7.1. Input At block502, a forwarder receives data from an input source, such as a data source202shown inFIG.2. A forwarder initially may receive the data as a raw data stream generated by the input source. For example, a forwarder may receive a data stream from a log file generated by an application server, from a stream of network data from a network device, or from any other source of data. In some embodiments, a forwarder receives the raw data and may segment the data stream into “blocks”, possibly of a uniform data size, to facilitate subsequent processing steps. At block504, a forwarder or other system component annotates each block generated from the raw data with one or more metadata fields. These metadata fields may, for example, provide information related to the data block as a whole and may apply to each event that is subsequently derived from the data in the data block. For example, the metadata fields may include separate fields specifying each of a host, a source, and a source type related to the data block. A host field may contain a value identifying a host name or IP address of a device that generated the data. A source field may contain a value identifying a source of the data, such as a pathname of a file or a protocol and port related to received network data. A source type field may contain a value specifying a particular source type label for the data. Additional metadata fields may also be included during the input phase, such as a character encoding of the data, if known, and possibly other values that provide information relevant to later processing steps. In some embodiments, a forwarder forwards the annotated data blocks to another system component (typically an indexer) for further processing. The data intake and query system allows forwarding of data from one data intake and query instance to another, or even to a third-party system. The data intake and query system can employ different types of forwarders in a configuration. In some embodiments, a forwarder may contain the essential components needed to forward data. A forwarder can gather data from a variety of inputs and forward the data to an indexer for indexing and searching. A forwarder can also tag metadata (e.g., source, source type, host, etc.). In some embodiments, a forwarder has the capabilities of the aforementioned forwarder as well as additional capabilities. The forwarder can parse data before forwarding the data (e.g., can associate a time stamp with a portion of data and create an event, etc.) and can route data based on criteria such as source or type of event. The forwarder can also index data locally while forwarding the data to another indexer. 3.7.2. Parsing At block506, an indexer receives data blocks from a forwarder and parses the data to organize the data into events. In some embodiments, to organize the data into events, an indexer may determine a source type associated with each data block (e.g., by extracting a source type label from the metadata fields associated with the data block, etc.) and refer to a source type configuration corresponding to the identified source type. The source type definition may include one or more properties that indicate to the indexer to automatically determine the boundaries within the received data that indicate the portions of machine data for events. 
In general, these properties may include regular expression-based rules or delimiter rules where, for example, event boundaries may be indicated by predefined characters or character strings. These predefined characters may include punctuation marks or other special characters including, for example, carriage returns, tabs, spaces, line breaks, etc. If a source type for the data is unknown to the indexer, an indexer may infer a source type for the data by examining the structure of the data. Then, the indexer can apply an inferred source type definition to the data to create the events. At block508, the indexer determines a timestamp for each event. Similar to the process for parsing machine data, an indexer may again refer to a source type definition associated with the data to locate one or more properties that indicate instructions for determining a timestamp for each event. The properties may, for example, instruct an indexer to extract a time value from a portion of data for the event, to interpolate time values based on timestamps associated with temporally proximate events, to create a timestamp based on a time the portion of machine data was received or generated, to use the timestamp of a previous event, or use any other rules for determining timestamps. At block510, the indexer associates with each event one or more metadata fields including a field containing the timestamp determined for the event. In some embodiments, a timestamp may be included in the metadata fields. These metadata fields may include any number of “default fields” that are associated with all events, and may also include one or more custom fields as defined by a user. Similar to the metadata fields associated with the data blocks at block504, the default metadata fields associated with each event may include a host, source, and source type field including or in addition to a field storing the timestamp. At block512, an indexer may optionally apply one or more transformations to data included in the events created at block506. For example, such transformations can include removing a portion of an event (e.g., a portion used to define event boundaries, extraneous characters from the event, other extraneous text, etc.), masking a portion of an event (e.g., masking a credit card number), removing redundant portions of an event, etc. The transformations applied to events may, for example, be specified in one or more configuration files and referenced by one or more source type definitions. FIG.5Cillustrates an example of how machine data can be stored in a data store in accordance with various disclosed embodiments. In other embodiments, machine data can be stored in a flat file in a corresponding bucket with an associated index file, such as a time series index or “TSIDX.” As such, the depiction of machine data and associated metadata as rows and columns in the table ofFIG.5Cis merely illustrative and is not intended to limit the data format in which the machine data and metadata is stored in various embodiments described herein. In one particular embodiment, machine data can be stored in a compressed or encrypted format. In such embodiments, the machine data can be stored with or be associated with data that describes the compression or encryption scheme with which the machine data is stored. The information about the compression or encryption scheme can be used to decompress or decrypt the machine data, and any metadata with which it is stored, at search time.
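As a non-limiting illustration, the parsing operations of blocks506-512described above may be sketched as follows; the regular expressions, the masking rule, and the sample block are hypothetical and stand in for properties that would come from a source type definition.

import re

TS = re.compile(r"\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2})\]")
CARD = re.compile(r"\b\d{16}\b")

def parse_block(block):
    # Break the block into events at line boundaries (block506), locate
    # a timestamp in each portion of machine data (block508), associate
    # metadata fields (block510), and mask sensitive data (block512).
    events = []
    for line in block["data"].splitlines():
        match = TS.search(line)
        events.append({"_time": match.group(1) if match else None,
                       "host": block["host"],
                       "source": block["source"],
                       "sourcetype": block["sourcetype"],
                       "_raw": CARD.sub("################", line)})
    return events

block = {"host": "www1", "source": "/var/log/access.log",
         "sourcetype": "access_log",
         "data": '10.0.1.2 - eva [10/Oct/2000:13:55:36] "GET /a.gif" 200 2326\n'
                 '10.0.1.3 - bob [10/Oct/2000:13:56:01] "POST /buy" 200 1234567812345678'}
for event in parse_block(block):
    print(event)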
As mentioned above, certain metadata, e.g., host536, source537, source type538, and timestamps535can be generated for each event, and associated with a corresponding portion of machine data539when storing the event data in a data store, e.g., data store208. Any of the metadata can be extracted from the corresponding machine data, or supplied or defined by an entity, such as a user or computer system. The metadata fields can become part of or stored with the event. Note that while the timestamp metadata field can be extracted from the raw data of each event, the values for the other metadata fields may be determined by the indexer based on information it receives pertaining to the source of the data separate from the machine data. While certain default or user-defined metadata fields can be extracted from the machine data for indexing purposes, all the machine data within an event can be maintained in its original condition. As such, in embodiments in which the portion of machine data included in an event is unprocessed or otherwise unaltered, it is referred to herein as a portion of raw machine data. In other embodiments, the portion of machine data in an event can be processed or otherwise altered. As such, unless certain information needs to be removed for some reason (e.g., extraneous information, confidential information), all the raw machine data contained in an event can be preserved and saved in its original form. Accordingly, the data store in which the event records are stored is sometimes referred to as a “raw record data store.” The raw record data store contains a record of the raw event data tagged with the various default fields. InFIG.5C, the first three rows of the table represent events531,532, and533and are related to a server access log that records requests from multiple clients processed by a server, as indicated by entry of “access.log” in the source column537. In the example shown inFIG.5C, each of the events531-534is associated with a discrete request made from a client device. The raw machine data generated by the server and extracted from a server access log can include the IP address of the client540, the user id of the person requesting the document541, the time the server finished processing the request542, the request line from the client543, the status code returned by the server to the client545, the size of the object returned to the client (in this case, the gif file requested by the client)546, and the time spent to serve the request in microseconds544. As seen inFIG.5C, all the raw machine data retrieved from the server access log is retained and stored as part of the corresponding events531,532, and533in the data store. Event534is associated with an entry in a server error log that records errors the server encountered when processing a client request, as indicated by “error.log” in the source column537. Similar to the events related to the server access log, all the raw machine data in the error log file pertaining to event534can be preserved and stored as part of the event534. Saving minimally processed or unprocessed machine data in a data store associated with metadata fields in a manner similar to that shown inFIG.5Cis advantageous because it allows search of all the machine data at search time instead of searching only previously specified and identified fields or field-value pairs.
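As a non-limiting illustration of this property, the following sketch stores events modeled on the rows ofFIG.5C and scans the preserved raw machine data for an arbitrary term; the timestamps and record contents are hypothetical stand-ins for the figure's values.

EVENTS = [
    {"host": "www1", "source": "access.log", "sourcetype": "access_combined",
     "_time": 971186136,
     "_raw": '91.205.189.15 - eva [10/Oct/2000:13:55:36] "GET /a.gif" 200 2326'},
    {"host": "www1", "source": "error.log", "sourcetype": "apache_error",
     "_time": 971186201,
     "_raw": "[error] File does not exist: /var/www/favicon.ico"},
]

def search_raw(events, term):
    # Because all of the raw machine data is preserved, any term can be
    # searched at search time, not only fields identified in advance.
    return [event for event in events if term in event["_raw"]]

print(search_raw(EVENTS, "favicon"))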
As mentioned above, because data structures used by various embodiments of the present disclosure maintain the underlying raw machine data and use a late-binding schema for searching the raw machine data, it enables a user to continue investigating and learning valuable insights about the raw data. In other words, the user is not compelled to know about all the fields of information that will be needed at data ingestion time. As a user learns more about the data in the events, the user can continue to refine the late-binding schema by defining new extraction rules, or modifying or deleting existing extraction rules used by the system. 3.7.3. Indexing At blocks514and516, an indexer can optionally generate a keyword index to facilitate fast keyword searching for events. To build a keyword index, at block514, the indexer identifies a set of keywords in each event. At block516, the indexer includes the identified keywords in an index, which associates each stored keyword with reference pointers to events containing that keyword (or to locations within events where that keyword is located, other location identifiers, etc.). When an indexer subsequently receives a keyword-based query, the indexer can access the keyword index to quickly identify events containing the keyword. In some embodiments, the keyword index may include entries for field name-value pairs found in events, where a field name-value pair can include a pair of keywords connected by a symbol, such as an equals sign or colon. This way, events containing these field name-value pairs can be quickly located. In some embodiments, fields can automatically be generated for some or all of the field names of the field name-value pairs at the time of indexing. For example, if the string “dest=10.0.1.2” is found in an event, a field named “dest” may be created for the event, and assigned a value of “10.0.1.2”. At block518, the indexer stores the events with an associated timestamp in a data store208. Timestamps enable a user to search for events based on a time range. In some embodiments, the stored events are organized into “buckets,” where each bucket stores events associated with a specific time range based on the timestamps associated with each event. This improves time-based searching, and also allows events with recent timestamps, which may have a higher likelihood of being accessed, to be stored in faster memory to facilitate faster retrieval. For example, buckets containing the most recent events can be stored in flash memory rather than on a hard disk. In some embodiments, each bucket may be associated with an identifier, a time range, and a size constraint. Each indexer206may be responsible for storing and searching a subset of the events contained in a corresponding data store208. By distributing events among the indexers and data stores, the indexers can analyze events for a query in parallel. For example, using map-reduce techniques, each indexer returns partial responses for a subset of events to a search head that combines the results to produce an answer for the query. By storing events in buckets for specific time ranges, an indexer may further optimize the data retrieval process by searching buckets corresponding to time ranges that are relevant to a query. In some embodiments, each indexer has a home directory and a cold directory. The home directory of an indexer stores hot buckets and warm buckets, and the cold directory of an indexer stores cold buckets.
A hot bucket is a bucket that is capable of receiving and storing events. A warm bucket is a bucket that can no longer receive events for storage but has not yet been moved to the cold directory. A cold bucket is a bucket that can no longer receive events and may be a bucket that was previously stored in the home directory. The home directory may be stored in faster memory, such as flash memory, as events may be actively written to the home directory, and the home directory may typically store events that are more frequently searched and thus are accessed more frequently. The cold directory may be stored in slower and/or larger memory, such as a hard disk, as events are no longer being written to the cold directory, and the cold directory may typically store events that are not as frequently searched and thus are accessed less frequently. In some embodiments, an indexer may also have a quarantine bucket that contains events having potentially inaccurate information, such as an incorrect time stamp associated with the event or a time stamp that appears to be an unreasonable time stamp for the corresponding event. The quarantine bucket may have events from any time range; as such, the quarantine bucket may always be searched at search time. Additionally, an indexer may store old, archived data in a frozen bucket that is not capable of being searched at search time. In some embodiments, a frozen bucket may be stored in slower and/or larger memory, such as a hard disk, and may be stored in offline and/or remote storage. Moreover, events and buckets can also be replicated across different indexers and data stores to facilitate high availability and disaster recovery as described in U.S. Pat. No. 9,130,971, entitled “SITE-BASED SEARCH AFFINITY”, issued on 8 Sep. 2015, and in U.S. patent Ser. No. 14/266,817, entitled “MULTI-SITE CLUSTERING”, issued on 1 Sep. 2015, each of which is hereby incorporated by reference in its entirety for all purposes. As will be described in greater detail below with reference to, inter alia,FIGS.18-49, some functionality of the indexer can be handled by different components of the system. For example, in some cases, the indexer indexes semi-processed, or cooked data (e.g., data that has been parsed and/or had some fields determined for it), and stores the results in common storage. FIG.5Bis a block diagram of an example data store501that includes a directory for each index (or partition) that contains a portion of data managed by an indexer.FIG.5Bfurther illustrates details of an embodiment of an inverted index507B and an event reference array515associated with inverted index507B. The data store501can correspond to a data store208that stores events managed by an indexer206or can correspond to a different data store associated with an indexer206. In the illustrated embodiment, the data store501includes a _main directory503associated with a _main index and a _test directory505associated with a _test index. However, the data store501can include fewer or more directories. In some embodiments, multiple indexes can share a single directory or all indexes can share a common directory. Additionally, although illustrated as a single data store501, it will be understood that the data store501can be implemented as multiple data stores storing different portions of the information shown inFIG.5B. For example, a single index or partition can span multiple directories or multiple data stores, and can be indexed or searched by multiple corresponding indexers. 
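Before turning to the details of the inverted indexes ofFIG.5B, the keyword indexing of blocks514and516described above may be sketched as follows; the sample events, tokens, and field name-value pair pattern are hypothetical, and the sketch is not a description of the indexer206implementation.

import re
from collections import defaultdict

PAIR = re.compile(r"(\w+)=([\w.]+)")

def build_keyword_index(events):
    # Associate each keyword, and each field name-value pair found in
    # the machine data, with references to the events containing it.
    index = defaultdict(set)
    for ref, event in enumerate(events):
        for token in event["_raw"].split():
            index[token].add(ref)
        for name, value in PAIR.findall(event["_raw"]):
            index[name + "=" + value].add(ref)  # e.g., "dest=10.0.1.2"
    return index

events = [{"_raw": "error dest=10.0.1.2 retrying"},
          {"_raw": "ok dest=10.0.1.7"}]
index = build_keyword_index(events)
print(sorted(index["error"]), sorted(index["dest=10.0.1.2"]))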
In the illustrated embodiment ofFIG.5B, the index-specific directories503and505include inverted indexes507A,507B and509A,509B, respectively. The inverted indexes507A . . .507B, and509A . . .509B can be keyword indexes or field-value pair indexes described herein and can include less or more information than depicted inFIG.5B.

In some embodiments, each inverted index507A . . .507B, and509A . . .509B can correspond to a distinct time-series bucket that is managed by the indexer206and that contains events corresponding to the relevant index (e.g., _main index, _test index). As such, each inverted index can correspond to a particular range of time for an index. Additional files, such as high-performance indexes for each time-series bucket of an index, can also be stored in the same directory as the inverted indexes507A . . .507B, and509A . . .509B. In some embodiments, an inverted index can correspond to multiple time-series buckets, or multiple inverted indexes can correspond to a single time-series bucket.

Each inverted index507A . . .507B, and509A . . .509B can include one or more entries, such as keyword (or token) entries or field-value pair entries. Furthermore, in certain embodiments, the inverted indexes507A . . .507B, and509A . . .509B can include additional information, such as a time range523associated with the inverted index or an index identifier525identifying the index associated with the inverted index507A . . .507B, and509A . . .509B. However, each inverted index507A . . .507B, and509A . . .509B can include less or more information than depicted.

Token entries, such as token entries511illustrated in inverted index507B, can include a token511A (e.g., "error," "itemID," etc.) and event references511B indicative of events that include the token. For example, for the token "error," the corresponding token entry includes the token "error" and an event reference, or unique identifier, for each event stored in the corresponding time-series bucket that includes the token "error." In the illustrated embodiment ofFIG.5B, the error token entry includes the identifiers 3, 5, 6, 8, 11, and 12 corresponding to events managed by the indexer206and associated with the index _main503that are located in the time-series bucket associated with the inverted index507B.

In some cases, some token entries can be default entries, automatically determined entries, or user-specified entries. In some embodiments, the indexer206can identify each word or string in an event as a distinct token and generate a token entry for it. In some cases, the indexer206can identify the beginning and ending of tokens based on punctuation and spaces, as described in greater detail herein. In certain cases, the indexer206can rely on user input or a configuration file to identify tokens for token entries511, etc. It will be understood that any combination of token entries can be included by default, automatically determined, or included based on user-specified criteria.

Similarly, field-value pair entries, such as field-value pair entries513shown in inverted index507B, can include a field-value pair513A and event references513B indicative of events that include a field value that corresponds to the field-value pair.
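A minimal sketch of this structure, with invented names; the reference numerals in the comments point back toFIG.5B, and the prose examples continue below:

```python
from dataclasses import dataclass, field
from typing import Dict, Set, Tuple

@dataclass
class InvertedIndex:
    """One inverted index per time-series bucket, loosely mirroring FIG. 5B."""
    index_name: str                  # index identifier (cf. 525), e.g. "_main"
    time_range: Tuple[float, float]  # bucket time range (cf. 523)
    tokens: Dict[str, Set[int]] = field(default_factory=dict)                    # token entries (cf. 511)
    field_values: Dict[Tuple[str, str], Set[int]] = field(default_factory=dict)  # field-value pair entries (cf. 513)

    def add_token(self, token: str, event_ref: int) -> None:
        self.tokens.setdefault(token, set()).add(event_ref)

    def add_field_value(self, name: str, value: str, event_ref: int) -> None:
        self.field_values.setdefault((name, value), set()).add(event_ref)

# Indexing one event places its reference in many token entries but in
# exactly one field-value pair entry per field.
ii = InvertedIndex("_main", (0.0, 3600.0))
ii.add_token("error", 3)
ii.add_field_value("sourcetype", "sourcetypeA", 3)
assert 3 in ii.tokens["error"] and 3 in ii.field_values[("sourcetype", "sourcetypeA")]
```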
Returning to the illustrated embodiment, for a field-value pair sourcetype::sendmail, a field-value pair entry would include the field-value pair sourcetype::sendmail and a unique identifier, or event reference, for each event stored in the corresponding time-series bucket that includes a sendmail sourcetype.

In some cases, the field-value pair entries513can be default entries, automatically determined entries, or user-specified entries. As a non-limiting example, the field-value pair entries for the fields host, source, and sourcetype can be included in the inverted indexes507A . . .507B, and509A . . .509B as a default. As such, all of the inverted indexes507A . . .507B, and509A . . .509B can include field-value pair entries for the fields host, source, and sourcetype. As yet another non-limiting example, the field-value pair entries for the IP_address field can be user specified and may only appear in the inverted index507B based on user-specified criteria. As another non-limiting example, as the indexer indexes the events, it can automatically identify field-value pairs and create field-value pair entries. For example, based on the indexer's review of events, it can identify IP_address as a field in each event and add the IP_address field-value pair entries to the inverted index507B. It will be understood that any combination of field-value pair entries can be included as a default, automatically determined, or included based on user-specified criteria.

Each unique identifier517, or event reference, can correspond to a unique event located in the time-series bucket. However, the same event reference can be located in multiple entries. For example, if an event has a sourcetype splunkd, a host www1, and the token "warning," then the unique identifier for the event will appear in the field-value pair entries sourcetype::splunkd and host::www1, as well as the token entry "warning." With reference to the illustrated embodiment ofFIG.5Band the event that corresponds to the event reference 3, the event reference 3 is found in the field-value pair entries513host::hostA, source::sourceB, sourcetype::sourcetypeA, and IP_address::91.205.189.15, indicating that the event corresponding to the event reference 3 is from hostA, sourceB, of sourcetypeA, and includes 91.205.189.15 in the event data.

For some fields, the unique identifier is located in only one field-value pair entry for a particular field. For example, the inverted index may include four sourcetype field-value pair entries corresponding to four different sourcetypes of the events stored in a bucket (e.g., sourcetypes: sendmail, splunkd, web access, and web service). Within those four sourcetype field-value pair entries, an identifier for a particular event may appear in only one of the field-value pair entries. With continued reference to the example illustrated embodiment ofFIG.5B, because the event reference 7 appears in the field-value pair entry sourcetype::sourcetypeA, it does not appear in the other field-value pair entries for the sourcetype field, including sourcetype::sourcetypeB, sourcetype::sourcetypeC, and sourcetype::sourcetypeD.

The event references517can be used to locate the events in the corresponding bucket. For example, the inverted index can include, or be associated with, an event reference array515. The event reference array515can include an array entry517for each event reference in the inverted index507B.
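Continuing the sketch, the event reference array might pair each reference with a seek address and a timestamp, the two items described next; the values below are invented:

```python
from dataclasses import dataclass

@dataclass
class ArrayEntry:
    """One entry per event reference, pairing it with location information
    (e.g., a seek address into the raw data) and a timestamp."""
    seek_address: int
    timestamp: float

# Hypothetical event reference array: event reference -> array entry.
event_reference_array = {
    1: ArrayEntry(seek_address=0,    timestamp=1488385280.0),
    2: ArrayEntry(seek_address=512,  timestamp=1488385291.5),
    3: ArrayEntry(seek_address=1024, timestamp=1488385303.2),
}
```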
Each array entry517can include location information519of the event corresponding to the unique identifier (non-limiting example: the seek address of the event), a timestamp521associated with the event, or additional information regarding the event associated with the event reference, etc.

For each token entry511or field-value pair entry513, the event references, or unique identifiers, can be listed in chronological order, or the value of the event reference can be assigned based on chronological data, such as a timestamp associated with the event referenced by the event reference. For example, the event reference 1 in the illustrated embodiment ofFIG.5Bcan correspond to the first-in-time event for the bucket, and the event reference 12 can correspond to the last-in-time event for the bucket. However, the event references can be listed in any order, such as reverse chronological order, ascending order, descending order, or some other order, etc. Further, the entries can be sorted. For example, the entries can be sorted alphabetically (collectively or within a particular group), by entry origin (e.g., default, automatically generated, user-specified, etc.), by entry type (e.g., field-value pair entry, token entry, etc.), or chronologically by when they were added to the inverted index, etc. In the illustrated embodiment ofFIG.5B, the entries are sorted first by entry type and then alphabetically.

As a non-limiting example of how the inverted indexes507A . . .507B, and509A . . .509B can be used during a data categorization request command, the indexers can receive filter criteria indicating the data that is to be categorized and categorization criteria indicating how the data is to be categorized. Example filter criteria can include, but are not limited to, indexes (or partitions), hosts, sources, sourcetypes, time ranges, field identifiers, keywords, etc.

Using the filter criteria, the indexer identifies relevant inverted indexes to be searched. For example, if the filter criteria includes a set of partitions, the indexer can identify the inverted indexes stored in the directory corresponding to the particular partition as relevant inverted indexes. Other means can be used to identify inverted indexes associated with a partition of interest. For example, in some embodiments, the indexer can review an entry in the inverted indexes, such as an index-value pair entry513, to determine if a particular inverted index is relevant. If the filter criteria does not identify any partition, then the indexer can identify all inverted indexes managed by the indexer as relevant inverted indexes. Similarly, if the filter criteria includes a time range, the indexer can identify inverted indexes corresponding to buckets that satisfy at least a portion of the time range as relevant inverted indexes. For example, if the time range is the last hour, then the indexer can identify all inverted indexes that correspond to buckets storing events associated with timestamps within the last hour as relevant inverted indexes.

When used in combination, an index filter criterion specifying one or more partitions and a time range filter criterion specifying a particular time range can be used to identify a subset of inverted indexes within a particular directory (or otherwise associated with a particular partition) as relevant inverted indexes. As such, the indexer can focus the processing on only a subset of the total number of inverted indexes that the indexer manages.
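Continuing the hypothetical InvertedIndex sketch above, the narrowing step might look as follows:

```python
def relevant_indexes(all_indexes, partitions=None, time_range=None):
    """Select the inverted indexes to search: those in a requested
    partition whose bucket time range overlaps the requested range."""
    selected = []
    for ii in all_indexes:
        if partitions is not None and ii.index_name not in partitions:
            continue  # index filter criterion not satisfied
        if time_range is not None:
            lo, hi = time_range
            earliest, latest = ii.time_range
            if latest < lo or earliest > hi:
                continue  # bucket lies entirely outside the requested range
        selected.append(ii)
    return selected
```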
Once the relevant inverted indexes are identified, the indexer can review them using any additional filter criteria to identify events that satisfy the filter criteria. In some cases, using the known location of the directory in which the relevant inverted indexes are located, the indexer can determine that any events identified using the relevant inverted indexes satisfy an index filter criterion. For example, if the filter criteria includes a partition main, then the indexer can determine that any events identified using inverted indexes within the partition main directory (or otherwise associated with the partition main) satisfy the index filter criterion.

Furthermore, based on the time range associated with each inverted index, the indexer can determine that any events identified using a particular inverted index satisfy a time range filter criterion. For example, if a time range filter criterion is for the last hour and a particular inverted index corresponds to events within a time range of 50 minutes ago to 35 minutes ago, the indexer can determine that any events identified using the particular inverted index satisfy the time range filter criterion. Conversely, if the particular inverted index corresponds to events within a time range of 59 minutes ago to 62 minutes ago, the indexer can determine that some events identified using the particular inverted index may not satisfy the time range filter criterion.

Using the inverted indexes, the indexer can identify event references (and therefore events) that satisfy the filter criteria. For example, if the token "error" is a filter criterion, the indexer can track all event references within the token entry "error." Similarly, the indexer can identify other event references located in other token entries or field-value pair entries that match the filter criteria. The system can identify event references located in all of the entries identified by the filter criteria. For example, if the filter criteria include the token "error" and the field-value pair sourcetype::web_ui, the indexer can track the event references found in both the token entry "error" and the field-value pair entry sourcetype::web_ui. As mentioned previously, in some cases, such as when multiple values are identified for a particular filter criterion (e.g., multiple sources for a source filter criterion), the system can identify event references located in at least one of the entries corresponding to the multiple values and in all other entries identified by the filter criteria. The indexer can determine that the events associated with the identified event references satisfy the filter criteria.

In some cases, the indexer can further consult a timestamp associated with the event reference to determine whether an event satisfies the filter criteria. For example, if an inverted index corresponds to a time range that is partially outside of a time range filter criterion, then the indexer can consult a timestamp associated with the event reference to determine whether the corresponding event satisfies the time range criterion. In some embodiments, to identify events that satisfy a time range, the indexer can review an array, such as the event reference array515, that identifies the time associated with the events.
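A minimal sketch of this combination logic, continuing the hypothetical structures above: alternatives within one criterion are unioned, criteria are intersected, and the event reference array resolves borderline timestamps:

```python
def matching_refs(ii, criteria):
    """criteria is a list of filter criteria; each criterion is a list of
    alternative entries, e.g. [("token", "error")] or
    [("field", ("source", "s1")), ("field", ("source", "s2"))].
    Alternatives within a criterion are OR'd; criteria are AND'd."""
    per_criterion = []
    for alternatives in criteria:
        refs = set()
        for kind, key in alternatives:
            table = ii.tokens if kind == "token" else ii.field_values
            refs |= table.get(key, set())
        per_criterion.append(refs)
    return set.intersection(*per_criterion) if per_criterion else set()

def within_time(refs, event_reference_array, lo, hi):
    """Resolve a partially overlapping bucket by consulting timestamps."""
    return {r for r in refs if lo <= event_reference_array[r].timestamp <= hi}
```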
Furthermore, as mentioned above, using the known location of the directory in which the relevant inverted indexes are located (or another index identifier), the indexer can determine that any events identified using the relevant inverted indexes satisfy the index filter criterion.

In some cases, based on the filter criteria, the indexer reviews an extraction rule. In certain embodiments, if the filter criteria includes a field name that does not correspond to a field-value pair entry in an inverted index, the indexer can review an extraction rule, which may be located in a configuration file, to identify a field that corresponds to a field-value pair entry in the inverted index. For example, if the filter criteria includes a field name "sessionID" and the indexer determines that at least one relevant inverted index does not include a field-value pair entry corresponding to the field name sessionID, the indexer can review an extraction rule that identifies how the sessionID field is to be extracted from a particular host, source, or sourcetype (implicitly identifying the particular host, source, or sourcetype that includes a sessionID field).

The indexer can replace the field name "sessionID" in the filter criteria with the identified host, source, or sourcetype. In some cases, the field name "sessionID" may be associated with multiple hosts, sources, or sourcetypes, in which case all identified hosts, sources, and sourcetypes can be added as filter criteria. In some cases, the identified host, source, or sourcetype can replace or be appended to a filter criterion, or be excluded. For example, if the filter criteria includes a criterion for source S1 and the "sessionID" field is found in source S2, the source S2 can replace S1 in the filter criteria, be appended such that the filter criteria includes source S1 and source S2, or be excluded based on the presence of the filter criterion source S1. If the identified host, source, or sourcetype is included in the filter criteria, the indexer can then identify a field-value pair entry in the inverted index that includes a field value corresponding to the identity of the particular host, source, or sourcetype identified using the extraction rule.

Once the events that satisfy the filter criteria are identified, the system, such as the indexer206, can categorize the results based on the categorization criteria. The categorization criteria can include categories for grouping the results, such as any combination of partition, source, sourcetype, or host, or other categories or fields as desired. The indexer can use the categorization criteria to identify categorization criteria-value pairs or categorization criteria values by which to categorize or group the results. The categorization criteria-value pairs can correspond to one or more field-value pair entries stored in a relevant inverted index, one or more index-value pairs based on a directory in which the inverted index is located or an entry in the inverted index (or other means by which an inverted index can be associated with a partition), or another criteria-value pair that identifies a general category and a particular value for that category. The categorization criteria values can correspond to the value portion of the categorization criteria-value pair.

As mentioned, in some cases, the categorization criteria-value pairs can correspond to one or more field-value pair entries stored in the relevant inverted indexes.
For example, the categorization criteria-value pairs can correspond to field-value pair entries of host, source, and sourcetype (or other field-value pair entries as desired). For instance, if there are ten different hosts, four different sources, and five different sourcetypes for an inverted index, then the inverted index can include ten host field-value pair entries, four source field-value pair entries, and five sourcetype field-value pair entries. The indexer can use the nineteen distinct field-value pair entries as categorization criteria-value pairs to group the results.

Specifically, the indexer can identify the location of the event references associated with the events that satisfy the filter criteria within the field-value pairs, and group the event references based on their location. As such, the indexer can identify the particular field value associated with the event corresponding to the event reference. For example, if the categorization criteria include host and sourcetype, the host field-value pair entries and sourcetype field-value pair entries can be used as categorization criteria-value pairs to identify the specific host and sourcetype associated with the events that satisfy the filter criteria.

In addition, as mentioned, categorization criteria-value pairs can correspond to data other than the field-value pair entries in the relevant inverted indexes. For example, if partition or index is used as a categorization criterion, the inverted indexes may not include partition field-value pair entries. Rather, the indexer can identify the categorization criteria-value pair associated with the partition based on the directory in which an inverted index is located, information in the inverted index, or other information that associates the inverted index with the partition, etc. As such, a variety of methods can be used to identify the categorization criteria-value pairs from the categorization criteria.

Accordingly, based on the categorization criteria (and categorization criteria-value pairs), the indexer can generate groupings based on the events that satisfy the filter criteria. As a non-limiting example, if the categorization criteria includes a partition and sourcetype, then the groupings can correspond to events that are associated with each unique combination of partition and sourcetype. For instance, if there are three different partitions and two different sourcetypes associated with the identified events, then six different groups can be formed, each with a unique partition value-sourcetype value combination. Similarly, if the categorization criteria includes partition, sourcetype, and host and there are two different partitions, three sourcetypes, and five hosts associated with the identified events, then the indexer can generate up to thirty groups for the results that satisfy the filter criteria. Each group can be associated with a unique combination of categorization criteria-value pairs (e.g., unique combinations of partition value, sourcetype value, and host value).

In addition, the indexer can count the number of events associated with each group based on the number of events that meet the unique combination of categorization criteria for a particular group (or match the categorization criteria-value pairs for the particular group). With continued reference to the example above, the indexer can count the number of events that meet the unique combination of partition, sourcetype, and host for a particular group.
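As a minimal sketch, continuing the hypothetical InvertedIndex structure above, the grouping and counting step might look as follows; partition is omitted because, as noted, it is typically derived from the directory rather than from a field-value pair entry:

```python
from collections import Counter

def categorize(ii, refs, criteria=("host", "source", "sourcetype")):
    """Group filtered event references by their categorization
    criteria-value pairs and count the events in each group."""
    def value_of(ref, field_name):
        # An event reference appears in at most one field-value pair
        # entry per field, so the first hit identifies the value.
        for (name, value), entry_refs in ii.field_values.items():
            if name == field_name and ref in entry_refs:
                return value
        return None
    return Counter(tuple(value_of(r, c) for c in criteria) for r in refs)
```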
Each indexer communicates the groupings to the search head. The search head can aggregate the groupings from the indexers and provide the groupings for display. In some cases, the groups are displayed based on at least one of the host, source, sourcetype, or partition associated with the groupings. In some embodiments, the search head can further display the groups based on display criteria, such as a display order or a sort order, as described in greater detail above.

As a non-limiting example and with reference toFIG.5B, consider a request received by an indexer206that includes the following filter criteria: keyword=error, partition=_main, time range=3/1/17 16:22.00.000-16:28.00.000, sourcetype=sourcetypeC, host=hostB, and the following categorization criteria: source.

Based on the above criteria, the indexer206identifies the _main directory503and can ignore the _test directory505and any other partition-specific directories. The indexer determines that inverted index507B is a relevant inverted index based on its location within the _main directory503and the time range associated with it. For the sake of simplicity in this example, the indexer206determines that no other inverted indexes in the _main directory503, such as inverted index507A, satisfy the time range criterion.

Having identified the relevant inverted index507B, the indexer reviews the token entries511and the field-value pair entries513to identify event references, or events, that satisfy all of the filter criteria. With respect to the token entries511, the indexer can review the error token entry and identify event references 3, 5, 6, 8, 11, 12, indicating that the term "error" is found in the corresponding events. Similarly, the indexer can identify event references 4, 5, 6, 8, 9, 10, 11 in the field-value pair entry sourcetype::sourcetypeC and event references 2, 5, 6, 8, 10, 11 in the field-value pair entry host::hostB. As the filter criteria did not include a source or an IP_address field-value pair, the indexer can ignore those field-value pair entries.

In addition to identifying event references found in at least one token entry or field-value pair entry (e.g., event references 3, 4, 5, 6, 8, 9, 10, 11, 12), the indexer can identify events (and corresponding event references) that satisfy the time range criterion using the event reference array515(e.g., event references 2, 3, 4, 5, 6, 7, 8, 9, 10). Using the information obtained from the inverted index507B (including the event reference array515), the indexer206can identify the event references that satisfy all of the filter criteria (e.g., event references 5, 6, 8).

Having identified the events (and event references) that satisfy all of the filter criteria, the indexer206can group the event references using the received categorization criteria (source). In doing so, the indexer can determine that event references 5 and 6 are located in the field-value pair entry source::sourceD (or have matching categorization criteria-value pairs) and event reference 8 is located in the field-value pair entry source::sourceC. Accordingly, the indexer can generate a sourceC group having a count of one corresponding to reference 8 and a sourceD group having a count of two corresponding to references 5 and 6. This information can be communicated to the search head. In turn, the search head can aggregate the results from the various indexers and display the groupings.
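The arithmetic of this walk-through can be checked with plain set operations; the sets below simply restate the entries ofFIG.5Bused in the example:

```python
refs_error       = {3, 5, 6, 8, 11, 12}          # token entry "error"
refs_sourcetypeC = {4, 5, 6, 8, 9, 10, 11}       # sourcetype::sourcetypeC
refs_hostB       = {2, 5, 6, 8, 10, 11}          # host::hostB
refs_in_time     = {2, 3, 4, 5, 6, 7, 8, 9, 10}  # from the event reference array

assert refs_error & refs_sourcetypeC & refs_hostB & refs_in_time == {5, 6, 8}
# Grouping by source then yields sourceD: {5, 6} (count two) and sourceC: {8} (count one).
```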
As mentioned above, in some embodiments, the groupings can be displayed based at least in part on the categorization criteria, including at least one of host, source, sourcetype, or partition. It will be understood that a change to any of the filter criteria or categorization criteria can result in different groupings.

As one non-limiting example, a request received by an indexer206that includes the following filter criteria: partition=_main, time range=3/1/17 16:21:20.000-16:28:17.000, and the following categorization criteria: host, source, sourcetype, would result in the indexer identifying event references 1-12 as satisfying the filter criteria. The indexer would then generate up to 24 groupings corresponding to the 24 different combinations of the categorization criteria-value pairs, including host (hostA, hostB), source (sourceA, sourceB, sourceC, sourceD), and sourcetype (sourcetypeA, sourcetypeB, sourcetypeC). However, as there are only twelve event identifiers in the illustrated embodiment and some fall into the same grouping, the indexer generates eight groups and counts as follows:

Group 1 (hostA, sourceA, sourcetypeA): 1 (event reference 7)
Group 2 (hostA, sourceA, sourcetypeB): 2 (event references 1, 12)
Group 3 (hostA, sourceA, sourcetypeC): 1 (event reference 4)
Group 4 (hostA, sourceB, sourcetypeA): 1 (event reference 3)
Group 5 (hostA, sourceB, sourcetypeC): 1 (event reference 9)
Group 6 (hostB, sourceC, sourcetypeA): 1 (event reference 2)
Group 7 (hostB, sourceC, sourcetypeC): 2 (event references 8, 11)
Group 8 (hostB, sourceD, sourcetypeC): 3 (event references 5, 6, 10)

As noted, each group has a unique combination of categorization criteria-value pairs or categorization criteria values. The indexer communicates the groups to the search head for aggregation with results received from other indexers. In communicating the groups to the search head, the indexer can include the categorization criteria-value pairs for each group and the count. In some embodiments, the indexer can include more or less information. For example, the indexer can include the event references associated with each group and other identifying information, such as the indexer or inverted index used to identify the groups.

As another non-limiting example, a request received by an indexer206that includes the following filter criteria: partition=_main, time range=3/1/17 16:21:20.000-16:28:17.000, source=sourceA, sourceD, and keyword=itemID, and the following categorization criteria: host, source, sourcetype, would result in the indexer identifying event references 4, 7, and 10 as satisfying the filter criteria, and generating the following groups:

Group 1 (hostA, sourceA, sourcetypeC): 1 (event reference 4)
Group 2 (hostA, sourceA, sourcetypeA): 1 (event reference 7)
Group 3 (hostB, sourceD, sourcetypeC): 1 (event reference 10)

The indexer communicates the groups to the search head for aggregation with results received from other indexers. As will be understood, there are myriad ways of filtering and categorizing the events and event references. For example, the indexer can review multiple inverted indexes associated with a partition or review the inverted indexes of multiple partitions, and categorize the data using any one or any combination of partition, host, source, sourcetype, or other category, as desired. Further, if a user interacts with a particular group, the indexer can provide additional information regarding the group.
For example, the indexer can perform a targeted search or sampling of the events that satisfy the filter criteria and the categorization criteria for the selected group, also referred to as the filter criteria corresponding to the group or the filter criteria associated with the group. In some cases, to provide the additional information, the indexer relies on the inverted index. For example, the indexer can identify the event references associated with the events that satisfy the filter criteria and the categorization criteria for the selected group and then use the event reference array515to access some or all of the identified events. In some cases, the categorization criteria values or categorization criteria-value pairs associated with the group become part of the filter criteria for the review.

With reference toFIG.5Bfor instance, suppose a group is displayed with a count of six corresponding to event references 4, 5, 6, 8, 10, 11 (i.e., event references 4, 5, 6, 8, 10, 11 satisfy the filter criteria and are associated with matching categorization criteria values or categorization criteria-value pairs) and a user interacts with the group (e.g., selecting the group, clicking on the group, etc.). In response, the search head communicates with the indexer to provide additional information regarding the group.

In some embodiments, the indexer identifies the event references associated with the group using the filter criteria and the categorization criteria for the group (e.g., categorization criteria values or categorization criteria-value pairs unique to the group). Together, the filter criteria and the categorization criteria for the group can be referred to as the filter criteria associated with the group. Using the filter criteria associated with the group, the indexer identifies event references 4, 5, 6, 8, 10, 11.

Based on sampling criteria, discussed in greater detail above, the indexer can determine that it will analyze a sample of the events associated with the event references 4, 5, 6, 8, 10, 11. For example, the sample can include analyzing the event data associated with the event references 5, 8, 10. In some embodiments, the indexer can use the event reference array515to access the event data associated with the event references 5, 8, 10. Once accessed, the indexer can compile the relevant information and provide it to the search head for aggregation with results from other indexers. By identifying events and sampling event data using the inverted indexes, the indexer can reduce the amount of actual data that is analyzed and the number of events that are accessed in order to generate the summary of the group, and can provide a response in less time.

3.8. Query Processing

FIG.6Ais a flow diagram of an example method that illustrates how a search head and indexers perform a search query, in accordance with example embodiments. At block602, a search head receives a search query from a client. At block604, the search head analyzes the search query to determine what portion(s) of the query can be delegated to indexers and what portions of the query can be executed locally by the search head. At block606, the search head distributes the determined portions of the query to the appropriate indexers. In some embodiments, a search head cluster may take the place of an independent search head, where each search head in the search head cluster coordinates with peer search heads in the search head cluster to schedule jobs, replicate search results, update configurations, fulfill search requests, etc.
In some embodiments, the search head (or each search head) communicates with a master node (also known as a cluster master, not shown inFIG.2) that provides the search head with a list of indexers to which the search head can distribute the determined portions of the query. The master node maintains a list of active indexers and can also designate which indexers may have responsibility for responding to queries over certain sets of events. A search head may communicate with the master node before the search head distributes queries to indexers to discover the addresses of active indexers.

At block608, the indexers to which the query was distributed search their associated data stores for events that are responsive to the query. To determine which events are responsive to the query, the indexer searches for events that match the criteria specified in the query. These criteria can include matching keywords or specific values for certain fields. The searching operations at block608may use the late-binding schema to extract values for specified fields from events at the time the query is processed. In some embodiments, one or more rules for extracting field values may be specified as part of a source type definition in a configuration file. The indexers may then either send the relevant events back to the search head, or use the events to determine a partial result, and send the partial result back to the search head.

At block610, the search head combines the partial results and/or events received from the indexers to produce a final result for the query. In some examples, the results of the query are indicative of performance or security of the IT environment and may help improve the performance of components in the IT environment. This final result may comprise different types of data depending on what the query requested. For example, the results can include a listing of matching events returned by the query, or some type of visualization of the data from the returned events. In another example, the final result can include one or more calculated values derived from the matching events.

The results generated by the system108can be returned to a client using different techniques. For example, one technique streams results or relevant events back to a client in real-time as they are identified. Another technique waits to report the results to the client until a complete set of results (which may include a set of relevant events or a result based on relevant events) is ready to return to the client. Yet another technique streams interim results or relevant events back to the client in real-time until a complete set of results is ready, and then returns the complete set of results to the client. In another technique, certain results are stored as "search jobs," and the client may retrieve the results by referring to the search jobs.

The search head can also perform various operations to make the search more efficient. For example, before the search head begins execution of a query, the search head can determine a time range for the query and a set of common keywords that all matching events include. The search head may then use these parameters to query the indexers to obtain a superset of the eventual results. Then, during a filtering stage, the search head can perform field-extraction operations on the superset to produce a reduced set of search results. This speeds up queries, which may be particularly helpful for queries that are performed on a periodic basis.
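A minimal sketch of the reduce step at block610; the per-indexer partial results are assumed here to be simple dictionaries of counts, which is an invented format for illustration only:

```python
def combine_partials(partials):
    """Merge per-indexer partial results (e.g., event counts per keyword)
    into a final result, as a search head might at block 610."""
    final = {}
    for partial in partials:
        for key, count in partial.items():
            final[key] = final.get(key, 0) + count
    return final

# Two indexers each return counts over their own subset of events (map step).
assert combine_partials([{"error": 2}, {"error": 3, "warning": 1}]) == {"error": 5, "warning": 1}
```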
As will be described in greater detail below with reference to, inter alia,FIGS.18-49, some functionality of the search head or indexers can be handled by different components of the system or removed altogether. For example, in some cases, a query coordinator analyzes the query, identifies dataset sources to be accessed, generates subqueries for execution by dataset sources, such as indexers, collects partial results to produce a final result, and either returns the final result to the search head for delivery to a client device or delivers the final result to the client device without the search head. In some cases, results from dataset sources, such as the indexers, are communicated to nodes, which further process the data and communicate the results of the processing to the query coordinator, etc. In some embodiments, the search head spawns a search process, which communicates the query to a search process master. The search process master can communicate the query to the query coordinator for processing and execution.

In addition, in some embodiments, the indexers are not involved in search operations or only search some data, such as data in hot buckets, etc. For example, nodes can perform the search functionality described herein with respect to indexers. For example, nodes can use the late-binding schema to extract values for specified fields from events at the time the query is processed and/or use one or more rules specified as part of a source type definition in a configuration file for extracting field values, etc. Furthermore, in some embodiments, nodes can perform search operations on data in common storage or found in other dataset sources, such as external data stores, query acceleration data stores, ingested data buffers, etc.

3.9. Pipelined Search Language

Various embodiments of the present disclosure can be implemented using, or in conjunction with, a pipelined command language. A pipelined command language is a language in which a set of inputs or data is operated on by a first command in a sequence of commands, and then by subsequent commands in the order they are arranged in the sequence. Such commands can include any type of functionality for operating on data, such as retrieving, searching, filtering, aggregating, processing, transmitting, and the like. As described herein, a query can thus be formulated in a pipelined command language and include any number of ordered or unordered commands for operating on data.

Splunk Processing Language (SPL) is an example of a pipelined command language in which a set of inputs or data is operated on by any number of commands in a particular sequence. A sequence of commands, or command sequence, can be formulated such that the order in which the commands are arranged defines the order in which the commands are applied to a set of data or to the results of an earlier executed command. For example, a first command in a command sequence can operate to search or filter for specific data in a particular set of data. The results of the first command can then be passed to another command listed later in the command sequence for further processing.

In various embodiments, a query can be formulated as a command sequence defined in a command line of a search UI. In some embodiments, a query can be formulated as a sequence of SPL commands. Some or all of the SPL commands in the sequence of SPL commands can be separated from one another by a pipe symbol "|".
In such embodiments, a set of data, such as a set of events, can be operated on by a first SPL command in the sequence, and then a subsequent SPL command following a pipe symbol "|" after the first SPL command operates on the results produced by the first SPL command or another set of data, and so on for any additional SPL commands in the sequence. As such, a query formulated using SPL comprises a series of consecutive commands that are delimited by pipe "|" characters. The pipe character indicates to the system that the output or result of one command (to the left of the pipe) should be used as the input for one of the subsequent commands (to the right of the pipe). This enables formulation of queries defined by a pipeline of sequenced commands that refines or enhances the data at each step along the pipeline until the desired results are attained. Accordingly, various embodiments described herein can be implemented with Splunk Processing Language (SPL) used in conjunction with the SPLUNK® ENTERPRISE system.

While a query can be formulated in many ways, a query can start with a search command and one or more corresponding search terms at the beginning of the pipeline. Such search terms can include any combination of keywords, phrases, times, dates, Boolean expressions, fieldname-field value pairs, etc. that specify which results should be obtained from an index. The results can then be passed as inputs into subsequent commands in a sequence of commands by using, for example, a pipe character. The subsequent commands in a sequence can include directives for additional processing of the results once they have been obtained from one or more indexes. For example, commands may be used to filter unwanted information out of the results, extract more information, evaluate field values, calculate statistics, reorder the results, create an alert, create a summary of the results, or perform some type of aggregation function. In some embodiments, the summary can include a graph, chart, metric, or other visualization of the data. An aggregation function can include analysis or calculations to return an aggregate value, such as an average value, a sum, a maximum value, a root mean square, statistical values, and the like.

Due to its flexible nature, use of a pipelined command language in various embodiments is advantageous because it can perform "filtering" as well as "processing" functions. In other words, a single query can include a search command and search term expressions, as well as data-analysis expressions. For example, a command at the beginning of a query can perform a "filtering" step by retrieving a set of data based on a condition (e.g., records associated with server response times of less than 1 microsecond). The results of the filtering step can then be passed to a subsequent command in the pipeline that performs a "processing" step (e.g., calculation of an aggregate value related to the filtered events, such as the average response time of servers with response times of less than 1 microsecond). Furthermore, the search command can allow events to be filtered by keyword as well as by field value criteria. For example, a search command can filter out all events containing the word "warning" or filter out all events where a field value associated with a field "clientip" is "10.0.1.2."

The results obtained or generated in response to a command in a query can be considered a set of results data. The set of results data can be passed from one command to another in any data format.
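The filtering-then-processing shape of such a pipeline can be sketched in a few lines; the functions below only mimic the semantics of a search command followed by an aggregating command and are not SPL itself:

```python
from collections import Counter

def search(events, term):
    """Filtering step: keep events whose raw text contains the term."""
    return [e for e in events if term in e["raw"]]

def top(events, field_name, n=10):
    """Processing step: summarize events into the most common field values."""
    return Counter(e[field_name] for e in events).most_common(n)

# Roughly:  search "error" | top user
events = [
    {"raw": "error by bob",   "user": "bob"},
    {"raw": "error by alice", "user": "alice"},
    {"raw": "ok by bob",      "user": "bob"},
]
assert top(search(events, "error"), "user") == [("bob", 1), ("alice", 1)]
```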
In one embodiment, the set of result data can be in the form of a dynamically created table. Each command in a particular query can redefine the shape of the table. In some implementations, an event retrieved from an index in response to a query can be considered a row with a column for each field value. Columns contain basic information about the data and also may contain data that has been dynamically extracted at search time.

FIG.6Bprovides a visual representation of the manner in which a pipelined command language or query operates in accordance with the disclosed embodiments. The query630can be input by the user into a search interface. The query comprises a search, the results of which are piped to two commands (namely, command1and command2) that follow the search step. Disk622represents the event data in the raw record data store. When a user query is processed, a search step will precede other queries in the pipeline in order to generate a set of events at block640. For example, the query can comprise search terms "sourcetype=syslog ERROR" at the front of the pipeline as shown inFIG.6B. Intermediate results table624shows fewer rows because it represents the subset of events retrieved from the index that matched the search terms "sourcetype=syslog ERROR" from search command630. By way of further example, instead of a search step, the set of events at the head of the pipeline may be generated by a call to a pre-existing inverted index (as will be explained later).

At block642, the set of events generated in the first part of the query may be piped to a query that searches the set of events for field-value pairs or for keywords. For example, the second intermediate results table626shows fewer columns, representing the result of the top command, "top user," which summarizes the events into a list of the top 10 users and displays the user, count, and percentage. Finally, at block644, the results of the prior stage can be pipelined to another stage where further filtering or processing of the data can be performed, e.g., preparing the data for display purposes, filtering the data based on a condition, performing a mathematical calculation with the data, etc. As shown inFIG.6B, the "fields - percent" part of command630removes the column that shows the percentage, thereby leaving a final results table628without a percentage column. In different embodiments, other query languages, such as the Structured Query Language ("SQL"), can be used to create a query. In some embodiments, each stage can correspond to a search phase or layer in a DAG. The processing performed in each stage can be handled by one or more partitions allocated to each stage.

3.10. Field Extraction

The search head210allows users to search and visualize events generated from machine data received from homogenous data sources. The search head210also allows users to search and visualize events generated from machine data received from heterogeneous data sources. The search head210includes various mechanisms, which may additionally reside in an indexer206, for processing a query. A query language may be used to create a query, such as any suitable pipelined query language. For example, Splunk Processing Language (SPL) can be utilized to make a query. SPL is a pipelined search language in which a set of inputs is operated on by a first command in a command line, and then a subsequent command following the pipe symbol "|" operates on the results produced by the first command, and so on for additional commands.
Other query languages, such as the Structured Query Language ("SQL"), can be used to create a query.

In response to receiving the search query, the search head210uses extraction rules to extract values for fields in the events being searched. The search head210obtains extraction rules that specify how to extract a value for fields from an event. Extraction rules can comprise regex rules that specify how to extract values for the fields corresponding to the extraction rules. In addition to specifying how to extract field values, the extraction rules may also include instructions for deriving a field value by performing a function on a character string or value retrieved by the extraction rule. For example, an extraction rule may truncate a character string or convert the character string into a different data format. In some cases, the query itself can specify one or more extraction rules.

The search head210can apply the extraction rules to events that it receives from indexers206. Indexers206may apply the extraction rules to events in an associated data store208. Extraction rules can be applied to all the events in a data store or to a subset of the events that have been filtered based on some criteria (e.g., event time stamp values, etc.). Extraction rules can be used to extract one or more values for a field from events by parsing the portions of machine data in the events and examining the data for one or more patterns of characters, numbers, delimiters, etc., that indicate where the field begins and, optionally, ends.

As mentioned above, and as will be described in greater detail below with reference to, inter alia,FIGS.18-49, some functionality of the search head or indexers can be handled by different components of the system or removed altogether. For example, in some cases, a query coordinator or nodes use extraction rules to extract values for fields in the events being searched. The query coordinator or nodes obtain extraction rules that specify how to extract a value for fields from an event, etc., and apply the extraction rules to events received from indexers, common storage, ingested data buffers, query acceleration data stores, or other dataset sources.

FIG.7Ais a diagram of an example scenario where a common customer identifier is found among log data received from three disparate data sources, in accordance with example embodiments. In this example, a user submits an order for merchandise using a vendor's shopping application program701running on the user's system. In this example, the order was not delivered to the vendor's server due to a resource exception at the destination server that is detected by the middleware code702. The user then sends a message to the customer support server703to complain about the order failing to complete. The three systems701,702, and703are disparate systems that do not have a common logging format. The order application701sends log data704to the data intake and query system in one format, the middleware code702sends error log data705in a second format, and the support server703sends log data706in a third format.

Using the log data received at one or more indexers206from the three systems, the vendor can obtain unique insight into user activity, user experience, and system behavior.
The search head210allows the vendor's administrator to search the log data from the three systems that one or more indexers206are responsible for searching, thereby obtaining correlated information, such as the order number and corresponding customer ID number of the person placing the order. The system also allows the administrator to see a visualization of related events via a user interface. The administrator can query the search head210for customer ID field value matches across the log data from the three systems that are stored at the one or more indexers206. The customer ID field value exists in the data gathered from the three systems, but the customer ID field value may be located in different areas of the data given differences in the architecture of the systems. There is a semantic relationship between the customer ID field values generated by the three systems. The search head210requests events from the one or more indexers206to gather relevant events from the three systems. The search head210then applies extraction rules to the events in order to extract field values that it can correlate. The search head may apply a different extraction rule to each set of events from each system when the event format differs among systems. In this example, the user interface can display to the administrator the events corresponding to the common customer ID field values707,708, and709, thereby providing the administrator with insight into a customer's experience.

Note that query results can be returned to a client, a search head, or any other system component for further processing. In general, query results may include a set of one or more events, a set of one or more values obtained from the events, a subset of the values, statistics calculated based on the values, a report containing the values, a visualization (e.g., a graph or chart) generated from the values, and the like.

The search system enables users to run queries against the stored data to retrieve events that meet criteria specified in a query, such as containing certain keywords or having specific values in defined fields.FIG.7Billustrates the manner in which keyword searches and field searches are processed in accordance with disclosed embodiments.

If a user inputs a search query into search bar1401that includes only keywords (also known as "tokens"), e.g., the keyword "error" or "warning," the query search engine of the data intake and query system searches for those keywords directly in the event data722stored in the raw record data store. Note that whileFIG.7Bonly illustrates four events, the raw record data store (corresponding to data store208inFIG.2) may contain records for millions of events.

As disclosed above, an indexer can optionally generate a keyword index to facilitate fast keyword searching for event data. The indexer includes the identified keywords in an index, which associates each stored keyword with reference pointers to events containing that keyword (or to locations within events where that keyword is located, other location identifiers, etc.). When an indexer subsequently receives a keyword-based query, the indexer can access the keyword index to quickly identify events containing the keyword. For example, if the keyword "HTTP" was indexed by the indexer at index time, and the user searches for the keyword "HTTP", events713to715will be identified based on the results returned from the keyword index.
As noted above, the index contains reference pointers to the events containing the keyword, which allows for efficient retrieval of the relevant events from the raw record data store. If a user searches for a keyword that has not been indexed by the indexer, the data intake and query system would nevertheless be able to retrieve the events by searching the event data for the keyword in the raw record data store directly as shown inFIG.7B. For example, if a user searches for the keyword "frank", and the name "frank" has not been indexed at index time, the data intake and query system will search the event data directly and return the first event713. Note that whether the keyword has been indexed at index time or not, in both cases the raw data of the events712is accessed from the raw data record store to service the keyword search. In the case where the keyword has been indexed, the index will contain a reference pointer that will allow for a more efficient retrieval of the event data from the data store. If the keyword has not been indexed, the search engine will need to search through all the records in the data store to service the search.

In most cases, however, in addition to keywords, a user's search will also include fields. The term "field" refers to a location in the event data containing one or more values for a specific data item. Often, a field is a value with a fixed, delimited position on a line, or a name and value pair, where there is a single value to each field name. A field can also be multivalued, that is, it can appear more than once in an event and have a different value for each appearance, e.g., email address fields. Fields are searchable by the field name or field name-value pairs. Some examples of fields are "clientip" for IP addresses accessing a web server, or the "From" and "To" fields in email addresses.

By way of further example, consider the search "status=404". This search query finds events with "status" fields that have a value of "404." When the search is run, the search engine does not look for events with any other "status" value. It also does not look for events containing other fields that share "404" as a value. As a result, the search returns a set of results that are more focused than if "404" had been used in the search string as part of a keyword search. Note also that fields can appear in events as "key=value" pairs such as "user_name=Bob." But in most cases, field values appear in fixed, delimited positions without identifying keys. For example, the data store may contain events where the "user_name" value always appears by itself after the timestamp, as illustrated by the following string: "Nov 15 09:33:22 johnmedlock."

The data intake and query system advantageously allows for search-time field extraction. In other words, fields can be extracted from the event data at search time using late-binding schema, as opposed to at data ingestion time, which was a major limitation of the prior art systems. In response to receiving the search query, the search head210uses extraction rules to extract values for the fields in the event data being searched. The search head210obtains extraction rules that specify how to extract a value for certain fields from an event. Extraction rules can comprise regex rules that specify how to extract values for the relevant fields.
In addition to specifying how to extract field values, the extraction rules may also include instructions for deriving a field value by performing a function on a character string or value retrieved by the extraction rule. For example, a transformation rule may truncate a character string or convert the character string into a different data format. In some cases, the query itself can specify one or more extraction rules.

FIG.7Billustrates the manner in which configuration files may be used to configure custom fields at search time in accordance with the disclosed embodiments. In response to receiving a search query, the data intake and query system determines if the query references a "field." For example, a query may request a list of events where the "clientip" field equals "127.0.0.1." If the query itself does not specify an extraction rule and if the field is not a metadata field, e.g., time, host, source, source type, etc., then in order to determine an extraction rule, the search engine may, in one or more embodiments, need to locate configuration file712during the execution of the search as shown inFIG.7B.

Configuration file712may contain extraction rules for all the various fields that are not metadata fields, e.g., the "clientip" field. The extraction rules may be inserted into the configuration file in a variety of ways. In some embodiments, the extraction rules can comprise regular expression rules that are manually entered by the user. Regular expressions match patterns of characters in text and are used for extracting custom fields in text.

In one or more embodiments, as noted above, a field extractor may be configured to automatically generate extraction rules for certain field values in the events when the events are being created, indexed, or stored, or possibly at a later time. In one embodiment, a user may be able to dynamically create custom fields by using a graphical user interface to highlight the portions of a sample event that should be extracted as fields. The system would then generate a regular expression that extracts those fields from similar events and store the regular expression as an extraction rule for the associated field in the configuration file712. In some embodiments, the indexers may automatically discover certain custom fields at index time, and the regular expressions for those fields will be automatically generated at index time and stored as part of the extraction rules in configuration file712. For example, fields that appear in the event data as "key=value" pairs may be automatically extracted as part of an automatic field discovery process. Note that there may be several other ways of adding field definitions to configuration files in addition to the methods discussed herein.

The search head210can apply the extraction rules derived from configuration file712to event data that it receives from indexers206. Indexers206may apply the extraction rules from the configuration file to events in an associated data store208. Extraction rules can be applied to all the events in a data store, or to a subset of the events that have been filtered based on some criteria (e.g., event time stamp values, etc.). Extraction rules can be used to extract one or more values for a field from events by parsing the event data and examining the event data for one or more patterns of characters, numbers, delimiters, etc., that indicate where the field begins and, optionally, ends.
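A search-time extraction rule of the kind stored in configuration file712can be sketched as follows. The rule table, the pattern, and the keying by source type are invented for illustration (the keying anticipates the event-type scoping described next), and this is not the system's actual configuration format:

```python
import re

# Hypothetical configuration: extraction rules keyed by (source type, field),
# since an extraction rule generally applies to only one type of event.
EXTRACTION_RULES = {
    ("access_combined", "clientip"): re.compile(
        r"(?P<clientip>\d{1,3}(?:\.\d{1,3}){3})"  # first IPv4-looking token
    ),
}

def extract_field(raw_event, sourcetype, field_name):
    """Apply the extraction rule for (sourcetype, field_name), if any,
    returning the extracted value or None."""
    rule = EXTRACTION_RULES.get((sourcetype, field_name))
    match = rule.search(raw_event) if rule else None
    return match.group(field_name) if match else None
```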
In one or more embodiments, the extraction rule in configuration file712will also need to define the type or set of events that the rule applies to. Because the raw record data store will contain events from multiple heterogeneous sources, multiple events may contain the same fields in different locations because of discrepancies in the format of the data generated by the various sources. Furthermore, certain events may not contain a particular field at all. For example, event719also contains a “clientip” field; however, the “clientip” field is in a different format from events713-715. To address the discrepancies in the format and content of the different types of events, the configuration file will also need to specify the set of events that an extraction rule applies to, e.g., extraction rule716specifies a rule for filtering by the type of event and contains a regular expression for parsing out the field value. Accordingly, each extraction rule will pertain to only a particular type of event. If a particular field, e.g., “clientip” occurs in multiple events, each of those types of events would need its own corresponding extraction rule in the configuration file712and each of the extraction rules would comprise a different regular expression to parse out the associated field value. The most common way to categorize events is by source type because events generated by a particular source can have the same format.

The field extraction rules stored in configuration file712perform search-time field extractions. For example, for a query that requests a list of events with source type “access_combined” where the “clientip” field equals “127.0.0.1,” the query search engine would first locate the configuration file712to retrieve extraction rule716that would allow it to extract values associated with the “clientip” field from the event data720where the source type is “access_combined.” After the “clientip” field has been extracted from all the events comprising the “clientip” field where the source type is “access_combined,” the query search engine can then execute the field criteria by performing the compare operation to filter out the events where the “clientip” field equals “127.0.0.1.” In the example shown inFIG.7B, events713-715would be returned in response to the user query. In this manner, the search engine can service queries containing field criteria in addition to queries containing keyword criteria (as explained above).

The configuration file can be created during indexing. It may either be manually created by the user or automatically generated with certain predetermined field extraction rules. As discussed above, the events may be distributed across several indexers, wherein each indexer may be responsible for storing and searching a subset of the events contained in a corresponding data store. In a distributed indexer system, each indexer would need to maintain a local copy of the configuration file that is synchronized periodically across the various indexers.

The ability to add schema to the configuration file at search time results in increased efficiency. A user can create new fields at search time and simply add field definitions to the configuration file. As a user learns more about the data in the events, the user can continue to refine the late-binding schema by adding new fields, deleting fields, or modifying the field extraction rules in the configuration file for use the next time the schema is used by the system.
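As a hedged illustration of source-type-scoped rules, the following Python sketch pairs each extraction rule with the type of event it pertains to and then services a field-criteria search; the source type names, event layouts, and regexes are invented for the example. The filtered results could then be piped onward, e.g., to a count, as discussed below.

```python
import re

# Hypothetical events from heterogeneous sources; only "access_combined"
# events carry the client IP at the start of the raw text.
events = [
    {"sourcetype": "access_combined", "_raw": '127.0.0.1 - frank ... 200 2326'},
    {"sourcetype": "access_combined", "_raw": '10.0.0.5 - carlos ... 200 2900'},
    {"sourcetype": "backend_log",     "_raw": 'client=127.0.0.1 status=OK'},
]

# Each extraction rule pertains to one type of event: it pairs a source-type
# filter with the regex that parses the field value out of that layout.
EXTRACTION_RULES = [
    {"sourcetype": "access_combined",
     "regex": re.compile(r"^(?P<clientip>\d{1,3}(?:\.\d{1,3}){3})\s")},
    {"sourcetype": "backend_log",
     "regex": re.compile(r"client=(?P<clientip>\d{1,3}(?:\.\d{1,3}){3})")},
]

def search(events, sourcetype, field, value):
    """Yield events of one source type whose extracted field equals `value`."""
    rule = next(r for r in EXTRACTION_RULES if r["sourcetype"] == sourcetype)
    for ev in events:
        if ev["sourcetype"] != sourcetype:
            continue
        m = rule["regex"].search(ev["_raw"])
        if m and m.group(field) == value:
            yield ev

hits = list(search(events, "access_combined", "clientip", "127.0.0.1"))
print(len(hits))  # 1 -- the filtered results could be piped on to a count, etc.
```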
Because the data intake and query system maintains the underlying raw data and uses late-binding schema for searching the raw data, it enables a user to continue investigating and learning valuable insights about the raw data long after data ingestion time.

The ability to add multiple field definitions to the configuration file at search time also results in increased flexibility. For example, multiple field definitions can be added to the configuration file to capture the same field across events generated by different source types. This allows the data intake and query system to search and correlate data across heterogeneous sources flexibly and efficiently.

Further, by providing the field definitions for the queried fields at search time, the configuration file712allows the record data store712to be field searchable. In other words, the raw record data store712can be searched using keywords as well as fields, wherein the fields are searchable name/value pairings that distinguish one event from another and can be defined in configuration file712using extraction rules. In comparison to a search containing field names, a keyword search does not need the configuration file and can search the event data directly as shown inFIG.7B.

It should also be noted that any events filtered out by performing a search-time field extraction using a configuration file can be further processed by directing the results of the filtering step to a processing step using a pipelined search language. Using the prior example, a user could pipeline the results of the compare step to an aggregate function by asking the query search engine to count the number of events where the “clientip” field equals “127.0.0.1.”

As mentioned above, and as will be described in greater detail below with reference to, inter alia,FIGS.18-49, some functionality of the search head or indexers can be handled by different components of the system or removed altogether. For example, in some cases, the data is stored in a dataset source, which may be an indexer (or data store controlled by an indexer) or may be a different type of dataset source, such as a common storage or external data source. In addition, a query coordinator or node can request events from the indexers or other dataset source, apply extraction rules and correlate, automatically discover certain custom fields, etc., as described above.

3.11. Example Search Screen

FIG.8Ais an interface diagram of an example user interface for a search screen800, in accordance with example embodiments. Search screen800includes a search bar802that accepts user input in the form of a search string. It also includes a time range picker812that enables the user to specify a time range for the search. For historical searches (e.g., searches based on a particular historical time range), the user can select a specific time range, or alternatively a relative time range, such as “today,” “yesterday” or “last week.” For real-time searches (e.g., searches whose results are based on data received in real-time), the user can select the size of a time window to search for real-time events. Search screen800also initially displays a “data summary” dialog as is illustrated inFIG.8Bthat enables the user to select different sources for the events, such as by selecting specific hosts and log files.
After the search is executed, the search screen800inFIG.8Acan display the results through search results tabs804, wherein search results tabs804includes: an “events tab” that displays various information about events returned by the search; a “statistics tab” that displays statistics about the search results; and a “visualization tab” that displays various visualizations of the search results. The events tab illustrated inFIG.8Adisplays a timeline graph805that graphically illustrates the number of events that occurred in one-hour intervals over the selected time range. The events tab also displays an events list808that enables a user to view the machine data in each of the returned events.

The events tab additionally displays a sidebar that is an interactive field picker806. The field picker806may be displayed to a user in response to the search being executed and allows the user to further analyze the search results based on the fields in the events of the search results. The field picker806includes field names that reference fields present in the events in the search results. The field picker may display any Selected Fields820that a user has pre-selected for display (e.g., host, source, sourcetype) and may also display any Interesting Fields822that the system determines may be interesting to the user based on pre-specified criteria (e.g., action, bytes, categoryid, clientip, date_hour, date_mday, date_minute, etc.). The field picker also provides an option to display field names for all the fields present in the events of the search results using the All Fields control824.

Each field name in the field picker806has a value type identifier to the left of the field name, such as value type identifier826. A value type identifier identifies the type of value for the respective field, such as an “a” for fields that include literal values or a “#” for fields that include numerical values. Each field name in the field picker also has a unique value count to the right of the field name, such as unique value count828. The unique value count indicates the number of unique values for the respective field in the events of the search results. Each field name is selectable to view the events in the search results that have the field referenced by that field name. For example, a user can select the “host” field name, and the events shown in the events list808will be updated with events in the search results that have the field that is referenced by the field name “host.”

3.12. Data Models

A data model is a hierarchically structured search-time mapping of semantic knowledge about one or more datasets. It encodes the domain knowledge used to build a variety of specialized searches of those datasets. Those searches, in turn, can be used to generate reports. A data model is composed of one or more “objects” (or “data model objects”) that define or otherwise correspond to a specific set of data. An object is defined by constraints and attributes. An object's constraints are search criteria that define the set of events to be operated on by running a search having that search criteria at the time the data model is selected. An object's attributes are the set of fields to be exposed for operating on that set of events generated by the search criteria. Objects in data models can be arranged hierarchically in parent/child relationships. Each child object represents a subset of the dataset covered by its parent object.
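The following Python sketch models, under simplifying assumptions, a data model object as a bundle of constraints and attributes with parent/child inheritance; the class name, fields, and example constraint strings are invented for the illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataModelObject:
    """One object in a data model: constraints select events, attributes
    are the fields exposed for operating on those events."""
    name: str
    constraints: list                 # search criteria, e.g., ['sourcetype=mail']
    attributes: list                  # exposed fields, e.g., ['sender', 'subject']
    parent: Optional["DataModelObject"] = None

    def effective_constraints(self):
        """A child inherits its parent's constraints and may add its own,
        so the child selects a subset of the parent's dataset."""
        inherited = self.parent.effective_constraints() if self.parent else []
        return inherited + self.constraints

    def effective_attributes(self):
        inherited = self.parent.effective_attributes() if self.parent else []
        return inherited + self.attributes

email = DataModelObject("e-mail activity", ["sourcetype=mail"], ["sender", "recipient"])
sent = DataModelObject("e-mails sent", ["direction=outbound"], [], parent=email)
print(sent.effective_constraints())  # ['sourcetype=mail', 'direction=outbound']
```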
The top-level objects in data models are collectively referred to as “root objects.” Child objects have inheritance. Child objects inherit constraints and attributes from their parent objects and may have additional constraints and attributes of their own. Child objects provide a way of filtering events from parent objects. Because a child object may provide an additional constraint in addition to the constraints it has inherited from its parent object, the dataset it represents may be a subset of the dataset that its parent represents. For example, a first data model object may define a broad set of data pertaining to e-mail activity generally, and another data model object may define specific datasets within the broad dataset, such as a subset of the e-mail data pertaining specifically to e-mails sent. For example, a user can simply select an “e-mail activity” data model object to access a dataset relating to e-mails generally (e.g., sent or received), or select an “e-mails sent” data model object (or data sub-model object) to access a dataset relating to e-mails sent.

Because a data model object is defined by its constraints (e.g., a set of search criteria) and attributes (e.g., a set of fields), a data model object can be used to quickly search data to identify a set of events and to identify a set of fields to be associated with the set of events. For example, an “e-mails sent” data model object may specify a search for events relating to e-mails that have been sent, and specify a set of fields that are associated with the events. Thus, a user can retrieve and use the “e-mails sent” data model object to quickly search source data for events relating to sent e-mails, and may be provided with a listing of the set of fields relevant to the events in a user interface screen.

Examples of data models can include electronic mail, authentication, databases, intrusion detection, malware, application state, alerts, compute inventory, network sessions, network traffic, performance, audits, updates, vulnerabilities, etc. Data models and their objects can be designed by knowledge managers in an organization, and they can enable downstream users to quickly focus on a specific set of data. A user iteratively applies a model development tool (not shown inFIG.8A) to prepare a query that defines a subset of events and assigns an object name to that subset. A child subset is created by further limiting a query that generated a parent subset.

Data definitions in associated schemas can be taken from the common information model (CIM) or can be devised for a particular schema and optionally added to the CIM. Child objects inherit fields from parents and can include fields not present in parents. A model developer can select fewer extraction rules than are available for the sources returned by the query that defines events belonging to a model. Selecting a limited set of extraction rules can be a tool for simplifying and focusing the data model, while allowing a user flexibility to explore the data subset. Development of a data model is further explained in U.S. Pat. Nos. 8,788,525 and 8,788,526, both entitled “DATA MODEL FOR MACHINE DATA FOR SEMANTIC SEARCH”, both issued on 22 Jul. 2014, U.S. Pat. No. 8,983,994, entitled “GENERATION OF A DATA MODEL FOR SEARCHING MACHINE DATA”, issued on 17 Mar. 2015, U.S. Pat. No. 9,128,980, entitled “GENERATION OF A DATA MODEL APPLIED TO QUERIES”, issued on 8 Sep. 2015, and U.S. Pat. No.
9,589,012, entitled “GENERATION OF A DATA MODEL APPLIED TO OBJECT QUERIES”, issued on 7 Mar. 2017, each of which is hereby incorporated by reference in its entirety for all purposes.

A data model can also include reports. One or more report formats can be associated with a particular data model and be made available to run against the data model. A user can use child objects to design reports with object datasets that already have extraneous data pre-filtered out. In some embodiments, the data intake and query system108provides the user with the ability to produce reports (e.g., a table, chart, visualization, etc.) without having to enter SPL, SQL, or other query language terms into a search screen. Data models are used as the basis for the search feature.

Data models may be selected in a report generation interface. The report generator supports drag-and-drop organization of fields to be summarized in a report. When a model is selected, the fields with available extraction rules are made available for use in the report. The user may refine and/or filter search results to produce more precise reports. The user may select some fields for organizing the report and select other fields for providing detail according to the report organization. For example, “region” and “salesperson” are fields used for organizing the report and sales data can be summarized (subtotaled and totaled) within this organization. The report generator allows the user to specify one or more fields within events and apply statistical analysis on values extracted from the specified one or more fields. The report generator may aggregate search results across sets of events and generate statistics based on aggregated search results. Building reports using the report generation interface is further explained in U.S. patent application Ser. No. 14/503,335, entitled “GENERATING REPORTS FROM UNSTRUCTURED DATA”, filed on 30 Sep. 2014, and which is hereby incorporated by reference in its entirety for all purposes. Data visualizations also can be generated in a variety of formats, by reference to the data model. Reports, data visualizations, and data model objects can be saved and associated with the data model for future use. The data model object may be used to perform searches of other data.

As described in greater detail in U.S. patent application Ser. No. 15/665,159, entitled “MULTI-LAYER PARTITION ALLOCATION FOR QUERY EXECUTION”, filed on Jul. 31, 2017, and which is hereby incorporated by reference in its entirety for all purposes, various interfaces can be used to generate and display data models.

3.13. Acceleration Technique

The above-described system provides significant flexibility by enabling a user to analyze massive quantities of minimally-processed data “on the fly” at search time using a late-binding schema, instead of storing pre-specified portions of the data in a database at ingestion time. This flexibility enables a user to see valuable insights, correlate data, and perform subsequent queries to examine interesting aspects of the data that may not have been apparent at ingestion time. However, performing extraction and analysis operations at search time can involve a large amount of data and require a large number of computational operations, which can cause delays in processing the queries. Advantageously, the data intake and query system also employs a number of unique acceleration techniques that have been developed to speed up analysis operations performed at search time.
These techniques include: (1) performing search operations in parallel across multiple indexers; (2) using a keyword index; (3) using a high performance analytics store; and (4) accelerating the process of generating reports. These novel techniques are described in more detail below. Although described as being performed by an indexer, it will be understood that various components can be used to perform similar functionality. For example, nodes can perform any one or any combination of the search functions described herein. In some cases, the nodes perform the search functions based on instructions received from a query coordinator.

3.13.1. Aggregation Technique

To facilitate faster query processing, a query can be structured such that multiple indexers perform the query in parallel, while aggregation of search results from the multiple indexers is performed locally at the search head. For example,FIG.9is an example search query received from a client and executed by search peers, in accordance with example embodiments.FIG.9illustrates how a search query902received from a client at a search head210can split into two phases, including: (1) subtasks904(e.g., data retrieval or simple filtering) that may be performed in parallel by indexers206for execution, and (2) a search results aggregation operation906to be executed by the search head when the results are ultimately collected from the indexers.

During operation, upon receiving search query902, a search head210determines that a portion of the operations involved with the search query may be performed locally by the search head. The search head modifies search query902by substituting “stats” (create aggregate statistics over results sets received from the indexers at the search head) with “prestats” (create statistics by the indexer from local results set) to produce search query904, and then distributes search query904to distributed indexers, which are also referred to as “search peers” or “peer indexers.” Note that search queries may generally specify search criteria or operations to be performed on events that meet the search criteria. Search queries may also specify field names, as well as search criteria for the values in the fields or operations to be performed on the values in the fields. Moreover, the search head may distribute the full search query to the search peers as illustrated inFIG.6A, or may alternatively distribute a modified version (e.g., a more restricted version) of the search query to the search peers. In this example, the indexers are responsible for producing the results and sending them to the search head. After the indexers return the results to the search head, the search head aggregates the received results906to form a single search result set. By executing the query in this manner, the system effectively distributes the computational operations across the indexers while minimizing data transfers.

As mentioned above, and as will be described in greater detail below with reference to, inter alia,FIGS.18-49, some functionality of the search head or indexers can be handled by different components of the system or removed altogether. For example, in some cases, the data is stored in one or more dataset sources, such as, but not limited to, an indexer (or data store controlled by an indexer), common storage, external data source, ingested data buffer, query acceleration data store, etc.
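A minimal Python sketch of this stats/prestats split follows, assuming toy in-memory shards in place of the indexers' data stores; the function names are illustrative only.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Toy shards standing in for the data stores of three peer indexers.
shards = [
    [{"status": "404"}, {"status": "200"}],
    [{"status": "404"}, {"status": "404"}],
    [{"status": "200"}],
]

def prestats(shard):
    """Run by each indexer in parallel: build partial statistics locally."""
    return Counter(ev["status"] for ev in shard)

def stats(partials):
    """Run by the search head: aggregate the partial result sets."""
    total = Counter()
    for p in partials:
        total.update(p)
    return total

with ThreadPoolExecutor() as pool:
    partial_results = list(pool.map(prestats, shards))

print(stats(partial_results))  # Counter({'404': 3, '200': 2})
```

Only the small partial counters cross the network in this arrangement, which is the point of pushing “prestats” down to the peers.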
In addition, in some cases a query coordinator can aggregate results from multiple indexers or nodes, perform an aggregation operation906, determine what, if any, portion of the operations of the search query are to be performed locally by the query coordinator, modify or translate a search query for an indexer or other dataset source, distribute the query to indexers, peers, or nodes, etc.

3.13.2. Keyword Index

As described above with reference to the flow charts inFIG.5A,FIG.5B, andFIG.6A, data intake and query system108can construct and maintain one or more keyword indices to quickly identify events containing specific keywords. This technique can greatly speed up the processing of queries involving specific keywords. As mentioned above, to build a keyword index, an indexer first identifies a set of keywords. Then, the indexer includes the identified keywords in an index, which associates each stored keyword with references to events containing that keyword, or to locations within events where that keyword is located. When an indexer subsequently receives a keyword-based query, the indexer can access the keyword index to quickly identify events containing the keyword. In some embodiments, a node or other component of the system that performs search operations can use the keyword index to identify events, etc.

3.13.3. High Performance Analytics Store

To speed up certain types of queries, some embodiments of system108create a high performance analytics store, which is referred to as a “summarization table,” that contains entries for specific field-value pairs. Each of these entries keeps track of instances of a specific value in a specific field in the events and includes references to events containing the specific value in the specific field. For example, an example entry in a summarization table can keep track of occurrences of the value “94107” in a “ZIP code” field of a set of events and the entry includes references to all of the events that contain the value “94107” in the ZIP code field. This optimization technique enables the system to quickly process queries that seek to determine how many events have a particular value for a particular field. To this end, the system can examine the entry in the summarization table to count instances of the specific value in the field without having to go through the individual events or perform data extractions at search time. Also, if the system needs to process all events that have a specific field-value combination, the system can use the references in the summarization table entry to directly access the events to extract further information without having to search all of the events to find the specific field-value combination at search time.

In some embodiments, the system maintains a separate summarization table for each of the above-described time-specific buckets that stores events for a specific time range. A bucket-specific summarization table includes entries for specific field-value combinations that occur in events in the specific bucket. Alternatively, the system can maintain a separate summarization table for each indexer. The indexer-specific summarization table includes entries for the events in a data store that are managed by the specific indexer. Indexer-specific summarization tables may also be bucket-specific.
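The following Python sketch illustrates, with invented events and bucket labels, how bucket-specific summarization tables might be organized and how a count query could be answered from the tables alone.

```python
from collections import defaultdict

# Toy events, each tagged with a bucket (time range) and a record reference.
events = [
    {"ref": 0, "bucket": "b1", "zip": "94107"},
    {"ref": 1, "bucket": "b1", "zip": "94107"},
    {"ref": 2, "bucket": "b2", "zip": "10001"},
    {"ref": 3, "bucket": "b2", "zip": "94107"},
]

def build_summarization_tables(events):
    """One table per bucket; each entry maps a field-value pair to the
    references of the events containing that value in that field."""
    tables = defaultdict(lambda: defaultdict(list))
    for ev in events:
        for fld, val in ev.items():
            if fld in ("ref", "bucket"):
                continue
            tables[ev["bucket"]][(fld, val)].append(ev["ref"])
    return tables

tables = build_summarization_tables(events)

# A count query is answered from the tables alone, without touching events.
count = sum(len(t.get(("zip", "94107"), [])) for t in tables.values())
print(count)  # 3 -- and the stored references allow direct access if needed
```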
The summarization table can be populated by running a periodic query that scans a set of events to find instances of a specific field-value combination, or alternatively instances of all field-value combinations for a specific field. A periodic query can be initiated by a user, or can be scheduled to occur automatically at specific time intervals. A periodic query can also be automatically launched in response to a query that asks for a specific field-value combination. In some cases, when the summarization tables may not cover all of the events that are relevant to a query, the system can use the summarization tables to obtain partial results for the events that are covered by summarization tables, but may also have to search through other events that are not covered by the summarization tables to produce additional results. These additional results can then be combined with the partial results to produce a final set of results for the query.

The summarization table and associated techniques are described in more detail in U.S. Pat. No. 8,682,925, entitled “DISTRIBUTED HIGH PERFORMANCE ANALYTICS STORE”, issued on 25 Mar. 2014, U.S. Pat. No. 9,128,985, entitled “SUPPLEMENTING A HIGH PERFORMANCE ANALYTICS STORE WITH EVALUATION OF INDIVIDUAL EVENTS TO RESPOND TO AN EVENT QUERY”, issued on 8 Sep. 2015, and U.S. patent application Ser. No. 14/815,973, entitled “GENERATING AND STORING SUMMARIZATION TABLES FOR SETS OF SEARCHABLE EVENTS”, filed on 1 Aug. 2015, each of which is hereby incorporated by reference in its entirety for all purposes.

To speed up certain types of queries, e.g., frequently encountered queries or computationally intensive queries, some embodiments of system108create a high performance analytics store, which is referred to as a “summarization table” (also referred to as a “lexicon” or “inverted index”), that contains entries for specific field-value pairs. Each of these entries keeps track of instances of a specific value in a specific field in the event data and includes references to events containing the specific value in the specific field. For example, an example entry in an inverted index can keep track of occurrences of the value “94107” in a “ZIP code” field of a set of events and the entry includes references to all of the events that contain the value “94107” in the ZIP code field. Creating the inverted index data structure avoids needing to incur the computational overhead each time a statistical query needs to be run on a frequently encountered field-value pair. In order to expedite queries, in most embodiments, the search engine will employ the inverted index separate from the raw record data store to generate responses to the received queries.

Note that the term “summarization table” or “inverted index” as used herein refers to a data structure that may be generated by an indexer that includes at least field names and field values that have been extracted and/or indexed from event records. An inverted index may also include reference values that point to the location(s) in the field searchable data store where the event records that include the field may be found. Also, an inverted index may be stored using well-known compression techniques to reduce its storage size. Further, note that the term “reference value” (also referred to as a “posting value”) as used herein refers to a value that references the location of a source record in the field searchable data store.
In some embodiments, the reference value may include additional information about each record, such as timestamps, record size, meta-data, or the like. Each reference value may be a unique identifier which may be used to access the event data directly in the field searchable data store. In some embodiments, the reference values may be ordered based on each event record's timestamp. For example, if numbers are used as identifiers, they may be sorted so event records having a later timestamp always have a lower valued identifier than event records with an earlier timestamp, or vice-versa. Reference values are often included in inverted indexes for retrieving and/or identifying event records.

In one or more embodiments, an inverted index is generated in response to a user-initiated collection query. The term “collection query” as used herein refers to queries that include commands that generate summarization information and inverted indexes (or summarization tables) from event records stored in the field searchable data store. Note that a collection query is a special type of query that can be user-generated and is used to create an inverted index. A collection query is not the same as a query that is used to call up or invoke a pre-existing inverted index. In one or more embodiments, a query can comprise an initial step that calls up a pre-generated inverted index on which further filtering and processing can be performed. For example, referring back toFIG.13, a set of events can be generated at block1320either by using a “collection” query to create a new inverted index or by calling up a pre-generated inverted index. A query with several pipelined steps will start with a pre-generated index to accelerate the query.

FIG.7Cillustrates the manner in which an inverted index is created and used in accordance with the disclosed embodiments. As shown inFIG.7C, an inverted index722can be created in response to a user-initiated collection query using the event data723stored in the raw record data store. For example, a non-limiting example of a collection query may include “collect clientip=127.0.0.1”, which may result in an inverted index722being generated from the event data723as shown inFIG.7C. Each entry in inverted index722includes an event reference value that references the location of a source record in the field searchable data store. The reference value may be used to access the original event record directly from the field searchable data store.

In one or more embodiments, if one or more of the queries is a collection query, the responsive indexers may generate summarization information based on the fields of the event records located in the field searchable data store. In at least one of the various embodiments, one or more of the fields used in the summarization information may be listed in the collection query and/or they may be determined based on terms included in the collection query. For example, a collection query may include an explicit list of fields to summarize. Or, in at least one of the various embodiments, a collection query may include terms or expressions that explicitly define the fields, e.g., using regex rules. InFIG.7C, prior to running the collection query that generates the inverted index722, the field name “clientip” may need to be defined in a configuration file by specifying the “access_combined” source type and a regular expression rule to parse out the client IP address.
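The following Python sketch illustrates a collection query of the “collect clientip=127.0.0.1” kind generating an inverted index of reference values; the in-memory data store and the collect function are assumptions made for the illustration, and the sketch presumes the field has already been defined, e.g., in a configuration file as just described.

```python
# Toy raw record data store: reference value -> already-extracted event fields.
data_store = {
    0: {"clientip": "127.0.0.1", "user": "frank"},
    1: {"clientip": "10.0.0.5",  "user": "carlos"},
    2: {"clientip": "127.0.0.1", "user": "carlos"},
}

def collect(data_store, field, value):
    """A hypothetical 'collect field=value' collection query: build an
    inverted index of reference values for events matching the pair."""
    return {
        "field": field,
        "value": value,
        "refs": sorted(ref for ref, ev in data_store.items()
                       if ev.get(field) == value),
    }

inverted_index = collect(data_store, "clientip", "127.0.0.1")
print(inverted_index)  # {'field': 'clientip', 'value': '127.0.0.1', 'refs': [0, 2]}
# A scheduled re-run of the same collection query would append reference
# values for any newly ingested matching events.
```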
Alternatively, the collection query may contain an explicit definition for the field name “clientip,” which may obviate the need to reference the configuration file at search time.

In one or more embodiments, collection queries may be saved and scheduled to run periodically. These scheduled collection queries may periodically update the summarization information corresponding to the query. For example, if the collection query that generates inverted index722is scheduled to run periodically, one or more indexers would periodically search through the relevant buckets to update inverted index722with event data for any new events with the “clientip” value of “127.0.0.1.”

In some embodiments, the inverted indexes that include fields, values, and reference values (e.g., inverted index722) for event records may be included in the summarization information provided to the user. In other embodiments, a user may not be interested in specific fields and values contained in the inverted index, but may need to perform a statistical query on the data in the inverted index. For example, referencing the example ofFIG.7C, rather than viewing the fields within summarization table722, a user may want to generate a count of all client requests from IP address “127.0.0.1.” In this case, the search engine would simply return a result of “4” rather than including details about the inverted index722in the information provided to the user.

The pipelined search language, e.g., SPL of the SPLUNK® ENTERPRISE system, can be used to pipe the contents of an inverted index to a statistical query using the “stats” command, for example. A “stats” query refers to queries that generate result sets that may produce aggregate and statistical results from event records, e.g., average, mean, max, min, rms, etc. Where sufficient information is available in an inverted index, a “stats” query may generate its result sets rapidly from the summarization information available in the inverted index rather than directly scanning event records. For example, the contents of inverted index722can be pipelined to a stats query, e.g., a “count” function that counts the number of entries in the inverted index and returns a value of “4.” In this way, inverted indexes may enable various stats queries to be performed absent scanning or searching the event records. Accordingly, this optimization technique enables the system to quickly process queries that seek to determine how many events have a particular value for a particular field. To this end, the system can examine the entry in the inverted index to count instances of the specific value in the field without having to go through the individual events or perform data extractions at search time.

In some embodiments, the system maintains a separate inverted index for each of the above-described time-specific buckets that stores events for a specific time range. A bucket-specific inverted index includes entries for specific field-value combinations that occur in events in the specific bucket. Alternatively, the system can maintain a separate inverted index for each indexer. The indexer-specific inverted index includes entries for the events in a data store that are managed by the specific indexer. Indexer-specific inverted indexes may also be bucket-specific. In at least one or more embodiments, if one or more of the queries is a stats query, each indexer may generate a partial result set from previously generated summarization information.
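A minimal Python sketch of a stats count served entirely from summarization information follows; the per-indexer inverted indexes are invented for the illustration, and each indexer's partial count stands in for the partial result sets described here.

```python
# Hypothetical bucket-specific inverted indexes held by two different indexers,
# each mapping a (field, value) pair to a list of reference values.
indexer_a = {("clientip", "127.0.0.1"): [0, 2]}
indexer_b = {("clientip", "127.0.0.1"): [7, 9]}

def stats_count(inverted_index, field, value):
    """Serve a 'stats count' from summarization information alone: the answer
    is the number of stored references, with no event records scanned."""
    return len(inverted_index.get((field, value), []))

partials = [stats_count(ix, "clientip", "127.0.0.1")
            for ix in (indexer_a, indexer_b)]
print(partials, sum(partials))  # [2, 2] 4
```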
The partial result sets may be returned to the search head that received the query and combined into a single result set for the query.

As mentioned above, the inverted index can be populated by running a periodic query that scans a set of events to find instances of a specific field-value combination, or alternatively instances of all field-value combinations for a specific field. A periodic query can be initiated by a user, or can be scheduled to occur automatically at specific time intervals. A periodic query can also be automatically launched in response to a query that asks for a specific field-value combination. In some embodiments, if summarization information is absent from an indexer that includes responsive event records, further actions may be taken, such as: the summarization information may be generated on the fly, warnings may be provided to the user, the collection query operation may be halted, the absence of summarization information may be ignored, or the like, or combination thereof.

In one or more embodiments, an inverted index may be set up to update continually. For example, the query may ask for the inverted index to update its result periodically, e.g., every hour. In such instances, the inverted index may be a dynamic data structure that is regularly updated to include information regarding incoming events.

In some cases, e.g., where a query is executed before an inverted index updates, when the inverted index may not cover all of the events that are relevant to a query, the system can use the inverted index to obtain partial results for the events that are covered by the inverted index, but may also have to search through other events that are not covered by the inverted index to produce additional results on the fly. In other words, an indexer would need to search through event data on the data store to supplement the partial results. These additional results can then be combined with the partial results to produce a final set of results for the query. Note that in typical instances where an inverted index is not completely up to date, the number of events that an indexer would need to search through to supplement the results from the inverted index would be relatively small. In other words, the search to get the most recent results can be quick and efficient because only a small number of event records will be searched through to supplement the information from the inverted index.

The inverted index and associated techniques are described in more detail in U.S. Pat. No. 8,682,925, entitled “DISTRIBUTED HIGH PERFORMANCE ANALYTICS STORE”, issued on 25 Mar. 2014, U.S. Pat. No. 9,128,985, entitled “SUPPLEMENTING A HIGH PERFORMANCE ANALYTICS STORE WITH EVALUATION OF INDIVIDUAL EVENTS TO RESPOND TO AN EVENT QUERY”, issued on 8 Sep. 2015, and U.S. patent application Ser. No. 14/815,973, entitled “GENERATING AND STORING SUMMARIZATION TABLES FOR SETS OF SEARCHABLE EVENTS”, filed on 1 Aug. 2015, each of which is hereby incorporated by reference in its entirety.

In some cases, the inverted indexes can be made available, as part of a common storage, to nodes or other components of the system that perform search operations.

3.13.4. Extracting Event Data Using Posting

In one or more embodiments, if the system needs to process all events that have a specific field-value combination, the system can use the references in the inverted index entry to directly access the events to extract further information without having to search all of the events to find the specific field-value combination at search time.
In other words, the system can use the reference values to locate the associated event data in the field searchable data store and extract further information from those events, e.g., extract further field values from the events for purposes of filtering or processing or both. The information extracted from the event data using the reference values can be directed for further filtering or processing in a query using the pipelined search language. The pipelined search language will, in one embodiment, include syntax that can direct the initial filtering step in a query to an inverted index. In one embodiment, a user would include syntax in the query that explicitly directs the initial searching or filtering step to the inverted index.

Referencing the example inFIG.7C, if the user determines that she needs the user id fields associated with the client requests from IP address “127.0.0.1,” instead of incurring the computational overhead of performing a brand new search or re-generating the inverted index with an additional field, the user can generate a query that explicitly directs or pipes the contents of the already generated inverted index722to another filtering step requesting the user ids for the entries in inverted index722where the server response time is greater than “0.0900” microseconds. The search engine would use the reference values stored in inverted index722to retrieve the event data from the field searchable data store, filter the results based on the “response time” field values and, further, extract the user id field from the resulting event data to return to the user. In the present instance, the user ids “frank” and “carlos” would be returned to the user from the generated results table722.

In one embodiment, the same methodology can be used to pipe the contents of the inverted index to a processing step. In other words, the user is able to use the inverted index to efficiently and quickly perform aggregate functions on field values that were not part of the initially generated inverted index. For example, a user may want to determine an average object size (size of the requested gif) requested by clients from IP address “127.0.0.1.” In this case, the search engine would again use the reference values stored in inverted index722to retrieve the event data from the field searchable data store and, further, extract the object size field values from the associated events731,732,733and734. Once the corresponding object sizes have been extracted (i.e., 2326, 2900, 2920, and 5000), the average can be computed and returned to the user.

In one embodiment, instead of explicitly invoking the inverted index in a user-generated query, e.g., by the use of special commands or syntax, the SPLUNK® ENTERPRISE system can be configured to automatically determine if any prior-generated inverted index can be used to expedite a user query. For example, the user's query may request the average object size (size of the requested gif) requested by clients from IP address “127.0.0.1” without any reference to or use of inverted index722. The search engine, in this case, would automatically determine that an inverted index722already exists in the system that could expedite this query. In one embodiment, prior to running any search comprising a field-value pair, for example, a search engine may search through all the existing inverted indexes to determine if a pre-generated inverted index could be used to expedite the search comprising the field-value pair.
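Returning to the user id and object size examples above, the following Python sketch shows how stored reference values might be used to extract and process fields that were never part of the inverted index; the data store contents and field values are invented for the illustration.

```python
# Toy field searchable data store keyed by reference value; the field names
# echo the FIG.7C discussion, but the values are invented for illustration.
data_store = {
    0: {"user": "frank",  "response_time": 0.0950, "object_size": 2326},
    1: {"user": "bob",    "response_time": 0.0800, "object_size": 2900},
    2: {"user": "carlos", "response_time": 0.1200, "object_size": 5000},
}

inverted_index_refs = [0, 1, 2]  # reference values for clientip=127.0.0.1

# Further filtering on a field that was never part of the inverted index:
# the references locate the events, and the extra field is extracted there.
slow = [data_store[r]["user"] for r in inverted_index_refs
        if data_store[r]["response_time"] > 0.0900]
print(slow)  # ['frank', 'carlos']

# Likewise, an aggregate over a field outside the index:
sizes = [data_store[r]["object_size"] for r in inverted_index_refs]
print(sum(sizes) / len(sizes))  # average object size for the indexed clients
```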
In such cases, the search engine would automatically use the pre-generated inverted index, e.g., inverted index722, to generate the results without any user involvement that directs the use of the index.

Using the reference values in an inverted index to be able to directly access the event data in the field searchable data store and extract further information from the associated event data for further filtering and processing is highly advantageous because it avoids incurring the computational overhead of regenerating the inverted index with additional fields or performing a new search.

The data intake and query system includes one or more forwarders that receive raw machine data from a variety of input data sources, and one or more indexers that process and store the data in one or more data stores. By distributing events among the indexers and data stores, the indexers can analyze events for a query in parallel. In one or more embodiments, a multiple indexer implementation of the search system would maintain a separate and respective inverted index for each of the above-described time-specific buckets that stores events for a specific time range. A bucket-specific inverted index includes entries for specific field-value combinations that occur in events in the specific bucket. As explained above, a search head would be able to correlate and synthesize data from across the various buckets and indexers.

This feature advantageously expedites searches because instead of performing a computationally intensive search in a centrally located inverted index that catalogues all the relevant events, an indexer is able to directly search an inverted index stored in a bucket associated with the time-range specified in the query. This allows the search to be performed in parallel across the various indexers. Further, if the query requests further filtering or processing to be conducted on the event data referenced by the locally stored bucket-specific inverted index, the indexer is able to simply access the event records stored in the associated bucket for further filtering and processing instead of needing to access a central repository of event records, which would dramatically add to the computational overhead.

In one embodiment, there may be multiple buckets associated with the time-range specified in a query. If the query is directed to an inverted index, or if the search engine automatically determines that using an inverted index would expedite the processing of the query, the indexers will search through each of the inverted indexes associated with the buckets for the specified time-range. This feature allows the High Performance Analytics Store to be scaled easily.

In certain instances, where a query is executed before a bucket-specific inverted index updates, when the bucket-specific inverted index may not cover all of the events that are relevant to a query, the system can use the bucket-specific inverted index to obtain partial results for the events that are covered by the bucket-specific inverted index, but may also have to search through the event data in the bucket associated with the bucket-specific inverted index to produce additional results on the fly. In other words, an indexer would need to search through event data stored in the bucket (that was not yet processed by the indexer for the corresponding inverted index) to supplement the partial results from the bucket-specific inverted index.
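The following Python sketch illustrates, under assumptions about how bucket contents and index freshness are tracked, how partial results from a stale bucket-specific inverted index might be supplemented by an on-the-fly scan of the unindexed remainder of the bucket.

```python
def search_bucket(bucket_events, bucket_index, field, value, indexed_upto):
    """Partial results come from the bucket-specific inverted index; events
    ingested after the index was last updated are scanned on the fly."""
    partial = list(bucket_index.get((field, value), []))
    supplemental = [ref for ref, ev in bucket_events.items()
                    if ref >= indexed_upto and ev.get(field) == value]
    return partial + supplemental

bucket_events = {
    0: {"clientip": "127.0.0.1"},
    1: {"clientip": "10.0.0.5"},
    2: {"clientip": "127.0.0.1"},   # arrived after the last index update
}
bucket_index = {("clientip", "127.0.0.1"): [0]}  # stale: covers refs < 2 only

print(search_bucket(bucket_events, bucket_index, "clientip", "127.0.0.1", 2))
# [0, 2] -- index-covered results plus the small on-the-fly supplement
```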
FIG.7Dpresents a flowchart illustrating how an inverted index in a pipelined search query can be used to determine a set of event data that can be further limited by filtering or processing in accordance with the disclosed embodiments.

At block742, a query is received by a data intake and query system. In some embodiments, the query can be received as a user-generated query entered into the search bar of a graphical user search interface. The search interface also includes a time range control element that enables specification of a time range for the query.

At block744, an inverted index is retrieved. Note that the inverted index can be retrieved in response to an explicit user search command inputted as part of the user-generated query. Alternatively, the search engine can be configured to automatically use an inverted index if it determines that using the inverted index would expedite the servicing of the user-generated query. Each of the entries in an inverted index keeps track of instances of a specific value in a specific field in the event data and includes references to events containing the specific value in the specific field. In order to expedite queries, in most embodiments, the search engine will employ the inverted index separate from the raw record data store to generate responses to the received queries.

At block746, the query engine determines if the query contains further filtering and processing steps. If the query contains no further commands, then, in one embodiment, summarization information can be provided to the user at block754. If, however, the query does contain further filtering and processing commands, then at block750, the query engine determines if the commands relate to further filtering or processing of the data extracted as part of the inverted index or whether the commands are directed to using the inverted index as an initial filtering step to further filter and process event data referenced by the entries in the inverted index. If the query can be completed using data already in the generated inverted index, then the further filtering or processing steps, e.g., a “count” of the number of records, an “average” of the number of records per hour, etc., are performed and the results are provided to the user at block752. If, however, the query references fields that are not extracted in the inverted index, then the indexers will access event data pointed to by the reference values in the inverted index to retrieve any further information required at block756. Subsequently, any further filtering or processing steps are performed on the fields extracted directly from the event data and the results are provided to the user at block758.

As described throughout, it will be understood that although described as being performed by an indexer, these functions can be performed by another component of the system, such as a query coordinator or node. For example, nodes can use inverted indexes to identify relevant data, etc. The inverted indexes can be stored with buckets in a common storage, etc.

3.13.5. Accelerating Report Generation

In some embodiments, a data server system such as the data intake and query system can accelerate the process of periodically generating updated reports based on query results. To accelerate this process, a summarization engine automatically examines the query to determine whether generation of updated reports can be accelerated by creating intermediate summaries.
If reports can be accelerated, the summarization engine periodically generates a summary covering data obtained during a latest non-overlapping time period. For example, where the query seeks events meeting specified criteria, a summary for the time period includes only events within the time period that meet the specified criteria. Similarly, if the query seeks statistics calculated from the events, such as the number of events that match the specified criteria, then the summary for the time period includes the number of events in the period that match the specified criteria.

In addition to the creation of the summaries, the summarization engine schedules the periodic updating of the report associated with the query. During each scheduled report update, the query engine determines whether intermediate summaries have been generated covering portions of the time period covered by the report update. If so, then the report is generated based on the information contained in the summaries. Also, if additional event data has been received and has not yet been summarized, and is required to generate the complete report, the query can be run on these additional events. Then, the results returned by this query on the additional events, along with the partial results obtained from the intermediate summaries, can be combined to generate the updated report. This process is repeated each time the report is updated. Alternatively, if the system stores events in buckets covering specific time ranges, then the summaries can be generated on a bucket-by-bucket basis. Note that producing intermediate summaries can save the work involved in re-running the query for previous time periods, so advantageously only the newer events need to be processed while generating an updated report.

These report acceleration techniques are described in more detail in U.S. Pat. No. 8,589,403, entitled “COMPRESSED JOURNALING IN EVENT TRACKING FILES FOR METADATA RECOVERY AND REPLICATION”, issued on 19 Nov. 2013, U.S. Pat. No. 8,412,696, entitled “REAL TIME SEARCHING AND REPORTING”, issued on 2 Apr. 2013, and U.S. Pat. Nos. 8,589,375 and 8,589,432, both also entitled “REAL TIME SEARCHING AND REPORTING”, both issued on 19 Nov. 2013, each of which is hereby incorporated by reference in its entirety for all purposes.

3.14. Security Features

The data intake and query system provides various schemas, dashboards, and visualizations that simplify developers' tasks to create applications with additional capabilities. One such application is an enterprise security application, such as SPLUNK® ENTERPRISE SECURITY, which performs monitoring and alerting operations and includes analytics to facilitate identifying both known and unknown security threats based on large volumes of data stored by the data intake and query system. The enterprise security application provides the security practitioner with visibility into security-relevant threats found in the enterprise infrastructure by capturing, monitoring, and reporting on data from enterprise security devices, systems, and applications. Through the use of the data intake and query system's searching and reporting capabilities, the enterprise security application provides a top-down and bottom-up view of an organization's security posture.
The enterprise security application leverages the data intake and query system's search-time normalization techniques, saved searches, and correlation searches to provide visibility into security-relevant threats and activity and generate notable events for tracking. The enterprise security application enables the security practitioner to investigate and explore the data to find new or unknown threats that do not follow signature-based patterns.

Conventional Security Information and Event Management (SIEM) systems lack the infrastructure to effectively store and analyze large volumes of security-related data. Traditional SIEM systems typically use fixed schemas to extract data from pre-defined security-related fields at data ingestion time and store the extracted data in a relational database. This traditional data extraction process (and associated reduction in data size) that occurs at data ingestion time inevitably hampers future incident investigations that may need original data to determine the root cause of a security issue, or to detect the onset of an impending security threat. In contrast, the enterprise security application stores large volumes of minimally-processed security-related data at ingestion time for later retrieval and analysis at search time when a live security threat is being investigated. To facilitate this data retrieval process, the enterprise security application provides pre-specified schemas for extracting relevant values from the different types of security-related events and enables a user to define such schemas.

The enterprise security application can process many types of security-related information. In general, this security-related information can include any information that can be used to identify security threats. For example, the security-related information can include network-related information, such as IP addresses, domain names, asset identifiers, network traffic volume, uniform resource locator strings, and source addresses. The process of detecting security threats for network-related information is further described in U.S. Pat. No. 8,826,434, entitled “SECURITY THREAT DETECTION BASED ON INDICATIONS IN BIG DATA OF ACCESS TO NEWLY REGISTERED DOMAINS”, issued on 2 Sep. 2014, U.S. Pat. No. 9,215,240, entitled “INVESTIGATIVE AND DYNAMIC DETECTION OF POTENTIAL SECURITY-THREAT INDICATORS FROM EVENTS IN BIG DATA”, issued on 15 Dec. 2015, U.S. Pat. No. 9,173,801, entitled “GRAPHIC DISPLAY OF SECURITY THREATS BASED ON INDICATIONS OF ACCESS TO NEWLY REGISTERED DOMAINS”, issued on 3 Nov. 2015, U.S. Pat. No. 9,248,068, entitled “SECURITY THREAT DETECTION OF NEWLY REGISTERED DOMAINS”, issued on 2 Feb. 2016, U.S. Pat. No. 9,426,172, entitled “SECURITY THREAT DETECTION USING DOMAIN NAME ACCESSES”, issued on 23 Aug. 2016, and U.S. Pat. No. 9,432,396, entitled “SECURITY THREAT DETECTION USING DOMAIN NAME REGISTRATIONS”, issued on 30 Aug. 2016, each of which is hereby incorporated by reference in its entirety for all purposes.

Security-related information can also include malware infection data and system configuration information, as well as access control information, such as login/logout information and access failure notifications. The security-related information can originate from various sources within a data center, such as hosts, virtual machines, storage devices and sensors.
The security-related information can also originate from various sources in a network, such as routers, switches, email servers, proxy servers, gateways, firewalls and intrusion-detection systems.

During operation, the enterprise security application facilitates detecting “notable events” that are likely to indicate a security threat. A notable event represents one or more anomalous incidents, the occurrence of which can be identified based on one or more events (e.g., time stamped portions of raw machine data) fulfilling pre-specified and/or dynamically-determined (e.g., based on machine-learning) criteria defined for that notable event. Examples of notable events include the repeated occurrence of an abnormal spike in network usage over a period of time, a single occurrence of unauthorized access to a system, a host communicating with a server on a known threat list, and the like. These notable events can be detected in a number of ways, such as: (1) a user can notice a correlation in events and can manually identify that a corresponding group of one or more events amounts to a notable event; or (2) a user can define a “correlation search” specifying criteria for a notable event, and every time one or more events satisfy the criteria, the application can indicate that the one or more events correspond to a notable event; and the like. A user can alternatively select a pre-defined correlation search provided by the application. Note that correlation searches can be run continuously or at regular intervals (e.g., every hour) to search for notable events. Upon detection, notable events can be stored in a dedicated “notable events index,” which can be subsequently accessed to generate various visualizations containing security-related information. Also, alerts can be generated to notify system operators when important notable events are discovered.

As described in greater detail in U.S. patent application Ser. No. 15/665,159, entitled “MULTI-LAYER PARTITION ALLOCATION FOR QUERY EXECUTION”, filed on Jul. 31, 2017, and which is hereby incorporated by reference in its entirety for all purposes, various visualizations can be included to aid in discovering security threats, to monitor virtual machines, to monitor IT environments, etc.

4.0. Data Intake and Fabric System Architecture

The capabilities of a data intake and query system are typically limited to resources contained within that system. For example, the data intake and query system has search and analytics capabilities that are limited in scope to the indexers responsible for storing and searching a subset of events contained in their corresponding internal data stores. Even if a data intake and query system has access to external data stores that may include data relevant to a query, the data intake and query system typically has limited capabilities to process the combination of partial search results from the indexers and external data sources to produce comprehensive search results. In particular, the search head of a data intake and query system may retrieve partial search results from external data systems over a network. The search head may also retrieve partial results from its indexers, and combine those partial search results with the partial results of the external data sources to produce final results for a query.
For example, the search head can implement map-reduce techniques, where each data source returns partial search results and the search head can combine the partial search results to produce the final results of a query. However, obtaining results in this manner from distributed data systems including internal data stores and external data stores has limited value because the search head can act as a bottleneck for processing complex search queries on distributed data systems. The bottleneck effect at the search head worsens as the number of distributed data systems increases. Furthermore, even without processing queries on distributed data systems, the search head210and the indexers206can act as bottlenecks due to the number of queries received by the data intake and query system108and the amount of processing done by the indexers during data ingestion, indexing, and search. Embodiments of the disclosed data fabric service (DFS) system1001overcome the aforementioned drawbacks by expanding on the capabilities of a data intake and query system to enable application of a query across distributed data systems, which may also be referred to as dataset sources, including internal data stores coupled to indexers (illustrated inFIG.10), external data stores coupled to the data intake and query system over a network (illustrated inFIGS.10,17,18), common storage (illustrated inFIGS.17,18), query acceleration data stores (e.g., query acceleration data store1008illustrated inFIGS.10,17,18), and ingested data buffers (illustrated inFIG.18) that include ingested streaming data. Moreover, the disclosed embodiments are scalable to accommodate application of a query on a growing number of diverse data systems. Additional embodiments are disclosed in U.S. patent application Ser. No. 15/665,159, entitled “MULTI-LAYER PARTITION ALLOCATION FOR QUERY EXECUTION”, filed on Jul. 31, 2017, and which is hereby incorporated by reference in its entirety for all purposes. In certain embodiments, the disclosed DFS system extends the capabilities of the data intake and query system and mitigates the bottleneck effect at the search head by including one or more query coordinators communicatively coupled to worker nodes distributed in a big data ecosystem. In some embodiments, the worker nodes can be communicatively coupled to the various dataset sources (e.g., indexers, common storage, external data systems that contain external data stores, ingested data buffers, query acceleration data stores, etc.). The data intake and query system can receive a query input by a user at a client device via a search head. The search head can coordinate with a search process master and/or one or more query coordinators (the search process master and query coordinators can collectively be referred to as a search process service) to execute a search scheme applied to one or more dataset sources (e.g., indexers, common storage, ingested data buffer, query acceleration data store, external data stores, etc.). The worker nodes can collect, process, and aggregate the partial results from the dataset sources, and transfer the aggregate results to a query coordinator. In some embodiments, the query coordinator can operate on the aggregate results, and send finalized results to the search head, which can render the results of the query on a display device. Hence, the search head in conjunction with the search process master and query coordinator(s) can apply a query to any one or more of the distributed dataset sources.
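As a rough illustration of the map-reduce-style combination described above, the following Python sketch merges partial results from several dataset sources (here, hypothetical per-error-type event counts) into the final results of a query; the function and data names are illustrative assumptions, not part of the patented system.

    from collections import Counter
    from typing import Iterable

    def combine_partial_results(partials: Iterable[dict]) -> dict:
        """Reduce step: merge per-source partial counts into final counts."""
        final = Counter()
        for partial in partials:
            final.update(partial)  # sums counts key-by-key
        return dict(final)

    # Partial results as they might arrive from two indexers and one external store.
    partials = [
        {"type_1": 40, "type_2": 7},   # indexer A
        {"type_1": 12, "type_3": 3},   # indexer B
        {"type_2": 5},                 # external data store
    ]
    print(combine_partial_results(partials))
    # {'type_1': 52, 'type_2': 12, 'type_3': 3}

When the search head alone performs this reduce step, it becomes the bottleneck described above; the DFS system described below moves this work onto worker nodes and query coordinators.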
The worker nodes can act in accordance with the instructions received by a query coordinator to obtain relevant datasets from the different dataset sources, process the datasets, aggregate the partial results of processing the different datasets, and communicate the aggregated results to the query coordinator, or elsewhere. In other words, the search head of the data intake and query system can offload at least some query processing to the query coordinator and worker nodes, to both obtain the datasets from the dataset sources and aggregate the results of processing the different datasets. This system is scalable to accommodate any number of worker nodes communicatively coupled to any number and types of data sources. Thus, embodiments of the DFS system can extend the capabilities of a data intake and query system by leveraging computing assets from anywhere in a big data ecosystem to collectively execute queries on diverse data systems, regardless of whether the data stores are internal to the data intake and query system or are external data stores communicatively coupled to the data intake and query system over a network. FIG.10is a system diagram illustrating an environment1000for ingesting and indexing data, and performing queries on one or more datasets from one or more dataset sources. In the illustrated embodiment, the environment1000includes data sources202, client devices404, described in greater detail above with reference toFIG.4, and external data sources1018communicatively coupled to a data intake and query system1001. The external data sources1018can be similar to the external data systems12-1,12-2described above with reference toFIG.1Aor the external data sources described above with reference toFIG.4. In the illustrated embodiment, the data intake and query system1001includes any combination of forwarders204, indexers206, data stores208, and a search head210, as discussed in greater detail above with reference toFIGS.2-4. For example, the forwarders204can forward data from the data sources202to the indexers206, the indexers206can ingest, parse, index, and store the data in the data stores208, and the search head210can receive queries from, and provide the results of the queries to, client devices404on behalf of the system1001. In addition to forwarders204, indexers206, data stores208, and the search head210, the system1001further includes a search process master1002(in some embodiments also referred to as DFS master), one or more query coordinators1004(in some embodiments also referred to as search service providers), worker nodes1006, and a query acceleration data store1008. In some embodiments, a workload advisor1010, workload catalog1012, node monitor1014, and dataset compensation module1016can be included in the search process master1002. However, it will be understood that any one or any combination of the workload advisor1010, workload catalog1012, node monitor1014, and dataset compensation module1016can be included elsewhere in the system1001, such as in a separate device or as part of a query coordinator1004. As will be described in greater detail below, the functionality of the search head210and the indexers206in the illustrated embodiment ofFIG.10can differ in some respects from the functionality described previously with respect to other embodiments.
For example, in the illustrated embodiment ofFIG.10, the search head210can perform some processing on the query and then communicate the query to the search process master1002and coordinator(s)1004for further processing and execution. For example, the search head210can authenticate the client device or user that sent the query, check the syntax and/or semantics of the query, or otherwise determine that the search request is valid. In some cases, a daemon running on the search head210can receive a query. In response, the search head210can spawn a search process to further handle the query, including communicating the query to the search process master1002or query coordinator1004. Upon completion of the query, the search head210can receive the results of the query from the search process master1002or query coordinator1004and serve the results to the client device404. In such embodiments, the search head210may not perform any additional processing on the results received from the search process master1002or query coordinator1004. In some cases, upon receiving and communicating the results, the search head210can terminate the search process. In addition, the indexers206in the illustrated embodiment ofFIG.10can receive the relevant subqueries from the query coordinator1004rather than the search head210, search the corresponding data stores208for relevant events, and provide their individual results of the search to the worker nodes1006instead of the search head210for further processing. As described previously, the indexers206can analyze events for a query in parallel. For example, each indexer206can search its corresponding data stores208in parallel and communicate its partial results to the worker nodes1006. The search head210, search process master1002, and query coordinator1004can be implemented using separate computer systems, processors, or virtual machines, or may alternatively comprise separate processes executing on one or more computer systems, processors, or virtual machines. In some embodiments, running the search head210, search process master1002, and/or query coordinator1004on the same machine can increase performance of the system1001by reducing communications over networks. In either case, the search process master1002and query coordinator1004can be communicatively coupled to the search head210. The search process master1002and query coordinator1004can be used to reduce the processing demands on the search head210. Specifically, the search process master1002and coordinator1004can perform some of the preliminary query processing to reduce the amount of processing done by the search head210upon receipt of a query. In addition, the search process master1002and coordinator1004can perform some of the processing on the results of the query to reduce the amount of processing done by the search head210prior to communicating the results to a client device. For example, upon receipt of a query, the search head210can determine that the query can be processed by the search process master1002. In turn, the search process master1002can identify a query coordinator1004that can process the query. In some cases, if there is not a query coordinator1004that can handle the incoming query, the search process master1002can spawn an additional query coordinator1004to handle the query. The query coordinator(s)1004can coordinate the various tasks to execute queries assigned to them and return the results to the search head210. 
For example, as will be described in greater detail below, the query coordinator1004can determine the amount of resources available for a query, allocate resources for the query, determine how the query is to be broken up between dataset sources, generate commands for the dataset sources to execute, determine what tasks are to be handled by the worker nodes1006, spawn the worker nodes1006for the different tasks, instruct different worker nodes1006to perform the different tasks and where to route the results of each task, monitor the worker nodes1006during the query, control the flow of data between the worker nodes1006, process the aggregate results from the worker nodes1006, and send the finalized results to the search head210or to another dataset destination. In addition, the query coordinators1004can provide data isolation across different searches based on role/access control, as well as fault tolerance (e.g., localized to a search head). For example, if a search operation fails, then its spawned query coordinator1004may fail but other query coordinators1004for other queries can continue to operate. In addition, queries that are to be isolated from one another can use different query coordinators1004. The worker nodes1006can perform the various tasks assigned to them by a query coordinator1004. For example, the worker nodes1006can intake data from the various dataset sources, process the data according to the query, collect results from the processing, combine results from various operations, route the results to various destinations, etc. In certain cases, the worker nodes1006and indexers206can be implemented using separate computer systems, processors, or virtual machines, or may alternatively comprise separate processes executing on one or more computer systems, processors, or virtual machines. The query acceleration data store1008can be used to store datasets for accelerated access. In some cases, the worker nodes1006can obtain data from the indexers206, external data sources1018, or other location (e.g., common storage, ingested data buffer, etc.) and store the data in the query acceleration data store1008. In such embodiments, when a query is received that relates to the data stored in the query acceleration data store1008, the worker nodes1006can access the data in the query acceleration data store1008and process the data according to the query. Furthermore, if the query also includes a request for datasets that are not in the query acceleration data store1008, the worker nodes1006can begin working on the dataset obtained from the query acceleration data store1008, while also obtaining the other dataset(s) from the other dataset source(s). In this way, a client device404a-404ncan rapidly receive a response to a provided query, while the worker nodes1006obtain datasets from the other dataset sources. The query acceleration data store1008can be, for example, a distributed in-memory database system, storage subsystem, and so on, which can maintain (e.g., store) datasets in both low-latency memory (e.g., random access memory, such as volatile or non-volatile memory) and longer-latency memory (e.g., solid state storage, disk drives, and so on). To increase efficiency and response times, the query acceleration data store1008can maintain particular datasets in the low-latency memory, and other datasets in the longer-latency memory.
For example, the datasets can be stored in-memory (non-limiting examples: RAM or volatile memory) with disk spillover (non-limiting examples: hard disks, disk drives, non-volatile memory, etc.). In this way, the query acceleration data store1008can be used to serve interactive or iterative searches. In some cases, datasets which are determined to be frequently accessed by a user can be stored in the low-latency memory. Similarly, datasets of less than a threshold size can be stored in the low-latency memory. As will be described below, a user can indicate in a query that particular datasets are to be stored in the query acceleration data store1008. The query can then indicate operations to be performed on the particular datasets. For subsequent queries directed to the particular datasets (e.g., queries that indicate other operations), the worker nodes1006can obtain information directly from the query acceleration data store1008. Additionally, since the query acceleration data store1008can be utilized to service requests from different clients404a-404n, the query acceleration data store1008can implement access controls (e.g., an access control list) with respect to the stored datasets. In this way, the stored datasets can optionally be accessible only to users associated with requests for the datasets. Optionally, a user who provides a query can indicate that one or more other users are authorized to access particular requested datasets. In this way, the other users can utilize the stored datasets, thus reducing latency associated with their queries. In certain embodiments, the worker nodes1006can store data from any dataset source, including data from a dataset source that has not been transformed by the nodes1006, processed data (e.g., data that has been transformed by the nodes1006), partial results, or aggregated results from a query in the query acceleration data store1008. In such embodiments, the results stored in the query acceleration data store1008can be served at a later time to the search head210, combined with additional results obtained from a later query, transformed or further processed by the worker nodes1006, etc. It will be understood that the system1001can include fewer or more components as desired. For example, in some embodiments, the system1001does not include a search head210. In such embodiments, the search process master1002can receive query requests from clients404and return results of the query to the client devices404. Further, it will be understood that in some embodiments, the functionality described herein for one component can be performed by another component. For example, although the workload advisor1010and dataset compensation module1016are described as being implemented in the search process master1002, it will be understood that these components and their functionality can be implemented in the query coordinator1004. Similarly, as will be described in greater detail below, in some embodiments, the nodes1006can be used to index data and store it in one or more data stores, such as the common storage or ingested data buffer. 4.1. Worker Nodes FIG.11is a block diagram illustrating an embodiment of multiple machines1102, each having multiple nodes1006-1,1006-n(individually and collectively referred to as node1006or nodes1006) residing thereon.
The worker nodes1006across the various machines1102can be communicatively coupled to each other, to the various components of the system1001, such as the indexers206, query coordinator1004, search head210, common storage, ingested data buffer, etc., and to the external data sources1018. The machines1102can be implemented using multi-core servers or computing systems and can include an operating system layer1104with which the nodes1006interact. For example, in some embodiments, each machine1102can include 32, 48, 64, or more processor cores, multiple terabytes of memory, etc. In the illustrated embodiment, each node1006includes four processors1106, memory1108, a monitoring module1110, and a serialization/deserialization module1112. It will be understood that each node1006can include fewer or more components as desired. Furthermore, it will be understood that the nodes1006can include different components and resources from each other. For example, node1006-1can include fewer or more processors1106or memory1108than the node1006-n. The processors1106and memory1108can be used by the nodes1006to perform the tasks assigned to them by the query coordinator1004and can correspond to a subset of the memory and processors of the machine1102. The serialization/deserialization module1112can be used to serialize/deserialize data for communication between components of the system1001, as will be described in greater detail below. The monitoring module1110can be used to monitor the state and utilization rate of the node1006or processors1106and report the information to the search process master1002or query coordinator1004. For example, the monitoring module1110can indicate the number of processors in use by the node1006, the utilization rate of each processor, whether a processor is unavailable or not functioning, the amount of memory used by the processors1106or node1006, etc. In addition, each worker node1006can include one or more software components or modules (“modules”) operable to carry out the functions of the system1001by communicating with the query coordinator1004, the indexers206, and the dataset sources. The modules can run on a programming interface of the worker nodes1006. An example of such an interface is APACHE SPARK, which is an open source computing framework that can be used to execute the worker nodes1006with implicit parallelism and fault-tolerance. In particular, SPARK includes an application programming interface (API) centered on a data structure called a resilient distributed dataset (RDD), which is a read-only multiset of data items distributed over a cluster of machines (e.g., the devices running the worker nodes1006). The RDDs function as a working set for distributed programs that offer a form of distributed shared memory. Based on instructions received from the query coordinator1004, the worker nodes1006can collect and process data or partial search results of a distributed network of data storage systems, and provide aggregated partial search results or finalized search results to the query coordinator1004or other destination. Accordingly, the query coordinator1004can act as a manager of the worker nodes1006, including their distributed data storage systems, to extract, collect, and store partial search results via their modules running on a computing framework such as SPARK. However, the embodiments disclosed herein are not limited to an implementation that uses SPARK.
Instead, any open source or proprietary computing framework running on a computing device that facilitates iterative, interactive, and/or exploratory data analysis coordinated with other computing devices can be employed to run the modules218for the query coordinator1004to apply search queries to the distributed data systems. As a non-limiting example, as part of processing a query, a node1006can receive instructions from a query coordinator1004to perform one or more tasks. For example, the node1006can be instructed to intake data from a particular dataset source, parse received data from a dataset source to identify relevant data in the dataset, collect partial results from the parsing, join results from multiple datasets, or communicate partial or completed results to a destination, etc. In some cases, the instructions to perform a task can come in the form of a DAG. In response, the node1006can determine what task it is to perform in the DAG, and execute it. As part of performing the assigned task, the node1006can determine how many processors1106to allocate to the different tasks. In some embodiments, the node1006can determine that all processors1106are to be used for a particular task or only a subset of the processors1106. In certain embodiments, each processor1106of the node1006can be used as a partition to intake, process, or collect data according to a task, or to process data of a partition as part of an intake, process, or collect task. Upon completion of the task, the node1006can inform the query coordinator1004that the task has been completed. When instructed to intake data, the processors1106of the node1006can be used to communicate with a dataset source (non-limiting examples: external data sources1018, indexers206, common storage, query acceleration data store1008, ingested data buffer, etc.). Once the node1006is in communication with the dataset source, it can intake the data from the dataset source. As described in greater detail below, in some embodiments, multiple partitions of a node (or different nodes) can be assigned to intake data from a particular source. When instructed to parse or otherwise process data, the processors1106of the node1006can be used to review the data and identify portions of the data that are relevant to the query. For example, if a query includes a request for events with certain errors or error types, the processors1106of the node1006can parse the incoming data to identify different events, parse the different events to identify error fields or error keywords in the events, and determine the error type of the error. In some cases, this processing can be similar to the processing described in greater detail above with reference to the indexers206processing data to identify relevant results in the data stores208. When instructed to collect data, the processors1106of the node1006can be used to receive data from dataset sources or processing nodes. With continued reference to the error example, a collector partition, or processor1106, can collect all of the errors of a certain type from one or more parsing partitions or processors1106. For example, if there are seven possible types of errors coming from a particular dataset source, a collector partition could collect all type 1 errors (or events with a type 1 error), while another collector partition could collect all type 2 errors (or events with a type 2 error), etc.
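As a concrete illustration of the intake, parse, and collect tasks just described, the following minimal Python sketch parses raw events for error types and routes each error type to its own collector bucket; the event format and field layout are hypothetical assumptions for illustration only.

    import re
    from collections import defaultdict

    RAW_EVENTS = [  # hypothetical raw machine data from one dataset source
        "2017-07-31T10:00:01 host1 ERROR type=1 disk full",
        "2017-07-31T10:00:02 host2 ERROR type=2 timeout",
        "2017-07-31T10:00:03 host1 INFO heartbeat",
        "2017-07-31T10:00:04 host3 ERROR type=1 disk full",
    ]

    def parse_partition(events):
        """Parsing partition: keep only events carrying an error type."""
        for event in events:
            match = re.search(r"ERROR type=(\d+)", event)
            if match:
                yield int(match.group(1)), event

    def collect(parsed):
        """Collector partitions: one bucket per error type."""
        collectors = defaultdict(list)
        for error_type, event in parsed:
            collectors[error_type].append(event)
        return collectors

    for error_type, events in sorted(collect(parse_partition(RAW_EVENTS)).items()):
        print(f"type {error_type}: {len(events)} event(s)")  # type 1: 2, type 2: 1

In the system described here, the parse and collect steps would run on separate partitions or processors1106, with results routed between them over the network rather than through an in-process generator.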
When instructed to join results from multiple datasets, the processors1106of the node1006can be used to receive data corresponding to two different datasets and combine or further process them. For example, if data is being retrieved from an external data source and a data store208of the indexers206, join partitions could be used to compare and collate data from the different data stores in order to aggregate the results. When instructed to communicate results to a particular destination, the processors1106of the node1006can be used to prepare the data for communication to the destination and then communicate the data to the destination. For example, in communicating the data to a particular destination, the node1006can communicate with the particular destination to ensure the data will be received. Once communication with the destination has been established, the partition, or processor associated with the partition, can begin sending the data to the destination. As described in greater detail below, in some embodiments, data from multiple partitions of a node (or different nodes) can be communicated to a particular destination. Furthermore, the nodes1006can be instructed to transform the data so that the destination can properly understand and store the data. In addition, the nodes can communicate the data to multiple destinations. For example, one copy of the data may be communicated to the query coordinator1004and another copy can be communicated to the query acceleration data store1008. The system1001is scalable to accommodate any number of worker nodes1006. As such, the system1001can scale to accommodate any number of distributed data systems upon which a search query can be applied and the search results can be returned to the search head and presented in a concise or comprehensive way, enabling an analyst to obtain insights into big data that are greater in scope and deeper than those available from existing systems. 4.1.1. Serialization/Deserialization In some cases, the serialization/deserialization module1112can generate and transmit serialized event groups. An event group can include the following information: number of events in the group, header information, event information, and changes to the cache or cache deltas. The serialization/deserialization module1112can identify the differences between the pieces of information using a type code or token. In certain cases, the type code can be in the form of a type byte. For example, prior to identifying header information, the serialization/deserialization module1112can include a header type code indicating that header information is to follow. Similarly, type codes can be used to identify event data or cache deltas. The header information can indicate the number and order of fields in the events, as well as the name of each field. Similarly, the event information for each event can include the number of fields in the event, as well as the value of each field. The cache deltas can identify changes to make to the cache relied upon to serialize/deserialize the data. As part of generating the group and serializing the data, the serialization/deserialization module1112can determine the number of events to group, determine the order and field names for the fields in the events of the group, parse the events, determine the number of fields for each event, identify and serialize serializable field values in the event fields, and identify cache deltas.
In some cases, the serialization/deserialization module1112performs the various tasks in a single pass of the data, meaning that it performs the identification, parsing, and serializing during a single review of the data. In this manner, the serialization/deserialization module1112can operate on streaming data and avoid adding delay to the serialization/deserialization process. In some embodiments, an event group includes an identifier indicating the number of events in the group followed by a header type code and a number of fields indicating the number of fields in the events. For each field designated by the header, the event group can include a type code indicating whether the field name is already stored in cache or a type code indicating that the field name is included. Depending on the type code, the event group can include an identifier or the field name. For example, if the type code indicates the field name is stored in cache (e.g., a cache code), an identifier can be included to enable a receiving component to look up the field name using the cache. If the type code indicates the field name is not stored in cache (e.g., a data code), the field name itself can be included. Similar to the header information, for each event in the event group, the event group can include the number of fields in the event. For each field of the event, the event group can include a type code indicating whether the field value is already stored in cache or a type code indicating that the field value is included. As mentioned above, the event group can also include cache delta information. The cache delta information can include a cache delta type code indicating that the cache is to be changed, a number of new entries, and a number of dropped entries. For each new entry, the cache delta information can include the data or string being cached, and an identifier for the data. For each entry being dropped, the cache delta information can include the identifier of the cache entry to be dropped. As a non-limiting example, consider the following portions of events:
ronnie.sv.splunk.com, access_combined, SALE, World of Cheese, 14.95
ronnie.sv.splunk.com, access_combined, NO SALE, World of Cheese, 16.75
ronnie.sv.splunk.com, access_combined, SALE, World of Cheese
ronnie.sv.splunk.com, access_combined, SALE, Fondue Warrior, 20.95
In serializing the above-referenced events, the serialization/deserialization module1112can determine that the field names for the events are source, sourcetype, sale_type, company name, and price and that this information is not in cache.
The serialization/deserialization module1112can then generate the following event group:

    4 (number of events)
    Header_Code       5 (number of fields)         Data_Code “source”
                                                   Data_Code “sourcetype”
                                                   Data_Code “sale_type”
                                                   Data_Code “company name”
                                                   Data_Code “price”
    Cache_Delta_Code  5 (entries to add)           “source” x15
                                                   “sourcetype” x16
                                                   “sale_type” x17
                                                   “company name” x18
                                                   “price” x19
                      0 (entries to drop)
    Event_Code        5 (number of fields          Data_Code “ronnie.sv.splunk.com”
                         in event)                 Data_Code “access_combined”
                                                   Data_Code “SALE”
                                                   Data_Code “World of Cheese”
                                                   Data_Code “14.95”
    Cache_Delta_Code  5 (number of new entries)    “ronnie.sv.splunk.com” x21
                                                   “access_combined” x22
                                                   “SALE” x23
                                                   “World of Cheese” x24
                                                   “14.95” x25
                      0 (entries to drop)
    Event_Code        5 (number of fields          Cache_Code x21
                         in event)                 Cache_Code x22
                                                   Data_Code “NO SALE”
                                                   Cache_Code x24
                                                   Data_Code “16.75”
    Cache_Delta_Code  2 (entries to add)           “NO SALE” x26
                                                   “16.75” x27
                      0 (entries to drop)
    Event_Code        4 (number of fields          Cache_Code x21
                         in event)                 Cache_Code x22
                                                   Cache_Code x23
                                                   Cache_Code x24
    Event_Code        5 (number of fields          Cache_Code x21
                         in event)                 Cache_Code x22
                                                   Cache_Code x23
                                                   Data_Code “World of Cheese”
                                                   Data_Code “20.95”
    Cache_Delta_Code  2 (number of new entries)    “World of Cheese”
                                                   “20.95”
                      1 (entry to drop)            x25

By generating the group, the serialization/deserialization module1112can reduce the amount of data communicated for each group. For example, instead of transmitting the string “ronnie.sv.splunk.com” each time, the serialization/deserialization module1112serializes it and then communicates the cache ID thereafter. Entries can be added or dropped using a variety of techniques. In some cases, every new field value is cached. In certain cases, a field value is cached after it has been identified a threshold number of times. Similarly, an entry can be dropped after a threshold number of events or event groups have been processed without the particular value being identified. As a non-limiting example, the serialization/deserialization module1112can track X values at a time in a cache C and track up to Y values at a time that are not cached, and how many times those values have been identified, in a candidate set D. When a value is received, if it is in the cache C, then the identifier can be returned. If the value is not in the cache C, then it can be added to D. If Y has been reached in D, then the least recently used value can be dropped. If the count of the value in D satisfies a threshold T, then it can be moved to the cache C and receive an identifier. If the size of C is more than X, then the least recently used value in C can be dropped. In some embodiments, the cache is built as the data is processed, and changes are transmitted as they occur. For example, the receiver can start with an empty cache, and apply each delta as it comes along. As mentioned above, each delta can have two sections: new entries, and dropped entries. In certain embodiments, the receiver (or deserializer) does not drop cache entries until told to do so; otherwise, it may not be able to interpret identifiers received from the serializer. In such embodiments, the serializer performs cache maintenance by informing the deserializer when to drop entries. Upon receipt of such a command, the deserializer can remove the identified entries. 4.2. Search Process Master As mentioned above, the search process master1002can perform various functions to reduce the workload of the search head210. For example, the search process master1002can parse an incoming query and allocate the query to a particular query coordinator1004for execution or spawn an additional query coordinator1004to execute the query.
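As a concrete illustration of the cache add/drop policy described in the serialization discussion above, the following Python sketch tracks up to X cached values in C and up to Y candidate values in D, promotes a candidate to the cache after T sightings, and evicts least-recently-used entries on overflow; the class and method names are illustrative assumptions, not from the source.

    from collections import OrderedDict

    class SerializerCache:
        def __init__(self, x=4, y=8, t=2):
            self.x, self.y, self.t = x, y, t
            self.cache = OrderedDict()       # C: value -> identifier, in LRU order
            self.candidates = OrderedDict()  # D: value -> sighting count, in LRU order
            self.next_id = 21                # arbitrary starting identifier
            self.deltas = {"added": [], "dropped": []}  # reported to the deserializer

        def observe(self, value):
            """Return a Cache_Code if the value is cached, else a Data_Code."""
            if value in self.cache:
                self.cache.move_to_end(value)          # refresh LRU position in C
                return f"Cache_Code x{self.cache[value]}"
            count = self.candidates.pop(value, 0) + 1  # count sightings in D
            if count >= self.t:                        # promote to C after T sightings
                ident = self.next_id
                self.next_id += 1
                self.cache[value] = ident
                self.deltas["added"].append((value, ident))
                if len(self.cache) > self.x:           # LRU-evict from C; deserializer must be told
                    _, dropped_id = self.cache.popitem(last=False)
                    self.deltas["dropped"].append(dropped_id)
            else:
                self.candidates[value] = count
                if len(self.candidates) > self.y:      # LRU-evict from D; no notification needed
                    self.candidates.popitem(last=False)
            return f'Data_Code "{value}"'

    sc = SerializerCache(t=2)
    for v in ["SALE", "SALE", "SALE", "NO SALE"]:
        print(sc.observe(v))
    # Data_Code "SALE", Data_Code "SALE" (promoted; delta recorded),
    # Cache_Code x21, Data_Code "NO SALE"

Consistent with the discussion above, the serializer evicts from the cache only by recording a drop in the delta, so the deserializer can mirror the change and never misinterprets an identifier.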
In addition, the search process master1002can track and store information regarding the system1001, queries, external data stores, etc., to aid the query coordinator1004in processing and executing a particular query. In some cases, the search process master1002can determine whether a query coordinator1004should be spawned based on user information. For example, for data protection or isolation, the search process master1002can spawn query coordinators1004for different users. In addition, the search process master1002can spawn query coordinators1004if it determines that a query coordinator1004is over-utilized. In some cases, to accomplish these various tasks, the search process master1002can include a workload advisor1010, workload catalog1012, node monitor1014, and dataset compensation module1016. Although illustrated as being a part of the search process master1002, it will be understood that any one or any combination of these components can be implemented separately or included in one or more query coordinators1004. Furthermore, although illustrated as individual components, it will be understood that any one or any combination of the workload advisor1010, workload catalog1012, node monitor1014, and dataset compensation module1016can be implemented by the same machine, processor, or computing device. As a brief introduction, the workload advisor1010can be used to provide resource allocation recommendations to a query coordinator1004for processing queries, the workload catalog1012can store data related to previous queries, the node monitor1014can receive information from the worker nodes1006regarding a current status and/or utilization rate of the nodes1006, and the dataset compensation module1016can be used by the query coordinator1004to enhance interactions with external data sources. 4.2.1 Workload Catalog The workload catalog1012can store relevant information to aid the workload advisor1010in providing a resource allocation recommendation to a query coordinator1004. As queries are received and processed by the system1001, the workload catalog1012can store relevant information about the queries to improve the workload advisor's1010ability to recommend the appropriate amount of resources for each query. For example, the system1001can track any one or any combination of the following data points about a query: which dataset sources were accessed, what was accessed in each dataset source (particular tables, buckets, etc.), the amount of data retrieved from the dataset sources (individually and collectively), the time taken to obtain the data from the dataset sources, the number of nodes1006used to obtain the data from each dataset source, the utilization rate of the nodes1006while obtaining the data from the dataset source, the number of transformations or phases (processing, collecting, reducing, joining, branching, etc.) performed on the data obtained from the dataset sources, the time to complete each transformation, the number of nodes1006assigned to each phase, the utilization rate of each node1006assigned to the particular phase, the processing performed by the query coordinator1004on results (individual or aggregate), time to store or deliver results to a particular destination, resources used to store/deliver results, total time to complete query, time of day of query request, etc.
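The following is a minimal sketch, with illustrative field names of the author's choosing, of the kind of per-query record the workload catalog might store to support later resource recommendations; it mirrors a few of the data points listed above and is not the patented schema.

    from dataclasses import dataclass, field

    @dataclass
    class QueryWorkloadRecord:
        sources_accessed: list            # e.g., ["indexers", "oracle_db"]
        bytes_retrieved: int              # total data pulled from all sources
        intake_nodes: int                 # worker nodes allocated to intake
        phase_durations_s: dict           # per-phase wall-clock time in seconds
        total_duration_s: float
        node_utilization: dict = field(default_factory=dict)  # node -> avg utilization

    record = QueryWorkloadRecord(
        sources_accessed=["indexers", "hdfs"],
        bytes_retrieved=3_500_000_000,
        intake_nodes=8,
        phase_durations_s={"intake": 42.0, "process": 120.5, "collect": 15.2},
        total_duration_s=190.3,
    )
    # A workload advisor could aggregate many such records to estimate the
    # compute cost of a similar future query and recommend a resource allocation.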
Furthermore, the workload catalog can include identifying information corresponding to the datasets with which the system interacts (e.g., indexers, common storage, ingested data buffer, external data sources, query acceleration data store, etc.). This information can include, but is not limited to, relationships between datasets, size of dataset, rate of growth of dataset, type of data, selectivity of dataset, provider of dataset, indicator for private information (e.g., personal health information, etc.), trustworthiness of a dataset, dataset preferences, etc. The workload catalog1012can collect the data from the various components of the system1001, such as the query coordinator1004, worker nodes1006, indexers206, etc. For example, for each task performed by each node1006, the node1006can report relevant timing and resource utilization information to the query coordinator1004or directly to the workload catalog1012. Similarly, the query coordinator1004can report relevant timing, usage, and data information for each phase of a search, each transformation of data, or for a total query. Using the information collected in the workload catalog1012, the workload advisor1010can estimate the compute cost to perform a particular data transformation or query, or to access a particular dataset. Further, the workload advisor can determine the amount of resources (nodes, memory, processors, partitions, etc.) to recommend for a query in order to provide the results within a particular amount of time. 4.2.2 Node Monitor The node monitor1014can also store relevant information to aid the workload advisor1010in providing a resource allocation recommendation. For example, the node monitor1014can track and store information regarding any one or any combination of: total number of processors or nodes in the system1001, number of processors or nodes that are not available or not functioning, number of available processors or nodes, utilization rate of the processors or nodes, number of worker nodes, current tasks being completed by the worker nodes1006or processors, estimated time to complete a task by the nodes1006or processors, amount of available memory, total memory in the system1001, tasks awaiting execution by the nodes1006or processors, etc. The node monitor1014can collect the relevant information by communicating with the monitoring module1110of each node1006of the system1001. As described above, the monitoring modules1110of each node1006can report relevant information about the node state and utilization rate. Using the information from the node monitor1014, the workload advisor1010can ascertain the general state of any particular processor, node, or the system1001, and determine the number of nodes1006or processors1106available for a particular task or query. 4.2.3 Dataset Compensation As discussed above, the external data sources1018with which the system1001can interact vary significantly. For example, some external data sources may have processing capabilities that can be used to perform some processing on the data that resides there prior to communicating the data to the nodes1006. In addition, the external data sources1018may support parallel reads from multiple partitions. Conversely, other external data sources1018may not be able to perform much, if any, processing on the data contained therein and/or may only be able to provide serial reads from a single partition.
Additionally, each external data source1018may have particular requirements for interacting with it, such as a particular API, throttling requirements, etc. Further, the type and amount of data stored in each external data source1018can vary significantly. As such, the system's1001interaction with the different external data sources1018can vary significantly. To aid the system1001in interacting with the different external data sources1018, the dataset compensation module1016can include relevant information related to each external data source1018with which the system1001can interact. For example, the dataset compensation module1016can include any one or any combination of: the amount of data stored in an external data source1018, the type of data stored in an external data source, query commands supported by an external data source (e.g., aggregation, filtering, ordering), a query translator to translate a query into tasks supported by an external data source, the file system type and hierarchy of the external data source1018, number of partitions supported by an external data source1018, endpoint locations (e.g., location of processing nodes or processors), throttling requirements (e.g., number and rate at which requests can be sent to the external data source), etc. The information about each external data source1018can be collected in a variety of ways. In some cases, some of the information about the external data source1018can be received when a customer sets up the external data source1018for use with the system1001. For example, a customer can indicate the type of external data source1018, e.g., MySQL, PostgreSQL, and Oracle databases; NoSQL data stores like Cassandra and MongoDB; cloud storage like Amazon S3; HDFS, etc. Based on this information, the system1001can determine certain characteristics about the external data store1018, such as whether it supports multiple partitions. In addition, as discussed herein, different dataset sources have different capabilities. For example, not only can different dataset sources support a different number of partitions, but the dataset sources can support different functions. For example, some dataset sources may be capable of data aggregation, filtering, or ordering, etc., while others may not be. The dataset compensation module1016can store the capabilities of the different dataset sources to aid in providing a seamless experience to users. In certain cases, the system1001can collect relevant information about an external data source by communicating with it. For example, the query coordinator1004or a worker node1006can interact with the external data source1018to determine the number of partitions available for accessing data. In some cases, the number of available partitions may change as computing resources on the external data source1018become available or unavailable, etc. In addition, when the system1001accesses the external data source1018as part of a query, it can track relevant information, such as the tables or amount of data accessed, tasks that the external data source was able to perform, etc. Similarly, the system1001can interact with an external data source1018to identify the endpoint that will handle any subqueries and its location. The endpoint and endpoint location may change depending on the subquery that is to be run on the external data source. Accordingly, in some embodiments, the system1001can request endpoint information with each query that is to access the particular external data source.
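To make the preceding description concrete, the following Python sketch shows the kind of per-source capability record the dataset compensation module might maintain and how a coordinator could use it to decide what to push down to a source; the capability keys, source names, and limits are illustrative assumptions, not values from the source.

    EXTERNAL_SOURCE_CAPABILITIES = {
        "oracle_db": {
            "supports_pushdown": {"aggregation", "filtering", "ordering"},
            "max_parallel_reads": 16,      # partitions available for parallel reads
            "throttle_requests_per_s": 100,
        },
        "hdfs_archive": {
            "supports_pushdown": set(),    # no remote processing; raw reads only
            "max_parallel_reads": 4,
            "throttle_requests_per_s": None,
        },
    }

    def plan_intake(source_name, requested_ops):
        """Split requested operations into pushed-down vs. worker-node work."""
        caps = EXTERNAL_SOURCE_CAPABILITIES[source_name]
        pushed = requested_ops & caps["supports_pushdown"]
        return {
            "intake_partitions": caps["max_parallel_reads"],
            "push_to_source": sorted(pushed),
            "run_on_workers": sorted(requested_ops - pushed),
        }

    print(plan_intake("hdfs_archive", {"filtering", "aggregation"}))
    # {'intake_partitions': 4, 'push_to_source': [], 'run_on_workers': ['aggregation', 'filtering']}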
Using the information about the external data sources1018, a query coordinator1004can determine how to interact with a given external data source1018and how to process data obtained from it. For example, if an external data source1018supports parallel reads, the query coordinator1004can allocate multiple partitions to read the data from the external data source1018in parallel. In some embodiments, the query coordinator1004can allocate sufficient partitions or processors1106to establish a 1:1 relationship with the available partitions at the external data source1018. Similarly, if the external data source1018can perform some processing of the data, the query coordinator1004can use the information from the dataset compensation module1016to translate the query into commands understood by the external data source1018and push some processing to the external data source1018, thereby reducing the amount of system1001resources (e.g., nodes1006) used to process the query. Furthermore, in some cases, using the dataset compensation module1016, the query coordinator can determine the amount of data in the different external data sources that will be accessed by a particular query. Using that information, the query coordinator1004can intelligently interact with the external data sources1018. For example, if the query coordinator1004determines that data with similar characteristics in two external data sources are to be accessed and the data from each will eventually be combined, the query coordinator1004can first interact with or query the external data source1018that includes less data and then, using information gleaned from that data, prepare a more narrowly tailored query for the external data source1018with more data. As a specific example, suppose a user wants to identify the source of a particular error using information from an HDFS data source and an Oracle data source, but does not know what the error is or what generated it. To do so, the user enters a query that includes a request to identify errors generated within a particular timeframe and stored in an HDFS data source and an Oracle data source and then correlate the errors based on the error source. Based on the query, the query coordinator1004determines that a union operation is to be performed on the data from the HDFS data source and the Oracle data source based on the source of the errors. Additionally, suppose that the dataset compensation module1016has identified the HDFS data source as being relatively small and identified the Oracle data source as being significantly larger than the HDFS data source. Accordingly, based on the information in the dataset compensation module1016, the query coordinator1004can instruct the nodes1006to first intake and process the data from the HDFS data source. Suppose that by doing so, the nodes1006determine that the HDFS data source includes only fifty types of errors from ten sources in the specified timeframe. Accordingly, using that information, the query coordinator1004can instruct the nodes1006to limit the intake of data from the Oracle data store based on the error types and/or sources identified by first analyzing the HDFS data source. As such, the query coordinator1004can reduce the amount of data requested from the Oracle data store and the amount of processing needed to obtain the relevant result.
For example, if the Oracle data store included two hundred error types from one hundred sources, the query coordinator1004avoided having to intake and process the data from all one hundred sources. Instead, only the data from sources that matched the ten sources from the HDFS data source were requested and processed by the nodes1006. 4.3. Query Coordinator The query coordinator(s)1004can act as the primary coordinator or controller for queries that are assigned to it by the search head210or search process master1002. As such, the query coordinator can process a query, identify the resources to be used to execute the query, control and monitor the nodes to execute the query, process aggregate results of the query, and provide finalized results to the search head210or search process master1002for delivery to a client device404. 4.3.1. Query Processing Upon receipt of a query, the query coordinator1004can analyze the query. In some cases, analyzing the query can include verifying that the query is semantically correct or performing other checks on the query to determine whether it is executable by the system. In addition, the query coordinator1004can analyze the query to identify the dataset sources that are to be accessed and to define an executable search process. For example, the query coordinator1004can determine whether data from the indexers206, external data sources1018, query acceleration data store1008, or other dataset sources (e.g., common storage, ingested data buffers, etc.) are to be accessed to obtain the relevant datasets. As part of defining the executable search process, the query coordinator1004can identify the different entities that can perform some processing on the datasets. For example, the query coordinator1004can determine what portion(s) of the query can be delegated to the indexers206, nodes1006, and external data sources1018, and what portions of the query can be executed by the query coordinator1004, search process master1002, or search head210. For tasks that can be completed by the indexers206, the query coordinator1004can generate task instructions for the indexers206to complete, as well as instructions to route all results from the indexers206to the nodes1006. For tasks that can be completed by the external data sources1018, the query coordinator1004can use the dataset compensation module1016to generate task instructions for the external data sources1018and to determine how to set up the nodes1006to receive data from the external data sources1018. In addition, as part of defining the executable search process, the query coordinator1004can generate a logical directed acyclic graph (DAG) based on the query.FIG.12is a diagram illustrating an embodiment of a DAG2000generated as part of a search process. In the illustrated embodiment, the DAG2000includes seven vertices and six edges, with each edge directed from one vertex to another, such that by starting at any particular vertex and following a consistently-directed sequence of edges, the DAG2000will not return to the same vertex. Here, the DAG2000can correspond to a topological ordering of search phases, or layers, performed by the nodes1006. As such, a sequence of the vertices can represent a sequence of search phases such that each edge is directed from earlier to later in the sequence of search phases. For example, the DAG2000may be defined based on a search string for each phase or metadata associated with a search string.
The metadata may be indicative of an ordering of the search phases such as, for example, whether results of any search string depend on results of another search string such that the later search string must follow the former search string sequentially in the DAG2000. In the illustrated embodiment ofFIG.12, the DAG2000can correspond to a query that identifies data from two dataset sources that are to be combined and then communicated to different locations. Accordingly, the DAG2000includes intake vertices1202,1208, a process vertex1204, collect vertices1206,1210, a join vertex1212, and a branch vertex1214. Each vertex1202,1204,1206,1208,1210,1212,1214can correspond to a search phase performed using one or more partitions or processors1106of one or more nodes1006on a particular set of data. For example, the intake, process, and collect vertices1202,1204,1206can correspond to data search phases, or transformations, on data received from a first dataset source. More specifically, the intake phase or vertex1202can correspond to one or more partitions that receive data from the first dataset source, the process phase1204can correspond to one or more partitions used to process the data received by the partitions at the intake phase1202, and the collect phase1206can correspond to one or more partitions that collect the results of the processing by the partitions in the process phase1204. Similarly, the intake and collect vertices1208,1210can correspond to data search phases performed using one or more partitions or processors1106on data received from a second dataset source. For example, the intake phase1208can correspond to one or more partitions that receive data from the second dataset source and the collect phase1210can correspond to one or more partitions that collect the results from the partitions in the intake phase1208. The join and branch phases1212,1214can correspond to data search phases performed using one or more partitions or processors1106on data received from the different branches of the DAG2000. For example, the join phase1212can correspond to one or more partitions used to combine the data received from the partitions in the collect phases1206,1210. The branch phase1214can correspond to one or more partitions used to communicate results of the join phase1212to one or more destinations. For example, the partitions in the branch phase1214can communicate results of the query to the query coordinator1004, an external data source1018, the query acceleration data store1008, ingested data buffer, etc. It will be understood that the number, order, and types of search phases in the DAG2000can be determined based on the query. As a non-limiting example, consider a query that indicates data is to be obtained from common storage and an Oracle database, collated, and the results sent to the query coordinator1004and an HDFS data store. In this example, in response to determining that the common storage does not provide processing capabilities, the query coordinator1004can generate vertices1202,1204,1206indicating that an intake phase1202, process phase1204, and collect phase1206will be used to process the data from the common storage sufficiently to be combined with data from the Oracle database.
Similarly, based on a determination that the Oracle database has some processing capabilities, the query coordinator can generate vertices1208,1210indicating that an intake phase1208and collect phase1210will be used to sufficiently process the data from the Oracle database for combination with the data from the common storage. The query coordinator1004can further generate the join phase1212based on the query indicating that the data from the Oracle database and common storage is to be collated or otherwise combined (e.g., joined, unioned, etc.). In addition, based on the query indicating that the results of the combination are to be communicated to the query coordinator1004and the HDFS data store, the query coordinator1004can generate the branch phase1214. As mentioned above, in each phase, the query coordinator1004can allocate one or more partitions to perform the particular search phase. It will be understood that the DAG2000is a non-limiting example of the search phases that can be included as part of a search process. In some cases, depending on the query, the DAG2000can include fewer or more phases of any type. For example, the DAG2000can include fewer or more intake phases depending on the number of dataset sources. Additionally, depending on the particular processing requirements of a query, the DAG2000can include multiple processing, collect, join, union, stats, or branch phases, in any order. In addition to determining the number and types of search phases for a search process, the query coordinator1004can calculate the relative cost of each phase of the search process, determine the amount of resources to allocate for each phase of the search process, generate tasks and instructions for particular nodes to be assigned to a particular search process, generate instructions for dataset sources, generate tasks for itself and/or the search head210, etc. To calculate the relative cost of each phase of the search process and determine the amount of resources to allocate for each phase of the search process, the query coordinator1004can communicate with the workload advisor1010, workload catalog1012, and/or the node monitor1014. As described previously, the workload advisor1010can use the data collected in the workload catalog1012to determine the cost of a query or an individual transformation or search phase of a search process and to provide a resource allocation recommendation. Furthermore, the workload advisor1010can use the data from the node monitor module1014to determine the available resources in the system1001. Using this information, the query coordinator1004can determine the cost for each search phase, the amount of resources available for allocation, and the amount of resources to allocate for each search phase. In determining the amount of resources to allocate for each search phase, the query coordinator1004can also generate the tasks and instructions for each node1006. The instructions can include computer-executable instructions that, when executed by the node1006, cause the node1006to perform the task assigned to it by the query coordinator1004. For example, for nodes1006that are to be assigned to an intake phase1202,1208, the query coordinator1004can generate instructions on how to access a particular dataset source, what instructions are to be sent to the dataset source, what to do with the data received from the dataset source, where to send the received data, how to perform any load balancing or other tasks assigned to it, etc.
For nodes1006that are to process data in the process phase1204, the query coordinator1004can generate instructions indicating how to parse the received data, relevant fields or keywords that are to be identified in the data, what to do with the identified fields and keywords, where to send the results of the processing, etc. Similarly, for nodes1006in the collect phases1206,1210, join phase1212, or branch phase1214, the query coordinator1004can generate task instructions so that the nodes1006are able to perform the task assigned to that particular phase. The task instructions can tell the nodes1006what data they are to process, how they are to process the data, and where they are to route the results of the processing, whether between each other or to another destination. In some cases, the query coordinator1004can generate the tasks and instructions for all nodes1006or processors1106and send the instructions to all of the allocated nodes1006or processors1106. Between them, the nodes1006or processors1106can determine or assign partitions to be used to help execute the different instructions and tasks. The instructions sent to the nodes1006or processors1106can include additional parameters, such as a preference to use processors1106or partitions on the same machine for subsequent tasks. Such instructions can help reduce the amount of data communicated over the network, etc. In some embodiments, to generate instructions for the dataset sources, the query coordinator1004can use the dataset compensation module1016. As described previously, the dataset compensation module1016can include relevant data about external data sources including, inter alia, processing abilities of the external dataset sources, number of partitions of the external dataset sources, instruction translators, etc. Using this information, the query coordinator1004can determine what processing to assign to the external data sources, and generate instructions that will be understood by the external data sources. In addition, the query coordinator1004can have access to similar information about other dataset sources and/or communicate with the dataset sources to determine their processing capabilities and how to interact with them (non-limiting examples: number of partitions to use, processing that can be pushed to the dataset source, etc.). Similarly, the query coordinator1004can determine how to interact with the dataset destinations so that the datasets can be properly sent to the correct location in a manner that the destination can store them correctly. In some cases, the query coordinator1004can interact with one partition of the external dataset source using multiple partitions. For example, the query coordinator1004can allocate multiple partitions to interact with a single partition of the external dataset source. The query coordinator1004can break up a query or a subquery into multiple parts, as sketched below. Each part can be assigned to a different partition, which can communicate the subqueries to the partition of the external dataset source. Thus, unbeknownst to the external dataset source, the system can concurrently process data from a single query. Furthermore, the query coordinator1004can determine the order for conducting the search process. As mentioned above, in some embodiments, the query coordinator1004can determine that processing data from one dataset source could speed up the search process as a whole (non-limiting example: using data from one dataset source to generate a more targeted search of another dataset source).
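As a minimal sketch of the subquery break-up described above, the following Python function divides one subquery's time range into disjoint slices, one per coordinator-side partition, so that several partitions can concurrently query a single partition of an external dataset source. The time-range strategy and all names are hypothetical assumptions for illustration:

def split_time_range(start, end, parts):
    # Break one subquery's time range into disjoint slices, one per partition.
    step = (end - start) // parts
    slices = [(start + i * step, start + (i + 1) * step) for i in range(parts)]
    slices[-1] = (slices[-1][0], end)  # last slice absorbs any remainder
    return slices

# Four coordinator-side partitions, each sending a narrower subquery to the
# single partition of the external dataset source:
for lo, hi in split_time_range(0, 3_600_000, 4):
    print(f"SELECT ... WHERE ts >= {lo} AND ts < {hi}")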
Accordingly, the query coordinator1004can determine that one or more search phases are to be completed first and then, based on information obtained from those search phases, additional search phases are to be initiated. Similarly, other optimizations can be determined by the query coordinator1004. Such optimizations can include, but are not limited to, pushing processing to the edges (e.g., to external data sources, etc.), identifying fields in a query that are key to the query and reducing processing based on the identified field (e.g., if a relevant field is identified in a final processing step, use the field to narrow the set of data that is searched for earlier in the search process), allocating the query to nodes that are physically close to each other or on the same machine, etc.
4.3.2. Query Execution and Node Control
Once the query is processed and the search scheme determined, the query coordinator1004can initiate the query execution. In some cases, in initiating the query, the query coordinator1004can communicate the generated task instructions to the various locations that will process the data. For example, the query coordinator1004can communicate task instructions to the indexers206, based on a determination that the indexers206are to perform some amount of processing on the dataset. Similarly, the query coordinator1004can communicate task instructions to the nodes1006, external data sources1018, query acceleration data store1008, common storage, and/or ingested data buffer, etc. In some embodiments, rather than communicating with the various dataset sources, the query coordinator1004can generate task instructions for the nodes1006to interact with the dataset sources such that the dataset sources receive any task instructions from the nodes1006as opposed to the query coordinator1004. For example, rather than communicating the task instructions directly to a dataset source, the query coordinator1004can assign one or more nodes1006to communicate task instructions to the external data sources1018, indexers206, or query acceleration data store1008. In certain embodiments, the query coordinator1004can communicate the same search scheme or task instructions to the nodes1006or partitions of the nodes1006that have been allocated for the query. The allocated nodes1006or partitions of the nodes1006can then assign different groups to perform different portions of the search scheme. Upon receipt of the task instructions, the dataset sources and nodes1006can begin operating in parallel. For example, if task instructions are sent to the indexers206and to the nodes1006, both can begin executing the instructions in parallel. In executing the task instructions, the nodes1006can organize their processors1106or partitions according to the task instructions. For example, some of the nodes1006can allocate one or more partitions or processors1106as part of an intake phase, another partition as part of a processing phase, etc. In some cases, all partitions or processors1106of a node1006can be allocated to the same task or to different tasks. In certain embodiments, it can be beneficial to allocate partitions from the same node1006to different tasks to reduce network traffic between nodes1006or machines1102.
FIG.13is a block diagram illustrating an embodiment of layers of partitions implementing various search phases of a query. In some cases, the layers can correspond to search phases in a DAG, such as the DAG2000described in greater detail above.
In the illustrated embodiment ofFIG.13, based on task instructions received from the query coordinator1004, the nodes1006have arranged various partitions to perform different search phases on data coming from a dataset source1302. As described previously, the dataset source1302can correspond to indexers206, external data sources1018, the query acceleration data store1008, common storage, an ingested data buffer, or other source of data from which the nodes1006can receive data. As referenced inFIG.12, the partitions in each layer can interact with the data based on task instructions received from the query coordinator1004. In the illustrated embodiment ofFIG.13, the partitions in the intake layer1304can receive the data from the dataset source1302, which can be communicated to the partitions in the processing layer1306in a load-balanced fashion. The partitions in the processing layer1306can be used to process the data based on the task instructions, which were generated based on the query, and the results provided to the partitions in the collector layer1308. Similarly, upon completing their assigned task, the processors associated with the partitions in the collector layer1308can communicate the results of their processing to the branch layer1310. In the illustrated embodiment ofFIG.13, the branch layer1310communicates the results received from the partitions in the collector layer1308to a first dataset destination1314and to partitions in a storage layer1312for storage in a second dataset destination1316. It will be understood that fewer or more layers can be included as desired, and can be based on the content of the particular query being executed. Furthermore, it will be understood that the layers can correspond to different map-reduce procedures or commands. For example, as described herein, in the illustrated embodiments, the processing layer1306can correspond to a map procedure and the collector layer1308can correspond to a reduce procedure. However, as described herein, it will be understood that various layers can correspond to map or reduce procedures. In the illustrated embodiment, four partitions have been allocated to the intake layer1304, eight partitions have been allocated to the processing layer1306, five partitions have been allocated to the collector layer1308, one partition has been allocated to the branch layer1310, and three partitions have been allocated to the storage layer1312. However, it will be understood that fewer or more partitions can be assigned to any layer as desired and fewer or additional layers can be included. For example, based on a query that indicates multiple dataset sources are to be accessed, the query coordinator1004can allocate separate intake, processing, and collector layers1304,1306,1308for each dataset source1302. Furthermore, based on the query commands, the query coordinator can allocate additional layers, such as a join layer to combine data received from multiple dataset sources, etc. In determining the number of partitions and/or processors1106for each search phase or layer, the query coordinator1004can use the workload advisor1010and/or dataset compensation module1016. For example, the workload advisor1010can use historical data about executing individual search phases in queries to recommend an allocation scheme that provides sufficient resources to process the query in a reasonable amount of time.
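As an illustration of such an allocation scheme, the following Python sketch divides a pool of partitions among the search phases in proportion to each phase's estimated relative cost; with the hypothetical costs shown, it reproduces the 4/8/5/1 split ofFIG.13. The cost values are assumptions, not figures from the workload catalog1012:

def allocate_partitions(phase_costs, total_partitions):
    # Allocate partitions to each search phase in proportion to its relative
    # cost; a real allocator would also reconcile rounding against the total.
    total_cost = sum(phase_costs.values())
    return {phase: max(1, round(total_partitions * cost / total_cost))
            for phase, cost in phase_costs.items()}

print(allocate_partitions(
    {"intake": 2, "process": 4, "collect": 2.5, "branch": 0.5}, 18))
# {'intake': 4, 'process': 8, 'collect': 5, 'branch': 1}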
In some cases, the query coordinator1004can allocate partitions for the intake layer1304and storage layer1312based on information about the number of partitions available for reading from the dataset source1302and writing data to the dataset destination1316, respectively. The query coordinator1004can obtain the information about the dataset source1302or dataset destination1316from a number of locations, including, but not limited to, the workload catalog1012, the dataset compensation module1016, or from the dataset source1302or dataset destination1316itself. The information can inform the query coordinator1004as to the number of partitions available for reading from the dataset source1302and writing to the dataset destination1316. In some cases, the query coordinator1004can allocate partitions in the intake layer1304or the storage layer1312to have a one-to-one, one-to-many, or many-to-one correspondence with partitions in the dataset source1302or dataset destination1316, respectively. The correspondence between the partitions in the intake or storage layer1304,1312and the partitions in the dataset source or destination1302,1316, respectively, can be based on a threshold number of partitions, the type of the dataset source/destination, etc. In certain embodiments, if the query coordinator1004determines that the dataset source1302(or dataset destination1316) has a number of partitions that satisfies a threshold number of partitions or determines that the number of partitions of the dataset source1302(or dataset destination1316) can be matched without overextending the nodes1006, the query coordinator1004can allocate partitions in the intake layer1304(or storage layer1312) to have a one-to-one correspondence to partitions in the dataset source1302(or dataset destination1316). The threshold number of partitions can be determined based on the number of nodes1006or processors1106in the system1001, the number of available nodes1006in the system1001, scheduled usage of nodes1006, etc. Accordingly, the threshold number of partitions can be dynamic depending on the status of the processors1106, nodes1006, or the system1001. For example, if a large number of nodes1006are available, the threshold number of partitions can be larger, whereas, if only a relatively small number of nodes1006are available, the threshold number can be smaller. Similarly, if the workload advisor1010expects a large number of queries in the near term, it can allocate fewer partitions to an individual query. Alternatively, if the workload advisor1010does not expect many queries in the near term, it can allocate a greater number of partitions to an individual query. In some cases, the query coordinator1004can determine whether to match the number of partitions in the dataset source1302or dataset destination1316with corresponding partitions in the intake layer1304or storage layer1312, respectively, based on the type of the dataset source1302or dataset destination1316. For example, the query coordinator1004can determine there should be a one-to-one correspondence of intake layer1304partitions to dataset source1302partitions (or storage layer1312partitions to dataset destination1316partitions) when the dataset source1302(or dataset destination1316) is an external data source or ingested data buffer and that there should be a one-to-multiple correspondence when the dataset source1302(or dataset destination1316) is indexers206, common storage, query acceleration data store1008, etc.
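The correspondence decision described above can be sketched as follows; the fallback size of four intake partitions and the way the dynamic threshold is derived from expected query load are assumptions for illustration only:

def intake_layer_size(source_partitions, available_processors, expected_queries):
    # Dynamic threshold: fewer partitions per query when more queries are expected.
    threshold = max(1, available_processors // max(1, expected_queries))
    if source_partitions <= threshold:
        return source_partitions  # one-to-one correspondence with the source
    return min(4, threshold)      # many source partitions per intake partition

print(intake_layer_size(4, 64, 4))    # 4: one-to-one, as inFIG.13
print(intake_layer_size(400, 64, 4))  # 4: one intake partition reads many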
As a non-limiting example, if the dataset source1302is an external data source or ingested data buffer with four partitions and the query coordinator1004determines that it can support a one-to-one correspondence, the query coordinator1004can allocate four partitions to the intake layer1304, as illustrated inFIG.13. Similarly, if the dataset destination1316is an external data source or ingested data buffer with three partitions and the query coordinator1004determines that it can support a one-to-one correspondence, the query coordinator1004can allocate three partitions to the storage layer1312, as illustrated inFIG.13. As another non-limiting example, if the dataset source1302(or dataset destination1316) is indexers206, common storage, or query acceleration data stores1008with hundreds of potential partitions, and/or the query coordinator1004determines that it cannot support a one-to-one correspondence, it can allocate the four partitions to the intake layer1304(or the three partitions to the storage layer1312), as illustrated inFIG.13. In addition, during intake of the data from the dataset source1302, the query coordinator1004can dynamically adjust the number of partitions in the intake layer1304. For example, if an additional partition of the dataset source1302becomes available or one of the partitions becomes unavailable, the query coordinator1004can dynamically increase or decrease the number of partitions in the intake layer1304. Similarly, if the query coordinator1004determines that the intake layer1304is taking too much time and additional resources are available, it can dynamically increase the number of partitions in the intake layer1304. In addition, as the query coordinator1004determines that resources have become available or unavailable, it can dynamically increase or decrease the number of partitions in the intake layer1304accordingly. Similarly, the query coordinator can dynamically adjust the number of partitions in the storage layer1312. Similar to the intake layer1304and storage layer1312, the query coordinator1004can allocate partitions to the different search layers1306,1308,1310based on information about the query and information in the workload catalog1012. For example, the query may include requests to process the data in a way that is resource intensive. As such, the query coordinator1004can allocate a larger number of partitions and/or processors1106to the processing layer1306or use multiple processing layers1306to process the data. In some cases, more partitions can be allocated to the search layers for queries of larger datasets. In addition, during execution of the query, the query coordinator1004can monitor the partitions in the search layers1306,1308,1310and dynamically adjust the number of partitions in each depending on the status of the individual partitions, the status of the nodes1006, the status of the query, etc. In some cases, the query coordinator1004can determine that a significant number of results are being sent to a particular partition in the collector layer1308. As such, the query coordinator1004can allocate an additional partition to the collector layer and/or instruct that the results from the partitions in the processing layer1306be distributed in a different manner to reduce the load on the particular partition in the collector layer.
In certain cases, the query coordinator1004can determine that a partition in the processing layer1306is not functioning or that there is significantly more data coming from the dataset source1302than was anticipated. Accordingly, the query coordinator1004can allocate an additional partition to the processing layer1306. Conversely, if the query coordinator1004determines that some of the partitions or processors1106are underutilized, then it can deallocate them from a particular layer and make them available for other queries, or assign them to a different layer, etc. Accordingly, the query coordinator1004can dynamically allocate and deallocate resources to intake and process the data from the dataset source1302in a time-efficient and performant manner. As a non-limiting example, consider a query that includes a request to count the number of different types of errors in data stored in an external data source within a timeframe and to return the results to the user and store the results in the query acceleration data store1008. Based on the query, the query coordinator1004can generate a DAG that includes the intake layer1304, processing layer1306, collector layer1308, branch layer1310, and storage layer1312. Additionally, based on a determination that the external data source supports four partitions, the query coordinator1004allocates four partitions to the intake layer1304. In addition, based on the expected amount of data to be processed, the query coordinator1004allocates eight partitions to the processing layer1306, and five partitions to the collector layer1308. Further, based on resource availability and the determination that the dataset destination is the query acceleration data store1008, which can support more than a threshold number of partitions, the query coordinator1004allocates three partitions to the storage layer1312. The task instructions for each partition of each search layer can be sent to the nodes1006, which assign processors1106to the various tasks and partitions. In some cases, the processors1106and partitions can have a 1:1 correspondence, such that each partition corresponds to one processor. In certain embodiments, multiple partitions can be assigned to a processor1106or vice versa. As such, when referred to herein as a partition performing an action, it will be understood that the action can be performed by the processor1106assigned to that partition. During execution, the partitions in the intake layer1304(or processors assigned to the partition) communicate with the dataset source1302to receive the relevant data from the partitions of the dataset source1302. The data is then communicated to the partitions in the processing layer1306. In the illustrated embodiment, each partition of the intake layer1304communicates data in a load-balanced fashion to two partitions in the processing layer1306. The partitions in the processing layer1306can parse the incoming data to identify events that include an error and identify the type of error. The partitions in the processing layer1306can determine how to distribute the results to the partitions in the collector layer1308. For example, each partition in the processing layer1306can apply a modulo five to the error type in order to attempt to equally separate the results between the five partitions in the collector layer1308. As such, for each error type, a partition in the collector layer1308can include the total count of errors for that type.
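The modulo-five distribution in this example can be sketched directly in Python; the event layout is a hypothetical assumption, but the routing rule is the one described above:

from collections import Counter

N_COLLECTORS = 5
collectors = [Counter() for _ in range(N_COLLECTORS)]

def process_event(event):
    # Processing-layer task: identify the error type, then route the result
    # to a collector partition by applying modulo five to the error type.
    error_type = event["error"]  # e.g., 404, 500, 503
    collectors[error_type % N_COLLECTORS][error_type] += 1

for e in ({"error": 404}, {"error": 500}, {"error": 404}, {"error": 503}):
    process_event(e)
# Each collector now holds the complete count for the error types routed to it.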
Depending on the query, in some cases, the partitions in the collector layer1308can also include the event that included the particular error type. The partitions in the collector layer1308can send the results to the partition in the branch layer1310. The partition in the branch layer1310can communicate the results to the query coordinator1004, which can communicate the results to the search head or client device. In addition, the branch layer1310can communicate the results to the partitions in the storage layer1312, which communicate the results in parallel to the query acceleration data store1008. Throughout the execution of the query, the query coordinator1004can monitor the partitions in the intake layer1304, processing layer1306, collector layer1308, branch layer1310, and storage layer1312. If one partition becomes unavailable or becomes overloaded, the query coordinator1004can allocate additional resources. Similarly, if a partition is not being utilized, the query coordinator1004can deallocate it from a layer. For example, if a partition on the external data source becomes unavailable, a corresponding partition in the intake layer1304may no longer receive any data. As such, the query coordinator1004can deallocate that partition from the intake layer1304. In some embodiments, any change in state of a partition can be reported to the node monitor module1014, which can be used by the query coordinator to allocate resources.
4.3.3. Result Processing
Once the nodes1006have completed processing the query or particular results of the query, they can communicate the results to the query coordinator1004. The query coordinator1004can perform any final processing. For example, in some cases, the query coordinator1004can collate the data from the nodes1006. The query coordinator1004can also send the results to the search head210or to a dataset destination. For example, based on a command (non-limiting example: “into”), the query coordinator1004can store results in the query acceleration data store1008, an external data source1018, an ingested data buffer, etc. In addition, the query coordinator1004can communicate to the search process master1002that the query has been completed. In the event all queries assigned to the query coordinator1004have been completed, the query coordinator can shut down or enter a hibernation state and await additional queries assigned to it by the search process master1002.
4.4. Query Acceleration Data Store
As described herein, a query can indicate that information is to be stored (e.g., stored in non-volatile or volatile memory) in the query acceleration data store1008. As described above, the query acceleration data store1008can store information (e.g., datasets) sourced from other dataset sources, such as external data sources1018, indexers206, ingested data buffers, and so on. For example, when providing a query, a user can indicate that particular information is to be stored in the query acceleration data store1008(e.g., cached). The information can include the results of the query, partial results of the query, data (processed or unprocessed) received from another dataset source via the nodes1006, etc. Subsequently, the data intake and query system1001can cause queries directed to the particular information to utilize the query acceleration data store1008. In this way, the stored information can be rapidly accessed and utilized. As an example, the query can indicate that information is to be obtained from the external data sources1018.
Since the external data sources1018may have potentially high-latency response times to particular queries, the query can be constrained according to characteristics of the external data sources1018. For example, particular external data sources1018may be limited in their processing speed, network bandwidth, and so on, such that the worker nodes1006are required to wait longer for information. As described herein, the query can therefore specify that particular information from the external data sources1018(or other dataset sources) be stored in the query acceleration data store1008. Subsequent queries that utilize this particular information can then be executed more quickly. For example, in subsequent queries the worker nodes1006can obtain the particular information from the query acceleration data store1008rather than from the external data source1018. An example query can be of a particular form, such as:
Query=<from [dataset source]>|<[logic]>|[accelerated directive]
In the above example, the query indicates that information is to be obtained from a dataset source, such as an external data source1018. Optionally, the query can indicate particular tables, documents, records, structured or unstructured information, and so on. As described above, the data intake and query system1001can process the query and determine that the external data source is being referenced. The next element of the query (e.g., a request parameter) includes logic to be applied to the data from the external data source; for example, the logic can be implemented as structured query language (SQL), search processing language (SPL), and so on. As described above, the worker nodes1006can obtain the requested data, and apply the logic to obtain information to be provided in response to the query. In the above example query, an accelerated directive is included. For example, the accelerated directive can be a particular term (e.g., “into query acceleration data store”), symbol, and so on, included in the query. The accelerated directive can optionally be included in the query manually (e.g., a user can type the directive) or automatically. As an example of automatically including the directive, a user can indicate in a user interface associated with entering queries that information is to be stored in the query acceleration data store1008. As another example, the user's client device or query coordinator1004can determine that information is to be stored in the data store1008. For example, the query can be analyzed by the client device or query coordinator1004, and based on a quantity of information being requested, the client device or query coordinator1004can automatically include the accelerated directive (e.g., if greater than a threshold quantity is being requested, the directive can be included). Optionally, the data intake and query system1001can automatically store the requested information in the query acceleration data store1008without an accelerated directive in a received query. For example, the query system1001can automatically store data in the query acceleration data store1008based on a user ID (e.g., always store results for a particular user or based on recent use by the user), time of day (e.g., store results for queries made at the beginning or end of a work day, etc.), dataset source identity (e.g., store data from a dataset source identified as having a slower response time, etc.), network topology (e.g., store data from sources on a particular network given the network bandwidth, etc.), etc.
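A minimal sketch of such automatic insertion follows; the directive spelling, the row-count estimate, and the threshold are all hypothetical assumptions:

ACCELERATE_DIRECTIVE = "| into acceleration_data_store"  # hypothetical spelling

def maybe_accelerate(query, estimated_rows, threshold=1_000_000):
    # Append the accelerated directive when the query requests more than a
    # threshold quantity of data and does not already contain a directive.
    if estimated_rows > threshold and "into" not in query:
        return query + " " + ACCELERATE_DIRECTIVE
    return query

print(maybe_accelerate("from external_source | where status=500", 5_000_000))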
Although the above example shows the accelerated directive at the end of the query, it will be understood that the directive can be placed at any point in the query. In some cases, the result of the command preceding the accelerated directive corresponds to the data stored in the query acceleration data store1008. Upon receipt of the query, the data intake and query system1001(e.g., the query coordinator1004) can cause the requested information from the dataset source to be stored in the query acceleration data store1008. Optionally, the query acceleration data store1008can receive the processed result associated with the query (e.g., from the worker nodes1006). The query acceleration data store1008can then provide the processed result to the query coordinator1004to be relayed to the requesting client. However, to improve response times, the worker nodes1006can provide processed information to the query acceleration data store1008, and also to the query coordinator1004. In this way, the query acceleration data store1008can store (e.g., in low-latency memory, or longer-latency memory such as solid state storage or disk storage) the received processed information, while the query coordinator1004can relay the received processed information to the requesting client. The processed result may be stored by the query acceleration data store1008in association with an identifier, such that the information can be easily referenced. For example, the query acceleration data store1008can generate a unique identifier upon receipt of information for storage by the worker nodes1006. For subsequent queries, the query coordinator1004can receive the identifier, such that the query coordinator1004can replace the initial portion of a query with the unique identifier. In some embodiments, the query coordinator1004can generate the unique identifier. For example, the query coordinator can receive information from the query acceleration data store1008indicating that it stored information. The query coordinator1004can maintain a mapping between generated unique identifiers and datasets, partitions, and so on, that are associated with information stored by the query acceleration data store1008. The query coordinator1004may optionally provide a unique identifier to the requesting client, such that a user of the requesting client can re-use the unique identifier. For example, the user's client can present a list of all such identifiers along with respective queries that are associated with the identifier. The user can select an identifier, and generate a new query that is based on an associated query. In addition to storing the data or the results or partial results of the query, the query acceleration data store can store additional information regarding the results. For example, the query acceleration data store can store information about the size of the dataset, the query that resulted in the dataset, the dataset source of the dataset, the time of the query that resulted in the dataset, the time range of data that was processed to produce the dataset, etc. This information can be used by the system1001to prompt a user as to what data is stored in the query acceleration data store and can be reused, to determine whether portions of an incoming query correspond to datasets in the query acceleration data store, etc. This information can also be stored in the workload catalog1012, or otherwise made available to the query coordinator1004.
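The identifier bookkeeping described above could look like the following sketch, in which the mapping lives in a plain dictionary; a real coordinator would persist it (e.g., in the workload catalog1012):

import uuid

dataset_ids = {}  # canonicalized query prefix -> identifier of the cached dataset

def register_result(query_prefix):
    # Generate a unique identifier the first time results for this prefix are
    # stored; later registrations return the same identifier.
    return dataset_ids.setdefault(query_prefix, str(uuid.uuid4()))

ident = register_result("from external_source | where status=500")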
Subsequently, for received queries that reference the processed information, the query coordinator1004can cause the worker nodes1006to obtain the information from the query acceleration data store1008. For example, a subsequent query can be
Query=<from [dataset source]>|<[logic]>|<[subsequent_logic]>
In the above query, the query coordinator1004can determine that some portion of the data referenced in the query corresponds to data that is stored in the query acceleration data store1008(previously stored data) or was previously processed according to a prior query (e.g., the query represented above) and the results of the processing stored in the query acceleration data store1008. For example, the query coordinator1004can compare the query to prior queries, and any portion of data that was referenced in a prior query. The query coordinator1004can then instruct the worker nodes1006to obtain the previously stored data or the results of processing the data from the query acceleration data store1008. In some cases, the subsequent query can include an explicit command to obtain the data or results from the query acceleration data store1008. Obtaining the previously stored data or results of processing the data provides multiple technical advantages. For example, the worker nodes1006can avoid having to reprocess the data, and instead can utilize the prior processed result. Additionally, the worker nodes1006can more rapidly obtain information from the query acceleration data store1008than, for example, the external data sources1018. As an example, the worker nodes1006may be in communication with the query acceleration data store1008via a direct connection (e.g., virtual networks, local area networks, wide area networks). In contrast, the worker nodes1006may be in communication with the external data sources1018via a global network (e.g., the internet). As a non-limiting example, in some cases, a first query can indicate that data from a dataset source is to be stored in the query acceleration data store1008with minimal processing by the nodes1006or without transforming the data from the dataset source. A subsequent query can indicate that the data stored in the query acceleration data store1008is to be processed or transformed, or combined with other data or results to obtain a result. In certain cases, the first query can indicate that data from the dataset source is to be transformed and the results stored in the query acceleration data store1008. The subsequent query can indicate that the results stored in the query acceleration data store1008are to be further processed, combined with data or results from another dataset source, or provided to a client device. Furthermore, in certain embodiments, the worker nodes1006can perform any additional processing on the results obtained from the query acceleration data store1008, while concurrently obtaining data from another dataset source and processing it to obtain additional results. In some cases, the results stored in the query acceleration data store1008can be communicated to a client device while the nodes concurrently obtain data from another dataset source and process it to obtain additional results.
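One way the prefix comparison described above might work is sketched below; the cached-dataset reference syntax is a hypothetical assumption:

def rewrite_with_cache(query, dataset_ids):
    # Replace a previously executed query prefix with a reference to the
    # cached dataset so the nodes read from the acceleration data store.
    for prefix, ident in dataset_ids.items():
        if query.startswith(prefix):
            return "from cached:" + ident + query[len(prefix):]
    return query  # no cached prefix found; read from the original source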
By obtaining, processing, and displaying the results of the previously processed data while concurrently obtaining additional data to be processed, processing the additional data, and communicating the results of processing the additional data, the system1001can provide a more effective responsiveness to a user and decrease the response time of a query. For the subsequent query identified above, the ‘subsequent_logic’ can be applied by the worker nodes1006based on the processed result stored by the query acceleration data store1008. The result of the subsequent query can then be provided to the query coordinator1004to be relayed to the requesting client. The query acceleration data store1008, as described herein, can maintain information in low-latency memory (e.g., random access memory) or longer-latency memory. That is, the query acceleration data store1008can cause particular information to spill to disk when needed, ensuring that the data store1008can service large numbers of queries. Since, in some implementations, the low-latency memory can be smaller than the longer-latency memory, the query acceleration data store1008can determine which datasets are to be stored in the low-latency memory. In some embodiments, to provide this functionality, the query acceleration data store1008can be implemented as a distributed in-memory data store with spillover to disk capabilities. For example, the data in the query acceleration data store1008can be stored in low-latency volatile memory, and in the event the capacity of the low-latency volatile memory is reached, the data can be stored to disk. In some embodiments, the query acceleration data store1008can utilize one or more storage policies to swap datasets between low-latency memory and longer-latency memory. Additionally, the query acceleration data store1008can flush particular datasets after determining that the datasets are no longer needed (e.g., the user can indicate that the datasets can be flushed, or a threshold amount of time can pass). As an example of a storage policy, the query acceleration data store1008can store a portion of a dataset in low-latency memory while storing a remaining portion in longer-latency memory. In this way, the query acceleration data store1008can have faster access to at least a portion of each user's dataset. If a subsequent query is received by the data intake and query system1001that references a stored dataset, the query acceleration data store1008can access the portion of the stored dataset that is in low-latency memory. Since this access is, in general, low-latency, the query acceleration data store1008can quickly provide this information to the worker nodes1006for processing. At a same, or similar, time, the query acceleration data store1008can access the longer-latency memory and obtain a remaining portion of the stored dataset. The worker nodes1006can then receive this remaining portion for processing. Therefore, the worker nodes1006can quickly respond to a request, based on the initially received portion from the low-latency memory. In this way, the user can receive search results in a manner that appears to be in ‘real-time’, that is, the search results can be provided in less than a threshold amount of time (e.g., 1 second, 5 seconds, 10 seconds). Subsequent search results can then be provided upon the worker nodes1006processing the portion from the longer-latency memory. The above-described storage policy may be based on a size of the dataset(s).
For example, an example dataset may be less than a threshold, and the query acceleration data store1008may store the entirety of the dataset in low-latency memory. For an example dataset greater than the threshold, the data store1008may store a portion in low-latency memory. As the size of the dataset increases, the query acceleration data store1008can store an increasingly smaller portion in low-latency memory. In this way, the data store1008can ensure that large datasets do not consume the low-latency memory. While the queries described above indicate a first query that includes an accelerated directive, and a second query that includes the first query (e.g., as an initial portion), optionally the data intake and query system1001can receive a first query that is a combination of the first query and second query described above. For example, an example initial query can be
Query=<from [dataset source]>|<[logic]>|[accelerated directive]|<subsequent_logic>
The above example query indicates that the data intake and query system1001is to obtain information from an example dataset source (e.g., external data source1018), process the information, and cause the query acceleration data store1008to store the processed information. In addition, subsequent logic is to be applied to the processed information, and the result provided to the requesting client404a-404n. FIG.13illustrates a branch layer1310, which, for the example query described above, can be utilized to provide information both to the query acceleration data store1008and the dataset destination1314(e.g., the requesting client). For example, subsequent to the worker nodes1006obtaining processed information (e.g., based on the dataset source and logic), the worker nodes1006can provide the processed information for storage in the query acceleration data store1008while continuing to process the query (e.g., apply the subsequent logic). That is, the worker nodes1006can bifurcate the data (e.g., at branch layer1310), such that the query acceleration data store1008can store partial results while the worker nodes1006service the query and provide the completed results to the query coordinator1004. Optionally, another query may be received that references the partial results in the data store1008, and one or more worker nodes1006may access the data store1008to service the other query. For example, the other query may be processed at a same time as the above-described example initial query. Received queries can further indicate multiple datasets stored by the query acceleration data store1008. For example, a first query can indicate that first information is to be obtained (e.g., from external data source1018, indexers206, common storage, and so on) and stored in the query acceleration data store1008as a first dataset. Additionally, a second query can indicate that second information is to be obtained and stored in the data store1008as a second dataset. Subsequent queries can then reference the stored first dataset and second dataset, such that logic can be applied to both the first and second dataset via rapid access to the query acceleration data store1008. Furthermore, queries can reference datasets stored by the query acceleration data store1008, and also datasets to be obtained from another dataset source (e.g., from external data source1018, indexers206, ingested data buffer, and so on).
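The size-based storage policy described above can be sketched as follows; the fixed per-dataset budget is an assumption, chosen so that the in-memory share shrinks as a dataset grows:

RAM_BUDGET = 256 * 2**20  # hypothetical per-dataset low-latency budget (bytes)

def plan_storage(dataset_bytes):
    # Return (bytes kept in low-latency memory, bytes spilled to disk).
    if dataset_bytes <= RAM_BUDGET:
        return dataset_bytes, 0  # small datasets stay entirely in memory
    return RAM_BUDGET, dataset_bytes - RAM_BUDGET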
For particular queries, the data intake and query system1001may be able to provide results (e.g., search results) from the query acceleration data store1008while datasets are being obtained from another dataset source. Similarly, the system1001may be able to provide results from the data store1008while data obtained from another dataset source is being processed. As an example, a first query can cause a dataset to be stored in the query acceleration data store1008, with the dataset being from an external data source1018and representing records from a prior time period (e.g., one hour). Subsequently, a second query can reference the stored dataset and further cause newer records to be obtained from the external data source (e.g., a subsequent hour). For this second query, particular logic indicated in the second query can enable the data intake and query system1001to provide results to a requesting client based on the stored dataset in the query acceleration data store1008. As an example, the second query can indicate that the system1001is to search for a particular name. The worker nodes1006can obtain stored information from the query acceleration data store1008, and identify instances of the particular name. This access to the query acceleration data store1008, as described above, can be low-latency. For example, the query acceleration data store1008may have a portion of the stored information in low-latency memory, such as RAM or volatile memory, and the worker nodes1006can quickly obtain the information and identify instances of the particular name. These identified instances can then be relayed to the requesting client. Similarly, the query acceleration data store1008may have a different portion of the stored information in longer-latency memory, and can similarly identify instances of the particular name to be provided to the requesting client. The above-described worker node1006interactions with the query acceleration data store1008can occur while information is being obtained, or processed, from the external data source1018referenced by the second query. In this way, the requesting client can view search results, for example search results based on the dataset stored by the query acceleration data store1008, while subsequent search results are being determined (e.g., search results based on information from a different dataset source). Furthermore, and as described above, the dataset being obtained from the other dataset source can be provided to the query acceleration data store1008for storage, for example, provided while the worker nodes1006apply logic to determine results from the obtained dataset. To increase security of the datasets stored by the query acceleration data store, access controls can be implemented. For example, each dataset can be associated with an access control list, and the query coordinator1004can provide an identification of a requesting user to the worker nodes1006and/or query acceleration data store1008. For example, the identification can be an authorization or authentication token associated with the user. The query acceleration data store1008can then ensure that only authorized users are allowed access to stored datasets. For example, a user who causes a dataset to be stored in the query acceleration data store1008(e.g., based on a provided query) can be indicated as being authorized (e.g., in an access control list associated with the dataset). Optionally, the user can indicate one or more other users as having access.
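A minimal access-control sketch follows, assuming a token that simply encodes the user name; a real deployment would verify a signed authentication or authorization token:

def validate_token(token):
    # Hypothetical stand-in for real token verification.
    return token.removeprefix("user:")

def authorized(dataset_acl, token):
    # Admit a request only if the authenticated user appears on the dataset's
    # access control list.
    return validate_token(token) in dataset_acl

acl = {"alice", "bob"}  # alice stored the dataset and granted bob access
print(authorized(acl, "user:alice"), authorized(acl, "user:carol"))  # True False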
Optionally, the data intake and query system1001can utilize role-based access controls to allow any user associated with a particular role to access particular datasets. In this way, the stored information can be secure while enabling the query acceleration data store1008to service multitudes of users.
5.0. Query Data Flow
FIG.14is a data flow diagram illustrating an embodiment of communications between various components within the environment1000to process and execute a query. At (1), the search head210receives and processes a query. At (2), the search head210communicates the query to the search process service, which can refer to the search process master1002and/or query coordinator1004. At (3), the search process service processes the query. As described in greater detail above, as part of processing the query, the query coordinator1004can identify the dataset sources (e.g., external data sources1018, indexers206, query acceleration data store1008, common storage, ingested data buffer, etc.) to be accessed, generate instructions for the dataset sources based on their processing capabilities or communication protocols, determine the size of the query, determine the amount of resources to allocate for the query, generate instructions for the nodes1006to execute the query, and generate tasks for itself to process results from the nodes1006. At (4), the query coordinator1004communicates the task instructions for the query to the worker nodes1006and/or the dataset sources1404. As described above, in some embodiments, the query coordinator1004can communicate task instructions to the dataset sources1404. In certain embodiments, the nodes1006communicate task instructions to the dataset sources1404. At (5), the nodes1006and/or dataset sources1404process the received instructions. As described in greater detail above, the instructions for the dataset sources1404can include instructions for performing certain transformations on the data prior to communicating the data to the nodes1006, etc. As described in greater detail above, the instructions for the nodes1006can include instructions on how to access the relevant data, the number of search phases or layers to be generated, the number of partitions to be allocated for each search phase or layer, the tasks for the partitions in the different layers, data routing information to route data between the nodes1006and to the search process service1402, etc. As such, based on the received instructions, the nodes1006can assign partitions to different layers and begin executing the task instructions. At (6), the nodes1006receive the data from the dataset source(s). As described in greater detail above, the nodes1006can receive the data from one or more dataset sources1404in parallel. In addition, the nodes1006can receive the data from a dataset source using one or more partitions. The data received from the dataset sources1404can be semi-processed data based on the processing capabilities of the dataset source1404or it can be unprocessed data from the dataset source1404. At (7), the nodes1006process the data based on the task instructions received from the query coordinator1004. As described in greater detail above, the nodes can process the data using one or more layers, each having one or more partitions assigned thereto. Although not illustrated inFIG.14, it will be understood that the search process service1402can monitor the nodes1006and dynamically allocate resources based on the monitoring.
At (8), the nodes1006communicate the results of the processing to the query coordinator1004and/or to a dataset destination1404. In some cases, the dataset destination1404can be the same as the dataset source. For example, the nodes1006can obtain data from the ingested data buffer and then return the results of the processing to a different section of the ingested data buffer, or obtain data from the query acceleration data store1008or an external data source1018and then return the results of the processing to the query acceleration data store1008or external data source1018, respectively. However, in certain embodiments, the dataset destination1404can be different from the dataset source1404. For example, the nodes1006can obtain data from the ingested data buffer and then return the results of the processing to the query acceleration data store1008or an external data source1018. At (9), the search process service1402can perform additional processing, and at (10) the results can be communicated to the search head210for communication to the client device. In some cases, prior to communicating the results to the client device, the search head210can perform additional processing on the results. It will be understood that the query data flow can include fewer or more steps. For example, in some cases, the search process service1402does not perform any further processing on the results and can simply forward the results to the search head210. In certain embodiments, nodes1006receive data from multiple dataset sources1404, etc.
6.0. Query Coordinator Flow
FIG.15is a flow diagram illustrative of an embodiment of a routine1500implemented by the query coordinator1004to provide query results. Although described as being implemented by the query coordinator1004, one skilled in the relevant art will appreciate that the elements outlined for routine1500can be implemented by one or more computing devices/components that are associated with the system1001, such as the search head210, search process master1002, indexer206, and/or worker nodes1006. Thus, the following illustrative embodiment should not be construed as limiting. At block1502, the query coordinator1004receives a query. As described in greater detail above, the query coordinator1004can receive the query from the search head210, search process master1002, etc. In some cases, the query coordinator1004can receive the query from a client404. The query can be in a query language as described in greater detail above. In some cases, the query received by the query coordinator1004can correspond to a query received and reviewed by the search head210. For example, the search head210can determine whether the query was submitted by an authenticated user and/or review the query to determine that it is in a proper format for the data intake and query system1001, has correct semantics and syntax, etc. In some cases, the search head210can run a daemon to receive search queries, and in some cases, spawn a search process, to communicate the received query to and receive the results from the query coordinator1004or search process master1002. At block1504, the query coordinator1004processes the query.
As described in greater detail above and as will be described in greater detail inFIG.16, processing the query can include any one or any combination of: identifying relevant dataset sources and destinations for the query, obtaining information about the dataset sources and destinations, determining processing tasks to execute the query, determining available resources for the query, and/or generating a query processing scheme to execute the query based on the information. In some embodiments, as part of generating a query processing scheme, the query coordinator1004allocates multiple layers or search phases of partitions to execute the query. Each level of partitions can be given a different task in order to execute the query. For example, as described in greater detail above with reference toFIGS.12and13, one level can be given the task of interacting with the dataset source and receiving data from the dataset source, another level can be tasked with processing the data received from the dataset source, a third level can be tasked with collecting results of processing the data, and additional levels can be tasked with communicating results to different destinations, storing the results in one or more dataset destinations, etc. The query coordinator1004can allocate as many or as few levels of partitions as needed to execute the query. At block1506, the query coordinator1004distributes the query for execution. Distributing the query for execution can include any one or any combination of: communicating the query processing scheme to the nodes1006, monitoring the nodes1006during the processing of the query, allocating/deallocating resources based on the status of the nodes and the query, and so forth, as described in greater detail herein. At block1508, the query coordinator1004receives the results. In some embodiments, the query coordinator1004receives the results from the nodes1006. For example, upon completing the query processing scheme, or as a part of it, the nodes1006can communicate the results of the query to the query coordinator1004. In certain cases, the query coordinator1004receives the results from the query acceleration data store, or indexers206, etc. In some cases, the query coordinator1004receives the results from one or more components of the data intake and query system1001depending on the dataset sources used in the query. At block1510, the query coordinator1004processes the results. As described in greater detail above, in some cases, the results of a query cannot be finalized by the nodes1006. For example, in some cases, all of the data must be gathered before the results can be determined. As a non-limiting example, for some cursored searches, a result cannot be determined until all relevant data has been collected by the worker nodes1006. In such cases, the query coordinator1004can receive the results from the worker nodes1006, and then collate the results. At block1512, the query coordinator1004communicates the results. In some embodiments, the query coordinator1004communicates the results to the search head210, such as to a search process generated by the search head210to handle the query. In certain cases, the query coordinator1004communicates the results to the search process master1002or client device404, etc. It will be understood that fewer, more, or different blocks can be used as part of the routine1500. In some cases, one or more blocks can be omitted.
For example, in certain embodiments, the results received from nodes1006can be in a form that does not require any additional processing by the query coordinator1004. In such embodiments, the query coordinator1004can communicate the results without additional processing. As another example, the routine1500can include monitoring nodes during execution of the query or query processing scheme, allocating or deallocating resources during the execution of the query, etc. Similarly, routine1500can include reporting completion of the query to a component, such as the search process master1002, etc. Furthermore, it will be understood that the various blocks described herein with reference toFIG.15can be implemented in a variety of orders. In some cases, the query coordinator1004can implement some blocks concurrently or change the order as desired. For example, the query coordinator1004can receive (1508), process (1510), and/or communicate results (1512) concurrently or in any order, as desired.
7.0. Query Processing Flow
FIG.16is a flow diagram illustrative of an embodiment of a routine1600implemented by the query coordinator1004to process a query. Although described as being implemented by the query coordinator1004, one skilled in the relevant art will appreciate that the elements outlined for routine1600can be implemented by one or more computing devices/components that are associated with the system1001, such as the search head210, search process master1002, indexer206, and/or worker nodes1006. Thus, the following illustrative embodiment should not be construed as limiting. At block1602, the query coordinator1004identifies dataset sources and/or destinations for the query. In some cases, the query explicitly identifies the dataset sources and destinations that are to be used in the query. For example, the query can include a command indicating that data is to be retrieved from the query acceleration data store1008, ingested data buffer, common storage, indexers, or an external data source. In certain cases, the query coordinator1004parses the query to identify the dataset sources and destinations that are to be used in the query. For example, the query may identify the name (or other identifier) of the location (e.g., my index) of the relevant data and the query coordinator1004can use the name or identifier to determine whether that particular location is associated with the query acceleration data store1008, ingested data buffer, common storage, indexers206, or an external data source1018. In some cases, the query coordinator identifies the dataset source based on timing requirements of the search. For example, in some cases, queries for data that satisfy a timing threshold or are within a time period are handled by indexers or correspond to data in an ingested data buffer, as described herein. In some embodiments, data that does not satisfy the timing threshold or is outside of the time period is stored in common storage, query acceleration data stores, external data sources, or by indexers. For example, as described in greater detail herein, in some cases, the indexers fill hot buckets with incoming data. Once a hot bucket is filled, it is stored. In some embodiments, hot buckets are searchable and, in other embodiments, hot buckets are not. Accordingly, in embodiments where hot buckets are searchable, a query that reflects a time period that includes hot buckets can indicate that the dataset source is the indexers, or hot buckets being processed by the indexers.
Similarly, in embodiments where warm buckets are stored by the indexers, a query that reflects a time period that includes warm buckets can indicate that the dataset source is the indexers. In certain embodiments, a query for data that satisfies the timing threshold or is within the time period can indicate that the ingested data buffer is the dataset source. Further, in embodiments where warm buckets are stored in a common storage, a query for data that does not satisfy the timing threshold or is outside of the time period can indicate that the common storage is the dataset source. In some embodiments, the time period can be reflective of the time it takes for data to be processed by the data intake and query system1001and stored in a warm bucket. Thus, a query for data within the time period can indicate that the data has not yet been indexed and stored by the indexers206or that the data resides in hot buckets that are still being processed by the indexers206. In some embodiments, the query coordinator1004identifies the dataset source based on the architecture of the system1001. As described herein, in some architectures, real-time searches or searches for data that satisfy the timing threshold are handled by indexers. In other architectures, these same types of searches are handled by the nodes1006in combination with the ingested data buffer. Similarly, in certain architectures, historical searches, or searches for data that do not satisfy the timing threshold, are handled by the indexers. In other architectures, these same types of searches are handled by the nodes1006in combination with the common storage. At block1604, the query coordinator1004obtains relevant information about the dataset sources/destinations. The query coordinator1004can obtain the relevant information from a variety of sources, such as the workload advisor1010, workload catalog1012, dataset compensation module1016, the dataset sources/destinations themselves, etc. For example, if the dataset source/destination is an external data source, the query coordinator1004can obtain relevant information about the external dataset source1018from the dataset compensation module or by communicating with the external data source1018. Similarly, if the dataset source/destination is an indexer206, common storage, query acceleration data store1008, ingested data buffer, etc., the query coordinator1004can obtain relevant information by communicating with the dataset source/destination and/or the workload advisor1010or workload catalog1012. The relevant information can include, but is not limited to, information to enable the query coordinator1004to generate a search scheme with sufficient information to interact with and obtain data from a dataset source or send data to a dataset destination. For example, the relevant information can include information related to the number of partitions supported by the dataset source/destination, location of compute nodes at the dataset source/destination, computing functionality of the dataset source/destination, commands supported by the dataset source/destination, physical location of the dataset source/destination, network speed and reliability in communicating with the dataset source/destination, amount of information stored by the dataset source/destination, computer language or protocols for communicating with the dataset source/destination, summaries or indexes of data stored by the dataset source/destination, data format of data stored by the dataset source/destination, etc.
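As a non-limiting illustration of the timing-based source identification of block1602described above, the following Python sketch selects a dataset source by comparing the queried time range against a timing threshold; the threshold value and source names are assumptions made for illustration:

    import time

    # Assumed threshold: data younger than this may not yet be indexed.
    TIMING_THRESHOLD_SECONDS = 15 * 60

    def identify_dataset_source(query_earliest, query_latest, now=None):
        now = time.time() if now is None else now
        if query_latest >= now - TIMING_THRESHOLD_SECONDS:
            # Recent data may still reside in hot buckets or the ingested
            # data buffer, depending on the architecture.
            return "ingested_data_buffer"
        # Older data has been indexed into warm buckets in common storage.
        return "common_storage"

    now = time.time()
    print(identify_dataset_source(now - 60, now, now))           # recent query
    print(identify_dataset_source(now - 7200, now - 3600, now))  # historical query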
At block1606, the query coordinator1004determines processing requirements for the query. In some cases, to determine the processing requirements, the query coordinator1004parses the query. As described previously, the workload catalog1012can store information regarding the various transformations or commands that can be executed on data and the amount of processing to perform the transformation or command. In some cases, this information can be based on historical information from previous queries executed by the system1001. For example, the query coordinator1004can determine that a “join” command will have significant computational requirements, whereas a “count by” command may not. Using the information about the transformations included in the query, the query coordinator1004can determine the processing requirements of individual transformations on the data, as well as the processing requirements of the query. At block1608, the query coordinator1004determines available resources. As described in greater detail above, the nodes1006can include monitoring modules that monitor the performance and utilization of their processors. In some cases, a monitoring module can be assigned for each processor on a node. The information about the utilization rate and other scheduling information can be used by the query coordinator1004to determine the amount of resources available for the query. At block1610, the query coordinator1004generates a query processing scheme. In some cases, the query coordinator1004can use the information regarding the dataset sources/destinations, the processing requirements of the query, and/or the available resources to generate the query processing scheme. As part of generating the query processing scheme, the query coordinator1004can generate instructions to be executed by the dataset sources/destinations, allocate partitions/processors for the query, generate instructions for the partitions/nodes, generate instructions for itself, generate a DAG, etc. As described in greater detail above, in some embodiments, to generate instructions for the dataset sources/destinations, the query coordinator1004can use the information from the dataset compensation module1016. This information can be used by the query coordinator1004to determine what processing can be done by an external data source, how to translate the commands or subqueries for execution by the external dataset source, the number of partitions that can be used to read data from the external dataset source, etc. Similarly, the query coordinator1004can generate instructions for other dataset sources, such as the indexers, query acceleration data store, common storage, etc. For example, the query coordinator1004can generate instructions for the ingested data buffer to retain data until it receives an acknowledgment from the query coordinator1004that the data from the ingested data buffer has been received and processed. In addition, as described in greater detail above, to generate instructions for the processors/partitions, the query coordinator1004can determine how to break up the processing requirements of the query into discrete or individual tasks, determine the number of partitions/processors to execute the tasks, etc. In some cases, to determine how to break up the processing requirements of the query into discrete or individual tasks, the query coordinator1004can parse the query into its different portions and then determine the tasks to use to execute each portion.
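As a non-limiting sketch of block1606, the following Python fragment estimates the processing requirements of a query from per-command costs of the kind the workload catalog1012might store; the cost figures and the example query are invented for illustration:

    # Assumed per-command relative costs (e.g., derived from historical queries).
    WORKLOAD_CATALOG = {"join": 100.0, "stats": 10.0, "sort": 5.0,
                        "search": 1.0, "fields": 0.5}

    def processing_requirements(query):
        # Split the query into pipeline segments and look up each command's cost.
        commands = [seg.strip().split()[0] for seg in query.split("|") if seg.strip()]
        per_task = [(cmd, WORKLOAD_CATALOG.get(cmd, 1.0)) for cmd in commands]
        return per_task, sum(cost for _, cost in per_task)

    tasks, total = processing_requirements(
        "search index=main | fields _time, source | join usetime=f host | stats count")
    print(tasks)  # the 'join' dominates the estimated cost
    print(total)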
The query coordinator1004can then use this information to generate specific instructions for the nodes that enable the nodes to execute the individual tasks, route the results of each task to the next location, and route the results of the query to the proper destination. The instructions for the nodes can further include instructions for interacting with the dataset sources/destinations. In some cases, instructions for the dataset sources can be embedded in the instructions for the nodes so that the nodes can communicate the instructions to the dataset sources/destinations. Accordingly, the instructions generated by the query coordinator1004for the nodes can include all of the information needed to enable the nodes to handle the various tasks of the query and provide the query coordinator1004with the appropriate data so that the query coordinator1004can finalize the results and communicate them to the search head210. In some cases, the query coordinator1004can use network topology information of the machines that will be executing the query to generate the instructions for the nodes. For example, the query coordinator1004can use the physical location of the processors that will execute the query to generate the instructions. As one example, the query coordinator1004can indicate that it is preferred that the processors assigned to execute the query be located on the same machine or close to each other. In some embodiments, the instructions for the nodes can be generated in the form of a DAG, as described in greater detail above. The DAG can include the instructions for the nodes to carry out the processing tasks included in the DAG. In some cases, the DAG can include additional information, such as instructions on how to select partitions for the different tasks. For example, the DAG can indicate that it is preferable that a partition that will be receiving data from another partition be on the same machine, or a nearby machine, in order to reduce network traffic. In addition to generating instructions for the dataset sources/destinations and the nodes, the query coordinator1004can generate instructions for itself. In some cases, the instructions generated for itself can depend on the query that is being processed, the capabilities of the nodes1006, and the results expected from the nodes. For example, in some cases, the type of query requested may require the query coordinator1004to perform more or less processing. For instance, a cursored search may require more processing by the query coordinator1004than a batch search. Accordingly, the query coordinator1004can generate tasks or instructions for itself based on the query requested. In addition, if the nodes1006are unable to perform certain tasks on the data, then the query coordinator1004can assign those tasks to itself and generate instructions for itself based on those tasks. Similarly, based on the form of the data that the query coordinator1004is expected to receive, it can generate instructions for itself in order to finalize the results for reporting. It will be understood that fewer, more, or different blocks can be used as part of the routine1600. In some cases, one or more blocks can be omitted.
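Tying together the instruction-generation aspects of block1610, the following non-limiting Python sketch represents a query processing scheme as a small DAG of phase nodes, each carrying instructions for its partitions; the structure and field names are assumptions made for illustration:

    from dataclasses import dataclass, field

    @dataclass
    class PhaseNode:
        name: str
        instructions: str
        children: list = field(default_factory=list)

    def build_dag():
        collect = PhaseNode("collector", "aggregate partial results")
        process = PhaseNode("processing",
                            "parse records; prefer partitions on the same machine",
                            [collect])
        intake = PhaseNode("intake",
                           "read assigned buckets from the dataset source",
                           [process])
        return intake

    def walk(node, depth=0):
        # Print the phases in execution order with their instructions.
        print("  " * depth + node.name + ": " + node.instructions)
        for child in node.children:
            walk(child, depth + 1)

    walk(build_dag())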
Furthermore, it will be understood that the various blocks described herein with reference toFIG.16can be implemented in a variety of orders. In some cases, the query coordinator1004can implement some blocks concurrently or change the order as desired. For example, the query coordinator1004can obtain information about the dataset sources/destinations (1604), determine processing requirements (1606), and determine available resources (1608) concurrently or in any order, as desired.
8.0. Common Storage Architecture
As discussed above, indexers206may in some embodiments operate both to ingest information into a data intake and query system1001, and to search that information in response to queries from client devices404. The use of an indexer206to both ingest and search information may be beneficial, for example, because indexers206may have ready access to information that they have ingested, and thus be enabled to quickly access that information for searching purposes. However, use of an indexer206to both ingest and search information may not be desirable in all instances. As an illustrative example, consider an instance in which information within the system1001is organized into buckets, and each indexer206is responsible for maintaining buckets within a data store208corresponding to the indexer206. Illustratively, a set of 10 indexers206may maintain 100 buckets, distributed evenly across ten data stores208(each of which is managed by a corresponding indexer206). Information may be distributed throughout the buckets according to a load-balancing mechanism used to distribute information to the indexers206during data ingestion. In an idealized scenario, information responsive to a query would be spread across the 100 buckets, such that each indexer206may search its corresponding 10 buckets in parallel, and provide search results to a search head210. However, it is expected that this idealized scenario may not always occur, and that there will be at least some instances in which information responsive to a query is unevenly distributed across data stores208. As an extreme example, consider a query in which responsive information exists within 10 buckets, all of which are included in a single data store208associated with a single indexer206. In such an instance, a bottleneck may be created at the single indexer206, and the effects of parallelized searching across the indexers206may be minimal. To increase the speed of operation of search queries in such cases, it may therefore be desirable to configure the data intake and query system1001such that parallelized searching of buckets may occur independently of the operation of indexers206. Another potential disadvantage of utilizing an indexer206to both ingest and search data is that computing resources of the indexers206may be split among those two tasks. Thus, ingestion speed may decrease as resources are used to search data, or vice versa. It may further be desirable to separate ingestion and search functionality, such that computing resources available to either task may be scaled or distributed independently. One example of a configuration of the data intake and query system1001that enables parallelized searching of buckets independently of the operation of indexers206is shown inFIG.17. The embodiment of system1001that is shown inFIG.17substantially corresponds to the embodiment of the system1001as shown inFIG.10, and thus corresponding elements of the system1001will not be re-described. However, unlike the embodiment shown inFIG.10, where individual indexers206are assigned to maintain individual data stores208, the embodiment ofFIG.17includes a common storage1702. Common storage1702may correspond to any data storage system accessible to each of the indexers206.
For example, common storage1702may correspond to a storage area network (SAN), network attached storage (NAS), other network-accessible storage system (e.g., a hosted storage system, which may also be referred to as “cloud” storage), or a combination thereof. The common storage1702may include, for example, hard disk drives (HDDs), solid state storage devices (SSDs), or other substantially persistent or non-transitory media. Data stores208within common storage1702may correspond to physical data storage devices (e.g., an individual HDD) or a logical storage device, such as a grouping of physical data storage devices or a virtualized storage device hosted by an underlying physical storage device. In one embodiment, common storage1702may be multi-tiered, with each tier providing more rapid access to information stored in that tier. For example, a first tier of the common storage1702may be physically co-located with indexers206and provide rapid access to information of the first tier, while a second tier may be located in a different physical location (e.g., in a hosted or “cloud” computing environment) and provide less rapid access to information of the second tier. Distribution of data between tiers may be controlled by any number of algorithms or mechanisms. In one embodiment, a first tier may include data generated or including timestamps within a threshold period of time (e.g., the past seven days), while a second tier or subsequent tiers includes data older than that time period. In another embodiment, a first tier may include a threshold amount (e.g., n terabytes) of recently accessed data, while a second tier stores the remaining, less recently accessed data. In one embodiment, data within the data stores208is grouped into buckets, each of which is commonly accessible to the indexers206. The size of each bucket may be selected according to the computational resources of the common storage1702or the data intake and query system1001overall. For example, the size of each bucket may be selected to enable an individual bucket to be relatively quickly transmitted via a network, without introducing excessive additional data storage requirements due to metadata or other overhead associated with an individual bucket. In one embodiment, each bucket is 750 megabytes in size. The indexers206may operate to communicate with common storage1702and to generate buckets during ingestion of data. Data ingestion may be similar to the operations described above. For example, information may be provided to the indexers206by forwarders204, after which the information is processed and stored into buckets. However, unlike some embodiments described above, the buckets may be stored in common storage1702, rather than in a data store208maintained by an individual indexer206. Thus, the common storage1702can render information of the data intake and query system1001commonly accessible to elements of that system1001. As will be described below, such common storage1702can beneficially enable parallelized searching of buckets to occur independently of the operation of indexers206. As noted above, it may be beneficial in some instances to separate within the data intake and query system1001the functionalities of ingesting data and searching for data. As such, in the illustrative configuration ofFIG.17, worker nodes1006may be enabled to search for data stored within common storage1702.
The nodes1006may therefore be communicatively attached (e.g., via a communication network) with the common storage1702, and be enabled to access buckets within the common storage1702. The nodes1006may search for data within buckets in a manner similar to how searching may occur at the indexers206, as discussed in more detail above. However, because nodes1006in some instances are not statically assigned to individual data stores208(and thus to buckets within such a data store208), the buckets searched by an individual node1006may be selected dynamically, to increase the parallelization with which the buckets can be searched. For example, using the example provided above, consider again an instance where information is stored within 100 buckets, and a query is received at the data intake and query system1001for information within 10 such buckets. Unlike the example above (in which only indexers206already associated with those 10 buckets could be used to conduct a search), the 10 buckets holding relevant information may be dynamically distributed across worker nodes1006. Thus, if 10 worker nodes1006are available to process a query, each worker node1006may be assigned to retrieve and search within 1 bucket, greatly increasing parallelization when compared to the low-parallelization scenario discussed above (e.g., where a single indexer206is required to search all 10 buckets). Moreover, because searching occurs at the worker nodes1006rather than at indexers206, computing resources can be allocated independently to searching operations. For example, worker nodes1006may be executed by a separate processor or computing device from the indexers206, enabling computing resources available to worker nodes1006to scale independently of resources available to indexers206. Operation of the data intake and query system1001to utilize worker nodes1006to search for information within common storage1702will now be described. As discussed above, a query can be received at the search head210, processed at the search process master1002, and passed to a query coordinator1004for execution. The query coordinator1004may generate a DAG corresponding to the query, in order to determine sequences of search phases within the query. The query coordinator1004may further determine based on the query whether each branch of the DAG requires searching of data within the common storage1702(e.g., as opposed to data within external storage, such as remote systems414and416). It will be assumed for purposes of description that at least one branch of the DAG requires searching of data within the common storage1702, and as such, description will be provided for execution of such a branch. While interactions are described for executing a single branch of a DAG, these interactions may be repeated (potentially concurrently or in parallel) for each branch of a DAG that requires searching of data within the common storage1702. As discussed above with reference toFIG.13, executing a search representing a branch of a DAG can include a number of phases, such as an intake phase1304, processing phase1306, and collector phase1308. It is therefore illustrative to discuss execution of a branch of a DAG that requires searching of the common storage1702with reference to such phases.
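Before turning to those phases, the dynamic bucket distribution described above can be illustrated with a short, non-limiting Python sketch in which 10 relevant buckets are spread round-robin across available worker nodes, rather than remaining bound to a single indexer:

    def assign_buckets(buckets, nodes):
        # Round-robin assignment: with as many nodes as buckets, each node
        # retrieves and searches exactly one bucket, maximizing parallelism.
        assignment = {node: [] for node in nodes}
        for i, bucket in enumerate(buckets):
            assignment[nodes[i % len(nodes)]].append(bucket)
        return assignment

    buckets = ["bucket-%d" % i for i in range(10)]
    nodes = ["worker-%d" % i for i in range(10)]
    for node, assigned in assign_buckets(buckets, nodes).items():
        print(node, assigned)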
As also discussed above, each phase may be carried out by a number of partitions, each of which may correspond to a worker node1006(e.g., a specific worker node1006, a processor within the worker node1006, an execution environment within a worker node1006, such as a virtualized computing device or software-based container, etc.). When a branch requires searching within common storage1702, the query coordinator1004can select a partition (e.g., a processor within a worker node1006) at random or according to a load-balancing algorithm to gather metadata regarding the information within the common storage1702, for use in dynamically assigning partitions (each implemented by a worker node1006) to implement an intake phase1304. Metadata is discussed in more detail above, but may include, for example, data identifying a host, a source, and a source type related to a bucket of data. Metadata may further indicate a range of timestamps of information within a bucket. The metadata can then be compared against a query to determine a subset of buckets within the common storage1702that may contain information relevant to the query. For example, where a query specifies a desired time range, host, source, source type, or combination thereof, only buckets in the common storage1702that satisfy those specified parameters may be considered relevant to the query. In one embodiment, the subset of buckets is determined by the assigned partition, and returned to the query coordinator1004. In another embodiment, the metadata retrieved by a partition is returned to the query coordinator1004and used by the query coordinator1004to determine the subset of buckets. Thereafter, the query coordinator1004can dynamically assign partitions to intake individual buckets within the determined subset of buckets. In one embodiment, the query coordinator1004attempts to maximize parallelization of the intake phase1304, by attempting to intake the subset of buckets with a number of partitions equal to the number of buckets in the subset (e.g., resulting in a one-to-one mapping of buckets in the subset to partitions). However, such parallelization may not be feasible or desirable, for example, where the total number of partitions is less than the number of buckets within the determined subset, where some partitions are processing other queries, or where some partitions should be left in reserve to process other queries. Accordingly, the query coordinator1004may interact with the workload advisor1010to determine a number of partitions that are to be utilized to conduct the intake phase1304of the query. Illustratively, the query coordinator1004may initially request a one-to-one correspondence between buckets and partitions, and the workload advisor1010may reduce the number of partitions used for the intake phase1304of the query, resulting in a 2-to-1, 3-to-1, or n-to-1 correspondence between buckets and partitions. Operation of the workload advisor1010is described in more detail above. The query coordinator1004can then assign the partitions (e.g., those partitions identified by interaction with the workload advisor1010) to intake the buckets previously identified as potentially containing relevant information (e.g., based on metadata of the buckets). In one embodiment, the query coordinator1004may assign all buckets as a single operation. For example, where 10 buckets are to be searched by 5 partitions, the query coordinator1004may assign two buckets to a first partition, two buckets to a second partition, etc.
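The two steps just described (pruning buckets by metadata, then assigning the surviving subset to partitions in a single operation) might be sketched as follows in Python; the metadata fields and values are assumptions made for illustration:

    def prune_buckets(buckets, query):
        # Keep only buckets whose metadata (time range, host) could match the query.
        return [b for b in buckets
                if b["t_max"] >= query["t_min"] and b["t_min"] <= query["t_max"]
                and (query.get("host") is None or query["host"] == b["host"])]

    def assign_all_at_once(buckets, partitions):
        # Single-operation assignment, e.g., two buckets per partition if
        # there are twice as many buckets as partitions.
        mapping = {p: [] for p in partitions}
        for i, b in enumerate(buckets):
            mapping[partitions[i % len(partitions)]].append(b["name"])
        return mapping

    buckets = [{"name": "b%d" % i, "t_min": i * 10, "t_max": i * 10 + 9, "host": "web"}
               for i in range(10)]
    subset = prune_buckets(buckets, {"t_min": 0, "t_max": 45, "host": "web"})
    print(assign_all_at_once(subset, ["p0", "p1", "p2", "p3", "p4"]))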
In another embodiment, the query coordinator1004may assign buckets iteratively. For example, where 10 buckets are to be searched by 5 partitions, the query coordinator1004may initially assign five buckets (e.g., one bucket to each partition), and assign additional buckets to each partition as the respective partitions complete intake of previously assigned buckets. In some instances, buckets may be assigned to partitions randomly, or in a simple sequence (e.g., a first partition is assigned a first bucket, a second partition is assigned a second bucket, etc.). In other instances, the query coordinator1004may assign buckets to partitions based on buckets previously assigned to a partition, in a prior or current search. Illustratively, in some embodiments each worker node1006may be associated with a local cache of information (e.g., in memory of the partitions, such as random access memory [“RAM”] or disk-based cache). Each worker node1006may store copies of one or more buckets from the common storage1702within the local cache, such that the buckets may be more rapidly searched by partitions implemented on the worker node1006. The query coordinator1004may maintain or retrieve from worker nodes1006information identifying, for each relevant node1006, what buckets are copied within the local cache of the respective nodes1006. Where a partition assigned to execute a search is implemented by a worker node1006that has within its local cache a copy of a bucket determined to be potentially relevant to the search, that partition may be preferentially assigned to search that locally-cached bucket. In some instances, local cache information can further be used to determine the partitions to be used to conduct a search. For example, partitions corresponding to worker nodes1006that have locally-cached copies of buckets potentially relevant to a search may be preferentially selected by the query coordinator1004or workload advisor1010to execute the intake phase1304of a search. In some instances, the query coordinator1004or other component of the system1001(e.g., the search process master1002) may instruct worker nodes1006to retrieve and locally cache copies of various buckets from the common storage1702, independently of processing queries. In one embodiment, the system1001is configured such that each bucket from the common storage1702is locally cached on at least one worker node1006. In another embodiment, the system1001is configured such that at least one bucket from the common storage1702is locally cached on at least two worker nodes1006. Caching a bucket on at least two worker nodes1006may be beneficial, for example, in instances where different queries both require searching the bucket (e.g., because the at least two worker nodes1006may process their respective local copies in parallel). In still other embodiments, the system1001is configured such that all buckets from the common storage1702are locally cached on at least a given number n of worker nodes1006, wherein n is defined by a replication factor on the system1001. For example, a replication factor of 5 may be established to ensure that 5 searches of buckets can be executed concurrently by 5 different worker nodes1006, each of which has locally cached a copy of a given bucket potentially relevant to the searches.
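A non-limiting Python sketch of such cache-aware assignment follows; the cache map and partition naming are invented for illustration, and a replication factor would simply ensure that each bucket appears in more than one node's cache:

    def cache_aware_assign(buckets, node_cache, partitions):
        # Prefer a partition whose worker node already caches the bucket;
        # remaining buckets must be fetched from common storage.
        assignment, fetch_from_common = {}, []
        for bucket in buckets:
            cached_on = [p for p in partitions
                         if bucket in node_cache.get(p.split(":")[0], set())]
            if cached_on:
                assignment.setdefault(cached_on[0], []).append(bucket)
            else:
                fetch_from_common.append(bucket)
        return assignment, fetch_from_common

    node_cache = {"node1": {"b1", "b2"}, "node2": {"b2", "b3"}}
    partitions = ["node1:p0", "node2:p0"]
    print(cache_aware_assign(["b1", "b2", "b3", "b4"], node_cache, partitions))
    # b1, b2 -> node1:p0; b3 -> node2:p0; b4 must be fetched from common storage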
In some embodiments, buckets may further be assigned to partitions to assist with time ordering of search results. For example, where a search requests time ordering of results, the query coordinator1004may attempt to assign buckets with overlapping time ranges to the same partition, such that information within the buckets can be sorted at the partition. Where the buckets assigned to different partitions are non-overlapping in time, the query coordinator1004may sort information from different partitions according to an absolute ordering of the buckets processed by the different partitions. That is, if all timestamps in all buckets processed by a first worker node1006occur prior to all timestamps in all buckets processed by a second worker node1006, the query coordinator1004can quickly determine (e.g., without referencing timestamps of information) that all information identified by the first worker node1006in response to a search occurs in time prior to information identified by the second worker node1006in response to the search. Thus, assigning buckets with overlapping time ranges to the same partition can reduce the computing resources needed to time-order results. In still more embodiments, partitions may be assigned based on overlaps of computing resources of the partitions. For example, where a partition is required to retrieve a bucket from common storage1702(e.g., where a locally cached copy of the bucket does not exist on the worker node1006implementing the partition), such retrieval may use a relatively high amount of network bandwidth or disk read/write bandwidth on the worker node1006implementing the partition. Thus, assigning a second partition of the same worker node1006might be expected to strain or exceed the network or disk read/write bandwidth of the worker node1006. For this reason, it may be preferential to assign buckets to partitions such that two partitions within a common worker node1006are not both required to retrieve buckets from the common storage1702. Illustratively, it may be preferential to evenly assign all buckets containing potentially relevant information among the different worker nodes1006used to implement the intake phase1304. For similar reasons, where a given worker node1006has within its local cache two buckets that potentially include relevant information, it may be preferential to assign both such buckets to different partitions implemented by the same worker node1006, such that both buckets can be searched in parallel on the worker node1006by the respective partitions. In some instances, commonality of computing resources between partitions can further be used to determine the partitions to be used to conduct an intake phase1304. For example, the query coordinator1004may preferentially select partitions that are implemented by different worker nodes1006(e.g., in order to maximize network or disk read/write bandwidth) to implement an intake phase1304. However, where a worker node1006has locally cached multiple buckets with information potentially relevant to the search, the query coordinator1004may preferentially select multiple partitions on that worker node1006(e.g., up to a number of partitions equal to the number of potentially-relevant buckets stored at the worker node1006). The above mechanisms for assigning buckets to partitions may be combined based on priorities of each potential outcome. For example, the query coordinator1004may give an initial priority to distributing assigned partitions across a maximum number of different worker nodes1006, but a higher priority to assigning partitions to process buckets with overlapping timestamps.
The query coordinator1004may give yet a higher priority to assigning partitions to process buckets that have been locally cached. The query coordinator1004may still further give higher priority to ensuring that each partition is searching at least one bucket for information responsive to a query at any given time. Thus, the query coordinator1004can dynamically alter the assignment of buckets to partitions to increase the parallelization of a search, and to increase the speed and efficiency with which the search is executed. When searching for information within the common storage1702, the intake phase1304may be carried out according to the bucket-to-partition mapping discussed above, as determined by the query coordinator1004. Specifically, after assigning at least one bucket to each partition to be used during the intake phase1304, each partition may begin to retrieve its assigned bucket. Retrieval may include, for example, downloading the bucket from the common storage1702, or locating a copy of the bucket in a local cache of a worker node1006implementing the partition. Thereafter, each partition may conduct an initial search of the bucket for information responsive to a query. The initial search may include processing that is expected to be disk or network intensive, rather than processing (e.g., CPU) intensive. For example, the initial search may include accessing the bucket, which may include decompressing the bucket from a compressed format, and accessing an index file stored within the bucket. The initial search may further include referencing the index or other information (e.g., metadata within the bucket) to locate one or more portions (e.g., records or individual files) of the bucket that potentially contain information relevant to the search. Thereafter, the search proceeds to the processing phase1306, where the portions of buckets identified during the intake phase1304are searched to locate information responsive to the search. Illustratively, the searching that occurs during the processing phase1306may be predicted to be more processor (e.g., CPU) intensive than that which occurred during the intake phase1304. As such, the number of partitions used to conduct the processing phase1306may vary from that of the intake phase1304. For example, during or after the conclusion of the intake phase1304, each partition implementing that phase1304may communicate to the query coordinator1004information regarding the portions identified as potentially containing information relevant to the query (e.g., the number, size, or formatting of portions, etc.). The query coordinator1004may thereafter determine from that information (e.g., based on interactions with the workload advisor1010) the partitions to be used to conduct the processing phase1306. In other embodiments, the query coordinator1004may select partitions to be used to conduct the processing phase1306prior to implementation of the intake phase1304(e.g., contemporaneously with selecting partitions to conduct the intake phase1304). The partitions selected for conducting the processing phase1306may include one or more partitions that previously conducted the intake phase1304. However, because the processing phase1306may be expected to be more resource intensive than the intake phase1304(e.g., with respect to use of processing cycles), the number of partitions selected for conducting the processing phase1306may exceed the number of partitions that previously conducted the intake phase1304.
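Returning to the bucket-assignment priorities described above, a hypothetical scoring function might combine them as follows; the weights are invented for illustration and encode only the relative ordering of the priorities:

    def overlaps(a, b):
        return a["t_min"] <= b["t_max"] and b["t_min"] <= a["t_max"]

    def score(bucket, partition, assigned, cache):
        s = 0.0
        current = assigned.get(partition, [])
        if not current:
            s += 100.0  # highest priority: keep every partition busy
        if bucket["name"] in cache.get(partition, set()):
            s += 50.0   # next: prefer locally cached buckets
        if any(overlaps(bucket, b) for b in current):
            s += 20.0   # next: co-locate overlapping time ranges for ordering
        s -= len(current)  # lowest priority: spread load across partitions
        return s

    bucket = {"name": "b7", "t_min": 100, "t_max": 199}
    assigned = {"p0": [], "p1": [{"name": "b2", "t_min": 150, "t_max": 250}]}
    cache = {"p1": {"b7"}}
    for p in ("p0", "p1"):
        print(p, score(bucket, p, assigned, cache))
    # p0 scores 100.0 (idle); p1 scores 69.0 (cached + overlapping - load)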
To minimize network communications, the additional partitions selected to conduct the processing phase1306may be preferentially selected to be collocated on a worker node1006with a partition that previously conducted the intake phase1304, such that portions of buckets to be processed by the additional partitions can be received from a partition on that worker node1006, rather than being transmitted across a network. At the processing phase1306, the partitions may parse the portions of buckets located during the intake phase1304in order to identify information relevant to a search. For example, they may parse the portions of buckets (e.g., individual files or records) to identify specific lines or segments that contain values specified within the search, such as one or more error types desired to be located during the search. Where the search is conducted according to map-reduce techniques, the processing phase1306can correspond to implementing a map function. Where the search requires that results be time-ordered, the processing phase1306may further include sorting results at each partition into a time-ordering. The remainder of the search may be executed in phases according to the DAG determined by the query coordinator1004. For example, where the branch of the DAG currently being processed includes a collection node, the search may proceed to a collector phase1308. The collector phase1308may be executed by one or more partitions selected by the query coordinator1004(e.g., based on the information identified during the processing phase1306), and operate to aggregate information identified during the processing phase1306(e.g., according to a reduce function). Where the processing phase1306represents a top-node of a branch of the DAG being executed, the information located by each partition during the processing phase1306may be transmitted to the query coordinator1004, where any additional nodes of the DAG are completed, and search results are transmitted to a data destination1316. These additional phases may be implemented in a similar manner as described above, and they are therefore not discussed in detail with respect to searches against a common storage1702. As will be appreciated in view of the above description, the use of a common storage1702can provide many advantages within the data intake and query system1001. Specifically, use of a common storage1702can enable the system1001to decouple the functionality of data ingestion, as implemented by indexers206, from the functionality of searching, as implemented by partitions of worker nodes1006. Moreover, because buckets containing data are accessible by each worker node1006, a query coordinator1004can dynamically allocate partitions to buckets at the time of a search in order to maximize parallelization. Thus, use of a common storage1702can substantially improve the speed and efficiency of operation of the system1001.
9.0. Ingested Data Buffer Architecture
One embodiment of the system1001that enables worker nodes1006to search not-yet-indexed information is shown inFIG.18. Searching of not-yet-indexed information (e.g., prior to processing of the information by an indexer206) may be beneficial, for example, where information is desired on a continuous or streaming basis.
For example, a client device404amay desire to establish a long-running (e.g., until manually halted) search of data received at the data intake and query system1001, such that the client is quickly notified on occurrence of specific types of information within the data, such as errors within machine records. Thus, it may be desirable to conduct the search against the data as it enters the data intake and query system1001, rather than waiting for the data to be processed by the indexers206and saved into a data store208. The embodiment ofFIG.18is similar to that ofFIG.17, and corresponding elements will not be re-described. However, unlike the embodiment ofFIG.17, the embodiment ofFIG.18includes an ingested data buffer1802. The ingested data buffer1802ofFIG.18operates to receive information obtained by the forwarders204from the data sources202, and make such information available for searching to both indexers206and worker nodes1006. As such, the ingested data buffer1802may represent a computing device or computing system in communication with both the indexers206and the worker nodes1006via a communication network. In one embodiment, the ingested data buffer1802operates according to a publish-subscribe (“pub-sub”) messaging model. For example, each data source202may be represented as one or more “topics” within a pub-sub model, and new information at the data source may be represented as a “message” within the pub-sub model. Elements of the system1001, including indexers206and worker nodes1006(or partitions within worker nodes1006), may subscribe to a topic representing desired information (e.g., information of a particular data source202) to receive messages within the topic. Thus, an element subscribed to a relevant topic will be notified of new data categorized under the topic within the ingested data buffer1802. A variety of implementations of the pub-sub messaging model are known in the art, and may be usable within the ingested data buffer1802. As will be appreciated based on the description below, use of a pub-sub messaging model can provide many benefits to the system1001, including the ability to search data quickly after the data is received at the ingested data buffer1802(relative to waiting for the data to be processed by an indexer206) while maintaining or increasing data resiliency. In embodiments that utilize an ingested data buffer1802, operation of the indexer206may be modified to receive information from the buffer1802. Specifically, each indexer206may be configured to subscribe to one or more topics on the ingested data buffer1802and to thereafter process the information in a manner similar to that described above with respect to other embodiments of the system. After data representing a message has been processed by an indexer206, the indexer206can send an acknowledgement of the message to the ingested data buffer1802. In accordance with the pub-sub messaging model, the ingested data buffer1802can delete a message once acknowledgements have been received from all subscribers (which may include, for example, a single indexer206configured to process the message). Thereafter, operation of the system1001to store the information processed by the indexer206and enable searching of such information is similar to embodiments described above (e.g., with reference toFIGS.10and17, etc.). As discussed above, the ingested data buffer1802is also in communication with the worker nodes1006.
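The pub-sub behavior described above can be illustrated with a minimal in-memory Python sketch; this toy class is a stand-in for the ingested data buffer1802and does not reflect its actual interface:

    from collections import defaultdict

    class ToyIngestedDataBuffer:
        def __init__(self):
            # topic -> list of [message_id, payload, set of pending subscribers]
            self.topics = defaultdict(list)
            self.subscribers = defaultdict(set)
            self._next_id = 0

        def subscribe(self, topic, subscriber):
            self.subscribers[topic].add(subscriber)

        def publish(self, topic, payload):
            self._next_id += 1
            pending = set(self.subscribers[topic])
            self.topics[topic].append([self._next_id, payload, pending])
            return self._next_id

        def acknowledge(self, topic, message_id, subscriber):
            for entry in self.topics[topic]:
                if entry[0] == message_id:
                    entry[2].discard(subscriber)
            # A message is deleted only once all subscribers have acknowledged it.
            self.topics[topic] = [e for e in self.topics[topic] if e[2]]

    buf = ToyIngestedDataBuffer()
    buf.subscribe("source-A", "indexer-1")
    buf.subscribe("source-A", "worker-1")
    mid = buf.publish("source-A", "disk error on host web01")
    buf.acknowledge("source-A", mid, "indexer-1")
    print(len(buf.topics["source-A"]))  # 1: worker-1 has not yet acknowledged
    buf.acknowledge("source-A", mid, "worker-1")
    print(len(buf.topics["source-A"]))  # 0: deleted after all acknowledgements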
As such, the data intake and query system1001can be configured to utilize the worker nodes1006to search data from the ingested data buffer1802directly, rather than waiting for the data to be processed by the indexers206. As discussed above, a query can be received at the search head210, processed at the search process master1002, and passed to a query coordinator1004for execution. The query coordinator1004may generate a DAG corresponding to the query, in order to determine sequences of search phases within the query. The query coordinator1004may further determine based on the query whether any branch of the DAG requires searching of data within the ingested data buffer1802. For example, the query coordinator1004may determine that at least one branch of the query requires searching of data within the ingested data buffer1802by identifying, within the query, a topic of the ingested data buffer1802for searching. It will be assumed for purposes of description that at least one branch of the DAG requires searching of data within the ingested data buffer1802, and as such, description will be provided for execution of such a branch. While interactions are described for executing a single branch of a DAG, these interactions may be repeated (potentially concurrently or in parallel) for each branch of a DAG that requires searching of data within the ingested data buffer1802. As discussed above with reference toFIG.13, executing a search representing a branch of a DAG can include a number of phases, such as an intake phase1304, processing phase1306, and collector phase1308. It is therefore illustrative to discuss execution of a branch of a DAG that requires searching of the ingested data buffer1802with reference to such phases. As also discussed above, each phase may be carried out by a number of partitions, each of which may correspond to a worker node1006(e.g., a specific worker node1006, a processor within the worker node1006, an execution environment within a worker node1006, etc.). Particularly in the case of streaming or continuous searching, different instances of the phases may be carried out at least partly concurrently. For example, the processing phase1306may occur with respect to a first set of information while the intake phase1304occurs with respect to a second set of information, etc. Thus, while the phases will be discussed in sequence below, it should be appreciated that this sequence can occur multiple times with respect to a single query (e.g., as new data enters the system1001), and each sequence may occur at least partially concurrently with one or more other sequences. Moreover, because the ingested data buffer1802can be configured to make messages available to any number of subscribers, the sequence discussed below may occur with respect to multiple different searches, potentially concurrently. Thus, the architecture ofFIG.18provides a highly scalable, highly resilient, high availability architecture for searching information received at the system1001. When a branch requires searching within the ingested data buffer1802, the query coordinator1004can select a partition (e.g., a processor within a worker node1006) at random or according to a load-balancing algorithm to gather metadata regarding the topic specified within the query from the ingested data buffer1802. Metadata regarding a topic may include, for example, a number of message queues within the ingested data buffer1802corresponding to the topic.
Each message queue can represent a collection of messages published to the topic, which may be time-ordered (e.g., according to a time that the message was received at the ingested data buffer1802). In some instances, the ingested data buffer1802may implement a single message queue for a topic. In other instances, the ingested data buffer1802may implement multiple message queues (e.g., across multiple computing devices) to aid in load-balancing operation of the ingested data buffer1802with respect to the topic. The selected partition can determine the number of message queues maintained at the ingested data buffer1802for a topic, and return this information to the query coordinator1004. Thereafter, the query coordinator1004can dynamically assign partitions to conduct an intake phase1304, by retrieving individual message queues of the topic within the ingested data buffer1802. In one embodiment, the query coordinator1004attempts to maximize parallelization of the intake phase1304, by attempting to retrieve messages from the message queues with a number of partitions equal to the number of message queues for the topic maintained at the ingested data buffer1802(e.g., resulting in a one-to-one mapping of message queues in the topic to partitions). However, such parallelization may not be feasible or desirable, for example, where the total number of partitions is less than the number of message queues, where some partitions are processing other queries, or where some partitions should be left in reserve to process other queries. Accordingly, the query coordinator1004may interact with the workload advisor1010to determine a number of partitions that are to be utilized to intake messages from the message queues during the intake phase1304. Illustratively, the query coordinator1004may initially request a one-to-one correspondence between message queues and partitions, and the workload advisor1010may reduce the number of partitions used to read the message queues, resulting in a 2-to-1, 3-to-1, or n-to-1 correspondence between message queues and partitions. Operation of the workload advisor1010is described in more detail above. When a greater than 1-to-1 correspondence exists between queues and partitions (e.g., 2-to-1, 3-to-1, etc.), the message queues may be evenly assigned among the different worker nodes1006used to implement the intake phase1304, to maximize the network or read/write bandwidth available to partitions conducting the intake phase1304. During the intake phase1304, each partition used during the intake phase1304can subscribe to those message queues assigned to the partition. Illustratively, where partitions are assigned in a 1-to-1 correspondence with message queues for a topic in the ingested data buffer1802, each partition may subscribe to one corresponding message queue. Thereafter, in accordance with the pub-sub messaging model, the partition can receive from the ingested data buffer1802messages published within those respective message queues. However, to ensure message resiliency, a partition may decline to acknowledge the messages until such messages have been fully searched, and results of the search have been provided to a data destination (as will be described in more detail below). In some embodiments, a partition may, during the intake phase1304, act as an aggregator of messages published to a respective message queue of the ingested data buffer1802, to define a collection of data to be processed during an instance of the processing phase1306.
For example, the partition may collect messages corresponding to a given time-window (such as a 30 second time window, 1 minute time window, etc.), and bundle the messages together for further processing during a processing phase1306of the search. In one instance, the time window may be set to a duration lower than a typical delay needed for an indexer206to process information from the ingested data buffer1802and place the processed information into a data store208(as, if a time-window greater than this delay were used, a search could instead be conducted against the data stores208). The time window may further be set based on an expected variance between timestamps in received information and the time at which the information is received at the ingested data buffer1802. For example, it is possible the information arrives at the ingested data buffer1802in an out-of-order manner (e.g., such that information with a later timestamp is received prior to information with an earlier timestamp). If the actual delay in receiving out-of-order information (e.g., the delay between when information is actually received and when it should have been received to maintain proper time-ordering) exceeds the time window, it is possible that the delayed information will be processed during a later instance of the processing phase1306(e.g., with a subsequent bundle of messages), and as such, results derived from the delayed information may be delivered out-of-order to a data destination. Thus, a longer time-window can assist in maintaining order of search results. In some instances, the ingested data buffer1802may guarantee time ordering of results within each message queue (though potentially not across message queues), and thus, modification of a time window in order to maintain ordering of results may not be required. In still more embodiments, the time-window may further be set based on computing resources available at the worker nodes1006. For example, a longer time window may reduce computing resources used by a partition, by enabling a larger collection of messages to be processed at a single instance of the processing phase1306. However, the longer time window may also delay how quickly an initial set of results are delivered to a data destination. Thus, the specific time-window may vary across embodiments of the present disclosure. While embodiments are described herein with reference to a collection of messages defined according to a time-window, other embodiments of the present disclosure may utilize additional or alternative collection techniques. For example, a partition may be configured to include no more than a threshold number of messages or a threshold amount of data in a collection, regardless of a time-window for collection. As another example, a partition may be configured during the intake phase1304not to aggregate messages, but rather to pass each message to a processing phase1306immediately or substantially immediately. Thus, embodiments related to time-windowing of messages are illustrative in nature. In some embodiments, the partitions, during the intake phase1304may further conduct coarse filtering on the messages received during a given time-window, in order to identify any messages not relevant to a given query. Illustratively, the coarse filtering may include comparison of metadata regarding the message (e.g., a source, source type, or host related to the message), in order to determine whether the metadata indicates that the message is irrelevant to the query. 
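As a non-limiting sketch, the time-windowing and coarse metadata filtering just described might be expressed as follows in Python; the window length and metadata fields are assumptions made for illustration:

    import itertools

    WINDOW_SECONDS = 30  # assumed time window for bundling messages

    def window_and_coarse_filter(messages, wanted_sourcetype):
        # Coarse filtering inspects only metadata (here, sourcetype), never
        # message content, which is deferred to the processing phase.
        kept = [m for m in messages if m["sourcetype"] == wanted_sourcetype]
        key = lambda m: m["ts"] // WINDOW_SECONDS
        return {w: list(g) for w, g in itertools.groupby(sorted(kept, key=key), key=key)}

    msgs = [{"ts": t, "sourcetype": "syslog", "raw": "msg@%d" % t}
            for t in (1, 12, 31, 45, 70)]
    msgs.append({"ts": 5, "sourcetype": "csv", "raw": "filtered out"})
    for window, batch in window_and_coarse_filter(msgs, "syslog").items():
        print(window, [m["raw"] for m in batch])
    # window 0: ts 1, 12; window 1: ts 31, 45; window 2: ts 70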
Where the metadata indicates that a message is irrelevant to the query, the message may be removed from the collection prior to the search process proceeding to the processing phase1306. In one embodiment, the coarse filtering does not include searching for or processing the actual content of a message, as such processing may be predicted to be relatively computing resource intensive. After generating a collection of messages from a respective message queue, the search can proceed to the processing phase1306, where one or more partitions are utilized to search the messages for information relevant to the search query. Illustratively, the searching that occurs during the processing phase1306may be predicted to be more processor (e.g., CPU) intensive than that which occurred during the intake phase1304. As such, the number of partitions used to conduct the processing phase1306may vary from that of the intake phase1304. For example, during or after the conclusion of the intake phase1304, each partition implementing that phase1304may communicate to the query coordinator1004information regarding the collections of messages received during a given time-window (e.g., the number, size, or formatting of messages, etc.). The query coordinator1004may thereafter determine from that information (e.g., based on interactions with the workload advisor1010) the partitions to be used to conduct the processing phase1306. In other embodiments, the query coordinator1004may select partitions to be used to conduct the processing phase1306prior to implementation of the intake phase1304(e.g., contemporaneously with selecting partitions to conduct the intake phase1304). The partitions selected for conducting the processing phase1306may include one or more partitions that previously conducted the intake phase1304. However, because the processing phase1306may be expected to be more resource intensive than the intake phase1304(e.g., with respect to use of processing cycles), the number of partitions selected for conducting the processing phase1306may exceed the number of partitions that previously conducted the intake phase1304. To minimize network communications, the additional partitions selected to conduct the processing phase1306may be preferentially selected to be collocated on a worker node1006with a partition that previously conducted the intake phase1304, such that collections of messages to be processed by the additional partitions can be received from a partition on that worker node1006, rather than being transmitted across a network. At the processing phase1306, the partitions may parse the collections of messages generated during the intake phase1304in order to identify information relevant to a search. For example, they may parse individual messages to identify specific lines or segments that contain values specified within the search, such as one or more error types desired to be located during the search. Where the search is conducted according to map-reduce techniques, the processing phase1306can correspond to implementing a map function. Where the search requires that results be time-ordered, the processing phase1306may further include sorting results at each partition into a time-ordering. The remainder of the search may be executed in phases according to the DAG determined by the query coordinator1004. For example, where the branch of the DAG currently being processed includes a collection node, the search may proceed to a collector phase1308.
The collector phase1308may be executed by one or more partitions selected by the query coordinator1004(e.g., based on the information identified during the processing phase1306), and operate to aggregate information identified during the processing phase1306(e.g., according to a reduce function). Where the processing phase1306represents a top-node of a branch of the DAG being executed, the information located by each partition during the processing phase1306may be transmitted to the query coordinator1004, where any additional nodes of the DAG are completed, and search results are transmitted to a data destination1316. These additional phases may be implemented in a similar manner as described above, and they are therefore not discussed in detail with respect to searches against the ingested data buffer1802. Subsequent to these phases, a set of search results corresponding to each collection of messages (e.g., as received during a time-window) may be transmitted to a data destination. On transmission of such information (and potentially verification of arrival of such information at the data destination), the search head210may cause an acknowledgement of each message within the collection to be transmitted to the ingested data buffer1802. For example, the search head210may notify the query coordinator1004that search results for a particular set of information (e.g., information corresponding to a range of timestamps representing a given time window) have been transmitted to a data destination. The query coordinator1004can thereafter notify the partitions used to ingest the messages making up the set of information that the search results have been transmitted. The partitions can then acknowledge to the ingested data buffer1802receipt of the messages. In accordance with the pub-sub messaging model, the ingested data buffer1802may then delete the messages after acknowledgement by subscribing parties. By delaying acknowledgement of messages until after search results based on such messages are transmitted to (or acknowledged by) a data destination, resiliency of such search results can be improved or potentially guaranteed. For example, in the instance that an error occurs between receiving a message from the ingested data buffer1802and search results based on that message being passed to a data destination (e.g., a worker node1006fails, causing a copy of the message maintained at the worker node1006to be lost), the query coordinator1004can detect the failure (e.g., based on heartbeat information from a worker node1006), and cause the worker node1006to be restarted, or a new worker node1006to replace the failed worker node1006. Because the message has not yet been acknowledged to the ingested data buffer1802, the message is expected to still exist within a message queue of the ingested data buffer1802, and thus, the restarted or new worker node1006can retrieve and process the message as described above. Thus, by delaying acknowledgement of a message, failures of worker nodes1006during the process described above can be expected not to result in data loss within the data intake and query system1001.
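Reusing the toy buffer class from the earlier pub-sub sketch, the deferred-acknowledgement flow described above might look as follows; the search step and data destination are placeholders invented for illustration:

    def process_with_deferred_ack(buffer, topic, message_id, payload,
                                  destination, subscriber):
        # Search the message content (here, a trivial substring match).
        results = [line for line in payload.splitlines() if "error" in line]
        # Deliver results to the data destination first ...
        destination.extend(results)
        # ... and only then acknowledge, so a worker failure before this
        # point leaves the message in the buffer for a replacement worker.
        buffer.acknowledge(topic, message_id, subscriber)

    destination = []
    buf2 = ToyIngestedDataBuffer()
    buf2.subscribe("source-A", "worker-1")
    payload = "ok line\ndisk error on web01"
    mid2 = buf2.publish("source-A", payload)
    process_with_deferred_ack(buf2, "source-A", mid2, payload,
                              destination, "worker-1")
    print(destination, len(buf2.topics["source-A"]))  # ['disk error on web01'] 0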
As an example of such annotation, search results may in some instances be represented by codes or other machine-readable information, rather than in an easy-to-comprehend format (e.g., as error codes, rather than textual descriptions of what such a code represents). Thus, the embodiment ofFIG.18may enable a client to define a long-running search that locates codes within messages of the ingested data buffer1802(e.g., via regular expression or other pattern matching criteria), correlates the codes to a corresponding textual description (e.g., via a mapping stored in common storage1702), annotates or modifies the messages to include relevant textual descriptions for any code appearing within the message, and re-publishes the messages to the ingested data buffer1802. In this manner, the information maintained at the ingested data buffer1802may be readily annotated or transformed by searches executed at the system1001. Any number of types of processing or transformation may be applied to information of the ingested data buffer1802to produce search results, and any of such search results may be republished to the ingested data buffer1802, such that the search results are themselves made available for searching. As will be appreciated in view of the above description, the use of an ingested data buffer1802can provide many advantages within the data intake and query system1001. Specifically, use of an ingested data buffer1802can enable the system1001to utilize worker nodes1006to search not-yet-indexed information, thus decoupling searching of such information from the functionality of data ingestion, as implemented by indexers206. Moreover, because the ingested data buffer1802can make messages available to both indexers206and worker nodes1006, searching of not-yet-indexed information by worker nodes1006can be expected not to detrimentally affect the operation of the indexers206. Still further, because the ingested data buffer1802can operate according to a pub-sub messaging model, the system1001may utilize selective acknowledgement of messages (e.g., after indexing by an indexer206and after delivery of search results based on a message to a data destination) to increase resiliency of the data on the data intake and query system1001. Thus, use of an ingested data buffer1802can substantially improve the speed, efficiency, and reliability of operation of the system1001. As described in greater detail in U.S. patent application Ser. No. 15/665,159, entitled “MULTI-LAYER PARTITION ALLOCATION FOR QUERY EXECUTION”, filed on Jul. 31, 2017, and which is hereby incorporated by reference in its entirety for all purposes, the various architectures of the system described herein can be used to define query processing schemes and execute queries based on workload monitoring, process and execute queries corresponding to data in external data sources, common storage, ingested data buffers, acceleration data stores, indexers, etc., allocate partitions based on identified dataset sources or dataset destinations, dynamically generate subqueries for external data sources or indexers, serialize/deserialize data for communication, accelerate query processing using the acceleration data store1008, etc.
10.0 Combining Datasets
In some cases, a query can indicate that two or more datasets are to be combined in some fashion, such as by using a join or a union operation. Combining datasets can result in a significantly larger number of data entries than the sum of the datasets to be combined.
Accordingly, when executing a query on large datasets, the system can, in some cases, allocate more partitions for combination or expansion operations than for mapping or reducing operations to avoid partitions having too many data entries. For example, in certain cases, the system can automatically allocate five, ten, or twenty times (or more) as many partitions for combination or expansion operations as for mapping or reducing operations. Mapping operations generally operate on a set of results or partitions. Reducing operations generally reduce a set of results to a smaller set of results, which can result in fewer partitions being used. Combination or expansion operations generally increase a set of results to a larger set of results, which can result in more partitions being used. However, while assigning more partitions can result in smaller partitions overall, some partitions may have a disproportionate number of the data entries from the datasets. For example, if the data entries are partitioned based on a particular field value or field-value pairs, one or more field values or field-value pairs may occur significantly more frequently than others. As such, the partition assigned to store that field-value pair can end up having a disproportionate number of data entries compared to the other partitions. Similarly, a processor core tasked with processing the data entries of that partition may end up having significantly more processing to do as compared to other processor cores. This imbalance can result in a significant delay of the entire set of results until the processor core finishes processing the imbalanced partition. For example, consider the following search, which includes a join command that is to be executed by a distributed system having sixty cores:
“search index=dogfood2007|fields _time, source|fields−_raw|join usetime=f left=L right=R where L._time=R._time [search index=dogfood2007|fields _time, sourcetype|fields−_raw]|stats count”
As indicated in the search command, the index dogfood2007 is being searched for data entries with fields _time and source. The search command also includes an instruction to join two datasets, L and R, based at least in part on the values in the _time field. Supposing that the dogfood2007 index contains approximately 500 million data entries, the above join results in >264 billion data entries. In some scenarios, the distributed system partitions the 500 million data entries based on the field value of each entry that corresponds to the field that is the subject of the join. For example, one core is tasked with processing the data entries that include an identical or similar _time field value. However, such a distribution can result in imbalanced partitions in cases where one or more field values are highly repetitive and/or have high cardinality. Consider an instance in which one of the _time field values in the above search has ˜380,000 repetitions. Since the above search involves a self join (joining different datasets that originated from the same data source or dataset), one of the partitions would contain 380,000 field value repetitions on both sides (from the L and R datasets) that are to be joined. The joining of the two sets of 380,000 field values would result in ˜144 billion results in the partition. The processor core assigned to process the data entries in that partition would be tasked to process the ˜144 billion results, which could result in days of search execution time.
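The magnitude of this imbalance follows from simple arithmetic, as the short sketch below reproduces using the figures quoted above:

    # A self join pairs every left-side entry having a given _time value
    # with every right-side entry having that value, so one hot key explodes:
    repetitions = 380_000                    # entries sharing one _time value
    hot_partition_output = repetitions * repetitions
    print(f"{hot_partition_output:,}")       # 144,400,000,000 (~144 billion)

    # The single core assigned that partition must produce ~144 billion of
    # the >264 billion total join outputs, dominating the query's runtime.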
Thus, although sixty cores are assigned for the above search, fifty-nine of the cores would no longer be utilized after completing the processing on their relatively smaller partitions, while one core would continue to run for many hours to compute the ˜144 billion result output. To improve the distribution of the data entries that have a high number of repetitive field values and high cardinality, the system can determine and apply a seed value to such data entries such that the data entries are distributed between multiple partitions, enabling multiple processor cores to process them in parallel. As the additional partitioning can require additional processing resources, the system can perform a preliminary review to determine whether to implement the multi-partition operation. This additional processing can be completed by a master node, such as a search head or query coordinator, or by one or more of the distributed processor cores executing the search. Furthermore, in some embodiments, the additional processing can be performed during the query processing stage and/or during the query execution stage. In some embodiments, prior to executing the query, the system can determine whether a multi-partition operation is to be used. For example, the system can perform a semantic analysis of the query itself to determine the likelihood of a significantly imbalanced partition. The semantic analysis can include a review of the query command itself. For example, if the system determines that the query command does not include a combination operation, such as a join operation, the system can determine not to implement the multi-partition operation. In some cases, the system can determine not to implement the multi-partition operation if the query command indicates a combination based on a field that was previously used in a reduction operation of the query or is a subset of the fields used in a reduction operation. Further, if the system determines that the query command indicates that the combination is based on a field that was not previously used in a reduction operation or that the operation just prior to the instruction to combine was a combination or expansion operation, then the system can determine that the multi-partition operation may be used. If, following the pre-execution analysis, the system determines that an imbalanced partition is possible, then it can monitor the execution of the query or instruct the distributed processor cores to monitor the execution of the query. During execution of the query, the system can identify the field that is to be used to combine the different datasets, and determine the number of data entries in each dataset that have the same field value for the identified field. Using the number of data entries from each dataset that have the same field value for the field used in the combination, the system can determine whether to implement the multi-partition operation. In some embodiments, if the data entries from each dataset that have the same field value satisfy a data entries quantity threshold, then the system can implement the multi-partition operation. As a non-limiting example, the system can combine the number of data entries in each dataset, such as by multiplying the number of data entries that have the same field value in a first dataset with the number of data entries that have the same field value in a second dataset. If the product exceeds a data entries quantity threshold, then the system can implement the multi-partition operation.
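A minimal sketch of this runtime check follows; the Counter-based counting and the threshold value are illustrative assumptions rather than the system's actual mechanism:

    from collections import Counter

    # Illustrative threshold only; as discussed below, it can be tuned to
    # core count, core speed, and timing preferences.
    DATA_ENTRIES_QUANTITY_THRESHOLD = 5_000_000

    def skewed_field_values(left, right, field,
                            threshold=DATA_ENTRIES_QUANTITY_THRESHOLD):
        """Return field values whose matching-entry counts, multiplied
        across the two datasets, satisfy the quantity threshold."""
        left_counts = Counter(entry[field] for entry in left)
        right_counts = Counter(entry[field] for entry in right)
        return {value for value in left_counts
                if left_counts[value] * right_counts[value] >= threshold}

    # e.g., skewed_field_values(L, R, "_time") would flag a hot _time value
    # with 380_000 entries on each side (product: 144_400_000_000).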
Following the multi-partition operation, the system can perform a similar analysis on each of the partitions involved in the multi-partition operation. If the combination of the data entries from the datasets in a partition exceeds the data entries quantity threshold or the memory usage for the sub-partition exceeds a memory level threshold, the system can implement the multi-partition operation for the affected partition. The system can continue to perform the multi-partition operation until the combination of data entries from the datasets in each partition or sub-partition does not satisfy the data entries quantity threshold and the memory level threshold.
10.1 Multi-Partition Determination
FIG.19is a flow diagram illustrative of an embodiment of a routine1900implemented by the system to process and execute a query. One skilled in the relevant art will appreciate that the elements outlined for routine1900can be implemented by one or more computing devices/components that are associated with the system1000, such as the search head210, search process master1002, query coordinator1004and/or worker nodes1006, or any combination thereof. Thus, the following illustrative embodiment should not be construed as limiting. At block1902, the system receives a query. The system can receive the query in a manner similar to that described above with reference to block3802ofFIG.38. At decision block1904, the system determines whether the query is susceptible to a significant partition imbalance, such as an imbalance that could result in a processor core spending significant amounts of additional time (non-limiting examples: hours or days) processing the partition while other processor cores assigned to the same query have completed processing their partitions. For example, as part of decision block1904, the system can analyze the syntax or semantics of the query to determine whether the query is susceptible to a significant partition imbalance. The system can make this determination in a variety of ways. In some embodiments, the system can determine whether the query is susceptible to a significant partition imbalance based on whether the query includes a reduction operation prior to the combination operation and/or whether the datasets are to be combined using a field that is to be used in a reduction operation prior to the combination operation. Some reduction operations can include, but are not limited to, stats commands, such as countby, count, etc., or mathematical operations, such as mean, median, average, etc. Examples of combination operations can include, but are not limited to, inner join, outer join (left outer, right outer, full outer), union, etc. In certain embodiments, such as when no reduction operation has been performed on the datasets or the field used in the combination operation does not correspond to a field used in a prior reduction operation, the system can determine that the query is susceptible to a significantly imbalanced partition. In certain embodiments, such as when the field used in the combination operation corresponds to a field used in a prior reduction operation (e.g., the field in the combination is the same field or a subset of the fields used in a prior reduction operation), the system can determine that the query is not susceptible to a significantly imbalanced partition.
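The pre-execution semantic analysis can be sketched as a walk over a simplified query plan. The plan representation, operation names, and rule set below are illustrative assumptions only; the substance of the rule, as the _time example that follows also shows, is whether the combination field was previously reduced and whether the immediately preceding operation expanded the data:

    # Simplified query plan: ordered operations, each with a kind and fields.
    plan = [
        {"op": "search", "fields": []},
        {"op": "stats",  "fields": ["_time"]},   # reduction on _time
        {"op": "join",   "fields": ["_time"]},   # combination on _time
    ]

    REDUCTIONS = {"stats", "count", "mean", "median", "average"}
    COMBINATIONS = {"join", "union"}

    def susceptible_to_imbalance(plan):
        reduced_fields = set()
        prior_was_expansion = False
        for step in plan:
            if step["op"] in COMBINATIONS:
                # Susceptible if the combination field was not previously
                # reduced, or if the step just prior expanded the data.
                if prior_was_expansion or not set(step["fields"]) <= reduced_fields:
                    return True
                prior_was_expansion = True   # combinations also expand data
            elif step["op"] in REDUCTIONS:
                reduced_fields |= set(step["fields"])
                prior_was_expansion = False
            else:
                prior_was_expansion = False
        return False

    print(susceptible_to_imbalance(plan))   # False: _time was reduced first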
As a non-limiting example, if the query indicates that two datasets are to be joined based on the field “_time,” the system can determine whether the query includes a reduction operation using the field “_time” that occurs prior to the join. For example, the reduction operation can use the field “_time” alone or in combination with other fields, such that the field “_time” in the join is a subset of the fields used in the reduction operation. In some embodiments, upon determining that the query includes a reduction operation using the field “_time” prior to the join, the system can determine that the query is not susceptible to a significantly imbalanced partition. Conversely, in certain embodiments, upon determining that the query does not include any reduction operations, any reduction operations prior to the join, or a reduction operation prior to the join that uses the field “_time,” the system can determine that the query is susceptible to a significantly imbalanced partition. In some embodiments, the system can determine that the query is susceptible to a significant partition imbalance if one of the datasets includes a combination or expansion operation just prior to the combination operation. In some circumstances, this determination can be made even if the field in the combination operation matches a field in an earlier reduction operation. For example, if the _time field is used to reduce two datasets, an expansion operation is performed on one of the datasets (with or without the _time field), and the datasets are then to be combined based on the _time field (or any other field in some embodiments), the system can determine that the query is susceptible to a significant partition imbalance. In certain embodiments, the system can review an inverted index to determine whether the query is susceptible to a significant partition imbalance. As described herein, inverted indexes can include information about data entries or events that are stored by the system, such as, but not limited to, relevant fields associated with different data entries or events, field-value pairs of various data entries or events, a count of the field-value pairs for data entries or events in different data stores or time series buckets, etc. Accordingly, if the field to be used in the combination operation is included in an inverted index, the system can review the inverted indexes associated with the datasets that are to be combined. For example, the system can review field-value pairs in the relevant inverted indexes and the quantity of each field-value pair. The system can then use the quantity of the field-value pairs to determine whether the query is susceptible to a significant partition imbalance. For example, if the quantity of a given field-value pair in the inverted indexes associated with the datasets satisfies the data entries quantity threshold, the system can determine that the query is susceptible to a significant partition imbalance. In the event that the system determines that the query is susceptible to a significant partition imbalance, the routine moves to block1906and the system monitors the query during execution to determine a number of matching field-value pair data entries in datasets that are to be combined based on the field corresponding to the matching field-value pair data entries. The matching field-value pair data entries can correspond to data entries that have a matching field-value pair (i.e., a combination of a field and a field value for that field).
It will be understood that each dataset can include a large number of matching field-value pair data entries for many different field-value pairs. Furthermore, it will be understood that a single data entry can be a matching field-value pair data entry for different fields and field-value pairs. For example, if a data entry includes the field-value pairs “_time::1:34:00” and “IP_addr::192.168.1.4,” then it can belong to one group of matching field-value pair data entries with the field-value pair “_time::1:34:00” and to another group of matching field-value pair data entries with the field-value pair “IP_addr::192.168.1.4.” Further, although reference is made herein to a data entry including a field-value pair, in some embodiments, the data entry itself may not expressly identify the field. Rather, the data entry may include a field value that corresponds to a field designated by the system. For example, the data entry may include the value “192.168.1.4,” which the system identifies as the field value for an IP address field. As part of monitoring the query during execution, the system can identify the fields that are to be used in a combination operation of the datasets. The system can also identify the number of matching field-value pair data entries corresponding to the identified field in each of the datasets to be combined in the combination operation. The system can determine whether the matching field-value pair data entries in the different datasets satisfy a data entries quantity threshold. At decision block1908, the system determines whether to implement a multi-partition operation for a particular field-value pair. In certain embodiments, the system can determine whether to implement the multi-partition operation based on whether the matching field-value pair data entries in the datasets satisfy a data entries quantity threshold. In some cases, the system can combine the quantity of matching field-value pair data entries from the datasets to determine if the data entries quantity threshold is satisfied. In certain instances, the system can combine the quantity of matching field-value pair data entries by multiplying or adding the number of matching field-value pair data entries from each dataset that is to be combined. In some embodiments, the data entries quantity threshold can be based on the processing power/speed of the individual processing cores, the number of available cores for the query, a timing preference for completing the query, etc. For example, the data entries quantity threshold can be larger for processing cores with more processing power/speed and smaller for processing cores with less processing power/speed. In some cases, the data entries quantity threshold can be larger when fewer cores are used for a particular query or smaller when more cores are used for the particular query. In certain embodiments, the data entries quantity threshold can be smaller for queries that are to be completed in less time than for queries that can be completed in more time. In certain embodiments, the data entries quantity threshold can be one million, five million, ten million, or more data entries. In certain embodiments, the system can use the inverted indexes, as described above, to determine whether to implement a multi-partition operation. Thus, the inverted indexes can be used to determine whether the query is susceptible to a significantly imbalanced partition and/or whether to implement a multi-partition operation for a particular field-value pair.
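Where inverted indexes are available, this determination can be sketched without scanning the raw data. In the sketch below, the dictionaries stand in for field-value pair counts that inverted indexes might supply; the counts are illustrative values chosen to be consistent with the totals in the examples of FIGS.21and23described below (six for time::1, twelve for time::4):

    # Hypothetical field-value pair counts for the join field "time".
    left_index = {"time::1": 2, "time::2": 1, "time::4": 3}   # Dataset 1
    right_index = {"time::1": 3, "time::2": 2, "time::4": 4}  # Dataset 2

    THRESHOLD = 5  # the data entries quantity threshold used in the figures

    skewed = {pair: left_index[pair] * right_index[pair]
              for pair in left_index.keys() & right_index.keys()
              if left_index[pair] * right_index[pair] >= THRESHOLD}
    print(skewed)  # time::1 -> 6 and time::4 -> 12: multi-partition both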
In the event the system determines to implement the multi-partition operation for a particular field-value pair, then the routine moves to block1910and the system implements the multi-partition operation. Some embodiments of the multi-partition operation are described in greater detail below with reference toFIGS.20and22. In some embodiments, the multi-partition operation includes partitioning matching field-value pair data entries from the different datasets into multiple partitions, such that each partition includes a group or subset of the combined matching field-value pair data entries from the different datasets (e.g., each partition can include a group of matching field-value pair data entries from each of the datasets to be combined). By partitioning matching field-value pair data entries, the system can reduce the size of each partition, as well as the amount of time and processing power to process the partition. In certain cases, as part of the multi-partition operation, the system can determine whether to perform a second multi-partition operation on one or more of the partitions formed as a result of the first multi-partition operation. In some cases, the system can determine to perform the second multi-partition operation based on the quantity of data entries in the particular partition, as described above. For example, if the quantity of data entries in the particular partition, or a combination of the data entries, satisfies the data entries quantity threshold, then the system can perform a multi-partition operation on that partition, effectively creating sub-partitions or replacement partitions for that partition. In some embodiments, the system can also perform a second multi-partition operation on one or more of the partitions based on a combined size of the data entries in that partition. For example, if the amount of memory used or required to store the data entries, or the data entries following a combination operation, in a partition satisfies a memory level threshold, then the system can perform a multi-partition operation on that partition. In some cases, the system may have limited amounts of memory that can be used for each processor core. Accordingly, to avoid exceeding that amount, the system can use a memory level threshold. The memory level threshold can correspond to an acceptable amount of memory that the combination of the first subgroup and the second group (discussed in greater detail below with reference to routine2000) can use. The threshold can vary depending on the number of processor cores in use on a system, the total amount of memory on the device, the total amount of available memory, etc. If the amount of memory required to store the combination satisfies or exceeds the threshold, the system can repeat the multi-partition operation until the combination of matching field-value pair data entries in each partition or sub-partition does not satisfy the data entries quantity threshold or the memory level threshold. By performing the multi-partition operation at block1910, the system can avoid a significantly imbalanced partition, thereby reducing the overall runtime of the query. Following the multi-partition operation, the system continues with the query execution, such as by combining, or continuing to combine, the datasets as illustrated in block1912. In some embodiments, the query execution can include combining groups of data entries of the datasets in different partitions that were not part of the multi-partition operation or performing the multi-partition operation for other field-value pairs.
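The repeat-until-balanced behavior of block1910can be sketched as a recursive split. The fanout, the per-entry memory estimate, and both threshold values below are illustrative assumptions only:

    ENTRY_THRESHOLD = 5           # data entries quantity threshold (illustrative)
    MEMORY_THRESHOLD = 1 << 20    # memory level threshold in bytes (illustrative)

    def memory_use(left, right):
        # Crude stand-in: assume ~64 bytes per joined output entry.
        return len(left) * len(right) * 64

    def multi_partition(left, right, fanout=2):
        """Recursively split a pair of matching-entry groups until no
        resulting partition satisfies either threshold."""
        under = (len(left) * len(right) < ENTRY_THRESHOLD
                 and memory_use(left, right) < MEMORY_THRESHOLD)
        if under or len(left) <= 1:
            return [(left, right)]
        parts = []
        for i in range(fanout):           # the larger group is partitioned;
            sub = left[i::fanout]         # the smaller group is replicated
            parts.extend(multi_partition(sub, right, fanout))
        return parts

    parts = multi_partition(list(range(6)), ["R1", "R2", "R3"])
    print(len(parts))                     # 6 balanced partitions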
In addition, as shown inFIG.19, in the event that the system determines that the query is not susceptible to a significant partition imbalance or determines not to implement the multi-partition operation for a given field-value pair or combination operation, the system moves to block1912and continues query execution without performing the multi-partition operation for that particular field-value pair or combination operation, respectively. It will be understood that fewer, more, or different blocks can be used as part of the routine1900. In some cases, one or more blocks can be omitted. For example, decision blocks1904and1908, and the corresponding no decision paths, can be replaced with a “determine that query is susceptible to an imbalance” block and a “determine to implement multi-partition operation” block, respectively. As yet another example, for a combination operation, the system can analyze each field-value pair of the datasets that are to be combined to determine whether the multi-partition operation is to be performed. Further, if the query includes multiple combination operations, the system can analyze each combination operation to determine whether to perform the multi-partition operation for relevant data entries of the datasets. Accordingly, during execution of a query, the multi-partition operation may not be performed at all or may be performed one or more times for a single query, one or more times for a single combination operation (e.g., for different field-value pairs), or one or more times for a single field-value pair. Furthermore, it will be understood that the various blocks described herein with reference toFIG.19can be implemented in a variety of orders. In some cases, the system can implement some blocks concurrently or change the order as desired. For example, the system can continue with a query execution for some field-value pairs, while concurrently executing the multi-partition operation for other field-value pairs as desired. In addition, it will be understood that any of the blocks described herein with reference to routine1900can be combined with any of routines2000or2200, or be combined with or form part of routines1500,1600. For example, in some cases, the decision block1904, or a similar block, can form part of processing a query, as described in greater detail with reference to block1504ofFIG.15. In certain embodiments, the instructions to monitor the query can be generated as part of the query processing scheme, as described in greater detail with reference to block1610ofFIG.16, and included in the DAG communicated to the worker nodes1006, as described above.
10.2 Multi-Partition Operation
As described above, in some instances a query includes instructions to combine multiple datasets based on one or more fields. Each dataset may include multiple field-value pairs that correspond to the one or more fields used to combine the datasets. In some cases, these field-value pairs can be used to assign matching field-value pair data entries of the datasets to different partitions. For example, matching field-value pair data entries from the different datasets can be assigned to the same partition. However, as there may exist a large number of matching field-value-pair data entries assigned to the same partition, the system can determine that at least some of the matching field-value pair data entries should be further partitioned. In such cases, the system can allocate the data that was to be assigned to the single partition to multiple partitions.
Accordingly,FIG.20is a flow diagram illustrative of an embodiment of a multi-partition routine2000implemented by the system on matching field-value pair data entries. In some embodiments, the multi-partition routine2000can correspond to the multi-partition operation referenced in block1910ofFIG.19. One skilled in the relevant art will appreciate that the elements outlined for routine2000can be implemented by one or more computing devices/components that are associated with the system1000, such as the search head210, search process master1002, query coordinator1004and/or worker nodes1006, or any combination thereof. Thus, the following illustrative embodiment should not be construed as limiting. At block2002, the system identifies a first group of data entries from a first dataset and a second group of data entries from a second dataset. The first group of data entries can correspond to data entries of the first dataset that have a field-value pair that corresponds to a field that is being used to combine the first dataset with a second dataset. Similarly, the second group of data entries can correspond to data entries of the second dataset that have a field-value pair that matches the field-value pair of the data entries of the first group. As mentioned above, the datasets that are to be combined can correspond to the same original data source or dataset that may have been processed differently or can correspond to different original data sources or datasets. At block2004, the system assigns data entries of the first group to a plurality of partitions. The allocated partitions can correspond to partitions that are being used to process other matching field-value pair data entries or other data entries of the datasets, or they can correspond to separate partitions that are used to process just the subgroups of the first group. In some embodiments, the system assigns each data entry of the first group to one of the allocated partitions. In certain embodiments, the system assigns the data entries of the first group to a partition in a random or pseudo-random fashion. By randomly assigning the data entries to the different partitions, the system can reduce overhead as compared to sequentially assigning the data entries to the different partitions. However, it will be understood that the system can sequentially assign the data entries of the first group to the different partitions, or use other mechanisms to assign the data entries of the first group to the different partitions as desired. Once the system assigns the data entries of the first group to the plurality of partitions, each partition can include a first subgroup of data entries that correspond to a subset of the first group. In some embodiments, the system can calculate a seed value, and use the seed value to partition the data entries of the first group. In some embodiments, the seed value can be determined based on the first group of data entries and the second group of data entries from the second dataset. In certain cases, the system can calculate the seed value based on the number of data entries in the first group of data entries and the number of data entries in the second group of data entries. Furthermore, in the event more than two datasets are to be combined, the system can use the number of data entries in the additional datasets to determine the seed value. In certain embodiments, to calculate the seed value, the system uses a data entries quantity threshold.
For example, the seed value can be determined by combining the number of data entries in the first group with the number of data entries in the second group (e.g., multiplying them) and dividing the result by the data entries quantity threshold. In certain embodiments, following the division, the system can round up the quotient to determine the seed value. In some embodiments, the seed value can be calculated as:
seed value = ceiling((entries in group 1 * entries in group 2) / (data entries quantity threshold))
However, it will be understood that the seed value can be determined in a number of ways to reduce the number of entries in a partition so as to not satisfy the data entries quantity threshold. In some cases, the system can use the seed value to allocate partitions and to assign the data entries of the first group to the different partitions. For example, the system can allocate a number of partitions equal to the seed value. In addition, the system can randomly or sequentially assign the data entries of the first group to the different partitions using the seed value, for example, by taking a randomly generated number modulo the seed value and using the result to assign a particular data entry to a partition. At block2006, for at least one partition, the system combines the data entries of the first subgroup of the first group with the data entries of the second group. In certain embodiments, the system performs block2006for all partitions. The system can access the entries of the second group for combination with the data entries of the subgroup of the first group in a variety of ways. In some embodiments, the system uses multiple processors or nodes to combine the data entries in the different partitions. In certain embodiments, a distinct processor is used to combine the data entries for each partition. In some embodiments, a copy of each data entry of the second group can be assigned to each partition. For example, the system can make a copy of each entry of the second group and assign it to each allocated partition. In certain embodiments, each core processing a partition can read a memory location that stores the data entries of the second group. The system can use the data read from the relevant memory location to combine the first subgroup with the second group. In certain embodiments, the system can assign the second group to the allocated partitions, similar to the manner in which the data entries of the first group are assigned, such that each partition includes a second subgroup of data entries that correspond to the second group. For each partition, the system can generate a copy of each data entry of the second subgroup, reassign the copies (or the original) to the other partitions, and reform the second subgroup to include the data entries assigned from other partitions. The system can then combine the reformed second subgroup with the first subgroup. The combination of the first subgroup with the second group can result in a larger number of data entries than the sum of the first subgroup and second group. In some cases, the resultant number of data entries can correspond to the product of the number of entries in the first subgroup and the number of entries in the second group. In certain embodiments, the system can combine the first subgroup with the second group based on the matching field value. For example, each data entry in the combined subgroup can correspond to a unique combination of a data entry of the first subgroup and a data entry of the second group.
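A minimal sketch of the seed-value computation and the partition assignment it drives is shown below; the function names are illustrative, and the final two lines check the arithmetic against the worked examples of FIGS.21and23described below:

    import math
    import random

    def seed_value(n_first, n_second, threshold):
        # seed value = ceiling((entries in group 1 * entries in group 2)
        #                      / (data entries quantity threshold))
        return math.ceil((n_first * n_second) / threshold)

    def assign_first_group(first_group, n_second, threshold):
        seed = seed_value(len(first_group), n_second, threshold)
        partitions = [[] for _ in range(seed)]   # allocate seed-many partitions
        for entry in first_group:
            # A randomly generated number modulo the seed value selects
            # the partition for each data entry of the first group.
            partitions[random.randrange(2**31) % seed].append(entry)
        return partitions

    print(seed_value(2, 3, 5))   # FIG. 21: 2 * 3 = 6 entries, threshold 5 -> 2
    print(seed_value(3, 4, 5))   # FIG. 23: 3 * 4 = 12 entries, threshold 5 -> 3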
It will be understood that fewer, more, or different blocks can be used as part of the routine2000. In addition, it will be understood that any of the blocks described herein with reference to routine2000can be combined with any of routines1900or2200. In some cases, one or more blocks can be omitted or repeated. For example, the system can determine that the second group is smaller than the first group and assign the first group to the different partitions based on that determination. As another example, the system can perform additional operations on the data entries of the combined subgroup or the combined first and second dataset. As described herein, the instruction to combine the datasets can be part of a combination operation that is only one operation of a query. Accordingly, following the combination operation, the system can perform additional operations on the data entries that resulted from the combination operation. As yet another example, in addition to combining the subgroups and groups described above, the routine2000can also combine other groups of the datasets and/or perform other tasks to complete the execution of the query. As described above, in some cases, the system does not perform a multi-partition operation for all data entries. Thus, routine2000can further include the system combining the first and second datasets, with blocks2002,2004, and2006being performed on a subset of the data entries of the datasets. As a non-limiting example, the system can partition datasets using matching field-value pairs that correspond to a field that is used to combine the datasets. Further, the system can perform the routine2000on a subset of the partitions, such as the partitions that include matching field-value pair data entries that, when combined, satisfy the data entries quantity threshold. In some cases, the system only performs the routine2000on the partitions that include matching field-value pair data entries that, when combined, satisfy the data entries quantity threshold. As described above, the routine2000can be repeated multiple times for a particular field-value pair (e.g., in the event the combination of events of the subgroup of the first group and the second group satisfies a data entries quantity threshold) or for a particular combination operation (e.g., for multiple field-value pairs in the datasets to be combined). In addition, if the query includes multiple combination operations, the system can repeat the routine2000for one or more field-value pairs in each combination operation. Furthermore, it will be understood that the various blocks described herein with reference toFIG.20can be implemented in a variety of orders. In some cases, the system can assign data entries of the first group to the different partitions while concurrently combining the data entries of the subgroup of the first group with the data entries of the second group. In addition, the system can concurrently implement the routine2000for multiple field-value pairs as part of a combination operation. Similarly, the system can implement the routine2000for one or more field-value pairs, while concurrently combining other field-value pairs without routine2000. In some embodiments, when combining multiple datasets as part of a combination operation, the system can determine that data entries assigned to one partition are to be assigned to multiple partitions, while data entries assigned to a second partition are not to be assigned to multiple partitions.
For example, the data entries assigned to the second partition may not satisfy the data entries quantity threshold. As such, the multi-partition operation may not be used for that partition. However, the system can combine the data entries in the second partition, while concurrently using the routine2000to combine the data entries of the first partition.
FIG.21is a diagram illustrating an embodiment of a join operation performed on two datasets. It will be understood that although the datasets in the illustrated example are relatively small, the datasets used by the system can be significantly larger and include millions or even billions of data entries. Accordingly, the illustrated example should not be construed as limiting. In the illustrated embodiment, Dataset 1 and Dataset 2, illustrated at2102, are to be joined based on the field time. For purposes of this example, the data entries quantity threshold is five. In addition, in the illustrated example, the data entries of Dataset 1 include field values for the fields time and source and the data entries in Dataset 2 include field values for the fields time and source type. It will be understood that the illustrated data entries are examples only and should not be construed as limiting. As described in greater detail above with reference to block1906ofFIG.19, the system can monitor the number of field-value pairs in each dataset that correspond to the field being used in the join operation. As part of the monitoring, the system can determine whether the matching field-value pair data entries in the different datasets satisfy the data entries quantity threshold. When the system analyzes the field-value pair time::1, it determines that the combination of the matching field-value pair entries for Dataset 1 and Dataset 2 is six, which satisfies the data entries quantity threshold of five. Accordingly, the system can proceed to implement a multi-partition operation on the data entries with a field-value pair of time::1. In this example, the system determines a seed value of two based on the quantity of matching field-value pair data entries in each dataset and the data entries quantity threshold. Using the seed value, the system randomly assigns a seed to each matching field-value pair data entry in Dataset 2 as illustrated by the seeded Dataset 2 at2104. In some cases, Dataset 2 can be selected for seeding based on a determination that Dataset 1 has fewer matching field-value pair data entries than Dataset 2. However, it will be understood that Dataset 2 can be selected for seeding in a variety of ways, such as randomly, because it has more matching field-value pair data entries than Dataset 1, etc. Using the seeding, the system allocates the matching field-value pair data entries of Dataset 2 to Partition 1 and Partition 2 as illustrated at2106. In some cases, the number of partitions used corresponds to the seed value. For example, as the seed value is two in this example, the system uses two partitions and allocates the matching field-value pair data entries of Dataset 2 based on the number of partitions. In addition, as illustrated, in some embodiments, the partitions maintain a separation between the data from Dataset 1 and Dataset 2, or otherwise identify the matching field-value pair data entries based on the dataset from which they came. As illustrated at2108, the system makes the matching field-value pair data entries of Dataset 1 available to each of the partitions.
In some cases, this can be done by copying the matching field-value pair data entries of Dataset 1 to each partition, enabling the processor cores that process the different partitions to access the matching field-value pair data entries in a read-only fashion, or partitioning, duplicating, and repartitioning the matching field-value pair data entries of Dataset 1 as described in greater detail below with reference toFIG.22. As illustrated at2110, in each partition, the system joins the matching field-value pair data entries from the different datasets. Although not illustrated in this example, it will be understood that the join of the matching field-value pair data entries in Partition 1 and Partition 2 can occur before, after, or concurrently with each other and/or with the join performed on the other data entries of the datasets. For example, in addition to Partition 1 and Partition 2, used to join the matching field-value pair data entries for time 1, an additional one or more partitions can be concurrently used to join the matching field-value pair data entries for time 2, 3, and 4. As illustrated, given that the combination of matching field-value pair data entries for time 4 satisfies the data entries quantity threshold, the system can generate multiple partitions to process the matching field-value pair data entries for time 4. One example of partitioning the matching field-value pair data entries for time 4 is described below with reference toFIG.23. In some embodiments, the seeds used to assign the different data entries to the different partitions (e.g., 0.1 and 0.2) can remain with the data entries. In this way, the system can track the different subgroups of the first group. In certain embodiments, the seeds can be removed following the combination operation, and/or as part of or after a subsequent operation. In some cases, the seeds can remain until a reduction operation is performed using the data entries.
FIG.22is a flow diagram illustrative of an embodiment of a multi-partition routine2200implemented by the system to partition matching field-value pair data entries. One skilled in the relevant art will appreciate that the elements outlined for routine2200can be implemented by one or more computing devices/components that are associated with the system1000, such as the search head210, search process master1002, query coordinator1004and/or worker nodes1006, or any combination thereof. Thus, the following illustrative embodiment should not be construed as limiting. At block2202, the system identifies a first group and a second group associated with the multi-partition operation. As mentioned, the first group of data entries can correspond to data entries of the first dataset that have a field-value pair that corresponds to a field that is being used to combine the first dataset with a second dataset. Similarly, the second group of data entries can correspond to data entries of the second dataset that have a field-value pair that matches the field-value pair of the data entries of the first group. At block2204, the system identifies the first group as the partitioning group. In some embodiments, the system identifies the first group as the partitioning group based on a determination that the second group of data entries has fewer data entries than the first group of data entries. However, it will be understood that the system can identify the partitioning group in a variety of ways.
In some cases, the system can identify the first group as the partitioning group based on a determination that it is the same size as or larger than the second group or based on a default setting. In some embodiments, the system determines the quantity of the first group and the second group using a stats command or other command that provides a count of the number of data entries in the first group and the second group. In some embodiments, the command can be executed as a background process and without the knowledge of the user. Using the data, the system can determine that the second group has fewer data entries than the first group. At block2206, the system assigns each data entry of the first group and each data entry of the second group to one of a plurality of partitions. The assignment of the data entries from the different groups can be accomplished similar to the manner described above. For example, the system can calculate a seed value as described in greater detail above. Further, the system can use the seed value to allocate partitions and/or assign the data entries of the first and second groups to the partitions. Once the first and second groups have been partitioned and the partitions include the first and second subgroups of the first and second groups, the system can perform blocks2208,2210,2212, and2214for at least one partition. However, in certain embodiments, the system performs blocks2208,2210,2212, and2214on each partition. In addition, in some embodiments, the system can use one or more processors to perform blocks2208,2210,2212, and2214on the different partitions. In some cases, a distinct processor can be assigned to perform blocks2208,2210,2212, and2214on each partition. At block2208, the system duplicates the second subgroup. In certain cases, the system duplicates the second subgroup based on the identification of the first group as the partitioning group. In some cases, the system can duplicate each data entry of the second subgroup based on the number of partitions that hold a subgroup of the second group. For example, if seven partitions hold a subgroup of the second group, the system can generate six duplicates for each data entry of the second subgroup such that a total of seven identical data entries exist. At block2210, the system reassigns the data entries of the second subgroup, or assigns duplicates of the second subgroup, to the other partitions. In some cases, the system reassigns the data entries so that each partition includes the data entries corresponding to the second group. In some cases, as the system generates the duplicates for each data entry it can also assign it to a partition. In certain embodiments, the duplicates can be assigned in a sequential manner such that for partition 1, the first duplicate of a data entry is assigned to partition 2, the second duplicate of a data entry is assigned to partition 3, and so on. However, it will be understood that the data entries can be assigned in any manner as desired. For example, in some cases, all of the original data entries of the second group can be assigned to one partition, all of the first duplicates in each partition can be assigned to a second partition, and so on. At block2212, the system reforms the second subgroup to include one or more data entries assigned to it from other partitions.
As the system re-assigns data entries, assigns duplicate data entries, or repartitions the second group of data entries so that each partition includes a set of the second group, the second subgroup in each partition can be reformed to include the data entries assigned from other partitions. Further, once the repartitioning is complete, each partition can include a complete set of the second group. Accordingly, in some embodiments, the reformed second subgroup can correspond to, or be the same as, the second group of data entries. At block2214, the system generates a combined subgroup based on the first subgroup and the reformed second group (a consolidated sketch of blocks2202through2214appears following this discussion). As described above, the system can combine the first subgroup and the reformed second group in a variety of ways as desired, or depending on the combination operation to be performed based on the query. In some cases, the number of data entries in the combined subgroup can correspond to the product of the number of data entries in the first subgroup and the number of entries in the reformed second subgroup or second group. In certain embodiments, the system can combine the first subgroup with the second group based on the matching field value such that data entries in the combined subgroup include the field value, at least one value from a data entry in the first subgroup, and at least one value from a data entry in the second group or reformed second subgroup. Furthermore, the system can generate the combined subgroup by generating a data entry for each unique combination of a data entry in the first subgroup with a data entry in the second group or reformed second subgroup. It will be understood that fewer, more, or different blocks can be used as part of the routine2200. In some cases, the routine can include additional blocks for performing additional functions on the partitions that include the combined subgroup. For example, following the combination operation that generates the combined subgroups, the node can perform a reduction operation that results in fewer data entries and/or reduces the number of partitions used to hold the data entries. Similarly, the node can perform an expansion operation that results in more data entries and/or increases the number of partitions used to hold the data entries. In certain cases, the system can retain the seed values assigned to the different data entries following the combination operation, or discard the seed value as part of a subsequent operation. In certain embodiments, the system can discard the seed value assigned to the different data entries, as part of or after a reduction operation. In some cases, one or more blocks can be omitted or repeated. For example, blocks2208,2210, or2212can be combined into a single block. In addition, the system can also combine data entries of partitions that were not subject to the seeding or repartitioning described above. The combination of data entries of partitions that were not subject to the seeding or repartitioning can be done before, after, or concurrently with the blocks of routine2200. In addition, it will be understood that any of the blocks described herein with reference to routine2200can be combined with any of routines1900or2000.
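The blocks of routine2200can be sketched end to end as follows. The random scatter, the cross-product combination, and all names are illustrative assumptions; the cross product merely stands in for whatever combination operation the query specifies:

    import itertools
    import random

    def routine_2200(first_group, second_group, seed):
        # Block 2204: the larger group serves as the partitioning group.
        if len(second_group) > len(first_group):
            first_group, second_group = second_group, first_group

        # Block 2206: scatter both groups across seed-many partitions.
        firsts = [[] for _ in range(seed)]
        seconds = [[] for _ in range(seed)]
        for entry in first_group:
            firsts[random.randrange(seed)].append(entry)
        for entry in second_group:
            seconds[random.randrange(seed)].append(entry)

        # Blocks 2208-2212: duplicate each second subgroup into every other
        # partition; each reformed second subgroup then equals the full
        # second group (shared read-only here for brevity).
        reformed = list(itertools.chain.from_iterable(seconds))

        # Block 2214: each partition pairs its first subgroup with the
        # reformed second subgroup, one entry per unique combination.
        return [list(itertools.product(firsts[i], reformed))
                for i in range(seed)]

    combined = routine_2200(["L1", "L2", "L3"], ["R1", "R2"], seed=2)
    for i, part in enumerate(combined, 1):
        print(f"Partition {i}: {part}")

Across all partitions the sketch emits 3 * 2 = 6 combined entries, the product of the two group sizes, regardless of how the random scatter falls.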
As described above, the routine2200can be repeated multiple times for a particular field-value pair (e.g., in the event the combination of events of the subgroup of the first group and the second group satisfies a data entries quantity threshold) or for a particular combination operation (e.g., for multiple field-value pairs in the datasets to be combined). In addition, if the query includes multiple combination operations, the system can repeat the routine2200for one or more field-value pairs in each combination operation. Furthermore, it will be understood that the various blocks described herein with reference toFIG.22can be implemented in a variety of orders. In some cases, the system can concurrently duplicate the second group, assign duplicate entries to other partitions, and reform the second subgroup.
FIG.23is a diagram illustrating an embodiment of a join operation of Dataset 1 and Dataset 2 described above with reference toFIG.21for the field-value pair time::4. As discussed, Dataset 1 and Dataset 2 are to be joined based on the field time with a data entries quantity threshold of five. When the system analyzes the field-value pairs for time 4, it determines that the combination of the matching field-value pair entries for Dataset 1 and Dataset 2 is twelve, which satisfies the data entries quantity threshold of five. Accordingly, the system can proceed to implement a multi-partition operation on the data entries with a field-value pair of time::4. Based on the quantity of matching field-value pair data entries for time 4 in Datasets 1 and 2 and the data entries quantity threshold, the system determines a seed value of three. In this example, using the seed value, the system randomly assigns each matching field-value pair data entry in Dataset 1 and Dataset 2 to a partition as illustrated at2304, and allocates the matching field-value pair data entries of Dataset 1 and Dataset 2 to Partition 1, Partition 2, or Partition 3 based on the assignment, as illustrated at2306. As discussed above, the number of partitions can correspond to the seed value. Further, as illustrated, in some embodiments, the partitions can retain information indicating the dataset from which each matching field-value pair data entry came. As shown at2308, the system duplicates the matching field-value pair data entries from Dataset 1 in each partition. In some cases, Dataset 1 can be selected for duplication based on a determination that Dataset 1 has fewer matching field-value pair data entries than Dataset 2. However, it will be understood that Dataset 1 can be selected for duplication in a variety of ways, such as by random selection, etc. In the illustrated embodiment, the system also seeds the duplicate matching field-value pair data entries, or duplicate data entries, for assignment to the other partitions. In some cases, the system can sequentially seed the duplicate data entries for assignment to the other partitions. However, it will be understood that the system can allocate the matching field-value pair data entries that correspond to Dataset 1 for assignment to the different partitions in a variety of ways as discussed above. As shown at2310, the system repartitions the matching field-value pair data entries that correspond to Dataset 1 so that each partition includes matching field-value pair data entries that correspond to Dataset 1.
As described above, this can be done by repartitioning duplicate data entries from each partition to another partition or otherwise reassigning the matching field-value pair data entries that correspond to Dataset 1 to the different partitions. In addition, as shown at2310, the system determines that the number of matching field-value pair data entries in Partition 2 satisfies the data entries quantity threshold. Accordingly, the system determines a seed value (2), and assigns the matching field-value pair data entries in Partition 2 to one of two partitions, Partition 2.1 and Partition 2.2, as illustrated at2312. As shown at2314, the system allocates the matching field-value pair data entries in Partition 2 to the Partitions 2.1 and 2.2, and based on a determination that the number of matching field-value pair data entries in the Dataset 2 portion of Partition 2 is less than the number of matching field-value pair data entries in the Dataset 1 portion of Partition 2, the system duplicates the matching field-value pair data entries in the Dataset 2 portion of Partitions 2.1 and 2.2. As shown at2316, the system reallocates the matching field-value pair data entries that correspond to the Dataset 2 portion of Partition 2 such that Partitions 2.1 and 2.2 each include the matching field-value pair data entries that correspond to the Dataset 2 portion of Partition 2. As illustrated at2318, in each of the Partitions 1, 2.1, 2.2, and 3, the system joins the matching field-value pair data entries from the different datasets. Although not illustrated in this example, it will be understood that the join of the matching field-value pair data entries in the Partitions 1, 2.1, 2.2, and 3, can occur before, after, or concurrently with each other and/or with the join performed on the other data entries of the datasets. For example, as discussed above with reference toFIG.21, one or more partitions can be concurrently used to join the matching field-value pair data entries for time 1, 2, and 3. AlthoughFIGS.19-23have been described with reference to the system1000, it will be understood that the concepts described herein can be used in any distributed data processing system where datasets are to be combined in some fashion.
11.0 Hardware Embodiment
FIG.24is a block diagram illustrating a high-level example of a hardware architecture of a computing system in which an embodiment may be implemented. For example, the hardware architecture of a computing system72can be used to implement any one or more of the functional components described herein (e.g., indexer, data intake and query system, search head, data store, server computer system, edge device, etc.). In some embodiments, one or multiple instances of the computing system72can be used to implement the techniques described herein, where multiple such instances can be coupled to each other via one or more networks. The illustrated computing system72includes one or more processing devices74, one or more memory devices76, one or more communication devices78, one or more input/output (I/O) devices80, and one or more mass storage devices82, all coupled to each other through an interconnect84. The interconnect84may be or include one or more conductive traces, buses, point-to-point connections, controllers, adapters, and/or other conventional connection devices.
11.0. Hardware Embodiment

FIG. 24 is a block diagram illustrating a high-level example of a hardware architecture of a computing system in which an embodiment may be implemented. For example, the hardware architecture of a computing system 72 can be used to implement any one or more of the functional components described herein (e.g., indexer, data intake and query system, search head, data store, server computer system, edge device, etc.). In some embodiments, one or multiple instances of the computing system 72 can be used to implement the techniques described herein, where multiple such instances can be coupled to each other via one or more networks. The illustrated computing system 72 includes one or more processing devices 74, one or more memory devices 76, one or more communication devices 78, one or more input/output (I/O) devices 80, and one or more mass storage devices 82, all coupled to each other through an interconnect 84. The interconnect 84 may be or include one or more conductive traces, buses, point-to-point connections, controllers, adapters, and/or other conventional connection devices.

Each of the processing devices 74 controls, at least in part, the overall operation of the computing system 72 and can be or include, for example, one or more general-purpose programmable microprocessors, digital signal processors (DSPs), mobile application processors, microcontrollers, application-specific integrated circuits (ASICs), programmable gate arrays (PGAs), or the like, or a combination of such devices. Each of the memory devices 76 can be or include one or more physical storage devices, which may be in the form of random access memory (RAM), read-only memory (ROM) (which may be erasable and programmable), flash memory, a miniature hard disk drive, or another suitable type of storage device, or a combination of such devices. Each mass storage device 82 can be or include one or more hard drives, digital versatile disks (DVDs), flash memories, or the like. Each memory device 76 and/or mass storage device 82 can store (individually or collectively) data and instructions that configure the processing device(s) 74 to execute operations to implement the techniques described above. Each communication device 78 may be or include, for example, an Ethernet adapter, cable modem, Wi-Fi adapter, cellular transceiver, baseband processor, Bluetooth or Bluetooth Low Energy (BLE) transceiver, or the like, or a combination thereof. Depending on the specific nature and purpose of the processing devices 74, each I/O device 80 can be or include a device such as a display (which may be a touch screen display), audio speaker, keyboard, mouse or other pointing device, microphone, camera, etc. Note, however, that such I/O devices 80 may be unnecessary if the processing device 74 is embodied solely as a server computer. In the case of a client device (e.g., edge device), the communication device(s) 78 can be or include, for example, a cellular telecommunications transceiver (e.g., 3G, LTE/4G, 5G), Wi-Fi transceiver, baseband processor, Bluetooth or BLE transceiver, or the like, or a combination thereof. In the case of a server, the communication device(s) 78 can be or include, for example, any of the aforementioned types of communication devices, a wired Ethernet adapter, cable modem, DSL modem, or the like, or a combination of such devices. A software program or algorithm, when referred to as "implemented in a computer-readable storage medium," includes computer-readable instructions stored in a memory device (e.g., memory device(s) 76). A processor (e.g., processing device(s) 74) is "configured to execute a software program" when at least one value associated with the software program is stored in a register that is readable by the processor. In some embodiments, routines executed to implement the disclosed techniques may be implemented as part of OS software (e.g., MICROSOFT WINDOWS® and LINUX®) or a specific software application, algorithm component, program, object, module, or sequence of instructions referred to as "computer programs."

12.0. Terminology

Computer programs typically comprise one or more instructions set at various times in various memory devices of a computing device, which, when read and executed by at least one processor (e.g., processing device(s) 74), will cause a computing device to execute functions involving the disclosed techniques. In some embodiments, a carrier containing the aforementioned computer program product is provided. The carrier is one of an electronic signal, an optical signal, a radio signal, or a non-transitory computer-readable storage medium (e.g., the memory device(s) 76).
Any or all of the features and functions described above can be combined with each other, except to the extent it may be otherwise stated above or to the extent that any such embodiments may be incompatible by virtue of their function or structure, as will be apparent to persons of ordinary skill in the art. Unless contrary to physical possibility, it is envisioned that (i) the methods/steps described herein may be performed in any sequence and/or in any combination, and (ii) the components of respective embodiments may be combined in any manner.

Although the subject matter has been described in language specific to structural features and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as examples of implementing the claims, and other equivalent features and acts are intended to be within the scope of the claims.

Conditional language, such as, among others, "can," "could," "might," or "may," unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.

Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise," "comprising," and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense, i.e., in the sense of "including, but not limited to." As used herein, the terms "connected," "coupled," or any variant thereof means any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof. Additionally, the words "herein," "above," "below," and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. Where the context permits, words using the singular or plural number may also include the plural or singular number, respectively. The word "or," in reference to a list of two or more items, covers all of the following interpretations of the word: any one of the items in the list, all of the items in the list, and any combination of the items in the list. Likewise, the term "and/or," in reference to a list of two or more items, covers all of the following interpretations of the word: any one of the items in the list, all of the items in the list, and any combination of the items in the list. Conjunctive language such as the phrase "at least one of X, Y and Z," unless specifically stated otherwise, is otherwise understood within the context as used in general to convey that an item, term, etc. may be either X, Y or Z, or any combination thereof. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y and at least one of Z to each be present.
Further, use of the phrase "at least one of X, Y or Z" as used in general is to convey that an item, term, etc. may be either X, Y or Z, or any combination thereof. In some embodiments, certain operations, acts, events, or functions of any of the algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (e.g., not all are necessary for the practice of the algorithms). In certain embodiments, operations, acts, functions, or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially.

Systems and modules described herein may comprise software, firmware, hardware, or any combination(s) of software, firmware, or hardware suitable for the purposes described. Software and other modules may reside and execute on servers, workstations, personal computers, computerized tablets, PDAs, and other computing devices suitable for the purposes described herein. Software and other modules may be accessible via local computer memory, via a network, via a browser, or via other means suitable for the purposes described herein. Data structures described herein may comprise computer files, variables, programming arrays, programming structures, or any electronic information storage schemes or methods, or any combinations thereof, suitable for the purposes described herein. User interface elements described herein may comprise elements from graphical user interfaces, interactive voice response, command line interfaces, and other suitable interfaces.

Further, processing of the various components of the illustrated systems can be distributed across multiple machines, networks, and other computing resources. Two or more components of a system can be combined into fewer components. Various components of the illustrated systems can be implemented in one or more virtual machines, rather than in dedicated computer hardware systems and/or computing devices. Likewise, the data repositories shown can represent physical and/or logical data storage, including, e.g., storage area networks or other distributed storage systems. Moreover, in some embodiments the connections between the components shown represent possible paths of data flow, rather than actual connections between hardware. While some examples of possible connections are shown, any subset of the components shown can communicate with any other subset of components in various implementations.

Embodiments are also described above with reference to flow chart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products. Each block of the flow chart illustrations and/or block diagrams, and combinations of blocks in the flow chart illustrations and/or block diagrams, may be implemented by computer program instructions. Such instructions may be provided to a processor of a general purpose computer, special purpose computer, specially-equipped computer (e.g., comprising a high-performance database server, a graphics subsystem, etc.) or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor(s) of the computer or other programmable data processing apparatus, create means for implementing the acts specified in the flow chart and/or block diagram block or blocks.
These computer program instructions may also be stored in a non-transitory computer-readable memory that can direct a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the acts specified in the flow chart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computing device or other programmable data processing apparatus to cause operations to be performed on the computing device or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computing device or other programmable apparatus provide steps for implementing the acts specified in the flow chart and/or block diagram block or blocks.

Any patents and applications and other references noted above, including any that may be listed in accompanying filing papers, are incorporated herein by reference. Aspects of the invention can be modified, if necessary, to employ the systems, functions, and concepts of the various references described above to provide yet further implementations of the invention. These and other changes can be made to the invention in light of the above Detailed Description. While the above description describes certain examples of the invention, and describes the best mode contemplated, no matter how detailed the above appears in text, the invention can be practiced in many ways. Details of the system may vary considerably in its specific implementation, while still being encompassed by the invention disclosed herein. As noted above, particular terminology used when describing certain features or aspects of the invention should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the invention with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the invention to the specific examples disclosed in the specification, unless the above Detailed Description section explicitly defines such terms. Accordingly, the actual scope of the invention encompasses not only the disclosed examples, but also all equivalent ways of practicing or implementing the invention under the claims.

To reduce the number of claims, certain aspects of the invention are presented below in certain claim forms, but the applicant contemplates other aspects of the invention in any number of claim forms. For example, while only one aspect of the invention is recited as a means-plus-function claim under 35 U.S.C. sec. 112(f) (AIA), other aspects may likewise be embodied as a means-plus-function claim, or in other forms, such as being embodied in a computer-readable medium. Any claims intended to be treated under 35 U.S.C. § 112(f) will begin with the words "means for," but use of the term "for" in any other context is not intended to invoke treatment under 35 U.S.C. § 112(f). Accordingly, the applicant reserves the right to pursue additional claims after filing this application, in either this application or in a continuing application.
DETAILED DESCRIPTION

Data encryption is an integral part of twenty-first century digital infrastructure. For reasons ranging from security breaches to privacy regulations, the ability to secure/protect data from unauthorized access has never been more important. Despite their usefulness in data security, however, known encryption techniques suffer from a variety of implementation issues, such as data access management and the processing and searching of underlying data. In a hyper-digital economy, it is increasingly important to ensure both the protection of data and the ability to extract value from that data. These, however, are often competing interests when using known encryption techniques. One or more of the encrypted search embodiments set forth herein overcome the shortcomings of known encryption techniques by facilitating end-to-end Advanced Encryption Standard (AES) encryption, full data security and state-of-the-art search performance, as discussed further below.

The History of Encrypted Search

The first searchable encryption scheme was proposed in 2000 by Song, Wagner and Perrig, who described the problem of searching over encrypted data using an example involving "Alice" and "Bob." Alice is an individual or entity that wants to store a set of documents on an untrusted server owned by Bob. Using the scheme of Song, et al., Alice is able to encrypt and store her documents in Bob's server, and Bob is then able to determine, with some probability, whether each document contains a specific keyword without learning anything else. Two approaches to encrypted search are proposed by Song et al.: one that involves scanning the document collection, and one that involves an index of keywords. Scanning, however, can take a prohibitively large amount of time for a large dataset, and updating an index can necessitate additional overhead and pose security risks. Since the scheme of Song, et al. was proposed, many others have been constructed. Today's proposals are built on different cryptographic primitives, allowing for different levels of security, query complexity and efficiency. Searchable encryption schemes therefore have the goals of protecting user data, supporting different queries and performing optimally. Optimizing for any one goal typically comes at the expense of another. Therefore, the extent to which these goals are met differs in each scheme, and tradeoffs usually align with a specific set of a user's most immediate needs.

Secure Indexes/Indices

Secure indexes/indices are discussed in Eu-Jin Goh, "Secure Indexes," IACR Cryptology ePrint Archive, April 2004, the entire contents of which are incorporated by reference herein. Goh's secure indexes were proposed as a safer and more computationally efficient alternative to previous propositions for searching through encrypted data (such as the work by Song, et al.). Goh's construction not only improved security against statistical attacks and data leaks, but also had other practical benefits, such as allowing for search over compressed data. Goh defines a secure index as a data structure through which a user can query a collection of documents in O(1) time without leaking information about the index or document itself. In Goh, an adversary cannot learn any new information about any word in a document's index, even if they have access to other index-document pairs. Searching is performed by providing a user with a trapdoor used to query the index. This trapdoor can only be generated using a private key.
Quotient Filters

Goh's secure indexes use Bloom filters and pseudo-random functions. A Bloom filter is a type of quotient filter, including a bit array that represents a set S = (s1, . . . , sn) of n elements, with all bits in the array initially set to 0. When setting up the Bloom filter, r independent hash functions h1(s), . . . , hr(s) are computed on all elements in S. Each hash value returns a number between 1 and the size of the Bloom filter. The corresponding indices of this result are set to 1. In effect, the indices corresponding to the hash values of existing elements will be set to 1, and all other (non-existing) indices will remain 0. When checking whether an element α is in S, h1(α), . . . , hr(α) is computed. If at least one resulting index in the Bloom filter is zero, α is not a member of S; otherwise, it may be. One downside of this approach is that it exhibits a relatively high probability of returning a false positive. Minimizing false positives is possible through the reduction of hash collisions, which can be achieved by enlarging the Bloom filter and using more hash functions. However, this comes at the expense of added complexity in producing the Bloom filter, as well as increasing its size. In addition, in most real-world applications, false positives cannot practically be completely eliminated using quotient filters.

Pseudo-Random Functions

Although quotient filters like the Bloom filters used in Goh's secure indexes are limited in their effectiveness, they can be secured by the use of pseudo-random functions. Pseudo-random functions can be used to generate strings that are computationally indistinguishable from random strings, for example to ensure that no two occurrences of the same word are associated with the same combination of indices on two different filters. When quotient filters and pseudo-random functions are used together, they can provide an efficient way to search encrypted data, with improved data security as compared with the methods that preceded it. Some structured encryption schemes described below leverage this construction, and offer further enhancements to search efficiency and search accuracy.

Structured Encryption

Structured Encryption (STE) is a category of encrypted search methods that refers to the private querying of data that is encrypted in an arbitrary data structure, such as a graph. Searchable Symmetric Encryption (SSE) is another category of encrypted search methods in which a private keyword search is performed over encrypted document collections. A STE scheme will typically accept structured data as an input, and output an encrypted data structure and a sequence of ciphertexts. Similar to Goh's secure indexes, when a query is performed using STE, a private key is used to generate a token, and the token is used to recover pointers to encrypted data. Index-based schemes like SSE, however, though secure, still reveal (or "leak") a certain amount of information. This information can be used by the server or any third-party listener to derive conclusions about the stored encrypted data. Data leaks therefore pose a security risk, especially if the leaked data includes sensitive data.

Homomorphic Encryption

Homomorphic Encryption is a method of encrypting data that allows a user to perform computations on the data without decrypting it. For example, an individual, A, with a private key can decrypt the data.
A third-party, B, who does not have the private key and cannot decrypt the data, can perform operations (such as addition and multiplication) on the data, and retrieve encryptions of the results, without decrypting the data. B, therefore, does not learn anything about the contents of the data, and the data is never made vulnerable. Homomorphic encryption is useful, for example, when working with data that is safeguarded by law, such as medical records, since it facilitates analyses that do not risk privacy. Many homomorphic encryption schemes employ security mechanisms that are based on the Ring-Learning with Errors ("RLWE") computational problem in cryptographic key exchange. The RLWE problem, in combination with the homomorphic encryption scheme, is generally considered to be secure against quantum computers. There are three types of homomorphic encryption: partially homomorphic encryption ("PHE"), somewhat homomorphic encryption ("SHE"), and fully-homomorphic encryption ("FHE"). These types of homomorphic encryption differ primarily in the number of operations they support and the number of times these operations can be performed on the data. FHE is the most robust of the three types of homomorphic encryption, allowing for any function to be performed any number of times. Homomorphic encryption, however, has two limitations: it does not support multiple users, and for complex functions, computations become impractically slow. FHE schemes therefore have a significant computational overhead.

Encrypted Search Challenges

Some known encrypted search solutions (e.g., as discussed in S. Kamara, "Encrypted Search," XRDS: Crossroads, The ACM Magazine for Students, vol. 21, no. 3, 2015, pp. 30-34, doi:10.1145/2730908, the entire contents of which are incorporated by reference herein) are characterized by the tradeoffs they make between security, efficiency and query expressiveness:

1. Fully-homomorphic encryption (FHE) and oblivious RAM (ORAM) are secure and support expressive queries at the expense of efficiency.

2. Searchable encryption schemes built over property-preserving encryption (PPE) (e.g., order-revealing encryption, order-preserving encryption or deterministic encryption) are efficient and allow complex queries, but they are especially vulnerable to statistical attacks.

3. Structured encryption schemes (STE), an example of which is searchable symmetric encryption (SSE), are secure and efficient, but do not support expressive queries.

Search Efficiency

FHE and ORAM are prominent security schemes; however, these schemes have associated computational overheads that make them impractical for working with complex queries and extremely large data sets ("Big Data"), respectively. The first FHE scheme was proposed by Craig Gentry in 2009, is based on ideal lattices, and allows for any arbitrary function or expression that can be "efficiently expressed as a circuit" to be applied to the encrypted data. The permissible functions and expressions include integer circuit values (specifically, addition and multiplication) and Boolean circuit values (specifically, AND and XOR), from which complex queries and operations can be constructed. The result is that FHE can effectively produce results for "complex selection, range, join or aggregation [queries]." FHE schemes can thus support expressive queries; however, some processes associated with FHE schemes, such as bootstrapping, make the scheme slow if the queries are sufficiently complex.
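The homomorphic property discussed above can be made concrete with a toy example. The sketch below implements a deliberately tiny, insecure Paillier-style scheme (partially homomorphic: adding plaintexts by multiplying ciphertexts). All parameters and names are illustrative assumptions; real deployments use moduli thousands of bits long, and this sketch does not correspond to any specific scheme discussed in this disclosure.

#include <stdint.h>
#include <stdio.h>

/* Toy Paillier with tiny primes p = 17, q = 19. Insecure; illustration only. */
static const uint64_t n      = 323;     /* p * q          */
static const uint64_t n2     = 104329;  /* n^2            */
static const uint64_t lambda = 144;     /* lcm(p-1, q-1)  */

static uint64_t modpow(uint64_t b, uint64_t e, uint64_t m) {
    uint64_t r = 1; b %= m;
    for (; e; e >>= 1) { if (e & 1) r = r * b % m; b = b * b % m; }
    return r;
}

static uint64_t inv_mod(uint64_t a, uint64_t m) {  /* extended Euclid */
    int64_t t = 0, newt = 1, r = (int64_t)m, newr = (int64_t)(a % m);
    while (newr) {
        int64_t qt = r / newr, tmp;
        tmp = t - qt * newt; t = newt; newt = tmp;
        tmp = r - qt * newr; r = newr; newr = tmp;
    }
    return (uint64_t)(t < 0 ? t + (int64_t)m : t);
}

/* enc(m, r) = (1+n)^m * r^n mod n^2, with r coprime to n */
static uint64_t enc(uint64_t m, uint64_t r) {
    return modpow(n + 1, m, n2) * modpow(r, n, n2) % n2;
}

static uint64_t dec(uint64_t c) {
    uint64_t u = modpow(c, lambda, n2);
    uint64_t L = (u - 1) / n;           /* L(x) = (x - 1) / n          */
    return L * inv_mod(lambda, n) % n;  /* mu = lambda^(-1) mod n here */
}

int main(void) {
    uint64_t c1 = enc(41, 7), c2 = enc(100, 11);
    uint64_t sum = c1 * c2 % n2;        /* homomorphic addition        */
    printf("dec(c1*c2) = %llu (expect 141)\n",
           (unsigned long long)dec(sum));
    return 0;
}

Here the product of two ciphertexts decrypts to the sum of the underlying plaintexts without the evaluator ever decrypting either input; FHE generalizes this from a single operation to arbitrary circuits, at the cost of the noise growth and bootstrapping discussed next.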
Bootstrapping is used in known FHE schemes because homomorphic operations produce noise, which hinders correct decryption. To reduce this noise, specific circuits can be evaluated that "refresh" a ciphertext and decrease its noise vector—a process that grows, in one respect, with the size of the circuit. This process is referred to as bootstrapping. ORAM simulators, on the other hand, provide security by hiding an algorithm's access pattern. Software security through oblivious RAM was introduced by Rafail Ostrovsky in 1992. In that work, an oblivious machine was defined as a machine for which the sequence of accessed memory locations is the same for any two inputs with the same running time. Since, in ORAM, an oblivious RAM intercepts client-server communication, it can be used with different forms of encryption. For example, ORAM can be done via FHE and SSE. The underlying cryptographic primitives and their corresponding data structures allow for different levels of overhead and query expressiveness. Overall, the use of ORAM is made slow by the computations performed at every fetch or store cycle to communicate between RAM and ORAM interfaces, which are responsible for hiding access patterns. The fastest scheme proposed by Ostrovsky had an O(log^3 t) amortized access cost, where t is the "current length of the access sequence" or the running time of the program simulated. Many schemes proposed since Ostrovsky's work, seeking to improve upon the overhead constraints, have nevertheless been practical only for small to medium collections.

Security & Leakage

Some schemes that are more efficient than ORAM and FHE sacrifice security for the sake of query expressiveness and efficiency. In 2011, for example, PPE was used to support search for a subset of structured query language (SQL) in a system called CryptDB. CryptDB made use of deterministic encryption (DTE) and order-preserving encryption (OPE) to allow for equality, comparison, sum and join queries. CryptDB's performance was reported to have a 14.5%-26% reduction in throughput when compared to MySQL. It has been shown, however, that CryptDB has serious security vulnerabilities. When researchers conducted a series of inference attacks on a database of electronic medical records, they were able to recover several OPE-encrypted attributes, such as age and disease severity, for more than 80% of patients in 95% of the hospitals, and several DTE-encrypted attributes, such as sex and race, for more than 60% of patients in more than 60% of the hospitals. DTE-encryption schemes, because they are constructed to produce the same ciphertext for the same keyword, are liable to attacks that look at the frequency of queries, or frequency analysis attacks. One way to break into, or attack, DTE-encrypted columns of data, assuming the plaintext is strictly ordered (mi ≠ mj for any i ≠ j), is to sort both the plaintext, M, and its corresponding ciphertext, C, and align the frequencies of each element. A similar sorting attack can be used on a dense OPE-encrypted column of data, sorting the ciphertext C and the message M and mapping each c∈C to the equally ranked element in M. Similarly, SSE researchers have experimented with different data structures to expand the set of possible queries on data, while maintaining efficiency. These schemes, however, have not made advances in security. In 2013, for example, an efficient SSE scheme, henceforth referred to as highly-scalable SSE (HS-SSE), was put forth.
It builds on a well-established and well-known SSE scheme proposed in 2006 known as the "inverted index solution" (II-SSE). HS-SSE trades security for efficiency, as previous constructions supporting conjunctive queries were too slow and inflexible for large databases. HS-SSE makes use of an "expanded inverted index" and other data structures, as well as search protocols that make use of Diffie-Hellman elliptic curves, to return pointers to relevant ciphertexts. The search complexity is independent of the size of the database, and a search for a conjunction of keywords scales with the number of documents pertaining to the least frequent keyword in a conjunction. HS-SSE, as the name suggests, can scale with large databases and supports Boolean queries, negations, disjunctions, threshold queries and more on arbitrarily-structured data, as well as free text. The precisely-defined leakage profile includes the total size of the database, access patterns and search patterns or repetitions of queries. Therefore, over time, SSE, although not traditionally known as the scheme accommodating the widest class of expressive queries, has been developed for more practical use. Significant trade-offs persist, however, in that FHE and ORAM prove to be most secure, whereas SSE and PPE-based schemes continue to be vulnerable to their respective extents.

Query Expressiveness

As discussed above, research has been conducted on known schemes that have shown progress in query expressiveness. To review, FHE supports expressive queries built from circuits, and ORAM, via FHE or SSE, can take on the expressiveness of underlying cryptographic primitives. Moreover, PPE can take advantage of properties preserved in encryption to test the ciphertexts for equality (DTE) or comparisons (OPE), which can be used to support large classes of SQL queries on relational databases. Recently, SSE schemes have been constructed to support Boolean, sums, disjunction and conjunction queries, where they formerly only supported single-keyword search. The FHE, ORAM and PPE schemes, however, have drawbacks that make them impractical or unsafe, despite providing ample query operations. STE schemes, such as SSE, on the other hand, are relatively secure and efficient, but different queries are achieved by different schemes. For example, in 2014, another SSE scheme was developed to support range queries, but it did not include the query classes in HS-SSE. The scheme, henceforth referred to as range-SSE (R-SSE), is built on dynamic SSE (D-SSE). D-SSE allows for updates and deletions of elements in a database and is proven to be forward- and backward-secure—security notions to suppress and measure leakage from dynamic operations. R-SSE uses tree-like indexes and is one of the most efficient schemes of its kind, with search having an overhead of O(wq), where wq is the number of keywords within the range query, in client computation; O(nq), where nq is the number of updates that contain the keywords in a range query since initialization, in server computation; and O(DB(q)), where DB(q) is the number of files matching a range query q, in server communication. Relative to II-SSE, which has an O(1) overhead in both communication and server computation, R-SSE has had to make trade-offs to allow for more expressive queries than its SSE predecessors. Finding a scheme that fits any industry or user's needs is, unfortunately, still a question of which trade-offs one is willing to make.
Query expressiveness, which can be an important factor in the usefulness of any searchable encryption scheme, is still an area of active interest and research.

Disclosed Encrypted Search—A Novel, Secure and Efficient Solution

Hash Vectorization (HV) Model

According to some embodiments, the disclosed Encrypted Search (hereinafter "encrypted search") and its underlying compression algorithm, Stealth (hereinafter the "stealth algorithm"), use hash vectorization (HV) models to facilitate secure searching of encrypted data. An HV model is a secure, one-way hash index that is produced as a byproduct of a compression process, for example during the modeling phase of Lempel-Ziv (LZ) parsing. Additional details regarding LZ parsing are set forth below (see "LZ Modeling" section) and can also be found in U.S. provisional patent application No. 63/056,160, filed Jul. 24, 2020 and titled "Double-Pass Lempel-Ziv Data Compression with Automatic Selection of Static Encoding Trees and Prefix Dictionaries," the entire contents of which are incorporated by reference herein. In some embodiments, an HV model includes a hash filter and a chain vector (collectively, a "hash index"). The hash filter is a Boolean quotient filter (e.g., similar to Bloom filters) that allows for the quick elimination of negative query assessments. The hash filter is followed by a chain vector, which provides spatial modeling of hashed elements throughout the compressed data and the encrypted data, facilitating higher levels of accuracy, efficiency, and query expressiveness. In some embodiments, when a file is compressed using the stealth algorithm, the file is divided into chunks (also referred to herein as "data chunks") of a predefined or specified size (e.g., 64 KB each). The first part of LZ-family compression includes modeling the input data (the chunks) to find redundancies and map entropy for compression. This process can be referred to as LZ parsing, and in the context of the stealth algorithm, this process can be referred to as stealth double pass modeling (SDPM). SDPM, as the name suggests, includes two passes. The first pass includes mapping out the entire input buffer by hashing strings of a fixed size (e.g., 4 bytes each), using a hash table to find the last position of each hash (the previous potential redundancy, or the location of the last occurrence of the hash within the input buffer), and placing the last positions in a hash chain (which may be similar to, for example, a Markov chain). By the end of the first pass, a hash chain, having a length that is the same as a length of the input data, is filled/populated with positions of matches, with each position linked to the previous position having the same hash value. In other words, the most recent occurrence of each hash is stored at a given position in the input bytestream, such that every byte has a reference to the previous hash match. Hence, the hash chain has a length that is the same as the input data/input bytestream length. The hash chain is used in the second pass of SDPM to enable compression by finding ideal matches in the data, mapping those ideal matches, and encoding the mapped ideal matches and any remaining bytes during an entropy coding phase. In known data compression environments, hash tables and Markov chains (or equivalents) are used solely for compression modeling, and are discarded after encoding.
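Before turning to how that discarded modeling information is repurposed, the first SDPM pass described above can be sketched as follows. This is a minimal illustration under stated assumptions: the multiplicative hash, table size, and sentinel value are placeholders (the document names a hash4B() function below but does not specify its implementation), not the stealth algorithm's actual code.

#include <stdint.h>
#include <string.h>

#define HASH_BITS 16
#define HASH_SIZE (1u << HASH_BITS)  /* 65,536 hash values, as described above */
#define NO_POS    0xFFFFFFFFu        /* sentinel: hash value not yet seen      */

/* Illustrative 4-byte hash (an assumed implementation of hash4B()). */
uint16_t hash4B(const uint8_t *p) {
    uint32_t v;
    memcpy(&v, p, 4);
    return (uint16_t)((v * 2654435761u) >> (32 - HASH_BITS));
}

/* First SDPM pass: for every position, link it to the previous position that
 * produced the same hash value, and record the most recent position per hash.
 * Positions within 3 bytes of the end of the buffer are left unlinked. */
void build_hash_chain(const uint8_t *in, uint32_t len,
                      uint32_t *hash_table,   /* HASH_SIZE entries */
                      uint32_t *hash_chain) { /* len entries       */
    for (uint32_t h = 0; h < HASH_SIZE; h++) hash_table[h] = NO_POS;
    for (uint32_t pos = 0; pos + 4 <= len; pos++) {
        uint16_t h = hash4B(in + pos);
        hash_chain[pos] = hash_table[h];  /* link to previous occurrence   */
        hash_table[h]   = pos;            /* remember most recent position */
    }
}

After the pass, hash_table[h] holds the most recent position for hash h, and hash_chain links each position to the previous occurrence of the same hash, which is exactly the structure the second pass and the HV model reuse.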
There is a considerable amount of information about the input data in hash tables and Markov chains, however, and that information can be repurposed for search purposes, as discussed in connection with embodiments of HV models set forth herein.

HV Models—Part 1: Hash Filter

As discussed above, in some embodiments, an HV model includes a hash filter and a chain vector. The hash filter is a "broad" (or "coarse") filter that quickly eliminates most negative query candidates (i.e., subsets of data in a data set that are determined not to satisfy the query or not likely to satisfy the query). Query candidates are also referred to herein as "match candidates." By virtue of its need for fast performance, the hash filter is also elegantly designed. Hash filters are a type of quotient filter with Boolean (e.g., true or false) data points about each hash value in the compressed/encrypted data. The first pass of SDPM uses a hash table of a given size (e.g., 65,536 hash values). At the end of SDPM's first pass, when a given position in the hash table contains a value, it can be concluded that the corresponding hash value has occurred somewhere in the input data. Due to the entropic nature of hashing, this means that any of the potential byte strings producing that given hash value could have occurred in the hash filter, an uncertainty that could potentially lead to false positives (i.e., hash collisions). In some embodiments, one bit (0/1) is assigned to each hash value at the end of the first SDPM pass, to produce a hash filter. The size of this hash filter, in bits, will be equal to the size of the hash table (for example, a 65,536-hash value sized hash table will produce a hash filter of 65,536 bits, or 8,192 bytes). This size can be reduced significantly (as discussed below, in the "Optimizing HV Models" section). When performing a search, hash filters can be used as a first test to eliminate most negative candidates, by hashing the search pattern or keyword in the same manner as SDPM's first pass, and assessing the corresponding bits of the hash filter for each computed hash value. If any of the bits are 0, it can be concluded that the search pattern or keyword did not occur in the input data. Otherwise (i.e., if none of the bits are 0), the search pattern or keyword may have occurred in the input data (a minimal sketch of this test follows this section). Confirmation can be performed during the second part of the HV model—the chain vector, discussed further below.

HV Models—Part 2: Chain Vector

According to some embodiments, a second part of the HV model is the chain vector. A chain vector includes a "distilled" copy of the hash chain that is used for search purposes, and that includes a collection of n chains, where n is the number of different hash values occurring in the input data. Instead of containing the exact position of each potential value, the positions are approximated by grouping the positions into buckets (e.g., bucket 0: positions 0-255; bucket 1: positions 256-511, etc.), resulting in a "distilled" copy of the hash chain, which reduces storage space. Chain vectors, like hash filters, can be produced as a byproduct of the SDPM process, and can offer significantly more granular search capability than hash filters, though at the expense of greater computational complexity. As a result, chain vectors may be reserved for query candidates that are not eliminated by hash filters, and as such, applied to a considerably smaller subset of candidates, as compared with an initial set of query candidates processed by the hash filters.
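Before detailing the chain vector, the hash filter test described in Part 1 can be sketched as follows, reusing the hash_table and sentinel from the previous sketch; names and sizes remain illustrative assumptions.

#include <stdint.h>
#include <string.h>

#define HASH_SIZE 65536u
#define NO_POS    0xFFFFFFFFu

/* Derive the hash filter from the populated hash table: one bit per hash value. */
void build_hash_filter(const uint32_t *hash_table,
                       uint8_t *filter) {          /* HASH_SIZE / 8 bytes */
    memset(filter, 0, HASH_SIZE / 8);
    for (uint32_t h = 0; h < HASH_SIZE; h++)
        if (hash_table[h] != NO_POS)               /* hash occurred in the input */
            filter[h >> 3] |= (uint8_t)(1u << (h & 7));
}

/* Query-time test: if any search hash maps to a 0 bit, the pattern cannot
 * occur; otherwise the data remains a match candidate. */
int hash_filter_may_contain(const uint8_t *filter,
                            const uint16_t *search_hashes, int n) {
    for (int i = 0; i < n; i++) {
        uint16_t h = search_hashes[i];
        if (!(filter[h >> 3] & (1u << (h & 7))))
            return 0;  /* 0% chance: eliminate this candidate */
    }
    return 1;          /* possibly present; confirm with the chain vector */
}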
As discussed above, a first pass of SDPM can produce a hash chain, which is a linked list of positions sharing the hash values. In other words, the hash chain links a position of every byte string matching a hash value with a previous byte string having the same hash value. This process effectively creates a "road map" of the input data with respect to the hash value. The hash chain can be used for compression purposes, by "chaining" all possible matching values together and quickly identifying a best match. The spatial linking of byte string positions by hash value is an important step in identifying optimal or near-optimal matches for data compression purposes, but also proves highly efficient for evaluating the presence of complex, multi-hash patterns for queries. This can be achieved by turning the SDPM hash chain into a chain vector that can then be used for encrypted search purposes. In some embodiments, chain vectors describe the locations at which a given hash value occurs in the compressed/encrypted data (i.e., "position data" of the hash value). By hashing the sub-strings of a search pattern (e.g., using any hashing procedure set forth in the "LZ Modeling" section below), multiple hash values can be produced, and the chain vector can be used to determine whether all of these hash values occur in the same region of the data. If all of these hash values do occur in the same region of the data, it can be concluded that there is a statistically significant chance that the queried pattern occurs in the data. If all of these hash values do not occur in the same region of the data, it can be concluded with certainty that the pattern does not occur. In some embodiments, to transform a hash chain into a chain vector, individual chains for each occurring hash value can be extracted from the hash chain. As noted above, these individual chains include n positions, where n represents a number of occurrences of the given hash value in the input data. To avoid saving all positions in the chain vector, which would involve more storage space than the input data itself, the input buffer can be grouped into "buckets" (or "groups"), such that all positions in the hash chain occurring within the range of a given bucket will be identified by that bucket. The process of grouping the input buffer contents into buckets can significantly reduce the number of hash elements that are saved in the chain vector, while also reducing the range of possible positions, thereby significantly reducing the size of the chain vector. The process of grouping the input buffer contents into buckets also has security advantages, in that it can mitigate/prevent the reconstruction of the original data from which the HV Model has been produced. The process of grouping the input buffer contents into buckets can be performed more aggressively (with larger bucket sizes) or less aggressively (with smaller bucket sizes), which will yield different tradeoffs between chain vector size and granularity. In some embodiments, the chain vector contains or enumerates the identifiers for each bucket that contains a given hash. In some embodiments, bucket sizes are customizable, for example depending on a size of the input, a type of data being processed, a desired size of the HV Model and/or a desired size of the filter. A maximum number of buckets per chain, also referred to herein as "chain size," can be set such that every chain can be represented using the same number of bits independently of the number of buckets it represents.
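A minimal sketch of this bucketing step, continuing the earlier illustrative sketches (the 256-byte bucket size and the bitmap layout are assumptions chosen for a 64 KB chunk, not the stealth algorithm's actual parameters):

#include <stdint.h>
#include <string.h>

#define NO_POS      0xFFFFFFFFu
#define BUCKET_SIZE 256u  /* bucket 0: positions 0-255; bucket 1: 256-511; ... */
#define MAX_BUCKETS 256u  /* buckets per 64 KB chunk at this bucket size       */

/* Walk one hash value's chain (hash_table and hash_chain from the first SDPM
 * pass) and record which buckets contain that hash, one bit per bucket. */
void vectorize_chain(uint16_t h,
                     const uint32_t *hash_table, const uint32_t *hash_chain,
                     uint8_t bucket_bits[MAX_BUCKETS / 8]) {
    memset(bucket_bits, 0, MAX_BUCKETS / 8);
    uint32_t pos = hash_table[h];
    while (pos != NO_POS) {              /* follow links to earlier positions */
        uint32_t b = pos / BUCKET_SIZE;  /* approximate position by bucket id */
        bucket_bits[b >> 3] |= (uint8_t)(1u << (b & 7));
        pos = hash_chain[pos];
    }
}

Note how only bucket identifiers survive: exact positions, and the number of occurrences within a bucket, are deliberately discarded, which is what shrinks the structure and frustrates reconstruction of the input.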
As a result, the number of occurrences of a hash value cannot be determined/inferred based on the chain size, and cryptanalysis techniques such as frequency analysis are prevented, thereby improving the security of the chain vector. In other embodiments, to protect the integrity of the chain vector, some or all chains may be individually/independently encrypted, for example with an 80-bit private key and using a 24-round Feistel encryption. Alternatively, the encryption can use another cipher method and key size. The encryption can be applied in the same manner, or in a common manner, across all chains. An N-bit header can be appended to each encoded chain, specifying the encoding method used (e.g., specifying the bucket size used to encode that chain), where N = log2(number of different bucket sizes).

Searching with HV Models

HV models of the present disclosure are constructed with security, query expressiveness, and efficiency in mind. To that end, in some embodiments, performing an HV model search includes three steps: hashing the search pattern, scanning the hash filter, and grouping the chain vector into buckets. Additional steps can be included in the HV model search, for example to expand query expressiveness (e.g., AND, OR, ranges, etc.). The foregoing three steps, however, give a broad overview of the main search procedure. As noted above, in some embodiments, the first step of an HV model search is hashing the search pattern. Unlike known hash-based search methods (e.g., quotient filters), according to some methods set forth herein, an entire search pattern (or "element") is not hashed at once. Rather, the search pattern is divided into substrings that are independently hashed. This improves security by randomizing the hash filter while facilitating powerful querying techniques such as partial matching. In some embodiments, hashing the search pattern includes using a sliding window of a predefined fixed size (e.g., 4 bytes) that is advanced or "slid" across the search pattern, one byte at a time, with a hash performed on each successive substring. The number of hashes produced from a single search pattern can be equal to the difference between the pattern size and the hash size, plus one. For example, a search pattern of 5 bytes with a hash size of 4 bytes will produce 2 hashes (h1[0-3], h2[1-4]). Once the hashing step is completed, hash filter scanning can commence. As discussed above, hash filters include Boolean (true/false) values for each hash value occurring within the compressed encrypted data. The use of hash filters can involve minimal computation during searching, resulting in faster, more efficient performance. For each computed hash value searched, a corresponding/associated hash filter bit is checked (e.g., hash value 6,512 will correspond to the 6,512th bit). Should all corresponding bits be true (1), it can be concluded that there is a significantly high likelihood that the compressed/encrypted data contains the search pattern, and that compressed/encrypted data is flagged for chain vector grouping. Alternatively, should any of the bits be false (0), it can be concluded that there is a 0% chance that the pattern has occurred, effectively eliminating the compressed/encrypted data as a candidate for containing the search pattern. In some embodiments, a next (optionally final) step, chain vector grouping, is performed, and is reserved for the subset of compressed/encrypted candidate chunks (or data chunks) that were not eliminated by hash filter scanning.
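Before turning to chain vector grouping, the pattern-hashing step just described can be sketched as follows (again illustrative; hash4B() is the assumed hash from the earlier sketches):

#include <stdint.h>

#define WINDOW 4  /* sliding-window size in bytes, matching the hash size */

uint16_t hash4B(const uint8_t *p);  /* assumed hash from the first sketch */

/* Slide a WINDOW-byte window across the search pattern one byte at a time,
 * hashing each substring. A per-file salt (see "Salting & Token
 * Randomization" below) could be mixed into each substring before hashing.
 * Returns the number of hashes produced: pattern_len - WINDOW + 1. */
int hash_search_pattern(const uint8_t *pattern, int pattern_len,
                        uint16_t *out_hashes) {
    if (pattern_len < WINDOW) return 0;
    int n = pattern_len - WINDOW + 1;  /* e.g., 5-byte pattern -> 2 hashes */
    for (int i = 0; i < n; i++)
        out_hashes[i] = hash4B(pattern + i);
    return n;
}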
As discussed herein, chain vectors are more granular than hash filters, facilitating higher levels of search accuracy and query expressiveness, while costing some computational overhead on the order of O(log(n)). During a search, the chain vector can be used to isolate areas of the compressed/encrypted data that contain the search hashes, and to group them and determine whether their approximate positions (also referred to herein as "position data") would permit the original search term to occur or not. In some embodiments, chain vectors include two parts: a header including chain sizes, and vectorized chains. The hash filter can be used to determine which hash values occur in the chain vector. For each hash value that occurs in the chain vector, a corresponding or associated vector size can be stored in the header. Once the vector sizes for each search hash are determined using this header, the respective vectors for each hash can be read. Each vectorized chain represents the regions, or "buckets," of the compressed/encrypted data in which the respective hash occurs. By comparing the vectors of each search hash, it can quickly be determined whether they align properly (e.g., are adjacent or in close enough proximity) to form a match of the original search pattern. For example, should the two searched hashes occur in buckets 1 and 5, then they are not in the same region of data, and therefore cannot have occurred together to have formed the original search pattern. However, if they had both occurred in the same or adjoining buckets, then there is a very significant chance (e.g., >99%) that the compressed/encrypted data contains a match for the query. The relevant data region(s) can then be flagged for partial decryption and decompression, and used accordingly.

Optimizing HV Models

Three primary considerations for optimizing HV Models are accuracy, size, and performance. The accuracy and size of HV models follow a well-established direct correlation. For example, a larger HV model will produce more accurate results than a smaller HV model.

Encrypted Search—Encryption

In some embodiments, encrypted search includes a search capability as well as encryption. Encrypted search methods set forth herein, unlike known techniques, can use AES encryption and support every cipher mode specified in the AES standard. As such, in some embodiments, encrypted search methods do not include any modifications to the encryption itself. The compatibility of encrypted search with existing AES encryption and its ability to support existing cipher modes are significant advantages over known techniques, since proposing new methods of encryption can involve extensive standardization, testing, and universal acceptance. These standardization and testing processes can take decades, as can be seen with AES's ongoing deployment (note that the standard was published in 2001). Any solution proposing new or modified encryption ciphers is therefore impractical for real-world applications. Some embodiments of encrypted search can leverage existing encryption, such as AES, since a full search capability for encrypted data is implemented via the compression and production of HV models, both of which occur prior to encryption. This enables the HV model to be independently decrypted (e.g., by a cloud computing service), securely searched, and in turn provide actionable results without decrypting the original data or leaking otherwise-unintended information.
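Returning to the chain vector grouping described at the start of this section, a minimal query-time check might look as follows. The per-hash bucket bitmaps are the ones produced by the illustrative vectorize_chain() sketch above, and the adjacency rule (same or neighboring bucket) is a simplification of the proximity test described in the text.

#include <stdint.h>

#define MAX_BUCKETS 256u

/* Chain vector grouping: given one bucket bitmap per search hash, test
 * whether every search hash occurs in the same or an adjoining bucket, so
 * that the original pattern could occur there. Returns the first such
 * bucket, or -1 if no region can contain the pattern. */
int chain_vector_match(uint8_t bucket_bits[][MAX_BUCKETS / 8], int n_hashes) {
    for (int b = 0; b < (int)MAX_BUCKETS; b++) {
        int all = 1;
        for (int i = 0; i < n_hashes && all; i++) {
            int hit = 0;
            for (int d = -1; d <= 1; d++) {   /* bucket b or a neighbor */
                int bb = b + d;
                if (bb < 0 || bb >= (int)MAX_BUCKETS) continue;
                if (bucket_bits[i][bb >> 3] & (1u << (bb & 7))) hit = 1;
            }
            if (!hit) all = 0;
        }
        if (all) return b;  /* region flagged for partial decryption */
    }
    return -1;
}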
Salting & Token Randomization

Some known hash-based algorithms can be vulnerable to statistical and preimage attacks. These types of attacks exploit the deterministic nature of hashing, mapping out every possible input for a given hash value and using the entropy of these hash collisions to gain useful information about the encrypted data. To secure encrypted searches against such attacks, in some embodiments, a salting process can be used. Salting is typically used for safe credential storage, where a credential such as a password is hashed and saved in a database. To protect against the types of attacks described above, passwords can be concatenated with a cryptographically random value (a "salt"), which is also saved in the database. As a result, two identical passwords, with different random salts, will produce two different hash values. This effectively randomizes the hashing function, and deters most statistical attacks. In some embodiments, an HV model employs a salting process, to a similar effect. For example, during a stealth compression process, a cryptographic nonce (a random value) is generated, and is used as a salt. Throughout the SDPM process, the salt is added to the input byte strings, consistently randomizing the hashing process. This has no impact on compression performance, but effectively randomizes the HV model. With this process, the output of the hashing process produced from identical data will be completely different, given the use of different salts. For applications involving network transmission of queries, such as queries of cloud-based databases, the hashing and salting of search patterns can be performed on a (trusted) client, and the randomized, salted hash values can be transmitted to an (untrusted) server where a search may be executed. Such an approach effectively renders the server-based query process fully opaque, with neither the query nor the HV models providing useful information or security leakage. Further protection can be applied via the use of transport-layer encryption.

Query Approximation

In some embodiments, given the hash-based nature of HV models, false positive results may occur; however, false negative results can never occur. The level of accuracy (and therefore the size) of an HV model has an inverse relationship with the false positive rate. In other words, a larger (and therefore more accurate) HV model can produce fewer false positives than a smaller HV model. The difference can range from 10% to <0.1% false positives using the full HV model (i.e., the hash filter and the chain vector), and can have a much larger range (approximately 15%-50%) when only employing a hash filter (with no chain vector). The range of false positives described above leads to query approximation—a degree of uncertainty as to the veracity of provided results (some of which may also prove to be false positives). In a non-encrypted environment, this query approximation can be easily removed by confirming the search through a simple pattern matching algorithm (e.g., Boyer-Moore) on the original data. Encrypted search, however, prohibits the decryption of the data for searching purposes. As such, a degree of query approximation may be expected. Query approximation, while potentially obscuring granular query results, also provides a layer of additional security against security compromise.
Since there is an inherent degree of uncertainty for each hash element in the HV model, any attempt to analyze the HV model (assuming it is in a decrypted form) will prove exponentially more difficult with uncertainty, with a complexity close to O(w^n), where w denotes the uncertainty plus one (between 1.0 and 2.0, inclusive) and n denotes the number of hash values to ascertain. This effectively adds a layer of security, should the encryption protecting an HV model ever be compromised.

Chunking & Partial Decryption

In some embodiments, the compression algorithm underlying encrypted search—the stealth algorithm—divides input plaintext into chunks of data (e.g., 64 KB each), i.e., data chunks. This chunking, while sometimes performed for decompression efficiency purposes, also facilitates independent encryption and partial decryption of the searched data. Since the original data is segmented into independent chunks, each chunk can be compressed and encrypted independently, thereby facilitating independent decryption and independent decompression, should a given chunk be flagged during a search. For example, if an HV model search isolates a given chunk for a positive query match, this chunk can be independently accessed without decrypting the entirety of the data of the encrypted file, which would render the entire encrypted file vulnerable. The chain vectors, by virtue of their accuracy, can even isolate the location of a match within a given chunk, providing a greater degree of granularity for targeted decryption/decompression.

Security Leakage

Encrypted search embodiments set forth herein enhance/optimize data security without compromising search efficiency and query expressiveness. In addition to preventing security leakage, encrypted search can reside or be built on existing standards, making it suitable for use in commercial applications. The encrypted search methods described herein can be used for a variety of applications, including cloud computing, electronic health records management, finance, analytics, and social media.

LZ Modeling

In some embodiments, an encoder is part of a "Lempel-Ziv" ("LZ")-modeled encoder family. LZ modeling makes it possible for the encoder/compressor to identify byte sequences that are similar to one another within an input bit stream. The identified similar byte sequences can, in turn, be used to compress the data of the input bit stream. For example, the first time that a given byte sequence appears within the input bit stream, the LZ modeling function may identify that byte sequence as a "literal byte" sequence. Subsequently, whenever the same byte sequence occurs, the LZ modeling function can identify that byte sequence as a "match." The foregoing process is referred to herein as "parsing" the data. As discussed above, when the parsing quality is higher, the compression ratio is typically also higher; however, increasing the parsing quality can also result in a slower process. In view of this trade-off, multiple different embodiments of encoders (and associated methods) are presented herein, ranging from encoders having a fastest compression, to encoders having a slowest compression but a highest compression ratio. The encoder embodiments set forth herein leverage modern processor architectures, while innovating the manner in which data is parsed, for example using different numbers of passes based on the parsing quality selected.
In some embodiments, LZ modeling is performed on the encoder but not on the associated decoder, and the quality of the parsing used on the encoder does not affect the decoder speed.

Single-Pass Modeling

In some embodiments, a processor-implemented encoder employs one-pass modeling, or single-pass modeling (SPM), operated by the function fast_search_match_multi_XH(), and exhibits the fastest parsing of the encoders described herein. SPM includes creating a hash table to check and store the positions of each byte sequence in an input bit stream. Each instance of a byte sequence having a same hash value as a previously observed instance of the byte sequence is used to overwrite that previously observed instance. A size of the byte sequences can be, for example, four bytes or six bytes, and may be determined by a size of the input bit stream. In some implementations, a size of the hash table is relatively large (e.g., 64 kilobytes (KB)), e.g., to reduce the likelihood of collisions. The following code illustrates a process to hash a single byte sequence, according to some embodiments:

// Hash a sequence of 4 bytes
uint16_t hash_value = hash4B(new_position);
// Get the previous position
previous_position = hash_table[hash_value];
// Update the position in the hash table
hash_table[hash_value] = new_position;

In some embodiments, to leverage modern x86 architectures, SPM hashes four candidate byte sequences at a time (i.e., concurrently) before checking for a match against the hash table. This allows the processor to perform the comparisons Out-of-Order (OoO) and feed the pipeline. The following code illustrates a process to hash four consecutive candidate byte sequences, according to some embodiments:

// Hash 4 consecutive sequences of 4 bytes and store the position of the
// candidate
candidate[0] = pre_hash_4B(&length[0], hash_table, ip, begin, 0);
candidate[1] = pre_hash_4B(&length[1], hash_table, ip, begin, 1);
candidate[2] = pre_hash_4B(&length[2], hash_table, ip, begin, 2);
candidate[3] = pre_hash_4B(&length[3], hash_table, ip, begin, 3);

The hashes of the four candidate byte sequences are then sequentially compared to the hash table to attempt to identify a match. If a match is found, a function match_length_unlimited() is called and used to attempt to expand the size of the matching byte sequence in a forward direction within the input bit stream (e.g., incrementally expanding the byte sequence to include bits or bytes occurring subsequent to the byte sequence within the input bit stream). To obtain the size of a match, a De Bruijn sequence can be used, which allows a fast comparison of two byte sequences and returns the size of their common substring. Depending on the desired quality level, a match also can be expanded in a backward/reverse direction within the input bit stream (e.g., incrementally expanding the byte sequence to include bits or bytes preceding the byte sequence within the input bit stream) by the function LZsearch_backward(). To store the match, a function save_triad_unlimited() is called. In some implementations, only the first match identified is stored, and the three other matches may be used as potential matches for future byte sequences, thereby improving the overall compression ratio of the encoder. If no matches are found among the four candidate byte sequences, the byte sequences may be stored (e.g., in a separate buffer) as byte literals.
A match can be represented by a structure that includes the following three variables, collectively referred to herein as a “triad”:
Length: the size of the byte substring returned by the De Bruijn technique + optional backward expansion
Offset: the distance between the matching byte sequence and the current byte sequence
Number of literals: the number of byte literals between the match found and the previous match, within the bit stream
Example code illustrating the storage of the triad is as follows:

/* stealth_triad_t: LZ triad storage */
typedef struct {
    // Distance from the match
    uint32_t offset;
    // Match length storage
    uint8_t length;
    // Number of literals before the match
    uint8_t nb_literal;
} stealth_triad_t;

In some embodiments, the foregoing process is repeated until an end of the input bit stream is reached, at which time the SPM returns the literal buffer and the triad buffer to be encoded (see “Byte Literal Encoding” and “Triad Encoding” sections, below). The offset portion of the triad is stored as a 32-bit integer, pre-encoded as shown below (e.g., for faster retrieval), while the length and number of literals are respectively stored as 8-bit integers.

// Save it
*storage = (uint32_t)(offset | (uint32_t)reduced << 20 | (uint32_t)acc << 28);

Accuracy identification    Reduced offset        Actual offset
4 bits                     8 bits (up to 256)    20 bits (up to 2^20 − 1)

FIG.1is a system block diagram for an encrypted search engine, according to some embodiments. As shown inFIG.1, the system100includes an encrypted search engine120, which includes a processor121in communication with a memory122and a transceiver116for optional wired and/or wireless communication (e.g., via a wireless network N) with a remote compute device110and/or data set124. The encrypted search engine120optionally includes a user interface118(e.g., a graphical user interface (GUI)) through which a user U can input a search term or other search criteria (as data input112), and through which a user can view search results114that are generated by the encrypted search engine120in response to the search term provided by the user. The memory122can store search patterns122A and/or queries122B, which may be received (at112) directly from a user via the user interface118and/or via network N and from the remote compute device110. The memory122can also store one or more hash tables122C associated with one or more data files (e.g., encrypted and/or compressed data files) of a data set (122F and/or124) of the system100, a hash filter122D, a chain vector122E and/or the data set122F. The memory122also stores instructions122G, executable by the processor121to perform steps, such as those set forth in the discussion ofFIGS.2-3below. The encrypted search engine120can receive a query or search pattern112(where a search pattern can include, for example, a keyword) from the user U or from the remote user compute device110and/or can cause display of query result(s)114via the user interface118and/or can send query result(s)114to the remote user compute device110, for example wirelessly via network N. The query result(s)114can be generated by the encrypted search engine120, e.g., according to instructions122G, in response to the query or search pattern112. In some embodiments, the query result(s) can be further refined using a machine learning model (not shown) and/or can be sent to a machine learning platform (not shown) as training data for training of a machine learning model of the machine learning platform.
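Referring back to the pre-encoded offset word shown above, the following non-limiting sketch simply applies the tabulated 4/8/20-bit layout; the helper names (pack_offset and the unpack_* functions) are hypothetical and added only for illustration:

#include <stdint.h>

/* Pack the pre-encoded offset word: 20 low bits of actual offset,
 * 8 bits of reduced offset at bit 20, and 4 bits of accuracy
 * identification at bit 28, matching the layout tabulated above. */
static uint32_t pack_offset(uint32_t offset, uint32_t reduced, uint32_t acc)
{
    return (offset & 0xFFFFFu) | ((reduced & 0xFFu) << 20) | ((acc & 0xFu) << 28);
}

static uint32_t unpack_actual(uint32_t w)  { return w & 0xFFFFFu; }
static uint32_t unpack_reduced(uint32_t w) { return (w >> 20) & 0xFFu; }
static uint32_t unpack_acc(uint32_t w)     { return (w >> 28) & 0xFu; }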
FIG.2is a flow diagram showing a first method for performing encrypted searches, according to some embodiments. The method200ofFIG.2can be implemented, for example, using the system100ofFIG.1. As shown inFIG.2, the method200includes receiving, at202and at a processor, a query specifying a search pattern. The search pattern is hashed at204, using the processor, to produce a plurality of search hashes. The hashing performed at204can include, for example, a hashing procedure set forth in the “LZ Modeling” section above. The plurality of search hashes is compared, at206, to a hash filter stored in a memory operably coupled to the processor, to determine a set of match candidates for the query. At208, a data set is searched, based on the set of match candidates and using a chain vector, to identify a query result. The chain vector includes a plurality of chains, and each chain from the plurality of chains is associated with a hash value from a plurality of hash values of the data set. The method200also includes at least one of causing display of the result via a graphical user interface or causing transmission of a signal representing the result to a remote compute device, at210. In some implementations, the searching the data set includes identifying portions of the data set that include search hashes from the plurality of search hashes, and determining, based on position data associated with the identified portions of the data set, whether the search pattern is expected to occur within the identified portions of the data set. In some implementations, the method also includes mapping the data set by hashing each string from a plurality of strings of the data set, with each string having a predefined number of bytes, thereby generating a plurality of hashed strings. A last position is identified for each hashed string from the plurality of hashed strings, thereby generating a plurality of last positions, a hash chain is generated based on the plurality of last positions, and the chain vector is generated based on the hash chain. The generating the chain vector based on the hash chain can include extracting the plurality of chains from the hash chain, and grouping chains from the plurality of chains based on position data of the plurality of chains. In some implementations, the hashing the search pattern includes dividing the search pattern into a plurality of substrings, and independently hashing each substring from the plurality of substrings. In other implementations, the hashing the search pattern is performed using a sliding window having a predefined size (e.g., four bytes). In some implementations, the hash filter includes a plurality of Boolean values, each Boolean value from the plurality of Boolean values associated with a hash value from a plurality of hash values of the data set, the data set being at least one of compressed or encrypted. In some implementations, each match candidate from the set of match candidates includes a data chunk of the data set, the data set including at least one of compressed data or encrypted data. In some implementations, the comparing the plurality of search hashes to the hash filter includes comparing each search hash from the plurality of search hashes to an associated bit of the hash filter. In some embodiments, a system for performing encrypted searches can include a processor and a memory that is operably coupled to the processor (e.g., as shown inFIG.1).
The memory stores instructions that, when executed by the processor, cause the processor to perform a method, such as the method ofFIG.3.FIG.3is a flow diagram showing a second method for performing encrypted searches, according to some embodiments. The method300ofFIG.3can be implemented, for example, using the system100ofFIG.1. As shown inFIG.3, the method300includes receiving a search pattern at302for a search of a data set, and hashing the search pattern at304, to produce a plurality of search hashes. The hashing performed at304can include, for example, a hashing procedure set forth in the “LZ Modeling” section above. The method300also includes scanning a hash filter at306, based on the plurality of search hashes, to determine a set of match candidates. At308, a result for the search is identified based on the set of match candidates and using a spatial model of the data set. The spatial model of the data set includes a linked set of byte string positions for each hash value from a plurality of hash values of the data set. At310, the query result is at least one of: caused to be displayed via a graphical user interface, or caused to be transmitted, via a signal, to a remote compute device. In some implementations, the instructions to cause the processor to hash the search pattern include instructions to divide the search pattern into a plurality of substrings, and independently hash each substring from the plurality of substrings. In some implementations, the instructions to cause the processor to hash the search pattern include instructions to hash the search pattern using a sliding window having a predefined size. In some implementations, the hash filter includes a plurality of Boolean values, each Boolean value from the plurality of Boolean values associated with a hash value from a plurality of hash values of the data set, the data set being at least one of compressed or encrypted. In some implementations, each match candidate from the set of match candidates includes a data chunk of the data set, the data set including at least one of compressed data or encrypted data. In some embodiments, a non-transitory, processor-readable medium stores instructions to cause a processor to receive a query, and to generate a plurality of search hashes based on the query. The non-transitory, processor-readable medium also stores instructions to compare the plurality of search hashes to a hash filter stored in a memory operably coupled to the processor, to determine a set of match candidates for the query. The non-transitory, processor-readable medium also stores instructions to search a data set, based on the set of match candidates and using a chain vector, to identify a query result. The chain vector includes a plurality of chains, and each chain from the plurality of chains is associated with a hash value from a plurality of hash values of the data set. The non-transitory, processor-readable medium also stores instructions to cause display of the query result via a graphical user interface and/or cause transmission of a signal representing the query result to a remote compute device. In some implementations, the hash filter includes a plurality of Boolean values, each Boolean value from the plurality of Boolean values associated with a hash value from a plurality of hash values of the data set, the data set being at least one of compressed or encrypted. In some implementations, each match candidate from the set of match candidates includes a data chunk of the data set, the data set including at least one of compressed data or encrypted data. In some implementations, the instructions to cause the processor to compare the plurality of search hashes to the hash filter include instructions to compare each search hash from the plurality of search hashes to an associated bit of the hash filter. In some implementations, the instructions to generate the plurality of search hashes include instructions to divide a search pattern of the query into a plurality of substrings, and independently hash each substring from the plurality of substrings.
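The hash filter scan and the chain vector (or spatial model) described above can be illustrated with a brief, non-limiting sketch. The bitmap layout, the 16-bit hash width, and all identifiers below (filter_match, chain_node, chain_vector, and the hash4B helper reused from the earlier sketch) are assumptions for illustration, not the patented implementation:

#include <stdint.h>
#include <string.h>

#define HASH_SPACE 65536   /* one Boolean per possible 16-bit hash value */

/* Assumed 4-byte hash, as sketched in the Single-Pass Modeling section. */
static uint16_t hash4B(const uint8_t *p)
{
    uint32_t v;
    memcpy(&v, p, sizeof v);
    return (uint16_t)((v * 2654435761u) >> 16);
}

/* Hash filter scan: every 4-byte sliding window of the search pattern
 * must have its bit set for the chunk to remain a match candidate. */
static int filter_match(const uint8_t filter[HASH_SPACE / 8],
                        const uint8_t *pattern, size_t len)
{
    for (size_t i = 0; i + 4 <= len; i++) {
        uint16_t h = hash4B(pattern + i);
        if (!(filter[h >> 3] & (1u << (h & 7))))
            return 0;   /* a required hash is absent: rule this chunk out */
    }
    return 1;           /* all hashes present: candidate for chain search */
}

/* Chain vector: for each hash value, a linked set of byte string
 * positions, so the search can test whether consecutive pattern hashes
 * occur at consecutive positions before anything is decrypted. */
typedef struct chain_node {
    uint32_t position;          /* offset of the hashed byte string */
    struct chain_node *next;    /* earlier occurrence of the same hash */
} chain_node;

typedef struct {
    chain_node *chains[HASH_SPACE];
} chain_vector;

In such a sketch, a false positive from the filter costs only a wasted chain walk, while a true match can be localized to a chunk, and to a position within it, for the targeted decryption described earlier.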
All combinations of the foregoing concepts and additional concepts discussed here (provided such concepts are not mutually inconsistent) are contemplated as being part of the subject matter disclosed herein. The terminology explicitly employed herein that also may appear in any disclosure incorporated by reference should be accorded a meaning most consistent with the particular concepts disclosed herein. The skilled artisan will understand that the drawings primarily are for illustrative purposes, and are not intended to limit the scope of the subject matter described herein. The drawings are not necessarily to scale; in some instances, various aspects of the subject matter disclosed herein may be shown exaggerated or enlarged in the drawings to facilitate an understanding of different features. In the drawings, like reference characters generally refer to like features (e.g., functionally similar and/or structurally similar elements). To address various issues and advance the art, the entirety of this application (including the Cover Page, Title, Headings, Background, Summary, Brief Description of the Drawings, Detailed Description, Embodiments, Abstract, Figures, Appendices, and otherwise) shows, by way of illustration, various embodiments in which the embodiments may be practiced. The advantages and features of the application are of a representative sample of embodiments only, and are not exhaustive and/or exclusive. Rather, they are presented to assist in understanding and teaching the embodiments, and are not representative of all embodiments. As such, certain aspects of the disclosure have not been discussed herein. That alternate embodiments may not have been presented for a specific portion of the innovations or that further undescribed alternate embodiments may be available for a portion is not to be considered to exclude such alternate embodiments from the scope of the disclosure. It will be appreciated that many of those undescribed embodiments incorporate the same principles of the innovations and others are equivalent. Thus, it is to be understood that other embodiments may be utilized and functional, logical, operational, organizational, structural and/or topological modifications may be made without departing from the scope and/or spirit of the disclosure. As such, all examples and/or embodiments are deemed to be non-limiting throughout this disclosure. Also, no inference should be drawn regarding those embodiments discussed herein relative to those not discussed herein other than it is as such for purposes of reducing space and repetition.
For instance, it is to be understood that the logical and/or topological structure of any combination of any program components (a component collection), other components and/or any present feature sets as described in the figures and/or throughout are not limited to a fixed operating order and/or arrangement, but rather, any disclosed order is exemplary and all equivalents, regardless of order, are contemplated by the disclosure. Various concepts may be embodied as one or more methods, of which at least one example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments. Put differently, it is to be understood that such features may not necessarily be limited to a particular order of execution, but rather, any number of threads, processes, services, servers, and/or the like that may execute serially, asynchronously, concurrently, in parallel, simultaneously, synchronously, and/or the like in a manner consistent with the disclosure. As such, some of these features may be mutually contradictory, in that they cannot be simultaneously present in a single embodiment. Similarly, some features are applicable to one aspect of the innovations, and inapplicable to others. In addition, the disclosure may include other innovations not presently described. Applicant reserves all rights in such innovations, including the right to embody such innovations, file additional applications, continuations, continuations-in-part, divisionals, and/or the like thereof. As such, it should be understood that advantages, embodiments, examples, functional, features, logical, operational, organizational, structural, topological, and/or other aspects of the disclosure are not to be considered limitations on the disclosure as defined by the embodiments or limitations on equivalents to the embodiments. Depending on the particular desires and/or characteristics of an individual and/or enterprise user, database configuration and/or relational model, data type, data transmission and/or network framework, syntax structure, and/or the like, various embodiments of the technology disclosed herein may be implemented in a manner that enables a great deal of flexibility and customization as described herein. All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms. As used herein, in particular embodiments, the terms “about” or “approximately” when preceding a numerical value indicate the value plus or minus a range of 10%. Where a range of values is provided, it is understood that each intervening value, to the tenth of the unit of the lower limit unless the context clearly dictates otherwise, between the upper and lower limit of that range and any other stated or intervening value in that stated range is encompassed within the disclosure. That the upper and lower limits of these smaller ranges can independently be included in the smaller ranges is also encompassed within the disclosure, subject to any specifically excluded limit in the stated range. Where the stated range includes one or both of the limits, ranges excluding either or both of those included limits are also included in the disclosure.
The indefinite articles “a” and “an,” as used herein in the specification and in the embodiments, unless clearly indicated to the contrary, should be understood to mean “at least one.” The phrase “and/or,” as used herein in the specification and in the embodiments, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc. As used herein in the specification and in the embodiments, “or” should be understood to have the same meaning as “and/or” as defined above. For example, when separating items in a list, “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of” or, when used in the embodiments, “consisting of” will refer to the inclusion of exactly one element of a number or list of elements. In general, the term “or” as used herein shall only be interpreted as indicating exclusive alternatives (i.e. “one or the other but not both”) when preceded by terms of exclusivity, such as “either,” “one of,” “only one of,” or “exactly one of.” “Consisting essentially of,” when used in the embodiments, shall have its ordinary meaning as used in the field of patent law. As used herein in the specification and in the embodiments, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently “at least one of A and/or B”) can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
In the embodiments, as well as in the specification above, all transitional phrases such as “comprising,” “including,” “carrying,” “having,” “containing,” “involving,” “holding,” “composed of” and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases “consisting of” and “consisting essentially of” shall be closed or semi-closed transitional phrases, respectively, as set forth in the United States Patent Office Manual of Patent Examining Procedures, Section 2111.03. Some embodiments and/or methods described herein can be performed by software (executed on hardware), hardware, or a combination thereof. Hardware modules may include, for example, a processor, a field programmable gate array (FPGA), and/or an application specific integrated circuit (ASIC). Software modules (executed on hardware) can include instructions stored in a memory that is operably coupled to a processor, and can be expressed in a variety of software languages (e.g., computer code), including C, C++, Java™, Ruby, Visual Basic™, and/or other object-oriented, procedural, or other programming language and development tools. Examples of computer code include, but are not limited to, micro-code or micro-instructions, machine instructions, such as produced by a compiler, code used to produce a web service, and files containing higher-level instructions that are executed by a computer using an interpreter. For example, embodiments may be implemented using imperative programming languages (e.g., C, Fortran, etc.), functional programming languages (Haskell, Erlang, etc.), logical programming languages (e.g., Prolog), object-oriented programming languages (e.g., Java, C++, etc.) or other suitable programming languages and/or development tools. Additional examples of computer code include, but are not limited to, control signals, encrypted code, and compressed code. The term “processor” should be interpreted broadly to encompass a general purpose processor, a central processing unit (CPU), a microprocessor, a digital signal processor (DSP), a controller, a microcontroller, a state machine and so forth. Under some circumstances, a “processor” may refer to an application specific integrated circuit (ASIC), a programmable logic device (PLD), a field programmable gate array (FPGA), etc. The term “processor” may refer to a combination of processing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core or any other such configuration. The term “memory” should be interpreted broadly to encompass any electronic component capable of storing electronic information. The term memory may refer to various types of processor-readable media such as random access memory (RAM), read-only memory (ROM), non-volatile random access memory (NVRAM), programmable read-only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable PROM (EEPROM), flash memory, magnetic or optical data storage, registers, etc. Memory is said to be in electronic communication with a processor if the processor can read information from and/or write information to the memory. Memory that is integral to a processor is in electronic communication with the processor. The terms “instructions” and “code” should be interpreted broadly to include any type of computer-readable statement(s). 
For example, the terms “instructions” and “code” may refer to one or more programs, routines, sub-routines, functions, procedures, etc. “Instructions” and “code” may comprise a single computer-readable statement or many computer-readable statements. While specific embodiments of the present disclosure have been outlined above, many alternatives, modifications, and variations will be apparent to those skilled in the art. Accordingly, the embodiments set forth herein are intended to be illustrative, not limiting. Various changes may be made without departing from the spirit and scope of the disclosure.
11860876
DETAILED DESCRIPTION
FIG.1is an example environment100for identifying records in a first dataset and second dataset that likely refer to, or are associated with, the same individual or entity. As may be appreciated, integrating or combining two or more datasets may be extremely useful for a variety of fields and industries. However, before two datasets can be integrated, the records from each dataset that correspond to the same individual or entity must first be identified. Because of differences in how data is collected by different organizations, differences in how individuals report their information, distinct individuals with similar names, and movement of individuals, among other things, accurately determining which records are associated with the same individual may be difficult. For example, an individual named “Robert R Smith II” may be referred to as “Rob Smith” in some datasets, “R. Smith” in other datasets, and “Bobby Smith Jr.” in another dataset. In addition, Robert may have lived in different places at different times and may have different addresses listed in different datasets depending on when they were generated. Robert may also have lived with his father of the same name, which further complicates the analysis. As can be seen, determining which records in two distinct datasets belong to the same individual can be a difficult and inaccurate process, which often requires input or oversight by human reviewers, which is expensive and time consuming. Accordingly, to increase the accuracy of matching records to individuals, reduce the need for human reviewers, and increase the quality of combined datasets, the environment100includes the integration system110. As shown, the integration system110includes several components including, but not limited to, a cleaning engine120, a token engine130, a distance engine140, and a rules engine150. More or fewer components may be supported by the integration system110. Some or all of the components of the integration system110may be implemented together, or separately, by one or more computing devices, such as the computing system700illustrated with respect toFIG.7. At a high level, the integration system110may receive a first dataset106and a second dataset107and may combine the datasets to create an integrated dataset180. Each of the first dataset106and the second dataset107may include a plurality of records, and each record may include a plurality of attributes. Generally, each record may be associated with an individual or entity, and the attributes associated with the record may each describe a particular feature or status of the individual. Example attributes include name, address, date of birth, height, marital status, vaccination status, claim number, etc. Other attributes may be supported. In some embodiments, the first dataset106may be a social determinants of health (“SDoH”) dataset and the second dataset107may be a healthcare claims dataset. The integration system110may generate the integrated dataset180by determining what records from the first dataset106and the second dataset107are associated with the same individual or entity. The records determined to be associated with the same individual may then be combined, or otherwise linked with each other, to create the integrated dataset180. How the integration system110creates the integrated dataset180is described further below. The cleaning engine120may clean the records of the first dataset106and the second dataset107.
In some embodiments, the cleaning engine120may clean the records by removing all non-alphanumeric characters from the attributes of the records. For example, the cleaning engine120may remove non-alphanumeric characters such as “#”, “?”, and “$”. In other embodiments, the cleaning engine120may further clean the records by replacing common abbreviations with their associated terms, fixing common or known misspellings, removing or adding capitalization, etc. Any method for cleaning or standardizing datasets may be used. The token engine130may generate tokens for each record of the first dataset106and the second dataset107. Each token may correspond to a word in a record of a dataset. In some embodiments, the token engine130may generate the tokens by, for each record, parsing the words found in the attributes associated with the record into one or more tokens. Depending on the embodiment, each record may have more associated tokens than attributes. The token engine130may use term frequency-inverse document frequency (“TF-IDF”) to generate token weights135that indicate how unique each token is with respect to the other tokens found in the first dataset106and the second dataset107. In particular, the token engine130may generate the weights135by first generating a non-deduplicated list of all of the tokens that were found in the records of the first dataset106and the second dataset107. Because the list is non-deduplicated, if a token such as “Andrew” appears 500 times in the first dataset106and the second dataset107, it will appear 500 times in the list of tokens. The token engine130may use the generated list of tokens to generate the token weights135for each token. In some embodiments, a token weight135may be generated for each token such that the token weight135for a token is inversely proportional to the frequency of the token in the list of tokens. Put another way, the token that appears in the list of tokens the most may be assigned the lowest token weight, and the token that appears in the list of tokens the least may receive the highest token weight. After generating the token weights135, the token engine130may generate a vector137for each record of both the first dataset106and the second dataset107that includes an entry for each token. The entry for each token may include a count of the number of times that the token appears in an attribute of the record multiplied by the token weight135generated for that token. As may be appreciated, many of the entries in a vector137for each token are likely to be zero, since very few tokens are likely to appear in each record. FIG.2is an illustration of example vectors137generated for several records of a dataset. In the example shown, a set of vectors includes a vector137A associated with the record corresponding to “Patient #1”; a vector137B associated with the record corresponding to “Patient #2”; and a vector137C associated with the record corresponding to “Patient #3.” Each column in each vector137is associated with a unique token and includes the count for that token in the record multiplied by the token weight135determined for that token by the token engine130. As an example, each of the vectors137A,137B, and137C does not include the token corresponding to the first, second, and sixth column. As another example, the vector137A includes a value of 2.6 for the fifth column and the vector137B includes a value of 7.8 for the fifth column. Given a token weight135of 1.3 for the token corresponding to the fifth column, the token appeared in the record corresponding to the vector137A two times (i.e., 2×1.3) and in the record corresponding to the vector137B six times (i.e., 6×1.3).
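A simplified, non-limiting sketch of the weight-and-vector construction described above follows. The toy vocabulary size, the pure inverse-frequency weighting (a simplified stand-in for full TF-IDF), and the function names are assumptions for illustration only:

#define VOCAB 4   /* toy vocabulary; real datasets have far more tokens */

/* Token weights inversely proportional to each token's frequency in
 * the combined, non-deduplicated token list. */
static void make_weights(const int freq[VOCAB], double weight[VOCAB])
{
    for (int t = 0; t < VOCAB; t++)
        weight[t] = freq[t] > 0 ? 1.0 / (double)freq[t] : 0.0;
}

/* A record's vector entry is its per-record token count multiplied by
 * that token's weight, matching the count-times-weight entries of FIG. 2. */
static void make_vector(const int count[VOCAB], const double weight[VOCAB],
                        double vec[VOCAB])
{
    for (int t = 0; t < VOCAB; t++)
        vec[t] = count[t] * weight[t];
}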
Returning toFIG.1, after generating the token weights135, and as part of generating the integrated dataset180, the integration system110, for each of the records of the first dataset106, may select a record from the first dataset106, and may determine records from the second dataset107that are likely associated with the same individual as the selected record. The selected record and the determined records may then be added or linked together as part of the integrated dataset180. The integration system110may continue until some or all of the records of the first dataset106have been considered. The selected record from the first dataset106is referred to herein as the query103. After a query103is selected from the first dataset106, the distance engine140may use the vectors137to calculate a distance between the vector137associated with the query103and the vector137associated with each record of the second dataset107. Depending on the embodiment, the distance may be a Euclidean distance. In general, the smaller the calculated distance for a record and the query103, the more similar the record and the query103. After calculating the distances, the distance engine140may select the records from the second dataset107with the smallest calculated distances. In some embodiments, the selected number may be fixed (e.g., select the top three, five, or ten records). The fixed number may be set by a user or administrator or may be based on the total number of records in the first dataset106and/or second dataset107. After the records have been selected based on the Euclidean distances, the distance engine140may further narrow the selection of records by computing the Levenshtein distance between each of the selected records and the record corresponding to the query103. Unlike the Euclidean distance, the Levenshtein distance may be calculated based on the attributes of the records, rather than the generated vectors137. In particular, the Levenshtein distance between two records is based on the number of characters in each attribute of the first record that have to be changed to match the corresponding attributes in the second record. As may be appreciated, computing the Levenshtein distance between two records is much more computationally expensive than computing the Euclidean distance between vectors137, thus, computing the Levenshtein distance only on the records that have been found to have a close Euclidean distance may save substantial computing resources. In some embodiments, the distance engine140may compute the Levenshtein distance between two records across all of the attributes. In other words, the Levenshtein distance for two records may be the sum of the Levenshtein distances computed for each pair of attributes. Alternatively, the Levenshtein distance may be calculated between two records across only one or more select attributes. In particular, the Levenshtein distance between two records may only be calculated using what are referred to herein as identity attributes. The identity attributes may include attributes that can identify the individual associated with the record and may include attributes such as first name and last name, for example. Other attributes may be considered for the Levenshtein distance.
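A standard dynamic-programming implementation of the Levenshtein distance, of the kind that might be applied per attribute as described above, is sketched below. The length bound and function name are illustrative; this is textbook code, not necessarily the distance engine140's implementation:

#include <string.h>

#define MAXLEN 256  /* bound attribute length so the DP rows fit on the stack */

/* Classic two-row dynamic-programming Levenshtein distance between two
 * attribute strings; returns -1 if an input exceeds the bound. */
static int levenshtein(const char *a, const char *b)
{
    int prev[MAXLEN + 1], cur[MAXLEN + 1];
    size_t la = strlen(a), lb = strlen(b);
    if (la > MAXLEN || lb > MAXLEN)
        return -1;
    for (size_t j = 0; j <= lb; j++)
        prev[j] = (int)j;                     /* row for the empty prefix of a */
    for (size_t i = 1; i <= la; i++) {
        cur[0] = (int)i;
        for (size_t j = 1; j <= lb; j++) {
            int del = prev[j] + 1;            /* delete a[i-1] */
            int ins = cur[j - 1] + 1;         /* insert b[j-1] */
            int sub = prev[j - 1] + (a[i - 1] != b[j - 1]);  /* substitute */
            int min = del < ins ? del : ins;
            cur[j] = min < sub ? min : sub;
        }
        memcpy(prev, cur, (lb + 1) * sizeof prev[0]);
    }
    return prev[lb];
}

For example, levenshtein("Rob", "Bob") returns 1, and summing such per-attribute distances yields the across-all-attributes variant described above.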
After computing the Levenshtein distances between the query103and each of the selected records, the distance engine140may further narrow the selected records to those that are below a Levenshtein distance threshold. Similar to the Euclidean distance threshold, the Levenshtein distance threshold may specify a fixed number of records (e.g., the top five records), or may be based on the total number of records in the first dataset106and/or second dataset107. The records with Levenshtein distances that are below the distance threshold may be provided in response to the query103and may be used with the record corresponding to the query103to create the integrated dataset180. In some embodiments, before providing the records, the selected records may be even further reduced or filtered using one or more rules155by the rules engine150. A rule155, as defined herein, may specify some combination of Levenshtein distances computed for each attribute and Euclidean distances that must exist for the record to be selected as a match. Each rule155may further specify a particular attribute as being free when it should not be considered by the rule. Those selected records that do not match at least one rule155may be discarded by the rules engine150. The remaining records may then be provided by the rules engine150in response to the query103. Continuing toFIG.3, illustrated is a plurality of rules155(e.g., the rules155A,155B,155C, and155D) that may be used to narrow the matching records. As shown, each rule155specifies either a minimum Levenshtein distance for a plurality of attributes or specifies that the attribute is FREE, indicating that the attribute is not considered by the rule155. Finally, each rule155optionally includes a minimum Euclidean distance that is required. Note that while only five attributes are shown (i.e., “First Name”, “Last Name”, “Birthdate”, “Gender”, and “Address”), it is for illustrative purposes only; more or fewer attributes may be considered by each rule155.
In the example shown, the rule155A specifies a minimum Euclidean distance of less than or equal to 1.4, specifies a minimum Levenshtein distance of 2 for the attribute “First Name”, specifies a minimum Levenshtein distance of 2 for the attribute “Last name”, is FREE for the attribute “Birthdate”, specifies a minimum Levenshtein distance of 0 for the attribute “Gender”, and specifies a minimum Levenshtein distance of 6 for the attribute “Address.” The rule155B specifies a minimum Euclidean distance of less than or equal to 1.4, specifies a minimum Levenshtein distance of 2 for the attribute “First Name”, specifies a minimum Levenshtein distance of 2 for the attribute “Last name”, is FREE for the attribute “Birthdate”, is FREE for the attribute “Gender”, and specifies a minimum Levenshtein distance of 6 for the attribute “Address.” The rule155C specifies a minimum Euclidean distance of less than or equal to 1.4, specifies a minimum Levenshtein distance of 2 for the attribute “First Name”, is FREE for the attribute “Last name”, specifies a minimum Levenshtein distance of 2 for the attribute “Birthdate”, specifies a minimum Levenshtein distance of 0 for the attribute “Gender”, and specifies a minimum Levenshtein distance of 6 for the attribute “Address.” Finally, the rule155D specifies a minimum Euclidean distance of less than or equal to 1.4, specifies a minimum Levenshtein distance of 2 for the attribute “First Name”, is FREE for the attribute “Last name”, specifies a minimum Levenshtein distance of 2 for the attribute “Birthdate”, is FREE for the attribute “Gender”, and specifies a minimum Levenshtein distance of 6 for the attribute “Address.” Returning toFIG.1, after the rules engine150applies the rules155to the selected records, the rules engine150may provide the records that matched at least one rule155in response to the query103. The integration system110may then add the record corresponding to the query103along with the matching records into the integrated dataset180. In some embodiments, the integration system110may add the records to the dataset180by combining them into a single record and adding the combined record to the integrated dataset180. Alternatively, the integration system110may add some or all of the records to the integrated dataset180and may link them to the same individual or entity. As may be appreciated, in some embodiments, there may be millions of records in each of the first dataset106and the second dataset107, which may make processing all of the records as described above extremely computationally expensive. Accordingly, to reduce the number of records that are considered, before matching records from the first dataset106and second dataset107, the integration engine110may use what is referred to herein as “strategic iterations” to reduce the total number or size of the records that are considered for each iteration. As one example, the datasets may be first filtered into one or more groups or smaller datasets based on certain attributes such as zip-code and/or first letter of first name. The records may then be matched as described above using each zip-code and first letter of first name group. Once the matching records have been determined for each of these zip-code groups, the integration engine110may remove all of the matched records from the first dataset106and second dataset107and may further remove all attributes related to addresses. This will capture records for individuals who may have moved at some point (i.e., do not share the same zip-codes) and, at the same time, reduce the overall sizes of the first dataset106and second dataset107. Other methods for reducing the sizes of the first dataset106and the second dataset107may include Principal Component Analysis, Product Quantization, and Polysemous Codes, for example.
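The rules enumerated above can be sketched as a simple structure, reading each rule value as an upper bound on the corresponding distance (consistent with the “less than or equal to 1.4” Euclidean condition) and FREE as an attribute the rule ignores. The types and names below are hypothetical illustrations only:

#include <stdbool.h>

#define NUM_ATTRS 5   /* First Name, Last Name, Birthdate, Gender, Address */
#define FREE (-1)     /* attribute not considered by the rule */

/* A rule: per-attribute Levenshtein ceilings (or FREE) plus a
 * Euclidean-distance ceiling, mirroring rules 155A-155D above. */
typedef struct {
    int    max_lev[NUM_ATTRS];  /* FREE, or largest acceptable distance */
    double max_euclid;
} rule_t;

/* A candidate matches a rule when its Euclidean distance and every
 * non-FREE per-attribute Levenshtein distance fall within the rule. */
static bool rule_matches(const rule_t *r, const int lev[NUM_ATTRS],
                         double euclid)
{
    if (euclid > r->max_euclid)
        return false;
    for (int a = 0; a < NUM_ATTRS; a++)
        if (r->max_lev[a] != FREE && lev[a] > r->max_lev[a])
            return false;
    return true;
}

Under this sketch, rule155A would be expressed as { {2, 2, FREE, 0, 6}, 1.4 }, and a candidate record survives filtering if rule_matches( ) returns true for at least one rule.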
FIG.4is an illustration of a method400for generating token weights for records from a first dataset and second dataset. The method400may be implemented by the integration system110. The method400may be an example of a pre-processing phase that is performed before the datasets are integrated. At410, a first dataset and a second dataset are received. The first dataset106and the second dataset107may be received by the integration system110. Each dataset may include a plurality of records and each record may include a plurality of attributes. Generally, each record may be associated with an individual. In order to integrate the first dataset106and the second dataset107, the integration system110may further determine which records from the first dataset106and the second dataset107likely refer to the same individual. At420, the records are cleaned. The records may be cleaned by the cleaning engine120. In some embodiments, the records may be cleaned by removing any non-alphanumeric characters from the attributes of each record. Any method for cleaning attributes may be used. At430, tokens are generated for each record. The tokens may be generated by the token engine130from each of the attributes of each record of both the first dataset106and the second dataset107. Depending on the embodiment, each token may represent a word from the attributes of the datasets. Any method for parsing attributes or strings to generate tokens may be used. At440, a token list is generated. The token list may be generated by the token engine130. The token list may include each token that is found in an attribute from each record of both the first dataset106and the second dataset107. The token list may be non-deduplicated so that any token that appears in multiple attributes and/or multiple records will appear multiple times in the token list. At450, token weights are generated. The token weights135may be generated by the token engine130using the token list. The token weight135for a token may be inversely proportional to the number of times the token appears in the token list. At460, for each record, a vector is generated based on the token weights. The vector137may be generated by the token engine130. The vector137for a record may include an entry for each token along with a count of the number of times that the particular token appears in any attribute of the record. The count for each token in the vector137may be further multiplied by the token weight135determined for the token. FIG.5is an illustration of a method500for providing records in response to a query. The method500may be implemented by the integration system110. At510, a query is received. The query103may be a record from the first dataset106. As part of generating the integrated dataset180, the integration system110may first determine the records from the second dataset107that correspond to the same individuals as one or more records from the first dataset106. The integration system110may select a next record in the first dataset106as the query103. At520, for each record in the second dataset, a first distance between the record and the query is calculated.
The first distance may be calculated by the distance engine140. The first distance between the query103and each record may be calculated by retrieving the vector137associated with the query103and the vector137associated with the record and calculating the first distance using the vectors137. The first distance between the query and the record may be a Euclidean distance. Other distance formulas may be used. At530, a first subset of records from the second dataset is selected based on the computed first distances. The first subset of records may be selected based on the first distances by the distance engine140. Depending on the embodiment, the distance engine140may select all records whose first distances are below a distance threshold or may select some predetermined number of records having the lowest first distances. At540, for each record in the first subset, a second distance between the record and the query is calculated. The second distances may be calculated by the distance engine140. The second distance between each record in the first subset and the query103may be calculated based on the attributes of the query103and the attributes of the record. The second distance may be a Levenshtein distance. Depending on the embodiment, the Levenshtein distance may be calculated on a per-attribute basis, or across all attributes in the query103and record. At550, a second subset of records from the first subset of records is selected based on the second distances. The second subset of records may be selected by the distance engine140. In some embodiments, the distance engine140may select the records having the lowest calculated second distances (e.g., Levenshtein distance), or may select all records having calculated distances that are below a threshold. At560, the records in the second subset are provided in response to the query. The records may be provided by the integration system110. The selected records may be used by the integration system, along with the query103, to create the integrated dataset180. After providing the selected records, the method500may return to510where a new record from the first dataset106may be received as a new query103. FIG.6is an illustration of a method600for filtering selected records using one or more rules, and for providing the filtered records in response to a query. The method600may be implemented by the integration system110. At610, a set of records matching a query is received. The set of records may be received by the rules engine150of the integration system110. The set of records may be those records from the second dataset107that satisfied both the first distance threshold (e.g., Euclidean distance) and the second distance threshold (e.g., Levenshtein distance). At620, rules are received. The rules155may be received by the rules engine150. Each rule155may include a minimum Levenshtein distance for each attribute, or an indication that the particular attribute is not considered by the rule (e.g., FREE). Each rule155may further include a minimum Euclidean distance. The rules155may be created by a user or administrator based on characteristics of the first dataset106and the second dataset107. At630, the rules are applied to the records in the first set of records. The rules155may be applied by the rules engine150. Any record that matches a rule155may be placed in a second set of records. Any method for applying rules to records may be used. At640, the second set of records is provided in response to the query. The second set of records may be provided by the rules engine150. The second set of records may be used by the integration system110, along with the query103, to create the integrated dataset180.
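The first-distance screening of step520can be sketched in a few lines; the euclidean( ) helper below is illustrative only and stands in for whatever distance formula the distance engine140actually applies:

#include <math.h>
#include <stddef.h>

/* Euclidean distance between two record vectors of equal length, as
 * used for the cheap first-pass screening over the second dataset. */
static double euclidean(const double *u, const double *v, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        double d = u[i] - v[i];
        sum += d * d;
    }
    return sqrt(sum);
}

Because this runs over every record of the second dataset, keeping it to a single pass over the (mostly zero) vector entries is what makes the later, more expensive Levenshtein comparison affordable on the surviving subset.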
FIG.7shows an exemplary computing environment in which example embodiments and aspects may be implemented. The computing device environment is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality. Numerous other general purpose or special purpose computing device environments or configurations may be used. Examples of well-known computing devices, environments, and/or configurations that may be suitable for use include, but are not limited to, personal computers, server computers, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, network personal computers (PCs), minicomputers, mainframe computers, embedded systems, distributed computing environments that include any of the above systems or devices, and the like. Computer-executable instructions, such as program modules, being executed by a computer may be used. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Distributed computing environments may be used where tasks are performed by remote processing devices that are linked through a communications network or other data transmission medium. In a distributed computing environment, program modules and other data may be located in both local and remote computer storage media including memory storage devices. With reference toFIG.7, an exemplary system for implementing aspects described herein includes a computing device, such as computing device700. In its most basic configuration, computing device700typically includes at least one processing unit702and memory704. Depending on the exact configuration and type of computing device, memory704may be volatile (such as random access memory (RAM)), non-volatile (such as read-only memory (ROM), flash memory, etc.), or some combination of the two. This most basic configuration is illustrated inFIG.7by dashed line706. Computing device700may have additional features/functionality. For example, computing device700may include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated inFIG.7by removable storage708and non-removable storage710. Computing device700typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by the device700and includes both volatile and non-volatile media, removable and non-removable media. Computer storage media include volatile and non-volatile, and removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Memory704, removable storage708, and non-removable storage710are all examples of computer storage media.
Computer storage media include, but are not limited to, RAM, ROM, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device700. Any such computer storage media may be part of computing device700. Computing device700may contain communication connection(s)712that allow the device to communicate with other devices. Computing device700may also have input device(s)714such as a keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s)716such as a display, speakers, printer, etc. may also be included. All these devices are well known in the art and need not be discussed at length here. It should be understood that the various techniques described herein may be implemented in connection with hardware components or software components or, where appropriate, with a combination of both. Illustrative types of hardware components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. The methods and apparatus of the presently disclosed subject matter, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium where, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the presently disclosed subject matter. Although exemplary implementations may refer to utilizing aspects of the presently disclosed subject matter in the context of one or more stand-alone computer systems, the subject matter is not so limited, but rather may be implemented in connection with any computing environment, such as a network or distributed computing environment. Still further, aspects of the presently disclosed subject matter may be implemented in or across a plurality of processing chips or devices, and storage may similarly be effected across a plurality of devices. Such devices might include personal computers, network servers, and handheld devices, for example. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
11860877
DETAILED DESCRIPTION OF SOME EXAMPLE EMBODIMENTS
Embodiments of the present invention generally relate to data analytics. More particularly, at least some embodiments of the invention relate to systems, hardware, software, computer-readable media, and methods for reducing or eliminating delays in evaluating and transforming raw data into actionable knowledge that may be used to support timely decision making. Some particular embodiments may employ graph models combined with analytics functions built right into the data pipelines in order to reduce, or eliminate, one or more barriers to timely delivery of high value insights. Example embodiments may operate to maintain the high accuracy of data and to eliminate information loss due to the flattening of the data records in preparation for being streamed. Eliminating information loss due to streaming may enable high-value analytics to be pushed much earlier in the data processing cycle and much closer to where the data is streamed from. Example embodiments may shorten the time-to-insights and allow advanced analytics to be pushed all the way to the edge, that is, an edge computing environment. In order to address the need for on-time insights, while maintaining a high level of the quality of those insights, example embodiments may leverage a highly relational Graph data structure to underpin even the most complex data sets, and add technical enhancements to support the temporal nature of the real-time data processing. Note that it is often the case that high data complexity leads to highly insightful business intelligence. Embodiments of the invention, such as the examples disclosed herein, may be beneficial in a variety of respects. For example, and as will be apparent from the present disclosure, one or more embodiments of the invention may provide one or more advantageous and unexpected effects, in any combination, some examples of which are set forth below. It should be noted that such effects are neither intended, nor should be construed, to limit the scope of the claimed invention in any way. It should further be noted that nothing herein should be construed as constituting an essential or indispensable element of any invention or embodiment. Rather, various aspects of the disclosed embodiments may be combined in a variety of ways so as to define yet further embodiments. Such further embodiments are considered as being within the scope of this disclosure. As well, none of the embodiments embraced within the scope of this disclosure should be construed as resolving, or being limited to the resolution of, any particular problem(s). Nor should any such embodiments be construed to implement, or be limited to implementation of, any particular technical effect(s) or solution(s). Finally, it is not required that any embodiment implement any of the advantageous and unexpected effects disclosed herein. In particular, an embodiment may operate to reduce, or eliminate, delays in transforming raw data into actionable knowledge that can be used to support timely decision making. As another example, an embodiment may operate to transform raw data in a way that enables correct, and actionable, insights to be obtained from that data. Various other advantages of example embodiments of the invention will be apparent from this disclosure. It is noted that embodiments of the invention, whether claimed or not, cannot be performed, practically or otherwise, in the mind of a human.
Accordingly, nothing herein should be construed as teaching or suggesting that any aspect of any embodiment of the invention could or would be performed, practically or otherwise, in the mind of a human. Further, and unless explicitly indicated otherwise herein, the disclosed methods, processes, and operations, are contemplated as being implemented by computing systems that may comprise hardware and/or software. That is, such methods, processes, and operations, are defined as being computer-implemented. A. Overview Delays in making data-driven decisions can lead to significant business and personal losses. For example, the delayed decision of a CEO to act upon real-time competitive threats may leave room for competitors to build a stronger presence, all at the expense of the increasingly costly efforts of the company. As another example, a doctor learning about patient preconditions, such as diabetes or heart conditions, only after administering a treatment may have devastating effects on the health and well-being of the patient. Similarly, a delayed decision by a network security engineer to turn off a network segment, such as in response to an active ransomware attack, may lead to business and financial liabilities measured in the millions of dollars, or more. B. Aspects of Some Example Embodiments In today's “everything now” world, in which there is pressure to deliver information, results, and insights as quickly as possible, delays such as those noted above can cause significant problems. To make timely decisions based on actionable knowledge, businesses demand technology solutions that put timely insights front and center to all aspects of the data services design and implementation, from the infrastructure design to service delivery and support. Thus, example embodiments may operate to reduce, or eliminate, delays in transforming raw data into actionable knowledge that may be used to support timely decisions. B.1 Context Example embodiments may take into consideration various limitations of conventional data pipeline solutions. One such limitation is that conventional approaches perform a flattening process on data objects such as JSON, XML, or binary arrays that leads to information loss that impacts all downstream clients and consumers of the flattened data. Another such limitation is that it is difficult, or impossible, for conventional approaches to tune each individual stream inside a data pipeline to specific operational requirements, such as data volume, transport latency, and observability. For example, existing pipeline solutions such as Kafka are simply not configured to provide that level of versatility. Further, applying processing to data while the data is being streamed is complex and introduces delays. The compute capacity, which may be implemented using VMs (virtual machines) and containers for example, to run the processing jobs is simply not present in conventional data pipelines. While some approaches attempt to avoid this problem, such as by using Flink jobs which turn the pipeline into micro-segments, those approaches nonetheless present significant technical problems in the areas of data insight quality and latency. Support for managing data state inside conventional data processing pipelines is very limited. More precisely, trying to cache specific data records inside the pipeline leads to extremely complex runtime configurations that are nearly impossible to scale.
Once the data leaves the pipeline, the only way to access old data is to retrieve it from disk, and that is only possible if the pipeline engineer has not forgotten to enable disk persistence in the pipeline runtime configuration. Querying old data from disk introduces major delays which will likely slow the whole pipeline to a crawl. B.2 General Considerations Limitations and problems such as those noted herein may be addressed by various aspects of example embodiments. Such aspects may include, but are not limited to, the ability to stream not only flat, but also multi-dimensional, data structures, examples of which include graph data models. As another example, embodiments may implement real-time data streaming, as well as real-time analytics capabilities, such as analyzing data in real-time as the data is streamed, rather than using a batch analysis process. Further, embodiments may provide high accuracy and high value of the insights produced by analytics jobs by, for example, intelligently caching certain data records, rather than entire articles, and making those records available to analytics jobs on demand. As an illustration, suppose that there is a particular article published on the internet that has a paragraph of interest to a data analyst, while the rest of the article is of no particular interest. In a case like this, an embodiment may cache just the paragraph of interest, and not the entire article. This approach may also save processing time, and storage resources, relative to what would be required to process and store the entire article. As a final example, embodiments may be operable to scale up/out both data streaming and data processing. This scaling may be implemented, for example, by providing adaptive ways to tailor infrastructure resources such as, for example, containers, GPUs (graphics processing units), and network I/O capabilities, to match runtime demands. B.3 Particular Aspects of Some Example Embodiments Example embodiments embrace, among other things, various ways to use knowledge graphs, distributed analytics and intelligent workload management to significantly reduce, or eliminate, the delay between data ingest and the generation of analytical insights gained from processing that data. To the extent that embodiments are able to stream multi-dimensional data records, such as graphs for example, with little or no material loss of precision, which may be achieved through the use of serialization, such embodiments constitute an advance over conventional technology. As well, embodiments may operate to add real-time analytics to multi-dimensional data records as those records are being streamed. B.4 Implementation—Integrated Data Processing and Streaming With attention now toFIG.1, example embodiments embrace an architecture100, which may be referred to herein as a ‘stack’ or as an ‘analytics stack.’ In general, the architecture100may receive data101, such as streaming data for example, as input, and the architecture100may process the data101, possibly in real-time as the data101is being streamed into the architecture100, to generate ‘knowledge’103. The data101may comprise, for example, flat data structures, multi-dimensional data structures and multi-dimensional records, examples of which include, but are not limited to, graphs.
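By way of illustration only, the following minimal Python sketch contrasts the flattening that causes the information loss discussed above with lossless serialization of a multi-dimensional, graph-style record; the record contents, field names, and JSON layout are assumptions made here for illustration and are not details taken from the disclosure:

```python
import json

# A multi-dimensional record: one article whose sections reference
# each other, modeled as a small graph of nodes and edges.
graph_record = {
    "nodes": ["article-1", "toc", "s1", "footnote-3"],
    "edges": [["article-1", "toc"], ["article-1", "s1"],
              ["toc", "s1"], ["s1", "footnote-3"]],
}

# Conventional flattening keeps the leaf values but drops the
# relationships, which is the information loss described above.
flattened = {"article": "article-1", "section_ids": ["toc", "s1"]}
assert "edges" not in flattened  # the cross-references are gone

# Serializing the graph itself preserves every relationship end to end,
# so downstream consumers receive exactly what was streamed.
wire_payload = json.dumps(graph_record)   # what would be streamed
restored = json.loads(wire_payload)       # what a consumer receives
assert restored["edges"] == graph_record["edges"]  # nothing was lost
```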
The knowledge103that is generated, extracted, derived, and/or compiled, based on the analysis and other processing of the data101may comprise, but is not limited to, analytical insights, data trends, inferences based on part or all of the data, specific recommended actions, recommended timing for performance of specific recommended actions, and identification of particular systems and persons recommended for performing one or more specific recommended actions, for example. B.4.1 Metadata Management Plane As shown in the example ofFIG.1, the architecture100may comprise various layers. The first layer, which may be the lowest layer, is a metadata management plane102. Among other things, the metadata management plane102may make available all the metadata-related functionality to the rest of the components, that is, the other layers in the architecture100. In general, the metadata may relate to any or all of the data101, the knowledge103, and any and all processes used to obtain the knowledge103from the data101. Such metadata-related functionality may include, but is not limited to, metadata creation, metadata discovery, metadata searching, metadata indexing, metadata annotations, metadata ownership, and metadata versioning, for example. Other elements of the architecture100may use the metadata management plane102to, for example, store configurations, maintain system state, and discover and learn about capabilities offered by other components. In some embodiments, the metadata management plane102may be built on, or comprise elements of, a combination of three database engines, namely, MongoDB, Neo4j, and Riak, and may provide gRPC (Google Remote Procedure Call) microservices that may be implemented in Python. This configuration is provided only by way of example however, and is not intended to limit the scope of the invention in any way. B.4.2 Temporal Caches Layer With continued reference toFIG.1, the example architecture100may include a temporal caches layer104, which may sit above the metadata management plane102in the stack embodied by the architecture100. In some embodiments, the temporal caches layer104may comprise a variable, and time-bound, collection of micro compute environments, such as containers and accelerators for example, that may host small chunks of data and apply micro transformations, or micro analytics, to those small chunks of data based on preset or dynamic parameters, which may be individually controlled by those micro compute environments. The temporal caches layer104may operate to maintain the data state of ingested data, and may also cache ingested data for future use. In some embodiments, the temporal caches layer104may comprise and/or employ an intelligent event-based architecture to scale up or down the micro compute environments as needed to process the data101once that data101has been ingested by the architecture100. This intelligent event-based architecture may also operate to maintain multiple versions of portions, or all, of the ingested data, whether those are materialized views of data, point-in-time and selective snapshots, or incremental update trails. To streamline software development while also simplifying operational maintenance, the micro apps hosted by the micro compute environments of the temporal cache layer104may employ a particular architectural pattern called CQRS (Command-Query Responsibility Separation).
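As one possible reading of the temporal caches layer104, the following hypothetical sketch shows a time-bound cache that holds only the small chunks of interest, such as a single paragraph of an article, and expires them automatically; the class name, TTL parameter, and storage layout are assumptions for illustration rather than the disclosed design:

```python
import time

class TemporalCache:
    """A time-bound cache for small chunks of ingested data (hypothetical)."""

    def __init__(self, ttl_seconds=60.0):
        self.ttl = ttl_seconds
        self._entries = {}  # key -> (expiry time, cached chunk)

    def put(self, key, chunk):
        # Cache only the chunk of interest, e.g. one paragraph of an
        # article, rather than the entire article.
        self._entries[key] = (time.monotonic() + self.ttl, chunk)

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None
        expiry, chunk = entry
        if time.monotonic() > expiry:  # the entry has aged out
            del self._entries[key]
            return None
        return chunk

cache = TemporalCache(ttl_seconds=5.0)
cache.put("article-42/para-7", "the one paragraph of interest")
assert cache.get("article-42/para-7") is not None
```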
As for the CQRS pattern just mentioned, it may dictate a strict split within an application of the micro compute environment, that is, between the part of the application that deals with data processing and the part of the application that deals with handling external requests. For the purposes of this disclosure, analytics are, at least in part, concerned with forming virtual records by collating information from different places and applying different validation rules or ML (machine learning) inference models to allow only certain results to be promoted. More specifically, the inference models, or validation rules, may be used to select certain portions of the data101for analysis. The inference models may learn, over time, to better select data for analysis and, as such, the inference approach may be dynamic. On the other hand, validation rules may tend to be static and do not implement a learning function or capability. The inference models and/or the validation rules may use pattern recognition, that is, patterns in the incoming data, to select portions of the data101for analysis. As data is being selected for analysis, multiple new data representations are being produced. Approaches such as CQRS may enable the construction and use of analytics micro jobs that are able to separate original data models, such as inference models for example, from the analytics results that are being produced. By way of contrast, conventional data analytics platforms such as Tableau or MicroStrategy start off from the premise that the original data models and analytics insights need to be strongly correlated, and this is one reason why these conventional approaches have a difficult time handling multi-cloud and edge data. To illustrate, if the data model comprises purchase orders, those purchase orders may, or may not, provide adequate and relevant insights as to a monthly volume of business. Nonetheless, the purchase orders and monthly volume may be strongly correlated with each other in the sense that the business uses those purchase orders to make conclusions about monthly sales volume. With continued reference to analytics micro jobs, implemented in micro compute environments, example embodiments may operate to break a relatively larger analytics job into portions, or micro jobs, as sketched below. By way of illustration, the analysis of an article may be broken into micro jobs, one of which might be a micro job for analyzing just the table of contents of the article, and another micro job may be written, for example, to analyze just the footnotes in the article. The various micro jobs may be reused over and over so that new micro jobs do not have to be continuously written for data analysis. Advantageously, if a problem occurs with a micro job, the relatively small nature of the micro job may enable rapid identification and resolution of the problem. Absent the use of micro jobs, a data analysis operation may take a significantly longer period of time, and it may be time-consuming and cumbersome to identify any problems. As well, because analytics may employ a group of micro jobs, each micro job may be tuned or modified on an individual basis, possibly separately from the other micro jobs and, in this way, the analysis of data may be fine-tuned at a relatively granular level. To continue with the article example, it may be possible to modify only the table of contents micro job to provide better results from that portion of the analysis.
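A minimal, hypothetical sketch of such a CQRS split follows, using the article example: the command side performs the micro analytics and mutates state, while the query side only answers external requests. Every name here is invented for illustration and is not taken from the disclosure:

```python
class TocMicroJobCommands:
    """Command side: processes cached data chunks and mutates state."""

    def __init__(self, store):
        self.store = store  # results store shared with the query side

    def analyze_toc(self, toc_lines):
        # Micro analytics applied to just one portion of the article.
        self.store["toc_entry_count"] = len(toc_lines)

class TocMicroJobQueries:
    """Query side: handles external requests and never mutates state."""

    def __init__(self, store):
        self.store = store

    def entry_count(self):
        return self.store.get("toc_entry_count")

store = {}
TocMicroJobCommands(store).analyze_toc(["1. Intro", "2. Methods"])
assert TocMicroJobQueries(store).entry_count() == 2
```

Because the two sides share nothing but the results store, the footnote micro job, for example, could be tuned or replaced without touching either half shown here.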
The entire analytics process therefore need not be evaluated and modified; instead, only the particular area(s) of interest may be modified. Further, because the micro job is directed only to a certain portion of the article, that is, a subset of the data to be analyzed, the micro job may run faster and more efficiently, and more quickly produce results, than if micro jobs were not employed. Finally, micro jobs may also be deleted if/when they are no longer needed. B.4.3 Real-Time/Temporal Pipelines With continuing reference toFIG.1, the example architecture100may include one or more real-time/temporal pipelines106which may, among other things, receive the data101, and output the insights103. As noted herein, the data101may be operated on by any/all layers of the architecture100in order to enable the insights103to be produced. In some particular embodiments, the real-time/temporal pipelines106may be streaming data pipelines that act as the “plumbing” between the temporal caches of the temporal cache layer104, data ingest and data egress. A primary role of some embodiments of the real-time/temporal pipelines106is to bring together strings of micro transformations or micro analytics applied to the data in the temporal cache layer104micro compute environments, and to combine those micro transformations or micro analytics into macro transformations or macro analytics that may produce the actionable insights that knowledge workers may need. The real-time/temporal pipelines106may make those insights available when the knowledge workers need the insights. In some example embodiments, the real-time/temporal pipelines106component may be built on a version of ZeroMQ (https://zeromq.org/), a platform that implements a low-latency messaging queue, using a publisher-subscriber model, that may be customized to handle input data that is in the form of graph data structures, as in the sketch below. B.4.4 Knowledge Graph Management Plane As shown inFIG.1, the example architecture100may comprise a knowledge graph management plane108, which may comprise a component that operates to determine the proper macro analytics and the data intakes101needed to produce the actionable insights103expected by a user. The knowledge graph management plane108may use knowledge graph constructs to associate analytics to data, change the temporal parameters of the delivery (content and reasoning), and, using feedback-based learning systems such as AI (artificial intelligence), adjust various parameters of both the data101intake and the analytics103delivery based on user feedback, performance and cost targets set via policies, and, ultimately, control how the knowledge transforms and accumulates over time. To carry out its functions, the knowledge graph management plane108may use graph data models, Python microservices, an Istio service mesh, and the Python SciPy, NumPy and Pandas libraries, all running on k8s (the Kubernetes open-source container orchestration system). As used herein, a graph data model may serve to connect and associate various heterogeneous data sets. To illustrate, suppose that a visit record is created when a patient visits a doctor. In connection with the visit, a prescription record and a lab test record may also be created. Each of these three different records may comprise a different respective data model having a different respective structure. For example, the lab test record may be a spreadsheet, and the visit record may be a text file.
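To make the B.4.3 pipeline description concrete, the following sketch uses the pyzmq bindings to publish one serialized graph record, here the patient-visit graph just described, over a ZeroMQ publisher-subscriber pair; the topic name, in-process transport, and record contents are assumptions made for this illustration, not the disclosed implementation:

```python
import json
import time
import zmq

ctx = zmq.Context.instance()

pub = ctx.socket(zmq.PUB)
pub.bind("inproc://temporal-pipeline")     # in-process transport for the demo

sub = ctx.socket(zmq.SUB)
sub.connect("inproc://temporal-pipeline")
sub.setsockopt(zmq.SUBSCRIBE, b"patient")  # subscribe to one stream/topic
time.sleep(0.1)                            # avoid the PUB/SUB slow-joiner race

# Three heterogeneous records joined as one multi-dimensional graph record.
visit_graph = {
    "nodes": ["visit-001", "prescription-17", "lab-test-9"],
    "edges": [["visit-001", "prescription-17"], ["visit-001", "lab-test-9"]],
}
pub.send_multipart([b"patient", json.dumps(visit_graph).encode()])

topic, payload = sub.recv_multipart()
received = json.loads(payload)
assert received["edges"][0] == ["visit-001", "prescription-17"]  # intact
```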
To continue the patient-record illustration: although the records may all have different structures, there is a need to ensure that they are all correlated with each other so that a correct and complete record exists for the patient. A graph data model may be used to connect and associate these records. In some embodiments, this connection and association may be expressed in a visual/visible manner. B.4.5 Service Supervisor and Orchestrator Finally, the example architecture100may comprise a service supervisor and orchestrator layer110. In general, embodiments of the service supervisor and orchestrator layer110may comprise an operational dashboard that service operators and data owners may use to interface with the system. The service supervisor and orchestrator layer110may visually display the operational health of the system and its performance. The service supervisor and orchestrator layer110may also enable, by accepting input from a CLI (command line interface) or GUI (graphical user interface), human or software operators to set policies, audit performance, and communicate outcomes. This service supervisor and orchestrator layer110may run on a combination of Grafana (open source visualization and analytics software)/Prometheus (open source systems monitoring and alerting toolkit)/Python (high-level, interpreted, general-purpose programming language) and may be able to integrate with application performance management solutions such as, but not limited to, Datadog, ServiceNow or AppDynamics. As well, embodiments of the service supervisor and orchestrator layer110may employ an Istio backend to route operational insights103to registered clients, whether via direct call, callback, or event-based messaging via Slack, for example. C. Example Methods It is noted with respect to the disclosed methods, including the example method ofFIG.2, that any operation(s) of any of these methods, may be performed in response to, as a result of, and/or, based upon, the performance of any preceding operation(s). Correspondingly, performance of one or more operations, for example, may be a predicate or trigger to subsequent performance of one or more additional operations. Thus, for example, the various operations that may make up a method may be linked together or otherwise associated with each other by way of relations such as the examples just noted. Finally, and while it is not required, the individual operations that make up the various example methods disclosed herein are, in some embodiments, performed in the specific sequence recited in those examples. In other embodiments, the individual operations that make up a disclosed method may be performed in a sequence other than the specific sequence recited. Directing attention now toFIG.2, an example method200is disclosed. The method200may be performed in part, or in whole, by an analytics stack that is configured to receive a data stream. The example method200may begin with the receipt202, by the analytics stack, of a data stream. The data stream, which may be received202in real time from a data source as the data is generated by the data source, may comprise flattened data, multi-dimensional data structures such as graph data models, and/or un-serialized data. The data stream may comprise one or more data sets, such as articles for example, that may each comprise one or more records, or other subsets of the data sets or articles. As the data is received202, metadata concerning the data may be generated and captured204.
Inference models and/or validation rules may be used to identify, and cache206, specific records of interest to a user. The cached records may be available on-demand to one or more analytics micro jobs. As needed, the infrastructure associated with the analytics stack may be scaled208to accommodate the incoming data rate, and the analytics to be performed, and/or being performed, with respect to the incoming data. The scaling208may additionally, or alternatively, comprise scaling the data streaming rate, that is, the rate at which data is streamed to the analytics stack, and also the rate at which results are streamed from the analytics stack to a user. The cached206data, such as respective portions of the cached data, may be analyzed210by one or more micro jobs. Because the data may be cached for only a very short time, the analyzing210may be performed in real-time as the data is streamed to the analytics stack. As a result of the analyzing210, various insights concerning the data may be generated and output212, such as to a user for example. D. Further Example Embodiments Following are some further example embodiments of the invention. These are presented only by way of example and are not intended to limit the scope of the invention in any way. Embodiment 1. A method, comprising: receiving a data stream that comprises data in a form of multi-dimensional data structures; generating and storing metadata about the data; selecting, and caching, portions of the data; analyzing the cached data; and based on the analyzing, generating insights concerning the data that was analyzed. Embodiment 2. The method as recited in embodiment 1, wherein the data stream is received in real time as it is generated. Embodiment 3. The method as recited in any of embodiments 1-2, wherein the analyzing is performed in real time. Embodiment 4. The method as recited in any of embodiments 1-3, wherein the generating insights comprises using a graph data structure to associate various portions of the data with each other. Embodiment 5. The method as recited in any of embodiments 1-4, wherein the analyzing is performed using one or more micro jobs. Embodiment 6. The method as recited in embodiment 5, wherein each micro job analyzes a respective portion of the data. Embodiment 7. The method as recited in any of embodiments 1-6, wherein the cached data resides in one or more temporal caches. Embodiment 8. The method as recited in any of embodiments 1-7, further comprising scaling an infrastructure based on runtime demands. Embodiment 9. The method as recited in any of embodiments 1-8, wherein the data stream further includes flat data structures. Embodiment 10. The method as recited in any of embodiments 1-9, further comprising, prior to receiving the data stream, determining one or more parameters of the data stream, and determining how the data will be analyzed in order to generate the insights. Embodiment 11. The method as recited in any of embodiments 1-10, further comprising using the original data or the derived knowledge to train an ML inference model to generate higher confidence/value knowledge/insights the next time a user demand for analysis/insights is presented. Embodiment 12. A system, comprising hardware and/or software, operable to perform any of the operations, methods, or processes, or any portion of any of these, disclosed herein. Embodiment 13.
A non-transitory storage medium having stored therein instructions that are executable by one or more hardware processors to perform operations comprising the operations of any one or more of embodiments 1-11. E. Example Computing Devices and Associated Media The embodiments disclosed herein may include the use of a special purpose or general-purpose computer including various computer hardware or software modules, as discussed in greater detail below. A computer may include a processor and computer storage media carrying instructions that, when executed by the processor and/or caused to be executed by the processor, perform any one or more of the methods disclosed herein, or any part(s) of any method disclosed. As indicated above, embodiments within the scope of the present invention also include computer storage media, which are physical media for carrying or having computer-executable instructions or data structures stored thereon. Such computer storage media may be any available physical media that may be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer storage media may comprise hardware storage such as solid state disk/device (SSD), RAM, ROM, EEPROM, CD-ROM, flash memory, phase-change memory (“PCM”), or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other hardware storage devices which may be used to store program code in the form of computer-executable instructions or data structures, which may be accessed and executed by a general-purpose or special-purpose computer system to implement the disclosed functionality of the invention. Combinations of the above should also be included within the scope of computer storage media. Such media are also examples of non-transitory storage media, and non-transitory storage media also embrace cloud-based storage systems and structures, although the scope of the invention is not limited to these examples of non-transitory storage media. Embodiments may employ various processing systems and components including, but not limited to, accelerators such as GPUs (graphics processing units), FPGAs (field-programmable gate arrays), and ASICs (application-specific integrated circuits). Computer-executable instructions comprise, for example, instructions and data which, when executed, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. As such, some embodiments of the invention may be downloadable to one or more systems or devices, for example, from a website, mesh topology, or other source. As well, the scope of the invention embraces any hardware system or device that comprises an instance of an application that comprises the disclosed executable instructions. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described herein are disclosed as example forms of implementing the claims. As used herein, the term ‘module’ or ‘component’ may refer to software objects or routines that execute on the computing system. In some embodiments, a computing system may comprise the federation of compute resources across multiple computer nodes, such as servers.
In other embodiments of a computing system, software routines, objects, for example, may transcend the confines of a single computing node (server) and operate collaboratively across a group of servers that may, or may not, be physically collocated (same rack, room, or data center) or geo-distributed across countries, continents or geo-regions. The different components, modules, engines, and services described herein may be implemented as objects or processes that execute on the computing system, for example, as separate threads. While the system and methods described herein may be implemented in software, implementations in hardware or a combination of software and hardware are also possible and contemplated. In the present disclosure, a ‘computing entity’ may be any computing system as previously defined herein, or any module or combination of modules running on a computing system. In at least some instances, a hardware processor is provided that is operable to carry out executable instructions for performing a method or process, such as the methods and processes disclosed herein. The hardware processor may or may not comprise an element of other hardware, such as the computing devices and systems disclosed herein. In terms of computing environments, embodiments of the invention may be performed in client-server environments, whether network or local environments, or in any other suitable environment. Suitable operating environments for at least some embodiments of the invention include cloud computing environments where one or more of a client, server, or other machine may reside and operate in a cloud environment. With reference briefly now toFIG.3, any one or more of the entities disclosed, or implied, byFIGS.1-2and/or elsewhere herein, may take the form of, or include, or be implemented on, or hosted by, a physical computing device, one example of which is denoted at300. As well, where any of the aforementioned elements comprise or consist of a virtual machine (VM), that VM may constitute a virtualization of any combination of the physical components disclosed inFIG.3. In the example ofFIG.3, the physical computing device300includes a memory302which may include one, some, or all, of random access memory (RAM), non-volatile memory (NVM)304such as NVRAM for example, read-only memory (ROM), and persistent memory, one or more hardware processors306, non-transitory storage media308, UI (user interface) device310, and data storage312. One or more of the memory components302of the physical computing device300may take the form of solid state device (SSD) storage. As well, one or more applications314may be provided that comprise instructions executable by one or more hardware processors306to perform any of the operations, or portions thereof, disclosed herein. Such executable instructions may take various forms including, for example, instructions executable to perform any method or portion thereof disclosed herein, and/or executable by/at any of a storage site, whether on-premises at an enterprise, or a cloud computing site, client, datacenter, data protection site including a cloud storage site, or backup server, to perform any of the functions disclosed herein. As well, such instructions may be executable to perform any of the other operations and methods, and any portions thereof, disclosed herein. The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics.
The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
34,846
11860878
DETAILED DESCRIPTION OF THE INVENTION First Embodiment An outline of the present embodiment will first be described. The present embodiment relates to a relay device which relays inquiries related to a machine and/or a device, made by an inquiry unit that is a terminal device of a user of the machine and/or the device, to a plurality of inquiry center systems of suppliers including the maker of the machine and/or the device. An end user site is, for example, a factory in which a machine and a device such as a machine tool and an industrial robot are operated. The inquiry center systems of the suppliers may include not only the manufacturers of the machine and the device but also a controller maker, a cutting tool maker, an automation system maker, an integrator, a machine tool sales company, a cutting tool sales company, auxiliary machine makers and the like. The relay device receives, from the terminal device in the end user site, first identification information which includes at least individual identification information related to the machine and/or the device. Then, the relay device selects the inquiry center system of the supplier based on the individual identification information serving as the first identification information, an inquiry history serving as second identification information, and the like, and connects together the terminal device in the end user site and the terminal device of the inquiry center system of the supplier which is selected. In this way, in the present embodiment, it is possible to select the inquiry center system which provides an appropriate answer even when the user in the end user site does not have advanced expertise. The outline of the present embodiment has been described above. <Inquiry System100> The configuration of the present embodiment will then be described in detail with reference to drawings. FIG.1is a diagram showing an example of the configuration of an inquiry system100according to the present embodiment. As shown inFIG.1, the inquiry system100includes: a relay device1; a user side terminal4which serves as an inquiry unit and which is a terminal device in an end user site; and terminal devices8(1) to8(n), which are connected to inquiry center systems C(1) to C(n) of suppliers (n is an integer of 2 or more). The relay device1, the user side terminal4and the terminal devices8(1) to8(n) can communicate with each other through, for example, a communication network N. The communication network N is, for example, the Internet, a VPN (Virtual Private Network), a public telephone network or the like. A specific communication method in the communication network N, which one of wired connection and wireless connection is used and the like are not particularly limited. AlthoughFIG.1shows only one end user site, user side terminals4in a plurality of end user sites may be connected to the relay device1. Although in each of the end user site and the inquiry center systems C(1) to C(n) of the suppliers, one user side terminal4or one terminal device8(e.g., one of8(1) to8(n)) is indicated, each may include a plurality of terminal devices, and the terminal devices may be connected to the relay device1. Although inFIG.1, a machine tool9aand an industrial robot9bare illustrated in the end user site, they are examples of the machine and the device. The inquiry center systems C(1) to C(n) of the suppliers may be inquiry center systems of different sections in the same company or inquiry center systems of different companies.
For example, the inquiry center system C(1) of the supplier may be an inquiry center system of support staff in a machine maker, and the inquiry center system C(2) of the supplier may be an inquiry center system of experts in the same machine maker. When, in the following description, the inquiry center systems C(1) to C(n) of the suppliers do not need to be distinguished from each other, they are also collectively referred to as the “inquiry center systems C of the suppliers”. <Relay Device1> FIG.2is a functional block diagram showing an example of the functional configuration of the relay device according to the present embodiment. The relay device1is, for example, a dedicated server, a web server or the like, and includes a control unit10, a storage unit20and a communication unit30. The control unit10is a CPU (Central Processing Unit) or the like, and executes various types of programs which are stored in the program storage unit21of the storage unit20and which control the relay device1so as to perform centralized control on the relay device1. The control unit10includes a selection unit11and a determination unit12. These functional units are realized by the execution of a connection destination control program21astored in the program storage unit21by the control unit10. The selection unit11selects the inquiry center system C based on the first identification information of the machine and/or the device and the second identification information of the machine and/or the device. The first identification information includes, for example, any one of the individual identification information of the machine and/or the device, the model information of the machine and/or the device and the individual identification information of the two-dimensional code of the machine and/or the device. The second identification information includes, for example, any one of the inquiry history, the repair history of the machine and/or the device, the installation site of the machine and/or the device, the part information of the machine and/or the device, the version (version number) information of software of the machine and/or the device, the setting information of the machine and/or the device, alarm information generated in the machine and/or the device, the user information of the machine and/or the device, the manufacturer information of the machine and/or the device, and the sales maker information of the machine and/or the device. For example, the selection unit11selects an inquiry center system C(k) of a supplier based on the individual identification information that is received from the user side terminal4in the end user site and that is attached to the machine tool9aor the industrial robot9b, and on information stored on the database of a history information storage unit22, a machine information storage unit23and the like in the storage unit20which will be described later. Here, k is an integer of any one of 1 to n. The selection unit11may select the inquiry center system C of the supplier based on information on the database associated with the individual identification information. Alternatively, the selection unit11may recommend the inquiry center systems C of a plurality of suppliers based on the information on the database associated with the individual identification information such that the user side terminal4makes a selection, for example as sketched below.
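As a rough, hypothetical sketch of how the selection unit11might combine the first and second identification information, the data layout, function name, and fallback rule below are all assumptions made for illustration:

```python
def select_center(individual_id, inquiry_history, correspondence_table):
    """Pick an inquiry center system C(k) for a machine or device.

    individual_id        -- first identification information
    inquiry_history      -- second identification information, as a list
                            of (individual_id, center, timestamp) records
    correspondence_table -- machine type -> preset supplier center
    """
    past = [h for h in inquiry_history if h[0] == individual_id]
    if past:
        # A recent inquiry exists: treat it as continued and stay with
        # the same inquiry center system as last time.
        past.sort(key=lambda h: h[2], reverse=True)
        return past[0][1]
    # No history: a new inquiry, so fall back to the correspondence table.
    machine_type = individual_id.split("-")[0]  # e.g. "robot-0042"
    return correspondence_table.get(machine_type, "C(1)")

history = [("robot-0042", "C(3)", 1700000000)]
table = {"robot": "C(2)", "machinetool": "C(1)"}
assert select_center("robot-0042", history, table) == "C(3)"  # continued
assert select_center("robot-0099", history, table) == "C(2)"  # new inquiry
```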
More specifically, the selection unit11determines, based on information such as the inquiry history which is stored on the database of the history information storage unit22, the machine information storage unit23and the like in the storage unit20and which is associated with the individual identification information of the machine and the device in the end user site, that the inquiry is a new inquiry when the inquiry history or the like corresponding to the individual identification information is not present. In this case, the selection unit11may select the inquiry center system C of the supplier which is frequently selected in the inquiry history of the machine and/or the device identified by the individual identification information. Alternatively, the selection unit11may select (recommend) the inquiry center system C of the supplier corresponding to the machine and/or the device installed in the end user site based on a correspondence table stored in a correspondence table storage unit25which will be described later. When the selection unit11detects, based on the received individual identification information and the history information of inquiries or the like, for example, that an inquiry was made to the inquiry center system C(k) of the supplier about a tool fitted to the machine tool9atwo days ago, the selection unit11determines that the inquiry is being continued. In this case, the selection unit11may select the same inquiry center system C of the supplier as last time. Then, the communication unit30which will be described later connects together the user side terminal4in the end user site and the inquiry center system C(k) of the supplier which is selected. The determination unit12determines whether or not machine information needs to be stored in the storage unit20. For example, when the machine information of the machine and/or the device is received from the user side terminal4, the determination unit12determines whether or not the machine information needs to be stored in the machine information storage unit23which will be described later. More specifically, when the received machine information has already been stored in the machine information storage unit23, the determination unit12determines that the received machine information is not new machine information for the individual identification information, and thereby does not store the received machine information in the machine information storage unit23so as to prevent a duplicate registration. On the other hand, when the received machine information is not stored in the machine information storage unit23, the determination unit12determines that the received machine information is new machine information for the individual identification information, and thereby stores the received machine information in the machine information storage unit23. The machine information includes information on hardware such as the type of board which is inserted into a slot of the machine and/or the device in the end user site and the type of tool. The machine information also includes versions of software (firmware) such as a CNC (Computerized Numerical Control) and a PLC (Programmable Logic Controller). The machine information may be included in the second identification information described previously. The determination unit12may notify, through the communication unit30, the machine information received from the user side terminal4to the inquiry center system C of the supplier which is selected.
Alternatively, the determination unit12may notify, through the communication unit30, only the machine information which is received from the user side terminal4and which is determined by the determination unit12to need to be stored to the inquiry center system C of the supplier which is selected. In other words, the notification of the machine information to the inquiry center system C may be performed based on whether or not the machine information is registered in the machine information storage unit23or may be performed regardless of whether or not the machine information is registered therein. The storage unit20stores the first identification information related to the machine and/or the device and the second identification information related to the machine and/or the device. For example, the storage unit20includes a storage region which stores programs that are executed by the control unit10and the like. The storage unit20also includes the program storage unit21, the history information storage unit22, the machine information storage unit23, a log-in information storage unit24and the correspondence table storage unit25. The program storage unit21stores various types of programs which are executed by the control unit10of the relay device1. The program storage unit21stores the connection destination control program21awhich executes various types of functions of the control unit10described above. The history information storage unit22is a storage region which stores, for each of the machine and the device or each piece of individual identification information, the history information such as the inquiry history of inquiries and answers to the inquiries, the alarm information generated in the machine and/or the device and the repair history of the machine and/or the device. The machine information storage unit23is a storage region which stores the machine information. The machine information storage unit23stores the machine information so as to associate the machine information with, for example, each of the machine and the device or each piece of individual identification information. The machine information storage unit23may store the model information of the machine and/or the device or the individual identification information of the two-dimensional code of the machine and/or the device. The machine information storage unit23may also store the installation site of the machine and/or the device, the part information of the machine and/or the device, the version (version number) information of software of the machine and/or the device and the setting information of the machine and/or the device. The log-in information storage unit24stores: the user information which is information on the user in the end user site who utilizes the inquiry system100, a person in charge of the inquiry center system C of the supplier and the like; log-in information for logging in to the inquiry system100; and the like. The correspondence table storage unit25stores the correspondence table (not shown) in which the inquiry center systems C of one or more suppliers (for example, the name of the supplier and contact information) are preset so as to correspond to the machines and the like (for example, a controller, various types of machine tools, a robot, a PLC and a laser oscillator) installed in each of the end user sites. The unillustrated correspondence table may include information which indicates whether or not each of the inquiry center systems C of the suppliers logs in to the relay device1.
The selection unit11may reference the unillustrated correspondence table so as to select the inquiry center system C(k) of the supplier from among the inquiry center systems C of the suppliers which log in. Although in the inquiry system100according to the present embodiment, the case has been illustrated where the relay device1includes the storage unit (for example, the history information storage unit22and the machine information storage unit23) storing the first identification information related to the machine and/or the device and the second identification information related to the machine and/or the device, there is no limitation to this configuration. The storage unit which stores the first identification information related to the machine and/or the device and the second identification information related to the machine and/or the device may be configured as a system that the relay device1can access. Specifically, the storage unit may be provided as one or more file servers which are connected to the relay device1so as to be able to communicate therewith. In this case, the storage unit may be installed near the inquiry center system C. The storage unit may be provided, in a distributed manner, in the machine tool9aand the industrial robot9bserving as the machine and/or the device, the relay device1, the user side terminal4which serves as the inquiry unit and which is the terminal device in the end user site and the inquiry center systems C(1) to C(n) of the suppliers serving as the inquiry center systems or may be provided in any one of them in a concentrated manner. The communication unit30is a communication control device which transmits and receives data to and from external devices (for example, the user side terminal4and the terminal devices8(1)-8(n)). <User Side Terminal4> The user side terminal4is a portable terminal such as a smartphone or a tablet or a wearable device such as smart glasses which is carried by the user within the factory in the end user site or the like. FIG.3is a functional block diagram showing an example of the functional configuration of the user side terminal in the present embodiment. As shown inFIG.3, the user side terminal4includes a control unit40, a display unit50, a camera60and a communication unit70. The control unit40is a CPU or the like, and executes various types of programs (not shown) stored in a storage unit (not shown) included in the user side terminal4so as to perform centralized control on the user side terminal4. The control unit40includes an acquisition unit41. The acquisition unit41acquires the first identification information related to the machine and/or the device. Specifically, for example, as shown inFIG.4A, the acquisition unit41may read, through the camera60, a QR code (registered trademark) provided to the machine tool9aor the industrial robot9bso as to acquire information (also referred to as “entity information”) such as a URL (Uniform Resource Locator) for accessing the relay device1, the individual identification information, a manufacturing number, the machine information and the identification information (for example, a telephone number) of the user side terminal4. 
Specifically, a configuration may be adopted in which the QR code is made to have, for example, the information (entity information) such as the URL, the individual identification information, the machine information and the identification information (for example, the telephone number) of the user side terminal4, and in which the acquisition unit41reads the QR code so as to be able to directly acquire these pieces of information. Alternatively, the QR code may be prevented from having the entity information as described above, and may instead have, for example, a code (referred to as an “identification code”) such as a number which does not have a meaning by itself, which is sufficiently redundant and which is different from the manufacturing number (which may include a long character string). Here, the identification code may be previously and separately set in an entity information table (not shown) so as to uniquely correspond to the entity information such as the URL, the individual identification information, the manufacturing number, the machine information and the identification information of the user side terminal4. In this way, for example, (1) even after the provision of the QR code, the QR code can be associated with the manufacturing number, and (2) an advantage is obtained in that the identification code to be printed can be made sufficiently redundant for the shipped quantity so as to prevent forgery. The entity information table may be stored in the above-described storage unit (which stores the first identification information related to the machine and/or the device and the second identification information related to the machine and/or the device). Since the correspondence between the identification code and the entity information is provided on the side of the entity information table, when the entity information (for example, the manufacturing number after the installation) is changed or when entity information is added, the change or addition can be made without the QR code being changed. The QR code is not necessarily limited to the QR code which is provided to the machine tool9aor the industrial robot9b. For example, as shown inFIG.4B, a QR code91may be displayed on a display device92such as a liquid crystal display included in the machine tool9a. There is no limitation to the QR code, and instead of the QR code, another arbitrary code (such as a two-dimensional code or a barcode) may be applied. When the QR code is not set, the acquisition unit41may directly acquire, through wired communication or wireless communication with the machine tool9a, the industrial robot9bor the like, the entity information such as the URL and the individual identification information from the machine tool9a, the industrial robot9bor the like. The display unit50is a liquid crystal display or the like, and may display an answer received from the inquiry center system C of the supplier or the like. For example, the camera60shoots the QR code so as to read it. The communication unit70is a communication control device which transmits and receives data to and from the relay device1. Then, the user side terminal4accesses the relay device1based on the URL read from the QR code, is passed through, for example, user authentication, and is connected to the relay device1.
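A minimal sketch of the identification-code indirection just described, in which the QR code carries only a redundant, meaningless code and the entity information table resolves it to the entity information, follows; all field names and values here are hypothetical:

```python
# Entity information table: identification code -> entity information.
# Because the printed code has no meaning by itself, the table entry can
# be changed or extended without the QR code being reprinted.
ENTITY_TABLE = {
    "K7Q2-9XN4-PPL0": {
        "url": "https://relay.example/inquiry",  # hypothetical relay URL
        "individual_id": "MT-9a-000123",
        "manufacturing_no": "SN-2023-4567",
        "machine_info": {"cnc_version": "8.1"},
    },
}

def resolve(identification_code):
    """Resolve a scanned identification code to its entity information."""
    entity = ENTITY_TABLE.get(identification_code)
    if entity is None:
        # Sufficient redundancy makes a guessed or forged code unlikely
        # to hit a real entry.
        raise KeyError("unknown or forged identification code")
    return entity

entity = resolve("K7Q2-9XN4-PPL0")
assert entity["individual_id"] == "MT-9a-000123"
```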
The URL for accessing the relay device1does not need to be printed on the QR code, and, for example, the URL may be previously set on the user side terminal4as a constant (variable) of an application which is downloaded in order to utilize the inquiry system100. After being connected to the relay device1, the user side terminal4transmits the QR code (for example, the identification code) read with the camera60to the relay device1. The relay device1acquires, based on the received QR code and the entity information table described previously, for example, the entity information which includes the individual identification information of the machine tool9aor the like. As described previously, the entity information table may not necessarily be included in the relay device1, and may be stored in an arbitrary storage unit which the relay device1can access. Thereafter, the user side terminal4may provide, through the relay device1, a user interface function of performing communication such as a chat with the inquiry center system C of the supplier which is selected. In other words, the user side terminal4may display, on the display unit50, a user interface for inputting an inquiry and displaying a received answer. The communication between the user side terminal4and the inquiry center system C of the supplier which is selected is not limited to a chat, and may be performed by mail, a voice call, an inquiry field in a web page or the like. <Inquiry Center System C of Supplier> The inquiry center system C of the supplier is, for example, a computer system which is provided within the inquiry center of the supplier. The inquiry center system C of the supplier includes a control unit, a storage unit, an input unit, a display unit, a communication unit and the like which are not shown and which are provided in a computer system. The inquiry center system C of the supplier provides a user interface function for an operator. The inquiry center system C of the supplier may be a personal computer or the like, and may provide a user interface function of performing communication such as a chat with the user side terminal4. <Relay Processing of Relay Device1> An operation related to the relay processing of the relay device1according to the present embodiment will then be described. FIG.5is a flowchart illustrating the relay processing of the relay device1. It is assumed that in the relay processing, the user side terminal4is connected to the relay device1. In step S11, the communication unit30receives, from the user side terminal4in the end user site, the first identification information of the machine tool9aor the industrial robot9b, the machine information and the like. Specifically, the communication unit30acquires, through the QR code received from the user side terminal4, the entity information including the individual identification information of the machine tool9a, the industrial robot9bor the like, and receives the machine information from the user side terminal4. In step S12, the determination unit12determines whether or not the machine information of the machine tool9a, the industrial robot9bor the like received in step S11is new machine information for the individual identification information, as sketched below. When the machine information is new machine information, the processing proceeds to step S13. On the other hand, when the machine information is not new machine information, the processing proceeds to step S14.
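Under the assumptions of this sketch, in which the storage layout and function names are invented, the new-information determination of step S12reduces to a comparison against what is already registered for the individual identification information, with step S13as the registration path:

```python
machine_info_storage = {}  # individual identification info -> machine info

def is_new_machine_info(individual_id, machine_info):
    """Step S12: true only if this information is not yet registered."""
    return machine_info_storage.get(individual_id) != machine_info

def register(individual_id, machine_info):
    """Step S13: store the new machine information."""
    machine_info_storage[individual_id] = machine_info

info = {"board": "type-B", "cnc_version": "8.1"}
if is_new_machine_info("MT-9a-000123", info):
    register("MT-9a-000123", info)
assert not is_new_machine_info("MT-9a-000123", info)  # duplicate suppressed
```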
In step S13, the determination unit12registers and stores the received machine information in the machine information storage unit23. In step S14, the selection unit11selects, based on the first identification information and the second identification information received in step S11, the inquiry center system C(k) of the supplier. In step S15, the communication unit30connects together the user side terminal4in the end user site and the inquiry center system C(k) of the supplier which is selected. In step S16, the communication unit30notifies the inquiry and the machine information from the user side terminal4in the end user site to the inquiry center system C(k) of the supplier. The relay device1then completes the inquiry destination selection processing. Thereafter, the relay device1relays the details of the inquiry exchanged between the user side terminal4in the end user site and the inquiry center system C(k) of the supplier which is selected. In this way, the relay device1of the first embodiment receives at least the individual identification information of the machine and/or the device from the user side terminal4in the end user site. Then, the relay device1selects, based on the first identification information and the second identification information, the inquiry center system C of the supplier. The relay device1connects together the user side terminal4in the end user site and the inquiry center system C of the supplier which is selected. In this way, even when the user in the end user site does not have advanced expertise, the relay device1can select, based on the individual identification information which is acquired with the user side terminal4as the first identification information, the inquiry center system C of the supplier which provides an appropriate answer. Then, it is possible to prevent the user in the end user site from being passed from one contact to another within the inquiry system100, with the result that it is possible to solve the problem at an early stage. The relay device1determines whether or not the machine information received from the user side terminal4is new machine information for the individual identification information, and when the machine information is determined to be new machine information, the relay device1registers the machine information in the machine information storage unit23. In this way, the relay device1can acquire the latest machine information of the machine and/or the device in the end user site so as to be able to prevent a duplicate registration. The first embodiment has been described above. Variation 1 of First Embodiment Although in the first embodiment described above, the selection unit11selects the inquiry center system C(k) of the supplier based on the individual identification information received from the user side terminal4in the end user site and the information stored in the history information storage unit22and the machine information storage unit23, there is no limitation to this configuration. For example, the communication unit30may receive, from the user side terminal4in the end user site, the individual identification information and position information (for example, GeoIP or GPS) of the user side terminal4. Then, the selection unit11may select the inquiry center system C(k) of the supplier based on the individual identification information, the information stored in the history information storage unit22and the machine information storage unit23, and the position information, for example as sketched below.
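A hypothetical sketch of nearest-center selection from a GPS position follows; the haversine distance, the center coordinates, and the idea of choosing by geographic proximity are illustrative assumptions, since the disclosure does not specify a particular selection computation:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

# Hypothetical supplier inquiry centers and their coordinates.
CENTERS = {"C(1)": (35.68, 139.77), "C(2)": (48.14, 11.58),
           "C(3)": (41.88, -87.63)}

def nearest_center(terminal_lat, terminal_lon):
    """Pick the center closest to the user side terminal's position."""
    return min(
        CENTERS,
        key=lambda c: haversine_km(terminal_lat, terminal_lon, *CENTERS[c]),
    )

assert nearest_center(35.0, 139.0) == "C(1)"  # a terminal near Tokyo
```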
Second Embodiment

A second embodiment will now be described. In the second embodiment, a relay device 1A further includes, in addition to the functions of the first embodiment, a function of making the inquiry unit select the installation site of the machine and/or the device when the position information from the end user site differs from the installation site of the machine and/or the device in the information stored on the database. In this way, the relay device 1A of the second embodiment can more accurately select the nearest inquiry center system of the supplier, from which the arrangement of a maintenance person or the like is easy. The second embodiment will be described below.

An inquiry system according to the second embodiment has the same configuration as the inquiry system 100 shown in FIG. 1 and according to the first embodiment. The user side terminal 4 in the end user site and the inquiry center system C of the supplier in the second embodiment have the same configurations as in the first embodiment.

<Relay Device 1A>

FIG. 6 is a functional block diagram showing an example of the functional configuration of the relay device according to the second embodiment. Elements which have the same functions as the elements of the relay device 1 in FIG. 1 are identified with the same reference numerals, and their detailed description will be omitted. The relay device 1A according to the second embodiment includes, as with the relay device 1 according to the first embodiment, a control unit 10a, the storage unit 20 and the communication unit 30. The control unit 10a includes a selection unit 11a and the determination unit 12. These functional units are realized by the control unit 10a executing the connection destination control program 21a stored in the program storage unit 21.

The selection unit 11a selects the inquiry center system C based on the first identification information of the machine and/or the device, the second identification information of the machine and/or the device, and the position information received from the user side terminal 4. For example, the selection unit 11a compares the installation site of the machine tool 9a or the industrial robot 9b, which is associated with the individual identification information received from the user side terminal 4 and attached to the machine tool 9a or the industrial robot 9b and which is stored on the database of the history information storage unit 22, the machine information storage unit 23 and the like, with the position information (for example, GeoIP or GPS) acquired from the user side terminal 4, and thereby determines whether or not they agree with each other. When the installation site of the machine tool 9a or the industrial robot 9b agrees with the position information of the user side terminal 4, the selection unit 11a selects the inquiry center system C as with the selection unit 11 of the first embodiment. On the other hand, when the installation site of the machine tool 9a or the industrial robot 9b does not agree with the position information of the user side terminal 4, the selection unit 11a cannot select the optimal inquiry center system C of the supplier.
Hence, the selection unit 11a makes the user of the user side terminal 4 select the installation site of the machine tool 9a or the industrial robot 9b. More specifically, for example, when the machine tool 9a or the industrial robot 9b is first installed in a factory in an “X country” and is thereafter transferred to a factory in an “A country”, the installation site of the machine tool 9a or the industrial robot 9b which is associated with the individual identification information attached to it and which is stored on the database remains the “X country” as long as the information on the database is not updated, for example, by the user of the user side terminal 4. On the other hand, it is likely that, for example, the position information acquired by the user side terminal 4 from a GPS signal indicates the “A country”, while the position information (GeoIP) indicated by the connection base of the communication company with which the user side terminal 4 contracts indicates a “B country”. Hence, for example, the selection unit 11a displays, for the user side terminal 4, a list such as “The inquiry region cannot be identified. Make a selection from 1. A country (GPS), 2. B country (GeoIP) and 3. X country (database)” so as to make the user of the user side terminal 4 select the country. For example, when the user of the user side terminal 4 selects the “A country”, the selection unit 11a can select the nearest inquiry center system C of the supplier in the selected “A country”, from which, for example, a maintenance person is easily arranged.

The installation site and the position information are not limited to the country unit, and may be a region such as a prefecture or a municipality. Instead of displaying the acquired position information as a list, the selection unit 11a may display, for the user side terminal 4, the acquired position information on a map so as to make the user side terminal 4 select the position information (region) displayed on the map. Alternatively, the selection unit 11a may display an input field so as to make the user side terminal 4 input an address, a zip code or the like. The determination unit 12 has a function similar to that of the determination unit 12 in the first embodiment.

<Relay Processing of Relay Device 1A>

An operation related to the relay processing of the relay device 1A according to the present embodiment will now be described. FIG. 7 is a flowchart illustrating the relay processing of the relay device 1A. In the relay processing shown in FIG. 7, the processing from step S22 to step S23 and the processing from step S26 to step S28 are the same as that from step S12 to step S13 and that from step S14 to step S16 in the first embodiment of FIG. 5, and thus their description will be omitted.

In step S21, the communication unit 30 receives, from the user side terminal 4 in the end user site, the first identification information of the machine tool 9a or the industrial robot 9b, the machine information, the position information of the user side terminal 4 and the like.

In step S24, the selection unit 11a determines whether or not the installation site of the machine tool 9a or the industrial robot 9b, which is associated with the individual identification information received in step S21 and which is stored on the database, agrees with the position information of the user side terminal 4; a sketch of this site check appears below.
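As a rough illustration of the site check and selection list described above, the following Python sketch compares the installation site on the database with the GPS and GeoIP position information and falls back to a user selection when they disagree. The ask_user callback and the region names are assumptions for illustration only.

```python
# Sketch of the second embodiment's site check (steps S24/S25). The
# ask_user callback stands in for the list displayed on the user side
# terminal; the region names follow the X/A/B country example above.

def choose_installation_site(db_site, gps_region, geoip_region, ask_user):
    if db_site == gps_region:
        return db_site  # sites agree: proceed as in the first embodiment

    # Sites disagree (step S25): make the user side terminal select, e.g.
    # "1. A country (GPS), 2. B country (GeoIP), 3. X country (database)"
    candidates = [(gps_region, "GPS"), (geoip_region, "GeoIP"),
                  (db_site, "database")]
    prompt = "The inquiry region cannot be identified. Make a selection from "
    prompt += ", ".join(f"{i + 1}. {name} ({src})"
                        for i, (name, src) in enumerate(candidates))
    return ask_user(prompt, [name for name, _ in candidates])

# Example usage with a callback that simply picks the first candidate:
# choose_installation_site("X country", "A country", "B country",
#                          lambda prompt, options: options[0])
```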
When the installation site of the machine tool 9a or the industrial robot 9b agrees with the position information of the user side terminal 4, the processing proceeds to step S26. On the other hand, when the installation site of the machine tool 9a or the industrial robot 9b does not agree with the position information of the user side terminal 4, the processing proceeds to step S25.

In step S25, the selection unit 11a displays a list for selecting the installation site of the machine tool 9a or the industrial robot 9b so as to make the user side terminal 4 select the installation site.

In this way, the relay device 1A of the second embodiment receives, from the user side terminal 4 in the end user site, the individual identification information of the machine and/or the device, the machine information and the position information of the user side terminal 4. The relay device 1A determines whether or not the installation site of the machine and/or the device which is associated with the individual identification information and based on the information on the database agrees with the position information of the user side terminal 4, and when they do not agree with each other, the user side terminal 4 is made to select the installation site of the machine and/or the device. In this way, the relay device 1A can select the inquiry center system C of the supplier based on the first identification information, the second identification information and the installation site selected by the user of the user side terminal 4, and can connect together the user side terminal 4 in the end user site and the selected inquiry center system C of the supplier.

As described above, even when the user in the end user site does not have advanced expertise, the relay device 1A can select, based on the individual identification information which is acquired with the user side terminal 4 as the first identification information, the inquiry center system C of the supplier which provides an appropriate answer. It is thus possible to prevent the user in the end user site from being passed from one contact to another within the inquiry system 100, with the result that the problem can be solved at an early stage. Furthermore, when the installation site of the machine and/or the device which is associated with the individual identification information and based on the information on the database does not agree with the position information of the user side terminal 4, by making the user side terminal 4 select the installation site of the machine and/or the device, the relay device 1A can more accurately select the nearest inquiry center system C of the supplier, from which the arrangement of a maintenance person or the like is easy. The second embodiment has been described above.

Third Embodiment

A third embodiment will now be described. In the third embodiment, a relay device 1B further includes, in addition to the functions of the first embodiment, a function of monitoring the details of the inquiry from the end user site. The relay device 1B of the third embodiment searches for an answer to the inquiry based on the result of this monitoring, the individual identification information serving as the first identification information, the inquiry history serving as the second identification information, and the like. The relay device 1B transmits the answer found by the search to the terminal device in the end user site without providing notification to the inquiry center system of the supplier.
In this way, the relay device 1B searches the history information of inquiries and the like based on the details of the inquiry from the end user site, and transmits the answer found by the search to the end user site, so that it can respond rapidly to the end user site without connecting to the inquiry center system of the supplier. The third embodiment will be described below.

The inquiry system according to the third embodiment has the same configuration as the inquiry system 100 shown in FIG. 1 and according to the first embodiment. The user side terminal 4 in the end user site and the inquiry center system C of the supplier in the third embodiment have the same configurations as in the first embodiment.

<Relay Device 1B>

FIG. 8 is a functional block diagram showing an example of the functional configuration of the relay device according to the third embodiment. Elements which have the same functions as the elements of the relay device 1 in FIG. 1 are identified with the same reference numerals, and their detailed description will be omitted. The relay device 1B according to the third embodiment includes, as with the relay device 1 according to the first embodiment, a control unit 10b, the storage unit 20 and the communication unit 30. The control unit 10b includes the selection unit 11, the determination unit 12 and a proposal unit 13. These functional units are realized by the control unit 10b executing the connection destination control program 21a stored in the program storage unit 21. The selection unit 11 and the determination unit 12 have functions similar to those of the selection unit 11 and the determination unit 12 in the first embodiment.

The proposal unit 13 monitors the details of the inquiry from the user side terminal 4, and proposes an answer (solution) to the user side terminal 4 before notification to the inquiry center system when the answer is present on the database. That is, the proposal unit 13 monitors the details of the inquiry from the user side terminal 4 in the end user site serving as the inquiry unit, and proposes the solution to the user side terminal 4 before notification to the inquiry center system C of the supplier when the solution is present on the database of the history information storage unit 22, the machine information storage unit 23 and the like. In an example, when the first inquiry from the user side terminal 4 in the end user site is related to an alarm, and an “alarm100” or the like is found as a result of monitoring the details of the inquiry, the proposal unit 13 automatically returns, to the user side terminal 4 in the end user site, an answer (solution) which is present on the database of the history information storage unit 22, the machine information storage unit 23 and the like, such as “A state where the parameter setting can be changed is entered. Set PWE on the setting screen to 0 for safety”. As described above, the relay device 1B proposes the found answer (solution) to the user side terminal 4 in the end user site before the selection of the inquiry center system C of the supplier, and can thereby rapidly solve the problem which occurs in the end user site.

For the monitoring of the details of the inquiry with the proposal unit 13, the storage unit 20 may previously store, for each machine and/or device, keywords such as the “alarm100”. The proposal unit 13 may then use, for example, a method such as known pattern matching to determine whether or not an answer (solution) is present on the database by comparing the keywords with the details of the inquiry, as in the sketch below.
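A minimal sketch of such keyword-based matching follows, assuming a simple keyword-to-solution table; the table contents echo the “alarm100” example above and are an illustration, not an exhaustive implementation of the proposal unit.

```python
# Sketch of the answer proposal lookup (steps S341 to S343). The keyword
# table is an illustrative assumption; the "alarm100" entry echoes the
# example answer quoted above.

ANSWER_DATABASE = {
    "alarm100": ("A state where the parameter setting can be changed is "
                 "entered. Set PWE on the setting screen to 0 for safety"),
}

def propose_answer(inquiry_text):
    """Return a stored solution when a keyword matches the inquiry
    details, or None so that center selection proceeds (step S35)."""
    text = inquiry_text.lower()
    for keyword, solution in ANSWER_DATABASE.items():
        if keyword in text:
            return solution  # S343: returned via the communication unit
    return None  # S342: no solution on the database
```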
<Relay Processing of Relay Device 1B>

An operation related to the relay processing of the relay device 1B according to the present embodiment will now be described. FIG. 9 is a flowchart illustrating the relay processing of the relay device 1B. In the relay processing shown in FIG. 9, the processing from step S31 to step S33 and the processing from step S36 to step S38 are the same as that from step S11 to step S13 and that from step S14 to step S16 in the first embodiment of FIG. 5, and thus their description will be omitted.

In step S34, the proposal unit 13 performs answer proposal processing on the details of the inquiry from the user side terminal 4 in the end user site from which the first identification information was received in step S31. The detailed flow of the answer proposal processing will be described later.

In step S35, the proposal unit 13 determines whether or not the inquiry of the user side terminal 4 has been solved by the answer proposal processing in step S34. When the inquiry is solved, the inquiry destination selection processing is completed. On the other hand, when the inquiry is not solved, the processing from step S36 to step S38 is performed, and the relay device 1B completes the inquiry destination selection processing. Thereafter, the relay device 1B relays the details of the inquiry exchanged between the user side terminal 4 in the end user site and the selected inquiry center system C(k) of the supplier.

FIG. 10 is a flowchart illustrating the details of the answer proposal processing shown in step S34 of FIG. 9. In the flowchart of FIG. 10, steps S341 to S343 indicate the flow of processing of the proposal unit 13.

In step S341, the proposal unit 13 monitors the details of the inquiry from the user side terminal 4 in the end user site.

In step S342, the proposal unit 13 determines, based on the result of the monitoring in step S341, whether or not an answer (solution) is present on the database. When the answer (solution) is present on the database, the processing proceeds to step S343. On the other hand, when the answer (solution) is not present on the database, the processing proceeds to step S35.

In step S343, the proposal unit 13 returns, through the communication unit 30, the answer (solution) present on the database to the user side terminal 4. The flow of the answer proposal processing is then complete, and the processing proceeds to step S35.

In this way, the relay device 1B of the third embodiment monitors the details of the inquiry from the user side terminal 4 in the end user site, and when an answer is present on the database, proposes the answer (solution) to the user side terminal 4 in the end user site before notification to the inquiry center system C of the supplier. The relay device 1B can thereby rapidly solve the problem which occurs in the end user site, in addition to achieving the effects of the first embodiment. The third embodiment has been described above.

Variation 1 of Third Embodiment

Although in the third embodiment described above the relay device 1B returns the answer (solution) present on the database to the user side terminal 4 in the end user site, when the problem is not solved by the answer (solution), the answer (solution) may be notified to the selected inquiry center system C(k) of the supplier.
In this way, the inquiry center system C(k) of the supplier can check whether or not the answer received from the relay device 1B is correct, and can thereby answer the inquiry from the user side terminal 4 in the end user site more accurately.

Fourth Embodiment

A fourth embodiment will now be described. The relay device 1B according to the fourth embodiment further includes, in addition to the functions of the third embodiment, a function of monitoring the details of the inquiry transmitted to and received from the selected inquiry center system of the supplier. The relay device 1B according to the fourth embodiment determines, based on the result of the monitoring of the details of the inquiry, whether or not machine data indicating the state of the machine and/or the device is needed, and acquires the machine data automatically or by an operation of the user side terminal 4 in the end user site. The relay device 1B according to the fourth embodiment transfers the machine data acquired from the user side terminal 4 to the selected inquiry center system of the supplier, so that the inquiry center system of the supplier can analyze the received machine data, with the result that the problem which occurs in the end user site can be solved rapidly. The fourth embodiment will be described below.

The inquiry system according to the fourth embodiment has the same configuration as the inquiry system 100 shown in FIG. 1 and according to the first embodiment. The user side terminal 4 in the end user site and the inquiry center system C of the supplier in the fourth embodiment have the same configurations as in the first embodiment. The relay device according to the fourth embodiment has the same configuration as the relay device 1B shown in FIG. 8 and according to the third embodiment.

When the proposal unit 13 monitors the details of the inquiry and determines that machine data indicating the state of the machine and/or the device is needed, the proposal unit 13 acquires the machine data automatically or by an operation of the user side terminal 4, and provides the acquired machine data to the selected inquiry center system. That is, the proposal unit 13 monitors the details of the inquiry between the user side terminal 4 in the end user site and the selected inquiry center system, and determines, based on the result of this monitoring, whether or not the machine data indicating the state of the machine and/or the device is needed. The proposal unit 13 further has a function of acquiring, when determining that the machine data is needed, the machine data automatically or by an operation of the user side terminal 4 by the user, and of providing the acquired machine data to the selected inquiry center system C of the supplier. Here, the machine data refers to physical data which is stored or collected by the controller of a machine tool, a robot or the like and by an external measurement device, and includes setting data such as parameters, alarm data, observation data of an internal state, observation data of a sensor, and the like. For example, the proposal unit 13 monitors the details of the inquiry between the user side terminal 4 in the end user site and the selected inquiry center system C of the supplier, and determines, based on the result of this monitoring, whether or not the machine data indicating the state of the machine and/or the device is needed.
When the proposal unit13determines that the machine data is needed, the proposal unit13acquires the machine data automatically or by the operation of the user side terminal4with the user. In an example, when the proposal unit13finds the details of a dialog on “servo adjustment” from the details of the inquiry in a chat or the like between the user side terminal4in the end user site and the inquiry center system C(k) of the supplier which is selected, the proposal unit13transmits an instruction to request an NC parameter as the machine data to the user side terminal4. The user side terminal4transmits, based on the instruction from the relay device1B, the data (machine data) of the NC parameter to the relay device1B. The communication unit30of the relay device1B provides the data of the NC parameter acquired from the user side terminal4to the inquiry center system C(k) of the supplier which is selected. The inquiry center system C(k) of the supplier analyzes the received NC parameter so as to be able to provide an instruction to adjust a servo speed gain parameter to the user side terminal4. In this way, the inquiry center system C(k) of the supplier can rapidly solve the problem which occurs in the end user site. <Machine Data Acquisition Processing of Relay Device1B> An operation related to the machine data acquisition processing of the relay device1B according to the present embodiment will then be described. FIG.11is a flowchart illustrating the machine data acquisition processing of the relay device1B. The machine data acquisition processing shown inFIG.11is executed, for example, when the inquiry destination selection processing shown inFIG.9is performed and thereafter an exchange on the inquiry is made between the user side terminal4in the end user site and the inquiry center system C(k) of the supplier which is selected. In step S41, the proposal unit13monitors the details of the inquiry exchanged between the user side terminal4in the end user site and the inquiry center system C(k) of the supplier which is selected. In step S42, the proposal unit13determines, based on the result of the monitoring in step S41, whether or not the machine data indicating the state of the machine and/or the device is needed. When the proposal unit13determines that the machine data is needed, the processing proceeds to step S43. On the other hand, when the proposal unit13determines that the machine data is not needed, the machine data acquisition processing is completed. In step S43, the proposal unit13acquires the machine data automatically or by the operation of the user side terminal4with the user. In step S44, the proposal unit13provides, through the communication unit30, the machine data acquired from the user side terminal4to the inquiry center system C(k) of the supplier which is selected. In this way, the relay device1B of the fourth embodiment monitors the details of the inquiry between the user side terminal4in the end user site and the inquiry center system C(k) of the supplier which is selected, and thereby determines whether or not the machine data indicating the state of the machine and/or the device is needed. When the relay device1B determines that the machine data is needed, the relay device1B acquires the machine data automatically or by the operation of the user side terminal4. 
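The following Python sketch illustrates the machine data acquisition processing of FIG. 11 (steps S41 to S44) under stated assumptions: the trigger table and the transport callbacks are stand-ins, and the “servo adjustment” to NC parameter rule simply mirrors the example above.

```python
# Sketch of the machine data acquisition processing (steps S41 to S44).
# TRIGGERS and the callbacks are illustrative assumptions; the entry
# mirrors the "servo adjustment" -> NC parameter example above.

TRIGGERS = {"servo adjustment": "nc_parameters"}

def acquire_machine_data(chat_message, request_from_terminal, send_to_center):
    # S41/S42: monitor the exchanged details and decide whether machine
    # data indicating the state of the machine and/or the device is needed
    for phrase, data_kind in TRIGGERS.items():
        if phrase in chat_message.lower():
            # S43: acquire the data automatically or by a user operation
            machine_data = request_from_terminal(data_kind)
            # S44: provide the acquired data to the selected inquiry center
            send_to_center(machine_data)
            return machine_data
    return None  # no machine data needed; processing is completed
```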
Fifth Embodiment

A fifth embodiment will now be described. The relay device 1B according to the fifth embodiment further includes, in addition to the functions of the fourth embodiment, a function of proposing, when determining that the inquiry center system needs to be changed, a new inquiry center system of a supplier to either or both of the end user site and the inquiry center system. In this way, the relay device 1B according to the fifth embodiment can select a more appropriate inquiry center system, so that the inquiry center system can rapidly solve the problem which occurs in the end user site. The fifth embodiment will be described below.

The inquiry system according to the fifth embodiment has the same configuration as the inquiry system 100 shown in FIG. 1 and according to the first embodiment. The user side terminal 4 in the end user site and the inquiry center system C of the supplier in the fifth embodiment have the same configurations as in the first embodiment. The relay device according to the fifth embodiment has the same configuration as the relay device 1B shown in FIG. 8 and according to the third embodiment.

When the proposal unit 13 monitors the details of the inquiry and determines that the inquiry center system needs to be changed, the proposal unit 13 proposes a new inquiry center system to either or both of the user side terminal 4 and the selected inquiry center system. For example, the proposal unit 13 monitors the details of the inquiry exchanged between the user side terminal 4 in the end user site and the selected inquiry center system C(k) of the supplier, and determines, based on the result of this monitoring, whether or not the selected inquiry center system C(k) of the supplier needs to be changed. When the proposal unit 13 determines that the inquiry center system C(k) of the supplier needs to be changed, the proposal unit 13 proposes a new inquiry center system C(i) of a supplier to either or both of the user side terminal 4 in the end user site and the inquiry center system C(k) of the supplier. Here, i is an integer of any one of 1 to n, and is a value different from k.

In an example, the proposal unit 13 monitors the details of the inquiry between the user side terminal 4 in the end user site and the inquiry center system C(k) of the supplier which was first selected based on an inquiry on a “machining failure” from the user side terminal 4. When the proposal unit 13 finds the details of a dialog on “resonance caused by an excessive servo gain” in the details of the inquiry in a chat or the like between the user side terminal 4 and the inquiry center system C(k) of the supplier, the proposal unit 13 may propose a consultation with the new inquiry center system C(i) of the supplier to the current inquiry center system C(k) of the supplier. The proposal unit 13 may also propose the change to the new inquiry center system C(i) of the supplier to the user side terminal 4.
Thereafter, the current inquiry center system C(k) of the supplier contacts the new inquiry center system C(i) of the supplier after receiving the approval of the end user site, so as to request the subsequent actions. In other words, the inquiry system 100 has a transfer function and a sharing function. In this way, the relay device 1B can select a more appropriate inquiry center system C of the supplier, so that the inquiry center system C of the supplier can rapidly solve the problem which occurs in the end user site.

<Inquiry Center Proposal Processing of Relay Device 1B>

An operation related to the inquiry center proposal processing of the relay device 1B according to the present embodiment will now be described. FIG. 12 is a flowchart illustrating the inquiry center proposal processing of the relay device 1B. The inquiry center proposal processing shown in FIG. 12 is executed, for example, after the inquiry destination selection processing shown in FIG. 9 is performed and an exchange on the inquiry is thereafter made between the user side terminal 4 in the end user site and the selected inquiry center system C(k) of the supplier.

In step S51, the proposal unit 13 monitors the details of the inquiry exchanged between the user side terminal 4 in the end user site and the selected inquiry center system C(k) of the supplier.

In step S52, the proposal unit 13 determines, based on the result of the monitoring in step S51, whether or not the selected inquiry center system C(k) of the supplier needs to be changed. When the proposal unit 13 determines that the inquiry center system C(k) of the supplier needs to be changed, the processing proceeds to step S53. On the other hand, when the proposal unit 13 determines that the inquiry center system C(k) of the supplier does not need to be changed, the inquiry center proposal processing is completed.

In step S53, the proposal unit 13 proposes the new inquiry center system C(i) of the supplier to either or both of the user side terminal 4 in the end user site and the inquiry center system C(k) of the supplier. Thereafter, for example, the current inquiry center system C(k) of the supplier contacts the new inquiry center system C(i) of the supplier after receiving the approval of the end user site, so as to request the subsequent actions. The flow of the inquiry center proposal processing is then complete.

In this way, the relay device 1B of the fifth embodiment monitors the details of the inquiry between the user side terminal 4 in the end user site and the selected inquiry center system C(k) of the supplier, and thereby determines whether or not the selected inquiry center system C(k) of the supplier needs to be changed. When the relay device 1B determines that the inquiry center system C(k) of the supplier needs to be changed, the relay device 1B proposes the new inquiry center system C(i) of the supplier to either or both of the user side terminal 4 and the current inquiry center system C(k) of the supplier. In this way, in addition to achieving the effects of the fourth embodiment, the relay device 1B can select a more appropriate inquiry center system C of the supplier, so that the inquiry center system C of the supplier can rapidly solve the problem which occurs in the end user site. The fifth embodiment has been described above. A sketch of this inquiry center proposal processing follows.
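The following Python sketch illustrates the inquiry center proposal processing of FIG. 12 (steps S51 to S53). The rerouting rule is an illustrative assumption mirroring the “resonance caused by an excessive servo gain” example above; the notify callback stands in for the proposal to the terminal and/or the current center.

```python
# Sketch of the inquiry center proposal processing (steps S51 to S53).
# REROUTE_RULES and the notify callback are illustrative assumptions.

REROUTE_RULES = {"servo gain": "C(i)-servo"}

def propose_new_center(chat_message, current_center, notify):
    # S51/S52: monitor the exchange and decide whether the selected
    # inquiry center system C(k) needs to be changed
    for phrase, new_center in REROUTE_RULES.items():
        if phrase in chat_message.lower() and new_center != current_center:
            # S53: propose C(i) to the terminal and/or the current center;
            # the actual transfer waits for the end user site's approval
            notify(new_center)
            return new_center
    return None  # no change needed; proposal processing is completed
```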
Although the first to fifth embodiments have been described above, the relay devices 1, 1A and 1B are not limited to the embodiments described above, and variations, modifications and the like are included therein as long as their purposes can be achieved.

<Variation 1>

Although in the first to fifth embodiments discussed above the case where the QR code is provided to the machine tool 9a or the industrial robot 9b is described as an example, there is no limitation to this configuration. For example, a two-dimensional code other than the QR code may be provided, or a barcode or the like may be provided. The entity information corresponding to the QR code (identification code) provided to the machine tool 9a or the industrial robot 9b is not limited to the individual identification information, and may be device identification information (model information) with which it is possible to identify the machine tool 9a or the industrial robot 9b. As the device identification information (model information), for example, a serial number, or a unique name given to the machine tool 9a or the industrial robot 9b in the end user site, with which it is possible to identify the machine tool 9a or the industrial robot 9b, may be provided.

<Variation 2>

Although in the first to fifth embodiments discussed above one relay device is provided as the relay device 1, 1A or 1B, there is no limitation to this configuration. For example, the functions of the selection unit 11 and the determination unit 12 in the relay device 1, of the selection unit 11a and the determination unit 12 in the relay device 1A, or of the selection unit 11, the determination unit 12 and the proposal unit 13 in the relay device 1B may be realized by utilizing, for example, virtual server functions on a cloud. As the relay device 1, 1A or 1B, a distributed processing system may be used in which the functions of the relay device 1, 1A or 1B are distributed to a plurality of servers as necessary. The storage unit (for example, the history information storage unit 22 and the machine information storage unit 23) which is included in the storage unit 20 of the relay device 1, 1A or 1B and which stores the first identification information related to the machine and/or the device and the second identification information related to the machine and/or the device may be arranged in a server separate from the relay device 1, 1A or 1B, and part or the whole thereof may be arranged in any one of the machine tool 9a, the industrial robot 9b, the user side terminal 4 and the inquiry center system C of the supplier.

The individual functions of the inquiry systems 100 and the relay devices 1, 1A and 1B according to the first to fifth embodiments can be realized by hardware, software or a combination thereof. Here, realization by software means realization by a computer reading and executing programs. The individual constituent units included in the inquiry systems 100 and the relay devices 1, 1A and 1B can be realized by hardware including an electronic circuit and the like, by software or by a combination thereof. The programs are stored using various types of non-transitory computer readable media and can be supplied to the computer. The non-transitory computer readable media include various types of tangible storage media.
Examples of the non-transitory computer readable medium include magnetic recording media (for example, a flexible disk, a magnetic tape and a hard disk drive), a magneto-optical recording medium (for example, a magneto-optical disk), a CD-ROM (Read Only Memory), a CD-R, a CD-R/W, and semiconductor memories (for example, a mask ROM, a PROM (Programmable ROM), an EPROM (Erasable PROM), a flash ROM and a RAM). The programs may also be supplied to the computer by various types of transitory computer readable media. Examples of the transitory computer readable medium include electrical signals, optical signals and electromagnetic waves. The transitory computer readable media can supply the programs to the computer through a wired communication path such as an electric wire or an optical fiber, or through a wireless communication path. The steps describing the programs recorded in the recording medium include not only processing which is performed chronologically in the order described but also processing which is not necessarily performed chronologically and which is executed in parallel or individually.

In other words, the inquiry system and the relay device of the present disclosure can take various embodiments having configurations as described below.

(1) An inquiry system 100 of the present disclosure is an inquiry system in which an inquiry unit that is a terminal device (user side terminal 4) of a user of a machine and/or a device makes inquiries related to the machine and/or the device to a plurality of inquiry center systems C of suppliers including a maker of the machine and/or the device, and includes: a relay device 1 which connects together the user side terminal 4 and the inquiry center systems C; and a storage unit 20 which stores first identification information related to the machine and/or the device and second identification information related to the machine and/or the device. The user side terminal 4 includes: an acquisition unit 41 which acquires the first identification information related to the machine and/or the device; and a first communication unit (communication unit 70) which transmits the first identification information acquired by the acquisition unit 41 to the relay device 1. The relay device 1 includes: a second communication unit (communication unit 30) which communicates between the user side terminal 4 and the inquiry center systems C; and a selection unit 11 which selects, based on the first identification information and the second identification information, the inquiry center system C, and the second communication unit connects together the user side terminal 4 and the inquiry center system C selected by the selection unit 11. With the inquiry system 100, it is possible to select the inquiry center system C which provides an appropriate answer even when the user in an end user site does not have advanced expertise.

(2) Preferably, in the inquiry system 100 described in (1), the acquisition unit 41 acquires machine information of the machine and/or the device, the relay device 1 includes a determination unit 12 which determines whether or not the machine information needs to be stored in the storage unit 20, and the determination unit 12 stores, in the storage unit 20, the machine information which is determined to need to be stored. In this way, it is possible to acquire the latest machine information of the machine and/or the device in the end user site while preventing duplicate registration.
(3) Preferably, in the inquiry system 100 described in (2), the determination unit 12 notifies the machine information acquired by the acquisition unit 41 to the selected inquiry center system C. In this way, the selected inquiry center system C of the supplier can also acquire the latest information of the machine and/or the device in the end user site.

(4) Preferably, in the inquiry system 100 described in (2), the determination unit 12 notifies the machine information determined by the determination unit 12 to need to be stored to the selected inquiry center system C. In this way, the selected inquiry center system C of the supplier can also acquire the latest and changed machine information of the machine and/or the device in the end user site.

(5) Preferably, in the inquiry system 100 described in any one of (1) to (4), the first identification information includes either of individual identification information of the machine and/or the device and model information of the machine and/or the device. In this way, only the individual identification information (model information) is acquired from the user side terminal 4, and thus it is possible to select the inquiry center system C of the supplier which provides an appropriate answer.

(6) Preferably, in the inquiry system 100 described in any one of (1) to (5), the second identification information includes any one of an inquiry history, a repair history of the machine and/or the device, an installation site of the machine and/or the device, part information of the machine and/or the device, version information of software of the machine and/or the device, setting information of the machine and/or the device, alarm information generated in the machine and/or the device, user information of the machine and/or the device, manufacturer information of the machine and/or the device, and sales maker information of the machine and/or the device. In this way, the inquiry center system C of the supplier which provides an appropriate answer can be selected with high accuracy.

(7) Preferably, in the inquiry system 100 described in any one of (2) to (4), the machine information acquired by the acquisition unit 41 includes the second identification information. In this way, the same effects as in (2) to (4) can be achieved.

(8) Preferably, in the inquiry system 100 described in any one of (1) to (7), the communication unit 30 receives the first identification information and position information of the user side terminal 4, and the selection unit 11a selects the inquiry center system C based on the first identification information, the second identification information, and the position information of the user side terminal 4. In this way, it is possible to select the nearest inquiry center system C of the supplier, from which the arrangement of a maintenance person or the like is easy.

(9) Preferably, in the inquiry system 100 described in (8), at least when the installation site of the machine and/or the device based on the first identification information and the second identification information is different from the position information of the user side terminal 4, the selection unit 11a makes the user side terminal 4 select the installation site of the machine and/or the device.
In this way, even when the installation site of the machine and/or the device which is associated with the individual identification information and based on the information on the database is different from the position information of the user side terminal 4, it is possible to select the nearest inquiry center system C of the supplier, from which the arrangement of a maintenance person or the like is easy.

(10) Preferably, in the inquiry system 100 described in (9), the selection unit 11a displays, as a list, for the user side terminal 4, the installation site of the machine and/or the device based on the first identification information and the second identification information and the position information of the user side terminal 4 so as to make the user side terminal 4 make a selection. In this way, the user of the user side terminal 4 can easily select the installation site.

(11) Preferably, in the inquiry system 100 described in (9), the selection unit 11a displays, on a map, for the user side terminal 4, the installation site of the machine and/or the device based on the first identification information and the second identification information and the position information of the user side terminal 4 so as to make the user side terminal 4 select a region. In this way, the user of the user side terminal 4 can easily select the installation site.

(12) Preferably, in the inquiry system 100 described in (11), the region is a country. In this way, the user of the user side terminal 4 can easily select the installation site.

(13) Preferably, in the inquiry system 100 described in (9), the selection unit 11a displays an input field for making the user side terminal 4 input a zip code or an address of the installation site of the machine and/or the device so as to make the user side terminal 4 input the zip code or the address. In this way, the user of the user side terminal 4 can easily specify the installation site.

(14) Preferably, in the inquiry system 100 described in any one of (1) to (13), the relay device 1B further includes a proposal unit 13 which monitors details of the inquiry from the user side terminal 4 and which proposes, when an answer is present on a database, the answer to the user side terminal 4 before notification to the inquiry center system C. In this way, the relay device 1B can rapidly solve the problem which occurs in the end user site.

(15) Preferably, in the inquiry system 100 described in (14), the communication unit 30 notifies the answer proposed by the proposal unit 13 to the selected inquiry center system C. In this way, the inquiry center system C of the supplier can check whether or not the answer received from the relay device 1B is correct, and can thereby answer the inquiry from the user side terminal 4 in the end user site more accurately.

(16) Preferably, in the inquiry system 100 described in (14) or (15), the user side terminal 4 includes a display unit 50 which displays the answer proposed by the proposal unit 13. In this way, it is possible to rapidly solve the problem which occurs in the end user site.

(17) Preferably, in the inquiry system 100 described in any one of (14) to (16), the proposal unit 13 monitors the details of the inquiry, acquires, when determining that machine data indicating a state of the machine and/or the device is needed, the machine data automatically or by an operation of the user side terminal 4, and provides the acquired machine data to the selected inquiry center system C. In this way, it is possible to rapidly solve the problem which occurs in the end user site.
(18) Preferably, in the inquiry system 100 described in any one of (14) to (17), the proposal unit 13 monitors the details of the inquiry, and proposes, when determining that the inquiry center system C needs to be changed, a new inquiry center system C to either or both of the user side terminal 4 and the selected inquiry center system C. In this way, it is possible to select a more appropriate inquiry center system C of the supplier, and thus to rapidly solve the problem which occurs in the end user site.

(19) A relay device 1 of the present disclosure is a relay device which relays inquiries related to a machine and/or a device, made by an inquiry unit that is a terminal device (user side terminal 4) of a user of the machine and/or the device, to a plurality of inquiry center systems C of suppliers including a maker of the machine and/or the device, and includes: a communication unit 30 which communicates between the user side terminal 4 and the inquiry center systems C; and a selection unit 11 which selects the inquiry center system C based on first identification information related to the machine and/or the device and second identification information related to the machine and/or the device that are stored in a storage unit 20, and the communication unit 30 connects together the user side terminal 4 and the inquiry center system C selected by the selection unit 11. With the relay device 1, it is possible to select the inquiry center system C which provides an appropriate answer even when the user in the end user site does not have advanced expertise.

(20) Preferably, the relay device 1 described in (19) includes a determination unit 12 which determines, when the communication unit 30 receives machine information on the machine and/or the device from the user side terminal 4, whether or not the machine information needs to be stored in the storage unit 20, and the determination unit 12 stores, in the storage unit 20, the machine information which is determined to need to be stored. In this way, it is possible to acquire the latest machine information of the machine and/or the device in the end user site while preventing duplicate registration.

(21) Preferably, in the relay device 1 described in (20), the determination unit 12 notifies the received machine information to the selected inquiry center system C. In this way, the selected inquiry center system C of the supplier can also acquire the latest information of the machine and/or the device in the end user site.

(22) Preferably, in the relay device 1 described in (20), the determination unit 12 notifies the machine information determined by the determination unit 12 to need to be stored to the selected inquiry center system C. In this way, the selected inquiry center system C of the supplier can also acquire the latest and changed machine information of the machine and/or the device in the end user site.

(23) Preferably, in the relay device 1 described in any one of (19) to (22), the first identification information includes either of individual identification information of the machine and/or the device and model information of the machine and/or the device. In this way, only the individual identification information (model information) is acquired from the user side terminal 4, and thus it is possible to select the inquiry center system C of the supplier which provides an appropriate answer.
(24) Preferably, in the relay device 1 described in any one of (19) to (23), the second identification information includes any one of an inquiry history, a repair history of the machine and/or the device, an installation site of the machine and/or the device, part information of the machine and/or the device, version information of software of the machine and/or the device, setting information of the machine and/or the device, alarm information generated in the machine and/or the device, user information of the machine and/or the device, manufacturer information of the machine and/or the device, and sales maker information of the machine and/or the device. In this way, the inquiry center system C of the supplier which provides an appropriate answer can be selected with high accuracy.

(25) Preferably, in the relay device 1A described in any one of (19) to (24), the communication unit 30 receives the first identification information and position information of the user side terminal 4, and the selection unit 11a selects the inquiry center system C based on the first identification information, the second identification information, and the position information of the user side terminal 4. In this way, it is possible to select the nearest inquiry center system C of the supplier, from which the arrangement of a maintenance person or the like is easy.

(26) Preferably, in the relay device 1A described in (25), at least when the installation site of the machine and/or the device based on the first identification information and the second identification information is different from the position information of the user side terminal 4, the selection unit 11a makes the user side terminal 4 select the installation site of the machine and/or the device. In this way, even when the installation site of the machine and/or the device which is associated with the individual identification information and based on the information on the database is different from the position information of the user side terminal 4, it is possible to select the nearest inquiry center system C of the supplier, from which the arrangement of a maintenance person or the like is easy.

(27) Preferably, in the relay device 1A described in (26), the selection unit 11a displays, as a list, for the user side terminal 4, the installation site of the machine and/or the device based on the first identification information and the second identification information and the position information of the user side terminal 4 so as to make the user side terminal 4 make a selection. In this way, the user of the user side terminal 4 can easily select the installation site.

(28) Preferably, in the relay device 1A described in (26), the selection unit 11a displays, on a map, for the user side terminal 4, the installation site of the machine and/or the device based on the first identification information and the second identification information and the position information of the user side terminal 4 so as to make the user side terminal 4 select a region. In this way, the user of the user side terminal 4 can easily select the installation site.

(29) Preferably, in the relay device 1A described in (28), the region is a country. In this way, the user of the user side terminal 4 can easily select the installation site.
(30) Preferably, in the relay device 1A described in (26), the selection unit 11a displays an input field for making the user side terminal 4 input a zip code or an address of the installation site of the machine and/or the device so as to make the user side terminal 4 input the zip code or the address. In this way, the user of the user side terminal 4 can easily specify the installation site.

(31) Preferably, the relay device 1B described in any one of (19) to (30) includes a proposal unit 13 which monitors details of the inquiry from the user side terminal 4 and which proposes, when an answer is present on a database, the answer to the user side terminal 4 before notification to the inquiry center system C. In this way, it is possible to rapidly solve the problem which occurs in the end user site.

(32) Preferably, in the relay device 1B described in (31), the proposal unit 13 monitors the details of the inquiry, acquires, when determining that machine data indicating a state of the machine and/or the device is needed, the machine data automatically or by an operation of the user side terminal 4, and provides the acquired machine data to the selected inquiry center system C. In this way, it is possible to rapidly solve the problem which occurs in the end user site.

(33) Preferably, in the relay device 1B described in (31) or (32), the proposal unit 13 monitors the details of the inquiry, and proposes, when determining that the inquiry center system needs to be changed, a new inquiry center system C to either or both of the user side terminal 4 and the selected inquiry center system C. In this way, it is possible to select a more appropriate inquiry center system C of the supplier, and thus to rapidly solve the problem which occurs in the end user site.

EXPLANATION OF REFERENCE NUMERALS

1, 1A, 1B relay device; 4 user side terminal; 8(1)-8(n) terminal device; 10, 10a, 10b control unit; 11, 11a selection unit; 12 determination unit; 13 proposal unit; 20 storage unit; 30 communication unit; C(1) to C(n) inquiry center system
11860879
DETAILED DESCRIPTION

Generally described, aspects of the present disclosure relate to handling requests to read or write to data objects on an object storage system. More specifically, aspects of the present disclosure relate to modification of an input/output (I/O) path for an object storage service, such that one or more data manipulations can be inserted into the I/O path to modify the data to which a called request method is applied, without requiring a calling client device to specify such data manipulations. In one embodiment, data manipulations occur through execution of user-submitted code, which may be provided, for example, by an owner of a collection of data objects on an object storage system in order to control interactions with that data object. For example, in cases where an owner of an object collection wishes to ensure that end users do not submit objects to the collection including any personally identifying information (to ensure end users' privacy), the owner may submit code executable to strip such information from a data input. The owner may further specify that such code should be executed during each write of a data object to the collection. Accordingly, when an end user attempts to write input data to the collection as a data object (e.g., via an HTTP PUT method), the code may first be executed against the input data, and the resulting output data may be written to the collection as the data object. Notably, this may result in the operation requested by the end user, such as a write operation, being applied not to the end user's input data, but instead to the data output by the data manipulation (e.g., owner-submitted) code. In this way, owners of data collections control I/O to those collections without relying on end users to comply with owner requirements. Indeed, end users (or any other client device) may be unaware that modifications to I/O are occurring. As such, embodiments of the present disclosure enable modification of I/O to an object storage service without modification of an interface to the service, ensuring inter-compatibility with other pre-existing software utilizing the service.

In some embodiments of the present disclosure, data manipulations may occur on an on-demand code execution system, sometimes referred to as a serverless execution system. Generally described, on-demand code execution systems enable execution of arbitrary user-designated code, without requiring the user to create, maintain, or configure an execution environment (e.g., a physical or virtual machine) in which the code is executed. For example, whereas conventional computing services often require a user to provision a specific device (virtual or physical), install an operating system on the device, configure applications, define network interfaces, and the like, an on-demand code execution system may enable a user to submit code and may provide to the user an application programming interface (API) that, when used, enables the user to request execution of the code. On receiving a call through the API, the on-demand code execution system may generate an execution environment for the code, provision the environment with the code, execute the code, and provide a result. Thus, an on-demand code execution system can remove the need for a user to handle configuration and management of environments for code execution. Example techniques for implementing an on-demand code execution system are disclosed, for example, within U.S. Pat. No. 9,323,556, entitled “PROGRAMMATIC EVENT DETECTION AND MESSAGE GENERATION FOR REQUESTS TO EXECUTE PROGRAM CODE,” and filed Sep. 30, 2014 (the “'556 patent”), the entirety of which is hereby incorporated by reference. A hedged sketch of the PII-stripping manipulation described earlier follows.
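The following Python sketch illustrates the kind of code an owner might submit to strip personally identifying information from input data before a PUT is applied, so that the write stores the sanitized output instead of the original input. The regular expressions and the strip_pii signature are assumptions for illustration, not an interface defined by the disclosure.

```python
# Hedged sketch of owner-submitted data manipulation code: redact
# PII-like substrings from the input data. Patterns are illustrative only.

import re

PII_PATTERNS = [
    re.compile(rb"\b\d{3}-\d{2}-\d{4}\b"),    # SSN-like digit groups
    re.compile(rb"[\w.+-]+@[\w-]+\.[\w.]+"),  # email-address-like strings
]

def strip_pii(input_data: bytes) -> bytes:
    """Return the output data to be written to the collection in place of
    the end user's original input."""
    output = input_data
    for pattern in PII_PATTERNS:
        output = pattern.sub(b"[REDACTED]", output)
    return output
```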
9,323,556, entitled “PROGRAMMATIC EVENT DETECTION AND MESSAGE GENERATION FOR REQUESTS TO EXECUTE PROGRAM CODE,” and filed Sep. 30, 2014 (the “'556 patent”), the entirety of which is hereby incorporated by reference. Due to the flexibility of an on-demand code execution system to execute arbitrary code, such a system can be used to create a variety of network services. For example, such a system could be used to create a “micro-service,” a network service that implements a small number of functions (or only one function), and that interacts with other services to provide an application. In the context of on-demand code execution systems, the code executed to create such a service is often referred to as a “function” or a “task,” which can be executed to implement the service. Accordingly, one technique for performing data manipulations within the I/O path of an object storage service may be to create a task on an on-demand code execution system that, when executed, performs the required data manipulation. Illustratively, the task could provide an interface similar or identical to that of the object storage service, and be operable to obtain input data in response to a request method call (e.g., HTTP PUT or GET calls), execute the code of the task against the input data, and perform a call to the object storage service for implementation of the request method on resulting output data. A downside of this technique is its complexity. For example, end users might be required under this scenario to submit I/O requests to the on-demand code execution system, rather than the object storage service, to ensure execution of the task. Should an end user submit a call directly to the object storage service, task execution may not occur, and thus an owner would be unable to enforce a desired data manipulation for an object collection. In addition, this technique may require that code of a task be authored to both provide an interface to end users that enables handling of calls to implement request methods on input data, and an interface that enables performance of calls from the task execution to the object storage service. Implementation of these network interfaces may significantly increase the complexity of the required code, thus disincentivizing owners of data collections from using this technique. Moreover, where user-submitted code directly implements network communication, that code may need to be varied according to the request method handled. For example, a first set of code may be required to support GET operations, a second set of code may be required to support PUT operations, etc. Because embodiments of the present disclosure relieve the user-submitted code of the requirement of handling network communications, one set of code may in some cases be enabled to handle multiple request methods. To address the above-noted problems, embodiments of the present disclosure can enable strong integration of serverless task executions with interfaces of an object storage service, such that the service itself is configured to invoke a task execution on receiving an I/O request to a data collection. Moreover, generation of code to perform data manipulations may be simplified by configuring the object storage service to facilitate data input and output from a task execution, without requiring the task execution to itself implement network communications for I/O operations. 
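By way of a non-limiting illustration, owner-submitted manipulation code of the kind described above can reduce to a pure transformation of input data to output data. The following minimal Python sketch assumes text-based input and uses illustrative regular expressions; the names and patterns are assumptions for explanation only, not part of any particular service interface:

    import re

    # Illustrative patterns only; real manipulation code would use patterns
    # appropriate to the data being protected.
    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
    EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

    def strip_pii(text):
        # Return the input data with personally identifying information removed.
        text = SSN_PATTERN.sub("[REDACTED-SSN]", text)
        return EMAIL_PATTERN.sub("[REDACTED-EMAIL]", text)

    if __name__ == "__main__":
        print(strip_pii("Contact jane@example.com, SSN 123-45-6789."))

Because the function performs no network communication of its own, the same code can be applied whether the manipulated data is being written (e.g., on a PUT) or read (e.g., on a GET).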
Specifically, an object storage service and on-demand code execution system can be configured in one embodiment to “stage” input data to a task execution in the form of a handle (e.g., a POSIX-compliant descriptor) to an operating-system-level input/output stream, such that code of a task can manipulate the input data via defined-stream operations (e.g., as if the data existed within a local file system). This stream-level access to input data can be contrasted, for example, with network-level access of input data, which generally requires that code implement network communication to retrieve the input data. Similarly, the object storage service and on-demand code execution system can be configured to provide an output stream handle representing an output stream to which a task execution may write output. On detecting writes to the output stream, the object storage service and on-demand code execution system may handle such writes as output data of the task execution, and apply a called request method to the output data. By enabling a task to manipulate data based on input and output streams passed to the task, as opposed to requiring the code to handle data communications over a network, the code of the task can be greatly simplified. Another benefit of enabling a task to manipulate data based on input and output handles is increased security. A general-use on-demand code execution system may operate permissively with respect to network communications from a task execution, enabling any network communication from the execution unless such communication is explicitly denied. This permissive model is reflective of the use of task executions as micro-services, which often require interaction with a variety of other network services. However, this permissive model also decreases security of the function, since potentially malicious network communications can also reach the execution. In contrast to a permissive model, task executions used to perform data manipulations on an object storage system's I/O path can utilize a restrictive model, whereby only explicitly-allowed network communications can occur from an environment executing a task. Illustratively, because data manipulation can occur via input and output handles, it is envisioned that many or most tasks used to perform data manipulation in embodiments of the present disclosure would require no network communications to occur at all, greatly increasing security of such an execution. Where a task execution does require some network communications, such as to contact an external service to assist with a data manipulation, such communications can be explicitly allowed, or “whitelisted,” thus exposing the execution in only a strictly limited manner. In some embodiments, a data collection owner may require only a single data manipulation to occur with respect to I/O to the collection. Accordingly, the object storage service may detect I/O to the collection, implement the data manipulation (e.g., by executing a serverless task within an environment provisioned with input and output handles), and apply the called request method to the resulting output data. In other embodiments, an owner may request multiple data manipulations occur with respect to an I/O path. For example, to increase portability and reusability, an owner may author multiple serverless tasks, which may be combined in different manners on different I/O paths. Thus, for each path, the owner may define a series of serverless tasks to be executed on I/O to the path. 
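A minimal Python sketch of this handle-based model follows; it assumes operating-system pipes as the staged IO streams and a thread standing in for the task execution, and is intended only to illustrate that the handles are created outside of the user code:

    import os
    import threading

    def task(input_stream, output_stream):
        # The user code sees only stream handles created outside of the code;
        # no network communication occurs within the task. (Handling of
        # matches that span chunk boundaries is omitted for brevity.)
        for chunk in iter(lambda: input_stream.read(8192), b""):
            output_stream.write(chunk.replace(b"secret", b"[REDACTED]"))
        output_stream.close()

    stage_read, stage_write = os.pipe()    # input staged by the service
    result_read, result_write = os.pipe()  # output collected by the service

    worker = threading.Thread(
        target=task,
        args=(os.fdopen(stage_read, "rb"), os.fdopen(result_write, "wb")),
    )
    worker.start()
    with os.fdopen(stage_write, "wb") as staged_input:
        staged_input.write(b"top secret data")
    worker.join()
    with os.fdopen(result_read, "rb") as collected_output:
        print(collected_output.read())     # b'top [REDACTED] data'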
Moreover, in some configurations, an object storage system may natively provide one or more data manipulations. For example, an object storage system may natively accept requests for only portions of an object (e.g., of a defined byte range), or may natively enable execution of queries against data of an object (e.g., SQL queries). In some embodiments, any combination of various native manipulations and serverless task-based manipulations may be specified for a given I/O path. For example, an owner may specify that, for a particular request to read an object, a given SQL query be executed against the object, the output of which is processed via a first task execution, the output of which is processed via a second task execution, etc. The collection of data manipulations (e.g., native manipulations, serverless task-based manipulations, or a combination thereof) applied to an I/O path is generally referred to herein as a data processing “pipeline” applied to the I/O path. In accordance with aspects of the present disclosure, a particular path modification (e.g., the addition of a pipeline) applied to an I/O path may vary according to attributes of the path, such as a client device from which an I/O request originates or an object or collection of objects within the request. For example, pipelines may be applied to individual objects, such that the pipeline is applied to all I/O requests for the object, or a pipeline may be selectively applied only when certain client devices access the object. In some instances, an object storage service may provide multiple I/O paths for an object or collection. For example, the same object or collection may be associated with multiple resource identifiers on the object storage service, such that the object or collection can be accessed through the multiple identifiers (e.g., uniform resource identifiers, or URIs), which illustratively correspond to different network-accessible endpoints. In one embodiment, different pipelines may be applied to each I/O path for a given object. For example, a first I/O path may be associated with unprivileged access to a data set, and thus be subject to data manipulations that remove confidential information from the data set during retrieval. A second I/O path may be associated with privileged access, and thus not be subject to those data manipulations. In some instances, pipelines may be selectively applied based on other criteria. For example, whether a pipeline is applied may be based on time of day, a number or rate of accesses to an object or collection, etc. As will be appreciated by one of skill in the art in light of the present disclosure, the embodiments disclosed herein improve the ability of computing systems, such as object storage systems, to provide and enforce data manipulation functions against data objects. Whereas prior techniques generally depend on external enforcement of data manipulation functions (e.g., requesting that users strip personal information before uploading it), embodiments of the present disclosure enable direct insertion of data manipulation into an I/O path for the object storage system. Moreover, embodiments of the present disclosure provide a secure mechanism for implementing data manipulations, by providing for serverless execution of manipulation functions within an isolated execution environment. 
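A hedged Python sketch of such a pipeline follows, with stand-ins for one native manipulation and one task-based manipulation (the function names and filtering logic are illustrative assumptions):

    def select_recent(data):
        # Stand-in for a native manipulation (e.g., an SQL SELECT or a
        # byte-range selection).
        return b"\n".join(
            line for line in data.splitlines() if line.startswith(b"2023")
        )

    def redact(data):
        # Stand-in for a serverless task-based manipulation.
        return data.replace(b"secret", b"[REDACTED]")

    def apply_pipeline(pipeline, input_data):
        # Each manipulation consumes the output of the previous one; the
        # called request method is then applied to the final output.
        for manipulation in pipeline:
            input_data = manipulation(input_data)
        return input_data

    output = apply_pipeline([select_recent, redact],
                            b"2023 secret report\n2022 old entry")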
Embodiments of the present disclosure further improve operation of serverless functions, by enabling such functions to operate on the basis of local stream (e.g., “file”) handles, rather than requiring that functions act as network-accessible services. The presently disclosed embodiments therefore address technical problems inherent within computing systems, such as the difficulty of enforcing data manipulations at storage systems and the complexity of creating external services to enforce such data manipulations. These technical problems are addressed by the various technical solutions described herein, including the insertion of data processing pipelines into an I/O path for an object or object collection, potentially without knowledge of a requesting user, the use of serverless functions to perform aspects of such pipelines, and the use of local stream handles to enable simplified creation of serverless functions. Thus, the present disclosure represents an improvement on existing data processing systems and computing systems in general. The general execution of tasks on the on-demand code execution system will now be discussed. As described in detail herein, the on-demand code execution system may provide a network-accessible service enabling users to submit or designate computer-executable source code to be executed by virtual machine instances on the on-demand code execution system. Each set of code on the on-demand code execution system may define a “task,” and implement specific functionality corresponding to that task when executed on a virtual machine instance of the on-demand code execution system. Individual implementations of the task on the on-demand code execution system may be referred to as an “execution” of the task (or a “task execution”). In some cases, the on-demand code execution system may enable users to directly trigger execution of a task based on a variety of potential events, such as transmission of an application programming interface (“API”) call to the on-demand code execution system, or transmission of a specially formatted hypertext transport protocol (“HTTP”) packet to the on-demand code execution system. In accordance with embodiments of the present disclosure, the on-demand code execution system may further interact with an object storage system, in order to execute tasks during application of a data manipulation pipeline to an I/O path. The on-demand code execution system can therefore execute any specified executable code “on-demand,” without requiring configuration or maintenance of the underlying hardware or infrastructure on which the code is executed. Further, the on-demand code execution system may be configured to execute tasks in a rapid manner (e.g., in under 100 milliseconds [ms]), thus enabling execution of tasks in “real-time” (e.g., with little or no perceptible delay to an end user). To enable this rapid execution, the on-demand code execution system can include one or more virtual machine instances that are “pre-warmed” or pre-initialized (e.g., booted into an operating system and executing a complete or substantially complete runtime environment) and configured to enable execution of user-defined code, such that the code may be rapidly executed in response to a request to execute the code, without delay caused by initializing the virtual machine instance. Thus, when an execution of a task is triggered, the code corresponding to that task can be executed within a pre-initialized virtual machine in a very short amount of time. 
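The warm-pool concept described above may be illustrated with the following toy Python sketch, which is a schematic assumption rather than a description of the system's actual implementation:

    import queue

    class WarmPool:
        # Toy model of a pool of pre-initialized execution environments.
        def __init__(self, provision, size):
            self._ready = queue.Queue()
            for _ in range(size):
                # Booting and provisioning happen ahead of any request, so
                # this cost is not paid on the request path.
                self._ready.put(provision())

        def execute(self, task, payload):
            environment = self._ready.get()   # take a pre-warmed environment
            try:
                return task(environment, payload)
            finally:
                # Recycle the environment; a real system may instead discard
                # it and warm a replacement.
                self._ready.put(environment)

    pool = WarmPool(provision=dict, size=4)
    result = pool.execute(lambda env, data: data.upper(), "hello")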
Specifically, to execute tasks, the on-demand code execution system described herein may maintain a pool of executing virtual machine instances that are ready for use as soon as a request to execute a task is received. Due to the pre-initialized nature of these virtual machines, delay (sometimes referred to as latency) associated with executing the task code (e.g., instance and language runtime startup time) can be significantly reduced, often to sub-100 millisecond levels. Illustratively, the on-demand code execution system may maintain a pool of virtual machine instances on one or more physical computing devices, where each virtual machine instance has one or more software components (e.g., operating systems, language runtimes, libraries, etc.) loaded thereon. When the on-demand code execution system receives a request to execute program code (a “task”), the on-demand code execution system may select a virtual machine instance for executing the program code of the user based on one or more computing constraints related to the task (e.g., a required operating system or runtime) and cause the task to be executed on the selected virtual machine instance. The tasks can be executed in isolated containers that are created on the virtual machine instances, or may be executed within a virtual machine instance isolated from other virtual machine instances acting as environments for other tasks. Since the virtual machine instances in the pool have already been booted and loaded with particular operating systems and language runtimes by the time the requests are received, the delay associated with finding compute capacity that can handle the requests (e.g., by executing the user code in one or more containers created on the virtual machine instances) can be significantly reduced. As used herein, the term “virtual machine instance” is intended to refer to an execution of software or other executable code that emulates hardware to provide an environment or platform on which software may execute (an example “execution environment”). Virtual machine instances are generally executed by hardware devices, which may differ from the physical hardware emulated by the virtual machine instance. For example, a virtual machine may emulate a first type of processor and memory while being executed on a second type of processor and memory. Thus, virtual machines can be utilized to execute software intended for a first execution environment (e.g., a first operating system) on a physical device that is executing a second execution environment (e.g., a second operating system). In some instances, hardware emulated by a virtual machine instance may be the same or similar to hardware of an underlying device. For example, a device with a first type of processor may implement a plurality of virtual machine instances, each emulating an instance of that first type of processor. Thus, virtual machine instances can be used to divide a device into a number of logical sub-devices (each referred to as a “virtual machine instance”). While virtual machine instances can generally provide a level of abstraction away from the hardware of an underlying physical device, this abstraction is not required. For example, assume a device implements a plurality of virtual machine instances, each of which emulates hardware identical to that provided by the device. 
Under such a scenario, each virtual machine instance may allow a software application to execute code on the underlying hardware without translation, while maintaining a logical separation between software applications running on other virtual machine instances. This process, which is generally referred to as “native execution,” may be utilized to increase the speed or performance of virtual machine instances. Other techniques that allow direct utilization of underlying hardware, such as hardware pass-through techniques, may be used as well. While a virtual machine executing an operating system is described herein as one example of an execution environment, other execution environments are also possible. For example, tasks or other processes may be executed within a software “container,” which provides a runtime environment without itself providing virtualization of hardware. Containers may be implemented within virtual machines to provide additional security, or may be run outside of a virtual machine instance. The foregoing aspects and many of the attendant advantages of this disclosure will become more readily appreciated as the same become better understood by reference to the following description, when taken in conjunction with the accompanying drawings. FIG.1is a block diagram of an illustrative operating environment100in which a service provider system110operates to enable client devices102to perform I/O operations on objects stored within an object storage service160and to apply path modifications to such I/O operations, which modifications may include execution of user-defined code on an on-demand code execution system120. By way of illustration, various example client devices102are shown in communication with the service provider system110, including a desktop computer, laptop, and a mobile phone. In general, the client devices102can be any computing device such as a desktop, laptop or tablet computer, personal computer, wearable computer, server, personal digital assistant (PDA), hybrid PDA/mobile phone, mobile phone, electronic book reader, set-top box, voice command device, camera, digital media player, and the like. Generally described, the object storage service160can operate to enable clients to read, write, modify, and delete data objects, each of which represents a set of data associated with an identifier (an “object identifier” or “resource identifier”) that can be interacted with as an individual resource. For example, an object may represent a single file submitted by a client device102(though the object storage service160may or may not store such an object as a single file). This object-level interaction can be contrasted with other types of storage services, such as block-based storage services providing data manipulation at the level of individual blocks or database storage services providing data manipulation at the level of tables (or parts thereof) or the like. The object storage service160illustratively includes one or more frontends162, which provide an interface (a command-line interface (CLI), an application programming interface (API), or other programmatic interface) through which client devices102can interface with the service160to configure the service160on their behalf and to perform I/O operations on the service160. For example, a client device102may interact with a frontend162to create a collection of data objects on the service160(e.g., a “bucket” of objects) and to configure permissions for that collection. 
Client devices102may thereafter create, read, update, or delete objects within the collection based on the interfaces of the frontends162. In one embodiment, the frontend162provides a REST-compliant HTTP interface supporting a variety of request methods, each of which corresponds to a requested I/O operation on the service160. By way of non-limiting example, request methods may include:
a GET operation requesting retrieval of an object stored on the service160by reference to an identifier of the object;
a PUT operation requesting storage of an object to be stored on the service160, including an identifier of the object and input data to be stored as the object;
a DELETE operation requesting deletion of an object stored on the service160by reference to an identifier of the object; and
a LIST operation requesting listing of objects within an object collection stored on the service160by reference to an identifier of the collection.
A variety of other operations may also be supported. For example, the service160may provide a POST operation similar to a PUT operation but associated with a different upload mechanism (e.g., a browser-based HTML upload), or a HEAD operation enabling retrieval of metadata for an object without retrieving the object itself. In some embodiments, the service160may enable operations that combine one or more of the above operations, or that combine an operation with a native data manipulation. For example, the service160may provide a COPY operation enabling copying of an object stored on the service160to another object, which operation combines a GET operation with a PUT operation. As another example, the service160may provide a SELECT operation enabling specification of an SQL query to be applied to an object prior to returning the contents of that object, which combines an application of an SQL query to a data object (a native data manipulation) with a GET operation. As yet another example, the service160may provide a “byte range” GET, which enables a GET operation on only a portion of a data object. In some instances, the operation requested by a client device102on the service160may be transmitted to the service via an HTTP request, which itself may include an HTTP method. In some cases, such as in the case of a GET operation, the HTTP method specified within the request may match the operation requested at the service160. However, in other cases, the HTTP method of a request may not match the operation requested at the service160. For example, a request may utilize an HTTP POST method to transmit a request to implement a SELECT operation at the service160. During general operation, frontends162may be configured to obtain a call to a request method, and apply that request method to input data for the method. For example, a frontend162can respond to a request to PUT input data into the service160as an object by storing that input data as the object on the service160. Objects may be stored, for example, on object data stores168, which correspond to any persistent or substantially persistent storage (including hard disk drives (HDDs), solid state drives (SSDs), network accessible storage (NAS), storage area networks (SANs), non-volatile random access memory (NVRAM), or any of a variety of storage devices known in the art). 
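By way of a non-limiting illustration, the dispatch performed by a frontend for the request methods listed above might be sketched in Python as follows; the dictionaries and names are assumptions for explanation only, not the actual interface of the service160:

    STORE = {}      # stand-in for the object data stores
    PIPELINES = {}  # maps (request method, object key) to manipulations

    def handle_request(method, key, body=None):
        pipeline = PIPELINES.get((method, key), [])
        if method == "PUT":
            data = body
            for manipulation in pipeline:   # manipulate before storing
                data = manipulation(data)
            STORE[key] = data
            return b""
        if method == "GET":
            data = STORE[key]
            for manipulation in pipeline:   # manipulate before returning
                data = manipulation(data)
            return data
        if method == "DELETE":
            STORE.pop(key, None)
            return b""
        if method == "LIST":
            return b"\n".join(k.encode() for k in sorted(STORE))
        raise ValueError("unsupported request method: " + method)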
As a further example, the frontend162can respond to a request to GET an object from the service160by retrieving the object from the stores168(the object representing input data to the GET resource request), and returning the object to a requesting client device102. In some cases, calls to a request method may invoke one or more native data manipulations provided by the service160. For example, a SELECT operation may provide an SQL-formatted query to be applied to an object (also identified within the request), or a GET operation may provide a specific range of bytes of an object to be returned. The service160illustratively includes an object manipulation engine170configured to perform native data manipulations, which illustratively corresponds to a device configured with software executable to implement native data manipulations on the service160(e.g., by stripping non-selected bytes from an object for a byte-range GET, by applying an SQL query to an object and returning results of the query, etc.). In accordance with embodiments of the present disclosure, the service160can further be configured to enable modification of an I/O path for a given object or collection of objects, such that a called request method is applied to an output of a data manipulation function, rather than the resource identified within the call. For example, the service160may enable a client device102to specify that GET operations for a given object should be subject to execution of a user-defined task on the on-demand code execution system120, such that the data returned in response to the operation is the output of a task execution rather than the requested object. Similarly, the service160may enable a client device102to specify that PUT operations to store a given object should be subject to execution of a user-defined task on the on-demand code execution system120, such that the data stored in response to the operation is the output of a task execution rather than the data provided for storage by a client device102. As will be discussed in more detail below, path modifications may include specification of a pipeline of data manipulations, including native data manipulations, task-based manipulations, or combinations thereof. Illustratively, a client device102may specify a pipeline or other data manipulation for an object or object collection through the frontend162, which may store a record of the pipeline or manipulation in the I/O path modification data store164, which store164, like the object data stores168, can represent any persistent or substantially persistent storage. While shown as distinct inFIG.1, in some instances the data stores164and168may represent a single collection of data stores. For example, data modifications to objects or collections may themselves be stored as objects on the service160. To enable data manipulation via execution of user-defined code, the system further includes an on-demand code execution system120. In one embodiment, the system120is solely usable by the object storage service160in connection with data manipulations of an I/O path. In another embodiment, the system120is additionally accessible by client devices102to directly implement serverless task executions. 
For example, the on-demand code execution system120may provide the service160(and potentially client devices102) with one or more user interfaces, command-line interfaces (CLIs), application programming interfaces (APIs), or other programmatic interfaces for generating and uploading user-executable code (e.g., including metadata identifying dependency code objects for the uploaded code), invoking the user-provided code (e.g., submitting a request to execute the user codes on the on-demand code execution system120), scheduling event-based jobs or timed jobs, tracking the user-provided code, or viewing other logging or monitoring information related to their requests or user codes. Although one or more embodiments may be described herein as using a user interface, it should be appreciated that such embodiments may, additionally or alternatively, use any CLIs, APIs, or other programmatic interfaces. The client devices102, object storage service160, and on-demand code execution system120may communicate via a network104, which may include any wired network, wireless network, or combination thereof. For example, the network104may be a personal area network, local area network, wide area network, over-the-air broadcast network (e.g., for radio or television), cable network, satellite network, cellular telephone network, or combination thereof. As a further example, the network104may be a publicly accessible network of linked networks, possibly operated by various distinct parties, such as the Internet. In some embodiments, the network104may be a private or semi-private network, such as a corporate or university intranet. The network104may include one or more wireless networks, such as a Global System for Mobile Communications (GSM) network, a Code Division Multiple Access (CDMA) network, a Long Term Evolution (LTE) network, or any other type of wireless network. The network104can use protocols and components for communicating via the Internet or any of the other aforementioned types of networks. For example, the protocols used by the network104may include Hypertext Transfer Protocol (HTTP), HTTP Secure (HTTPS), Message Queue Telemetry Transport (MQTT), Constrained Application Protocol (CoAP), and the like. Protocols and components for communicating via the Internet or any of the other aforementioned types of communication networks are well known to those skilled in the art and, thus, are not described in more detail herein. To enable interaction with the on-demand code execution system120, the system120includes one or more frontends130, which enable interaction with the on-demand code execution system120. In an illustrative embodiment, the frontends130serve as a “front door” to the other services provided by the on-demand code execution system120, enabling users (via client devices102) or the service160to provide, request execution of, and view results of computer executable code. The frontends130include a variety of components to enable interaction between the on-demand code execution system120and other computing devices. For example, each frontend130may include a request interface providing client devices102and the service160with the ability to upload or otherwise communicate user-specified code to the on-demand code execution system120and to thereafter request execution of that code. In one embodiment, the request interface communicates with external computing devices (e.g., client devices102, frontend162, etc.) via a graphical user interface (GUI), CLI, or API. 
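By way of a hedged illustration, a client-side interaction with such interfaces might resemble the following Python sketch; the endpoints, resource names, and request fields are hypothetical assumptions, since the disclosure does not prescribe a particular API shape:

    import requests  # assumes an HTTP-based programmatic interface

    BASE = "https://storage.example.com"

    # (1) Upload stream manipulation code as a task (e.g., as a compressed file).
    with open("strip_pii.zip", "rb") as code_file:
        requests.put(BASE + "/tasks/strip-pii", data=code_file).raise_for_status()

    # (2) Attach an execution of the task to the PUT path of a collection.
    requests.put(
        BASE + "/collections/photos/io-path",
        json={"method": "PUT", "pipeline": ["task:strip-pii"]},
    ).raise_for_status()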
The frontends130process the requests and make sure that the requests are properly authorized. For example, the frontends130may determine whether the user associated with the request is authorized to access the user code specified in the request. References to user code as used herein may refer to any program code (e.g., a program, routine, subroutine, thread, etc.) written in a specific program language. In the present disclosure, the terms “code,” “user code,” and “program code,” may be used interchangeably. Such user code may be executed to achieve a specific function, for example, in connection with a particular data transformation developed by the user. As noted above, individual collections of user code (e.g., to achieve a specific function) are referred to herein as “tasks,” while specific executions of that code (including, e.g., compiling code, interpreting code, or otherwise making the code executable) are referred to as “task executions” or simply “executions.” Tasks may be written, by way of non-limiting example, in JavaScript (e.g., node.js), Java, Python, or Ruby (or another programming language). To manage requests for code execution, the frontend130can include an execution queue, which can maintain a record of requested task executions. Illustratively, the number of simultaneous task executions by the on-demand code execution system120is limited, and as such, new task executions initiated at the on-demand code execution system120(e.g., via an API call, via a call from an executed or executing task, etc.) may be placed on the execution queue and processed, e.g., in a first-in-first-out order. In some embodiments, the on-demand code execution system120may include multiple execution queues, such as individual execution queues for each user account. For example, users of the service provider system110may desire to limit the rate of task executions on the on-demand code execution system120(e.g., for cost reasons). Thus, the on-demand code execution system120may utilize an account-specific execution queue to throttle the rate of simultaneous task executions by a specific user account. In some instances, the on-demand code execution system120may prioritize task executions, such that task executions of specific accounts or of specified priorities bypass or are prioritized within the execution queue. In other instances, the on-demand code execution system120may execute tasks immediately or substantially immediately after receiving a call for that task, and thus, the execution queue may be omitted. The frontend130can further include an output interface configured to output information regarding the execution of tasks on the on-demand code execution system120. Illustratively, the output interface may transmit data regarding task executions (e.g., results of a task, errors related to the task execution, or details of the task execution, such as total time required to complete the execution, total data processed via the execution, etc.) to the client devices102or the object storage service160. In some embodiments, the on-demand code execution system120may include multiple frontends130. In such embodiments, a load balancer may be provided to distribute the incoming calls to the multiple frontends130, for example, in a round-robin fashion. In some embodiments, the manner in which the load balancer distributes incoming calls to the multiple frontends130may be based on the location or state of other components of the on-demand code execution system120. 
For example, a load balancer may distribute calls to a geographically nearby frontend130, or to a frontend with capacity to service the call. In instances where each frontend130corresponds to an individual instance of another component of the on-demand code execution system120, such as the active pool148described below, the load balancer may distribute calls according to the capacities or loads on those other components. Calls may in some instances be distributed between frontends130deterministically, such that a given call to execute a task will always (or almost always) be routed to the same frontend130. This may, for example, assist in maintaining an accurate execution record for a task, to ensure that the task executes only a desired number of times. As another example, calls may be distributed between frontends130simply to balance load. Other distribution techniques, such as anycast routing, will be apparent to those of skill in the art. The on-demand code execution system120further includes one or more worker managers140that manage the execution environments, such as virtual machine instances150(shown as VM instance150A and150B, generally referred to as a “VM”), used for servicing incoming calls to execute tasks. While the following will be described with reference to virtual machine instances150as examples of such environments, embodiments of the present disclosure may utilize other environments, such as software containers. In the example illustrated inFIG.1, each worker manager140manages an active pool148, which is a group (sometimes referred to as a pool) of virtual machine instances150executing on one or more physical host computing devices that are initialized to execute a given task (e.g., by having the code of the task and any dependency data objects loaded into the instance). Although the virtual machine instances150are described here as being assigned to a particular task, in some embodiments, the instances may be assigned to a group of tasks, such that the instance is tied to the group of tasks and any tasks of the group can be executed within the instance. For example, the tasks in the same group may belong to the same security group (e.g., based on their security credentials) such that executing one task in a container on a particular instance150after another task has been executed in another container on the same instance does not pose security risks. As discussed below, a task may be associated with permissions encompassing a variety of aspects controlling how a task may execute. For example, permissions of a task may define what network connections (if any) can be initiated by an execution environment of the task. As another example, permissions of a task may define what authentication information is passed to a task, controlling what network-accessible resources are accessible to execution of a task (e.g., objects on the service160). In one embodiment, a security group of a task is based on one or more such permissions. For example, a security group may be defined based on a combination of permissions to initiate network connections and permissions to access network resources. As another example, the tasks of the group may share common dependencies, such that an environment used to execute one task of the group can be rapidly modified to support execution of another task within the group. Once a triggering event to execute a task has been successfully processed by a frontend130, the frontend130passes a request to a worker manager140to execute the task. 
In one embodiment, each frontend130may be associated with a corresponding worker manager140(e.g., a worker manager140co-located or geographically nearby to the frontend130) and thus, the frontend130may pass most or all requests to that worker manager140. In another embodiment, a frontend130may include a location selector configured to determine a worker manager140to which to pass the execution request. In one embodiment, the location selector may determine the worker manager140to receive a call based on hashing the call, and distributing the call to a worker manager140selected based on the hashed value (e.g., via a hash ring). Various other mechanisms for distributing calls between worker managers140will be apparent to one of skill in the art. Thereafter, the worker manager140may modify a virtual machine instance150(if necessary) and execute the code of the task within the instance150. As shown inFIG.1, respective instances150may have operating systems (OS)152(shown as OS152A and152B), language runtimes154(shown as runtime154A and154B), and user code156(shown as user code156A and156B). The OS152, runtime154, and user code156may collectively enable execution of the user code to implement the task. Thus, via operation of the on-demand code execution system120, tasks may be rapidly executed within an execution environment. In accordance with aspects of the present disclosure, each VM150additionally includes staging code157executable to facilitate staging of input data on the VM150and handling of output data written on the VM150, as well as a VM data store158accessible through a local file system of the VM150. Illustratively, the staging code157represents a process executing on the VM150(or potentially a host device of the VM150) and configured to obtain data from the object storage service160and place that data into the VM data store158. The staging code157can further be configured to obtain data written to a file within the VM data store158, and to transmit that data to the object storage service160. Because such data is available at the VM data store158, user code156is not required to obtain data over a network, simplifying user code156and enabling further restriction of network communications by the user code156, thus increasing security. Rather, as discussed above, user code156may interact with input data and output data as files on the VM data store158, by use of file handles passed to the code156during an execution. In some embodiments, input and output data may be stored as files within a kernel-space file system of the data store158. In other instances, the staging code157may provide a virtual file system, such as a filesystem in userspace (FUSE) interface, which provides an isolated file system accessible to the user code156, such that the user code's access to the VM data store158is restricted. As used herein, the term “local file system” generally refers to a file system as maintained within an execution environment, such that software executing within the environment can access data as file, rather than via a network connection. In accordance with aspects of the present disclosure, the data storage accessible via a local file system may itself be local (e.g., local physical storage), or may be remote (e.g., accessed via a network protocol, like NFS, or represented as a virtualized block device provided by a network-accessible service). Thus, the term “local file system” is intended to describe a mechanism for software to access data, rather than physical location of the data. 
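Illustratively, the staging behavior described above might be sketched in Python as follows; the fetch_object and put_object callables, and the use of a subprocess for the task execution, are assumptions for explanation only:

    import os
    import subprocess
    import tempfile

    def stage_and_execute(fetch_object, put_object, task_argv, object_key):
        workdir = tempfile.mkdtemp()
        input_path = os.path.join(workdir, "input")
        output_path = os.path.join(workdir, "output")
        # Obtain data from the object storage service and place it into the
        # local data store.
        with open(input_path, "wb") as staged:
            staged.write(fetch_object(object_key))
        # Pass file handles, not network connections, to the user code.
        with open(input_path, "rb") as inp, open(output_path, "wb") as outp:
            subprocess.run(task_argv, stdin=inp, stdout=outp, check=True)
        # Obtain the data written by the task and transmit it to the service.
        with open(output_path, "rb") as written:
            put_object(object_key, written.read())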
The VM data store158can include any persistent or non-persistent data storage device. In one embodiment, the VM data store158is physical storage of the host device, or a virtual disk drive hosted on physical storage of the host device. In another embodiment, the VM data store158is represented as local storage, but is in fact a virtualized storage device provided by a network accessible service. For example, the VM data store158may be a virtualized disk drive provided by a network-accessible block storage service. In some embodiments, the object storage service160may be configured to provide file-level access to objects stored on the data stores168, thus enabling the VM data store158to be virtualized based on communications between the staging code157and the service160. For example, the object storage service160can include a file-level interface166providing network access to objects within the data stores168as files. The file-level interface166may, for example, represent a network-based file system server (e.g., a network file system (NFS)) providing access to objects as files, and the staging code157may implement a client of that server, thus providing file-level access to objects of the service160. In some instances, the VM data store158may represent virtualized access to another data store executing on the same host device of a VM instance150. For example, an active pool148may include one or more data staging VM instances (not shown inFIG.1), which may be co-tenanted with VM instances150on the same host device. A data staging VM instance may be configured to support retrieval and storage of data from the service160(e.g., data objects or portions thereof, input data passed by client devices102, etc.), and storage of that data on a data store of the data staging VM instance. The data staging VM instance may, for example, be designated as unavailable to support execution of user code156, and thus be associated with elevated permissions relative to instances150supporting execution of user code. The data staging VM instance may make this data accessible to other VM instances150within its host device (or, potentially, on nearby host devices), such as by use of a network-based file protocol, like NFS. Other VM instances150may then act as clients to the data staging VM instance, enabling creation of virtualized VM data stores158that, from the point of view of user code156A, appear as local data stores. Beneficially, network-based access to data stored at a data staging VM can be expected to occur very quickly, given the co-location of a data staging VM and a VM instance150within a host device or on nearby host devices. While some examples are provided herein with respect to use of IO stream handles to read from or write to a VM data store158, IO streams may additionally be used to read from or write to other interfaces of a VM instance150(while still removing a need for user code156to conduct operations other than stream-level operations, such as creating network connections). For example, staging code157may “pipe” input data to an execution of user code156as an input stream, the output of which may be “piped” to the staging code157as an output stream. As another example, a staging VM instance or a hypervisor to a VM instance150may pass input data to a network port of the VM instance150, which may be read from by staging code157and passed as an input stream to the user code156. 
Similarly, data written to an output stream by the task code156may be written to a second network port of the instance150A for retrieval by the staging VM instance or hypervisor. In yet another example, a hypervisor to the instance150may pass input data as data written to a virtualized hardware input device (e.g., a keyboard) and staging code157may pass to the user code156a handle to the IO stream corresponding to that input device. The hypervisor may similarly pass to the user code156a handle for an IO stream corresponding to a virtualized hardware output device, and read data written to that stream as output data. Thus, the examples provided herein with respect to file streams may generally be modified to relate to any IO stream. The object storage service160and on-demand code execution system120are depicted inFIG.1as operating in a distributed computing environment including several computer systems that are interconnected using one or more computer networks (not shown inFIG.1). The object storage service160and on-demand code execution system120could also operate within a computing environment having a fewer or greater number of devices than are illustrated inFIG.1. Thus, the depiction of the object storage service160and on-demand code execution system120inFIG.1should be taken as illustrative and not limiting to the present disclosure. For example, the on-demand code execution system120or various constituents thereof could implement various Web services components, hosted or “cloud” computing environments, or peer-to-peer network configurations to implement at least a portion of the processes described herein. In some instances, the object storage service160and on-demand code execution system120may be combined into a single service. Further, the object storage service160and on-demand code execution system120may be implemented directly in hardware or software executed by hardware devices and may, for instance, include one or more physical or virtual servers implemented on physical computer hardware configured to execute computer executable instructions for performing various features that will be described herein. The one or more servers may be geographically dispersed or geographically co-located, for instance, in one or more data centers. In some instances, the one or more servers may operate as part of a system of rapidly provisioned and released computing resources, often referred to as a “cloud computing environment.” In the example ofFIG.1, the object storage service160and on-demand code execution system120are illustrated as connected to the network104. In some embodiments, any of the components within the object storage service160and on-demand code execution system120can communicate with other components of the on-demand code execution system120via the network104. In other embodiments, not all components of the object storage service160and on-demand code execution system120are capable of communicating with other components of the virtual environment100. In one example, only the frontends130and162(which may in some instances represent multiple frontends) may be connected to the network104, and other components of the object storage service160and on-demand code execution system120may communicate with other components of the environment100via the respective frontends130and162. 
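As a further illustration of the generality of IO streams discussed above, the same task code may be driven by a socket-backed stream in place of a file- or pipe-backed stream; in the following hedged Python sketch, a socketpair stands in for data arriving over a network port of the instance, outside of the task's control:

    import socket
    import threading

    def task(input_stream, output_stream):
        # Identical user code can consume file-, pipe-, or socket-backed streams.
        for line in input_stream:
            output_stream.write(line.upper())
        output_stream.flush()

    service_end, task_end = socket.socketpair()
    worker = threading.Thread(
        target=task,
        args=(task_end.makefile("rb"), task_end.makefile("wb")),
    )
    worker.start()
    service_end.sendall(b"input data\n")
    service_end.shutdown(socket.SHUT_WR)  # signals EOF on the task's input
    worker.join()
    print(service_end.recv(1024))         # b'INPUT DATA\n'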
While some functionalities are generally described herein with reference to an individual component of the object storage service160and on-demand code execution system120, other components or a combination of components may additionally or alternatively implement such functionalities. For example, while the object storage service160is depicted inFIG.1as including an object manipulation engine170, functions of that engine170may additionally or alternatively be implemented as tasks on the on-demand code execution system120. Moreover, while the on-demand code execution system120is described as an example system to apply data manipulation tasks, other compute systems may be used to execute user-defined tasks, which compute systems may include more, fewer or different components than depicted as part of the on-demand code execution system120. In a simplified example, the object storage service160may include a physical computing device configured to execute user-defined tasks on demand, thus representing a compute system usable in accordance with embodiments of the present disclosure. Thus, the specific configuration of elements withinFIG.1is intended to be illustrative. FIG.2depicts a general architecture of a frontend server200computing device implementing a frontend162ofFIG.1. The general architecture of the frontend server200depicted inFIG.2includes an arrangement of computer hardware and software that may be used to implement aspects of the present disclosure. The hardware may be implemented on physical electronic devices, as discussed in greater detail below. The frontend server200may include many more (or fewer) elements than those shown inFIG.2. It is not necessary, however, that all of these generally conventional elements be shown in order to provide an enabling disclosure. Additionally, the general architecture illustrated inFIG.2may be used to implement one or more of the other components illustrated inFIG.1. As illustrated, the frontend server200includes a processing unit290, a network interface292, a computer readable medium drive294, and an input/output device interface296, all of which may communicate with one another by way of a communication bus. The network interface292may provide connectivity to one or more networks or computing systems. The processing unit290may thus receive information and instructions from other computing systems or services via the network104. The processing unit290may also communicate to and from primary memory280or secondary memory298and further provide output information for an optional display (not shown) via the input/output device interface296. The input/output device interface296may also accept input from an optional input device (not shown). The primary memory280or secondary memory298may contain computer program instructions (grouped as units in some embodiments) that the processing unit290executes in order to implement one or more aspects of the present disclosure. These program instructions are shown inFIG.2as included within the primary memory280, but may additionally or alternatively be stored within secondary memory298. The primary memory280and secondary memory298correspond to one or more tiers of memory devices, including (but not limited to) RAM, 3D XPOINT memory, flash memory, magnetic storage, and the like. The primary memory280is assumed for the purposes of description to represent a main working memory of the frontend server200, with a higher speed but lower total capacity than secondary memory298. 
The primary memory280may store an operating system284that provides computer program instructions for use by the processing unit290in the general administration and operation of the frontend server200. The memory280may further include computer program instructions and other information for implementing aspects of the present disclosure. For example, in one embodiment, the memory280includes a user interface unit282that generates user interfaces (or instructions therefor) for display upon a computing device, e.g., via a navigation or browsing interface such as a browser or application installed on the computing device. In addition to or in combination with the user interface unit282, the memory280may include a control plane unit286and data plane unit288each executable to implement aspects of the present disclosure. Illustratively, the control plane unit286may include code executable to enable owners of data objects or collections of objects to attach manipulations, serverless functions, or data processing pipelines to an I/O path, in accordance with embodiments of the present disclosure. For example, the control plane unit286may enable the frontend162to implement the interactions ofFIG.3. The data plane unit288may illustratively include code enabling handling of I/O operations on the object storage service160, including implementation of manipulations, serverless functions, or data processing pipelines attached to an I/O path (e.g., via the interactions ofFIGS.5A-6B, implementation of the routines ofFIGS.7-8, etc.). The frontend server200ofFIG.2is one illustrative configuration of such a device, of which others are possible. For example, while shown as a single device, a frontend server200may in some embodiments be implemented as multiple physical host devices. Illustratively, a first device of such a frontend server200may implement the control plane unit286, while a second device may implement the data plane unit288. While described inFIG.2as a frontend server200, similar components may be utilized in some embodiments to implement other devices shown in the environment100ofFIG.1. For example, a similar device may implement a worker manager140, as described in more detail in U.S. Pat. No. 9,323,556, entitled “PROGRAMMATIC EVENT DETECTION AND MESSAGE GENERATION FOR REQUESTS TO EXECUTE PROGRAM CODE,” and filed Sep. 30, 2014 (the “'556 patent”), the entirety of which is hereby incorporated by reference. With reference toFIG.3, illustrative interactions are depicted for enabling a client device102A to modify an I/O path for one or more objects on an object storage service160by inserting a data manipulation into the I/O path, which manipulation is implemented within a task executable on the on-demand code execution system120. The interactions ofFIG.3begin at (1), where the client device102A authors the stream manipulation code. The code can illustratively function to access an input file handle provided on execution of the program (which may, for example, be represented by the standard input stream for a program, commonly “stdin”), perform manipulations on data obtained from that file handle, and write data to an output file handle provided on execution of the program (which may, for example, be represented by the standard output stream for a program, commonly “stdout”). 
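A minimal Python example of such stream manipulation code, assuming line-oriented text data and an illustrative redaction rule, is:

    import sys

    # Reads from the input handle (stdin) and writes to the output handle
    # (stdout), so the program can be driven by an operating system pipe,
    # e.g.: cat input.txt | python manipulate.py > output.txt
    for line in sys.stdin:
        sys.stdout.write(line.replace("secret", "[REDACTED]"))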
While examples are discussed herein with respect to a “file” handle, embodiments of the present disclosure may utilize handles providing access to any operating-system-level input/output (IO) stream, examples of which include byte streams, character streams, file streams, and the like. As used herein, the term operating-system-level input/output stream (or simply an “IO stream”) is intended to refer to a stream of data for which an operating system provides a defined set of functions, such as seeking within the stream, reading from a stream, and writing to a stream. Streams may be created in various manners. For example, a programming language may generate a stream by use of a function library to open a file on a local operating system, or a stream may be created by use of a “pipe” operator (e.g., within an operating system shell command language). As will be appreciated by one skilled in the art, most general purpose programming languages include, as basic functionality of the code, the ability to interact with streams. In accordance with embodiments of the present disclosure, task code may be authored to accept, as a parameter of the code, an input handle and an output handle, both representing IO streams (e.g., an input stream and an output stream, respectively). The code may then manipulate data of the input stream, and write an output to the output stream. Given use of a general purpose programming language, any of a variety of functions may be implemented according to the desires of the user. For example, a function may search for and remove confidential information from the input stream. While some code may utilize only input and output handles, other code may implement additional interfaces, such as network communication interfaces. However, by providing the code with access to input and output streams (via respective handles) created outside of the code, the need for the code to create such streams is removed. Moreover, because streams may be created outside of the code, and potentially outside of an execution environment of the code, stream manipulation code need not necessarily be trusted to conduct certain operations that may be necessary to create a stream. For example, a stream may represent information transmitted over a network connection, without the code being provided with access to that network connection. Thus, use of IO streams to pass data into and out of code executions can simplify code while increasing security. As noted above, the code may be authored in a variety of programming languages. Authoring tools for such languages are known in the art and thus will not be described herein. While authoring is described inFIG.3as occurring on the client device102A, the service160may in some instances provide interfaces (e.g., web GUIs) through which to author or select code. At (2), the client device102A submits the stream manipulation code to the frontend162of the service160, and requests that an execution of the code be inserted into an I/O path for one or more objects. Illustratively, the frontends162may provide one or more interfaces to the device102A enabling submission of the code (e.g., as a compressed file). The frontends162may further provide interfaces enabling designation of one or more I/O paths to which an execution of the code should be applied. Each I/O path may correspond, for example, to an object or collection of objects (e.g., a “bucket” of objects). 
In some instances, an I/O path may further correspond to a given way of accessing such object or collection (e.g., a URI through which the object is created), to one or more accounts attempting to access the object or collection, or to other path criteria. Designation of the path modification is then stored in the I/O path modification data store164, at (3). Additionally, the stream manipulation code is stored within the object data stores166at (4). As such, when an I/O request is received via the specified I/O path, the service160is configured to execute the stream manipulation code against input data for the request (e.g., data provided by the client device102A or an object of the service160, depending on the I/O request), before then applying the request to the output of the code execution. In this manner, a client device102A (which inFIG.3illustratively represents an owner of an object or object collection) can obtain greater control over data stored on and retrieved from the object storage service160. The interactions ofFIG.3generally relate to insertion of a single data manipulation into the I/O path of an object or collection on the service160. However, in some embodiments of the present disclosure an owner of an object or collection is enabled to insert multiple data manipulations into such an I/O path. Each data manipulation may correspond, for example, to a serverless code-based manipulation or a native manipulation of the service160. For example, assume an owner has submitted a data set to the service160as an object, and that the owner wishes to provide an end user with a filtered view of a portion of that data set. While the owner could store that filtered view of the portion as a separate object and provide the end user with access to that separate object, this results in data duplication on the service160. In the case that the owner wishes to provide multiple end users with different portions of the data set, potentially with customized filters, that data duplication grows, resulting in significant inefficiencies. In accordance with the present disclosure, another option may be for the owner to author or obtain custom code to implement different filters on different portions of the object, and to insert that code into the I/O path for the object. However, this approach may require the owner to duplicate some native functionality of the service160(e.g., an ability to retrieve a portion of a data set). Moreover, this approach would inhibit modularity and reusability of code, since a single set of code would be required to conduct two functions (e.g., selecting a portion of the data and filtering that portion). To address these shortcomings, embodiments of the present disclosure enable an owner to create a pipeline of data manipulations to be applied to an I/O path, linking together multiple data manipulations, each of which may also be inserted into other I/O paths. An illustrative visualization of such a pipeline is shown inFIG.4as pipeline400. Specifically, the pipeline400illustrates a series of data manipulations that an owner specifies are to occur on calling of a request method against an object or object collection. As shown inFIG.4, the pipeline begins with input data, specified within the call according to a called request method. For example, a PUT call may generally include the input data as the data to be stored, while a GET call may generally include the input data by reference to a stored object.
A LIST call may specify a directory, a manifest of which is the input data to the LIST request method. Contrary to typical implementations of request methods, in the illustrative pipeline400, the called request method is not initially applied to the input data. Rather, the input data is initially passed to an execution of “code A”404, where code A represents a first set of user-authored code. The output of that execution is then passed to “native function A”406, which illustratively represents a native function of the service160, such as a “SELECT” or byte-range function implemented by the object manipulation engine170. The output of that native function406is then passed to an execution of “code B”408, which represents a second set of user-authored code. Thereafter, the output of that execution408is passed to the called request method410(e.g., GET, PUT, LIST, etc.). Accordingly, rather than the request method being applied to the input data as in conventional techniques, in the illustration ofFIG.4, the request method is applied to the output of the execution408, which illustratively represents a transformation of the input data according to one or more owner-specified manipulations412. Notably, implementation of the pipeline400may not require any action or imply any knowledge of the pipeline400on the part of a calling client device102. As such, implementation of pipelines can be expected not to impact existing mechanisms of interacting with the service160(other than altering the data stored on or retrieved from the service160in accordance with the pipeline). For example, implementation of a pipeline can be expected not to require reconfiguration of existing programs utilizing an API of the service160. While the pipeline400ofFIG.4is linear, in some embodiments the service160may enable an owner to configure non-linear pipelines, such as by including conditional or branching nodes within the pipeline. Illustratively, as described in more detail below, data manipulations (e.g., serverless-based functions) can be configured to include a return value, such as an indication of successful execution, encountering an error, etc. In one example, the return value of a data manipulation may be used to select a conditional branch within a branched pipeline, such that a first return value causes the pipeline to proceed on a first branch, while a second return value causes the pipeline to proceed on a second branch. In some instances, pipelines may include parallel branches, such that data is copied or divided to multiple data manipulations, the outputs of which are passed to a single data manipulation for merging prior to executing the called method. The service160may illustratively provide a graphical user interface through which owners can create pipelines, such as by specifying nodes within the pipeline and linking those nodes together via logical connections. A variety of flow-based development interfaces are known and may be utilized in conjunction with aspects of the present disclosure. Furthermore, in some embodiments, a pipeline applied to a particular I/O path may be generated on-the-fly, at the time of a request, based on data manipulations applied to the path according to different criteria. For example, an owner of a data collection may apply a first data manipulation to all interactions with objects within a collection, and a second data manipulation to all interactions obtained via a given URI.
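One way to picture the pipeline400and the conditional branching described above is the following sketch (Python assumed; the node representation and the use of return values to select branches are illustrative assumptions rather than the service's actual data model):

    from dataclasses import dataclass, field
    from typing import Callable, Dict, Optional, Tuple

    @dataclass
    class Node:
        # A data manipulation: maps input bytes to (output bytes, return value).
        run: Callable[[bytes], Tuple[bytes, int]]
        # Return value -> next node; if no node matches, the pipeline ends.
        branches: Dict[int, "Node"] = field(default_factory=dict)
        default: Optional["Node"] = None

    def execute_pipeline(start: Node, data: bytes) -> bytes:
        node: Optional[Node] = start
        while node is not None:
            data, return_value = node.run(data)
            # Conditional branching: the return value selects the next node.
            node = node.branches.get(return_value, node.default)
        return data

    # Linear pipeline mirroring FIG. 4: code A -> native function A -> code B.
    code_b = Node(run=lambda d: (d + b" [code B]", 1))
    native_a = Node(run=lambda d: (d.upper(), 1), default=code_b)
    code_a = Node(run=lambda d: (d + b" [code A]", 1), default=native_a)

    # The called request method would then be applied to this output.
    print(execute_pipeline(code_a, b"input data"))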
Thus, when a request is received to interact with an object within the collection and via the given URI, the service160may generate a pipeline combining the first and second data manipulations. The service160may illustratively implement a hierarchy of criteria, such that manipulations applied to objects are placed within the pipeline prior to manipulations applied to a URI, etc. In some embodiments, client devices102may be enabled to request inclusion of a data manipulation within a pipeline. For example, within parameters of a GET request, a client device102may specify a particular data manipulation to be included within a pipeline applied in connection with the request. Illustratively, a collection owner may specify one or more data manipulations allowed for the collection, and further specify identifiers for those manipulations (e.g., function names). Thus, when requesting to interact with the collection, a client device102may specify the identifier to cause the manipulation to be included within a pipeline applied to the I/O path. In one embodiment, client-requested manipulations are appended to the end of a pipeline subsequent to owner-specified data manipulations and prior to implementing the requested request method. For example, where a client device102requests to GET a data set, and requests that a search function be applied to the data set before the GET method is implemented, the search function can receive as input data the output of owner-specified data manipulations for the data set (e.g., manipulations to remove confidential information from the data set). In addition, requests may in some embodiments specify parameters to be passed to one or more data manipulations (whether specified within the request or not). Accordingly, while embodiments of the present disclosure can enable data manipulations without knowledge of those manipulations on the part of client devices102, other embodiments may enable client devices102to pass information within an I/O request for use in implementing data manipulations. Moreover, while example embodiments of the present disclosure are discussed with respect to manipulation of input data to a called method, embodiments of the present disclosure may further be utilized to modify aspects of a request, including a called method. For example, a serverless task execution may be passed the content of a request (including, e.g., a called method and parameters) and be configured to modify and return, as a return value to a frontend162, a modified version of the method or parameters. Illustratively, where a client device102is authenticated as a user with access to only a portion of a data object, a serverless task execution may be passed a call to “GET” that data object, and may transform parameters of the GET request such that it applies only to a specific byte range of the data object corresponding to the portion that the user may access. As a further example, tasks may be utilized to implement customized parsing or restrictions on called methods, such as by limiting the methods a user may call, the parameters to those methods, or the like. In some instances, application of one or more functions to a request (e.g., to modify the method called or method parameters) may be viewed as a “pre-data processing” pipeline, and may thus be implemented prior to obtaining the input data within the pipeline400(which input data may change due to changes in the request), or may be implemented independently of a data manipulation pipeline400.
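The request-modification case can be sketched as follows (Python assumed; the dictionary representation of a request and the per-user byte-range table are purely illustrative):

    # Hypothetical "pre-data processing" task: given the content of a GET
    # request, restrict its parameters to the byte range the caller may access.
    ALLOWED_RANGES = {
        # user identifier -> (first byte, last byte) the user may read
        "user-a": (0, 1023),
        "user-b": (1024, 2047),
    }

    def restrict_get(request: dict) -> dict:
        first, last = ALLOWED_RANGES[request["user"]]
        # Return a modified version of the request; the frontend would then
        # apply the GET method only to this byte range of the object.
        modified = dict(request)
        modified["range"] = f"bytes={first}-{last}"
        return modified

    print(restrict_get({"method": "GET", "user": "user-a", "key": "data-object"}))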
Similarly, while example embodiments of the present disclosure are discussed with respect to application of a called method to output data of one or more data manipulations, in some embodiments manipulations can additionally or alternatively occur after application of a called method. For example, a data object may contain sensitive data that a data owner desires to remove prior to providing the data to a client. The owner may further enable a client to specify native manipulations to the data set, such as conducting a database query on the dataset (e.g., via a SELECT resource method). While the owner may specify a pipeline for the data set to cause filtering of sensitive data to be conducted prior to application of the SELECT method, such an order of operations may be undesirable, as filtering may occur with respect to the entire data object rather than solely the portion returned by the SELECT query. Accordingly, additionally or alternatively to specifying manipulations that occur prior to satisfying a request method, embodiments of the present disclosure can enable an owner to specify manipulations to occur subsequent to application of a called method but prior to conducting a final operation to satisfy a request. For example, in the case of a SELECT operation, the service160may first conduct the SELECT operation against specified input data (e.g., a data object), and then pass the output of that SELECT operation to a data manipulation, such as a serverless task execution. The output of that execution can then be returned to a client device102to satisfy the request. WhileFIG.3andFIG.4are generally described with reference to serverless tasks authored by an owner of an object or collection, in some instances the service160may enable code authors to share their tasks with other users of the service160, such that code of a first user is executed in the I/O path of an object owned by a second user. The service160may also provide a library of tasks for use by each user. In some cases, the code of a shared task may be provided to other users. In other cases, the code of the shared task may be hidden from other users, such that the other users can execute the task but not view code of the task. In these cases, other users may illustratively be enabled to modify specific aspects of code execution, such as the permissions under which the code will execute. With reference toFIGS.5A and5B, illustrative interactions will be discussed for applying a modification to an I/O path for a request to store an object on the service160, which request is referred to in connection with these figures as a “PUT” request or “PUT object call.” While shown in two figures, numbering of interactions is maintained acrossFIGS.5A and5B. The interactions begin at (1), where a client device102A submits a PUT object call to the storage service160, corresponding to a request to store input data (e.g., included or specified within the call) on the service160. The input data may correspond, for example, to a file stored on the client device102A. As shown inFIG.5A, the call is directed to a frontend162of the service160that, at (2), retrieves from the I/O path modification data store164an indication of modifications to the I/O path for the call. The indication may reflect, for example, a pipeline to be applied to calls received on the I/O path.
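The retrieval of path modifications at (2) can be pictured with the following sketch (Python assumed; the path-criteria keys and the layout of the store are illustrative assumptions):

    # Hypothetical contents of the I/O path modification data store: each
    # entry maps path criteria to a pipeline of named manipulations.
    PATH_MODIFICATIONS = [
        {"bucket": "photos", "method": "PUT", "pipeline": ["strip_exif"]},
        {"bucket": "records", "method": "GET", "pipeline": ["redact", "select"]},
    ]

    def modifications_for(bucket: str, method: str) -> list:
        # Return the pipeline attached to the I/O path for this call, if any.
        for entry in PATH_MODIFICATIONS:
            if entry["bucket"] == bucket and entry["method"] == method:
                return entry["pipeline"]
        return []

    print(modifications_for("photos", "PUT"))   # ['strip_exif']
    print(modifications_for("photos", "GET"))   # []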
The I/O path for a call may generally be specified with respect to a request method included within a call, an object or collection of objects indicated within the call, a specific mechanism of reaching the service160(e.g., protocol, URI used, etc.), an identity or authentication status of the client device102A, or a combination thereof. For example, inFIG.5A, the I/O path used can correspond to use of a PUT request method directed to a particular URI (e.g., associated with the frontend162) to store an object in a particular logical location on the service160(e.g., a specific bucket). InFIGS.5A and5B, it is assumed that an owner of that logical location has previously specified a modification to the I/O path, and specifically, has specified that a serverless function should be applied to the input data before a result of that function is stored in the service160. Accordingly, at (3), the frontend162detects within the modifications for the I/O path inclusion of a serverless task execution. Thus, at (4), the frontend162submits a call to the on-demand code execution system120to execute the task specified within the modifications against the input data specified within the call. The on-demand code execution system120, at (5), therefore generates an execution environment502in which to execute code corresponding to the task. Illustratively, the call may be directed to a frontend130of the system, which may distribute instructions to a worker manager140to select or generate a VM instance150in which to execute the task, which VM instance150illustratively represents the execution environment502. During generation of the execution environment502, the system120further provisions the environment with code504of the task indicated within the I/O path modification (which may be retrieved, for example, from the object data stores166). While not shown inFIG.5A, the environment502further includes other dependencies of the code, such as access to an operating system, a runtime required to execute the code, etc. In some embodiments, generation of the execution environment502can include configuring the environment502with security constraints limiting access to network resources. Illustratively, where a task is intended to conduct data manipulation without reference to network resources, the environment502can be configured with no ability to send or receive information via a network. Where a task is intended to utilize network resources, access to such resources can be provided on a “whitelist” basis, such that network communications from the environment502are allowed only for specified domains, network addresses, or the like. Network restrictions may be implemented, for example, by a host device hosting the environment502(e.g., by a hypervisor or host operating system). In some instances, network access requirements may be utilized to assist in placement of the environment502, either logically or physically. For example, where a task requires no access to network resources, the environment502for the task may be placed on a host device that is distant from other network-accessible services of the service provider system110, such as an “edge” device with a lower-quality communication channel to those services. 
Where a task requires access to otherwise private network services, such as services implemented within a virtual private cloud (e.g., a local-area-network-like environment implemented on the service160on behalf of a given user), the environment502may be created to exist logically within that cloud, such that a task executing in the environment502accesses resources within the cloud. In some instances, a task may be configured to execute within a private cloud of a client device102that submits an I/O request. In other instances, a task may be configured to execute within a private cloud of an owner of the object or collection referenced within the request. In addition to generating the environment502, at (6), the system120provisions the environment with stream-level access to an input file handle506and an output file handle508, usable to read from and write to the input data and output data of the task execution, respectively. In one embodiment, file handles506and508may point to a (physical or virtual) block storage device (e.g., disk drive) attached to the environment502, such that the task can interact with a local file system to read input data and write output data. For example, the environment502may represent a virtual machine with a virtual disk drive, and the system120may obtain the input data from the service160and store the input data on the virtual disk drive. Thereafter, on execution of the code, the system120may pass to the code a handle of the input data as stored on the virtual disk drive, and a handle of a file on the drive to which to write output data. In another embodiment, file handles506and508may point to a network file system, such as an NFS-compatible file system, on which the input data has been stored. For example, the frontend162during processing of the call may store the input data as an object on the object data stores166, and the file-level interface166may provide file-level access to the input data and to a file representing output data. In some cases, the file handles506and508may point to files on a virtual file system, such as a file system in user space. By providing handles506and508, the task code504is enabled to read the input data and write output data using stream manipulations, as opposed to being required to implement network transmissions. Creation of the handles506and508(or streams corresponding to the handles) may illustratively be achieved by execution of staging code157within or associated with the environment502. The interactions ofFIG.5Aare continued inFIG.5B, where the system120executes the task code504at (7). As the task code504may be user-authored, any number of functionalities may be implemented within the code504. However, for the purposes of description ofFIGS.5A and5B, it will be assumed that the code504, when executed, reads input data from the input file handle506(which may be passed as a commonly used input stream, such as stdin), manipulates the input data, and writes output data to the output file handle508(which may be passed as a commonly used output stream, such as stdout). Accordingly, at (8), the system120obtains data written to the output file (e.g., the file referenced in the output file handle) as output data of the execution. In addition, at (9), the system120obtains a return value of the code execution (e.g., a value passed in a final call of the function). For the purposes of description ofFIGS.5A and5B, it will be assumed that the return value indicates success of the execution.
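The staging arrangement described above, in which the input and output streams are created outside of the task and passed in as stdin and stdout, can be sketched with ordinary operating-system primitives (Python on a Unix-like system is assumed; the file locations are illustrative, and the tr command stands in for arbitrary task code):

    import subprocess

    # Stage the input data to a local file, outside of the task code
    # (mirroring provision of the input file handle 506).
    with open("/tmp/input.dat", "wb") as f:
        f.write(b"data obtained from the object storage service\n")

    # Execute the task with stdin bound to the staged input and stdout
    # bound to an output file; the task itself opens neither file.
    with open("/tmp/input.dat", "rb") as stdin, \
         open("/tmp/output.dat", "wb") as stdout:
        result = subprocess.run(["tr", "a-z", "A-Z"], stdin=stdin, stdout=stdout)

    # The output data and the return value are collected separately,
    # mirroring interactions (8) and (9).
    with open("/tmp/output.dat", "rb") as f:
        print(result.returncode, f.read())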
At (10), the output data and the success return value are then passed to the frontend162. While shown as a single interaction inFIG.5B, in some embodiments output data of a task execution and a return value of that execution may be returned separately. For example, during execution, task code504may write to an output file through the handle508, and this data may be periodically or iteratively returned to the service160. Illustratively, where the output file exists on a file system in user space implemented by staging code, the staging code may detect and forward each write to the output file to the frontend162. Where the output file exists on a network file system, writes to the file may directly cause the written data to be transmitted to the interface166and thus the service160. In some instances, transmitting written data iteratively may reduce the amount of storage required locally to the environment502, since written data can, according to some embodiments, be deleted from local storage of the environment502. In addition, while a success return value is assumed inFIGS.5A and5B, other types of return value are possible and contemplated. For example, an error return value may be used to indicate to the frontend162that an error occurred during execution of task code504. As another example, user-defined return values may be used to control how conditional branching within a pipeline proceeds. In some cases, the return value may indicate to the frontend162a request for further processing. For example, a task execution may return to the frontend162a call to execute another serverless task (potentially not specified within a path modification for the current I/O path). Moreover, return values may specify to the frontend162what return value is to be returned to the client device102A. For example, a typical PUT request method called at the service160may be expected to return an HTTP 200 code (“OK”). As such, a success return value from the task code may further indicate that the frontend162should return an HTTP 200 code to the client device102A. An error return value may, for example, indicate that the frontend162should return a 3XX HTTP redirection or 4XX HTTP error code to the client device102A. Still further, in some cases, return values may specify to the frontend162content of a return message to the client device102A other than a return value. For example, the frontend162may be configured to return a given HTTP code (e.g., 200) for any request from the client device102A that is successfully received at the frontend162and invokes a data processing pipeline. A task execution may then be configured to specify, within its return value, data to be passed to the client device102A in addition to that HTTP code. Such data may illustratively include structured data (e.g., extensible markup language (XML) data) providing information generated by the task execution, such as data indicating success or failure of the task. This approach may beneficially enable the frontend162to quickly respond to requests (e.g., without awaiting execution of a task) while still enabling a task execution to pass information to the client device102. For purposes of the present illustration, it will be assumed that the success return value of the task indicates that an HTTP 2XX success response should be passed to the device102A. Accordingly, on receiving output data, the frontend162stores the output data as an object within the object data stores166, at (11).
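The mapping from task return values to client-facing responses might be sketched as follows (Python assumed; the numeric values and the mapping itself are illustrative conventions, not a defined protocol of the service160; a return value of 1 is treated as success, matching the default convention noted later in this description):

    def response_for(return_value: int) -> tuple:
        # Map a task's return value to the HTTP status the frontend
        # returns to the client device.
        if return_value == 1:
            return (200, "OK")
        if return_value == 2:
            return (302, "redirect")       # illustrative 3XX case
        return (400, "error during task")  # illustrative 4XX case

    for rv in (1, 2, 9):
        print(rv, "->", response_for(rv))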
Interaction (11) illustratively corresponds to implementation of the PUT request method, initially called for by the client device102A, albeit by storing the output of the task execution rather than the provided input data. After implementing the called PUT request method, the frontend162, at (12), returns to the client device102A the success indicator indicated by the success return value of the task (e.g., an HTTP 200 response code). Thus, from the perspective of the client device102A, a call to PUT an object on the storage service160resulted in creation of that object on the service160. However, rather than storing the input data provided by the device102A, the object stored on the service160corresponds to output data of an owner-specified task, thus enabling the owner of the object greater control over the contents of that object. In some use cases, the service160may additionally store the input data as an object (e.g., where the owner-specified task corresponds to code executable to provide output data usable in conjunction with the input data, such as a checksum generated from the input data). With reference toFIGS.6A and6B, illustrative interactions will be discussed for applying a modification to an I/O path for a request to retrieve an object on the service160, which request is referred to in connection with these figures as a “GET” request or “GET call.” While shown in two figures, numbering of interactions is maintained acrossFIGS.6A and6B. The interactions begin at (1), where a client device102A submits a GET call to the storage service160, corresponding to a request to obtain data of an object (identified within the call) stored on the service160. As shown inFIG.6A, the call is directed to a frontend162of the service160that, at (2), retrieves from the I/O path modification data store164an indication of modifications to the I/O path for the call. For example, inFIG.6A, the I/O path used can correspond to use of a GET request method directed to a particular URI (e.g., associated with the frontend162) to retrieve an object in a particular logical location on the service160(e.g., a specific bucket). InFIGS.6A and6B, it is assumed that an owner of that logical location has previously specified a modification to the I/O path, and specifically, has specified that a serverless function should be applied to the object before a result of that function is returned to the device102A as the requested object. Accordingly, at (3), the frontend162detects within the modifications for the I/O path inclusion of a serverless task execution. Thus, at (4), the frontend162submits a call to the on-demand code execution system120to execute the task specified within the modifications against the object specified within the call. The on-demand code execution system120, at (5), therefore generates an execution environment502in which to execute code corresponding to the task. Illustratively, the call may be directed to a frontend130of the system, which may distribute instructions to a worker manager140to select or generate a VM instance150in which to execute the task, which VM instance150illustratively represents the execution environment502. During generation of the execution environment502, the system120further provisions the environment with code504of the task indicated within the I/O path modification (which may be retrieved, for example, from the object data stores166).
While not shown inFIG.6A, the environment502further includes other dependencies of the code, such as access to an operating system, a runtime required to execute the code, etc. In addition, at (6), the system120provisions the environment with file-level access to an input file handle506and an output file handle508, usable to read from and write to the input data (the object) and output data of the task execution, respectively. As discussed above, file handles506and508may point to a (physical or virtual) block storage device (e.g., disk drive) attached to the environment502, such that the task can interact with a local file system to read input data and write output data. For example, the environment502may represent a virtual machine with a virtual disk drive, and the system120may obtain the object referenced within the call from the service160, at (6′), and store the object on the virtual disk drive. Thereafter, on execution of the code, the system120may pass to the code a handle of the object as stored on the virtual disk drive, and a handle of a file on the drive to which to write output data. In another embodiment, file handles506and508may point to a network file system, such as an NFS-compatible file system, on which the object has been stored. For example, the file-level interface166may provide file-level access to the object as stored within the object data stores, as well as to a file representing output data. By providing handles506and508, the task code504is enabled to read the input data and write output data using stream manipulations, as opposed to being required to implement network transmissions. Creation of the handles506and508may illustratively be achieved by execution of staging code157within or associated with the environment502. The interactions ofFIG.6Aare continued inFIG.6B, where the system120executes the task code504at (7). As the task code504may be user-authored, any number of functionalities may be implemented within the code504. However, for the purposes of description ofFIGS.6A and6B, it will be assumed that the code504, when executed, reads input data (corresponding to the object identified within the call) from the input file handle506(which may be passed as a commonly used input stream, such as stdin), manipulates the input data, and writes output data to the output file handle508(which may be passed as a commonly used output stream, such as stdout). Accordingly, at (8), the system120obtains data written to the output file (e.g., the file referenced in the output file handle) as output data of the execution. In addition, at (9), the system120obtains a return value of the code execution (e.g., a value passed in a final call of the function). For the purposes of description ofFIGS.6A and6B, it will be assumed that the return value indicates success of the execution. At (10), the output data and the success return value are then passed to the frontend162. On receiving output data and the return value, the frontend162returns, at (11), the output data of the task execution as the requested object. Interaction (11) thus illustratively corresponds to implementation of the GET request method, initially called for by the client device102A, albeit by returning the output of the task execution rather than the object specified within the call. From the perspective of the client device102A, a call to GET an object from the storage service160therefore results in return of data to the client device102A as the object.
However, rather than returning the object as stored on the service160, the data provided to the client device102A corresponds to output data of an owner-specified task, thus enabling the owner of the object greater control over the data returned to the client device102A. As discussed above with respect toFIGS.5A and5B, while shown as a single interaction inFIG.6B, in some embodiments output data of a task execution and a return value of that execution may be returned separately. In addition, while a success return value is assumed inFIGS.6A and6B, other types of return value are possible and contemplated, such as error values, pipeline-control values, or calls to execute other data manipulations. Moreover, return values may indicate what return value is to be returned to the client device102A (e.g., as an HTTP status code). In some instances, where output data is iteratively returned from a task execution, the output data may also be iteratively provided by the frontend162to the client device102A. Where output data is large (e.g., on the order of hundreds of megabytes, gigabytes, etc.), iteratively returning output data to the client device102A can enable that data to be provided as a stream, thus speeding delivery of the content to the device102A relative to delaying return of the data until execution of the task completes. While illustrative interactions are described above with reference toFIGS.5A-6B, various modifications to these interactions are possible and contemplated herein. For example, while the interactions described above relate to manipulation of input data, in some embodiments a serverless task may be inserted into the I/O path of the service160to perform functions other than data manipulation. Illustratively, a serverless task may be utilized to perform validation or authorization with respect to a called request method, to verify that a client device102A is authorized to perform the method. Task-based validation or authorization may enable functions not provided natively by the service160. For example, consider a collection owner who wishes to limit certain client devices102to accessing only objects in the collection created during a certain time range (e.g., the last 30 days, any time excluding the last 30 days, etc.). While the service160may natively provide authorization on a per-object or per-collection basis, the service160may in some cases not natively provide authorization on a duration-since-creation basis. Accordingly, embodiments of the present disclosure enable the owner to insert into an I/O path to the collection (e.g., a GET path using a given URI to the collection) a serverless task that determines whether the client is authorized to retrieve a requested object based on a creation time of that object. Illustratively, the return value provided by an execution of the task may correspond to an “authorized” or “unauthorized” response. In instances where a task does not perform data manipulation, it may be unnecessary to provision an environment of the task execution with input and output stream handles. Accordingly, the service160and system120can be configured to forego provisioning the environment with such handles in these cases. Whether a task implements data manipulation may be specified, for example, on creation of the task and stored as metadata for the task (e.g., within the object data stores166).
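The duration-since-creation authorization discussed above might be expressed as a task along these lines (Python assumed; the metadata format and the textual return values are illustrative):

    import datetime

    MAX_AGE_DAYS = 30

    def authorize(object_metadata: dict) -> str:
        # Permit retrieval only of objects created during the last 30 days.
        created = datetime.datetime.fromisoformat(object_metadata["created"])
        age = datetime.datetime.now(datetime.timezone.utc) - created
        # The return value stands in for an "authorized" or "unauthorized"
        # response, which the frontend would act upon.
        return "authorized" if age.days < MAX_AGE_DAYS else "unauthorized"

    print(authorize({"created": "2020-01-01T00:00:00+00:00"}))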
The service160may thus determine from that metadata whether data manipulation within the task should be supported by provisioning of appropriate stream handles. While some embodiments may utilize return values without use of stream handles, other embodiments may instead utilize stream handles without use of return values. For example, while the interactions described above relate to providing a return value of a task execution to the storage service160, in some instances the system120may be configured to detect completion of a function based on interaction with an output stream handle. Illustratively, staging code within an environment (e.g., providing a file system in user space or network-based file system) may detect a call to deallocate the stream handle (e.g., by calling a “file.close()” function or the like). The staging code may interpret such a call as successful completion of the function, and notify the service160of successful completion without requiring the task execution to explicitly provide a return value. While the interactions described above generally relate to passing of input data to a task execution, additional or alternative information may be passed to the execution. By way of non-limiting example, such information may include the content of the request from the client device102(e.g., the HTTP data transmitted), metadata regarding the request (e.g., a network address from which the request was received or a time of the request), metadata regarding the client device102(e.g., an authentication status of the device, account type, or request history), or metadata regarding the requested object or collection (e.g., size, storage location, permissions, or time created, modified, or accessed). Moreover, in addition or as an alternative to manipulation of input data, task executions may be configured to modify metadata regarding input data, which may be stored together with the input data (e.g., within the object) and thus written by way of an output stream handle, or which may be separately stored and thus modified by way of a metadata stream handle, inclusion of metadata in a return value, or separate network transmission to the service160. With reference toFIG.7, an illustrative routine700for implementing owner-defined functions in connection with an I/O request obtained at the object storage service ofFIG.1over an I/O path will be described. The routine700may illustratively be implemented subsequent to association of an I/O path (e.g., defined in terms of an object or collection, a mechanism of access to the object or collection, such as a URI, an account transmitting an IO request, etc.) with a pipeline of data manipulations. For example, the routine700may be implemented subsequent to the interactions ofFIG.3, discussed above. The routine700is illustratively implemented by a frontend162. The routine700begins at block702, where the frontend162obtains a request to apply an I/O method to input data. The request illustratively corresponds to a client device (e.g., an end user device). The I/O method may correspond, for example, to an HTTP request method, such as GET, PUT, LIST, DELETE, etc. The input data may be included within the request (e.g., within a PUT request), or referenced in the request (e.g., as an existing object on the object storage service160). At block704, the frontend162determines one or more data manipulations in the I/O path for the request.
As noted above, the I/O path may be defined based on a variety of criteria (or combinations thereof), such as the object or collection referenced in the request, a URI through which the request was transmitted, an account associated with the request, etc. Manipulations for each defined I/O path may illustratively be stored at the object storage service160. Accordingly, at block704, the frontend162may compare parameters of the I/O path for the request to stored data manipulations at the object storage service160to determine data manipulations inserted into the I/O path. In one embodiment, the manipulations form a pipeline, such as the pipeline400ofFIG.4, which may be previously stored or constructed by the frontend162at block704(e.g., by combining multiple manipulations that apply to the I/O path). In some instances, an additional data manipulation may be specified within the request, which data manipulation may be inserted, for example, prior to pre-specified data manipulations (e.g., not specified within the request). In other instances, the request may exclude reference to any data manipulation. At block706, the frontend162passes input data of the I/O request to an initial data manipulation for the I/O path. The initial data manipulation may include, for example, a native manipulation of the object storage service160or a serverless task defined by an owner of the object or collection referenced in the call. Illustratively, where the initial data manipulation is a native manipulation, the frontend162may pass the input to the object manipulation engine170ofFIG.1. Where the initial data manipulation is a serverless task, the frontend162can pass the input to the on-demand code execution system120ofFIG.1for processing via an execution of the task. An illustrative routine for implementing a serverless task is described below with reference toFIG.8. WhileFIG.7illustratively describes data manipulations, in some instances other processing may be applied to an I/O path by an owner. For example, an owner may insert into an I/O path for an object or collection a serverless task that provides authentication independent of data manipulation. Accordingly, in some embodiments block706may be modified such that other data, such as metadata regarding a request or an object specified in the request, is passed to an authentication function or other path manipulation. Thereafter, the routine700proceeds to block708, where the implementation of the routine700varies according to whether additional data manipulations have been associated with the I/O path. If so, the routine700proceeds to block710, where an output of a prior manipulation is passed to a next manipulation associated with the I/O path (e.g., a subsequent stage of a pipeline). Subsequent to block710, the routine700then returns to block708, until no additional manipulations exist to be implemented. The routine700then proceeds to block712, where the frontend162applies the called I/O method (e.g., GET, PUT, POST, LIST, DELETE, etc.) to the output of the prior manipulation. For example, the frontend162may provide the output as a result of a GET or LIST request, or may store the output as a new object as a result of a PUT or POST request. The frontend162may further provide a response to the request to a requesting device, such as an indication of success of the routine700(or, in cases of failure, failure of the routine). 
In one embodiment, the response may be determined by a return value provided by a data manipulation implemented at blocks706or710(e.g., the final manipulation implemented before error or success). For example, a manipulation that indicates an error (e.g., lack of authorization) may specify an HTTP code indicating that error, while a manipulation that proceeds successfully may instruct the frontend162to return an HTTP code indicating success, or may instruct the frontend162to return a code otherwise associated with application of the I/O method (e.g., in the absence of data manipulations). The routine700thereafter ends at block714. Notably, application of the called method to that output, as opposed to input specified in an initial request, may alter data stored in or retrieved from the object storage service160. For example, data stored on the service160as an object may differ from the data submitted within a request to store such data. Similarly, data retrieved from the system as an object may not match the object as stored on the system. Accordingly, implementation of routine700enables an owner of data objects to assert greater control over I/O to an object or collection stored on the object storage service160on behalf of the owner. In some instances, additional or alternative blocks may be included within the routine700, or implementation of such blocks may include additional or alternative operations. For example, as discussed above, in addition to or as an alternative to providing output data, serverless task executions may provide a return value. In some instances, this return value may instruct a frontend162as to further actions to take in implementing the manipulation. For example, an error return value may instruct the frontend162to halt implementation of manipulations, and provide a specified error value (e.g., an HTTP error code) to a requesting device. Another return value may instruct the frontend162to implement an additional serverless task or manipulation. Thus, the routine700may in some cases be modified to include, subsequent to blocks706and710for example, handling of the return value of a prior manipulation (or block708may be modified to include handling of such a value). Thus, the routine700is intended to be illustrative in nature. With reference toFIG.8, an illustrative routine800will be described for executing a task on the on-demand code execution system ofFIG.1to enable data manipulations during implementation of an owner-defined function. The routine800is illustratively implemented by the on-demand code execution system120ofFIG.1. The routine800begins at block802, where the system120obtains a call to implement a stream manipulation task (e.g., a task that manipulates data provided as an input IO stream handle). The call may be obtained, for example, in conjunction with blocks706or710of the routine700ofFIG.7. The call may include input data for the task, as well as other metadata, such as metadata of a request that preceded the call, metadata of an object referenced within the call, or the like. At block804, the system120generates an execution environment for the task. Generation of an environment may include, for example, generation of a container or virtual machine instance in which the task may execute and provisioning of the environment with code of the task, as well as any dependencies of the code (e.g., runtimes, libraries, etc.). In one embodiment, the environment is generated with network permissions corresponding to permissions specified for the task.
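The overall control flow of routine700, including handling of return values from each manipulation, can be summarized in the following sketch (Python assumed; the manipulation and return-value representations are illustrative, with 1 again standing in for success):

    def routine_700(manipulations, input_data, apply_io_method):
        # Blocks 706-710: pass data through each manipulation in the I/O
        # path, handling the return value of each.
        data = input_data
        for manipulation in manipulations:
            data, return_value = manipulation(data)
            if return_value != 1:
                # Halt and surface an error (e.g., an HTTP error code).
                return (400, None)
        # Block 712: apply the called I/O method to the final output.
        return (200, apply_io_method(data))

    # Usage: a one-stage pipeline followed by a stand-in for a PUT method.
    store = {}
    status, _ = routine_700(
        [lambda d: (d.upper(), 1)],
        b"input data",
        lambda d: store.setdefault("object-key", d),
    )
    print(status, store)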
As discussed above, such permissions may be restrictively (as opposed to permissively) set, according to a whitelist for example. As such, absent specification of permissions by an owner of an I/O path, the environment may lack network access. Because the task operates to manipulate streams, rather than network data, this restrictive model can increase security without detrimental effect on functionality. In some embodiments, the environment may be generated at a logical network location providing access to otherwise restricted network resources. For example, the environment may be generated within a virtual private local area network (e.g., a virtual private cloud environment) associated with a calling device. At block806, the system120stages the environment with an IO stream representing the input data. Illustratively, the system120may configure the environment with a file system that includes the input data, and pass to the task code a handle enabling access of the input data as a file stream. For example, the system120may configure the environment with a network file system, providing network-based access to the input data (e.g., as stored on the object storage system). In another example, the system120may configure the environment with a “local” file system (e.g., from the point of view of an operating system providing the file system), and copy the input data to the local file system. The local file system may, for example, be a filesystem in user space (FUSE). In some instances, the local file system may be implemented on a virtualized disk drive, provided by the host device of the environment or by a network-based device (e.g., as a network-accessible block storage device). In other embodiments, the system120may provide the IO stream by “piping” the input data to the execution environment, by writing the input data to a network socket of the environment (which may not provide access to an external network), etc. The system120further configures the environment with stream-level access to an output stream, such as by creating a file on the file system for the output data, enabling an execution of the task to create such a file, piping a handle of the environment (e.g., stdout) to a location on another VM instance colocated with the environment or a hypervisor of the environment, etc. At block808, the task is executed within the environment. Execution of the task may include executing code of the task, and passing to the execution a handle or handles of the input stream and output stream. For example, the system120may pass to the execution a handle for the input data, as stored on the file system, as a “stdin” variable. The system may further pass to the execution a handle for the output data stream, e.g., as a “stdout” variable. In addition, the system120may pass other information, such as metadata of the request or an object or collection specified within the request, as parameters to the execution. The code of the task may thus execute to conduct stream manipulations on the input data according to functions of the code, and to write an output of the execution to the output stream using OS-level stream operations. The routine800then proceeds to block810, where the system120returns data written to the output stream as output data of the task (e.g., to the frontend162of the object storage system). In one embodiment, block810may occur subsequent to the execution of the task completing, and as such, the system120may return the data written as the complete output data of the task.
In other instances, block810may occur during execution of the task. For example, the system120may detect new data written to the output stream and return that data immediately, without awaiting completion of the task execution. Illustratively, where the output stream is written to an output file, the system120may delete data of the output file after writing, such that sending of new data immediately obviates a need for the file system to maintain sufficient storage to store all output data of the task execution. Still further, in some embodiments, block810may occur on detecting a close of the output stream handle describing the output stream. In addition, at block812, subsequent to the execution completing, the system120returns a return value provided by the execution (e.g., to the frontend162of the object storage system). The return value may specify an outcome of the execution, such as success or failure. In some instances, the return value may specify a next action to be undertaken, such as implementation of an additional data manipulation. Moreover, the return value may specify data to be provided to a calling device requesting an I/O operation on a data object, such as an HTTP code to be returned. As discussed above, the frontend162may obtain such return value and undertake appropriate action, such as returning an error or HTTP code to a calling device, implementing an additional data manipulation, performing an I/O operation on output data, etc. In some instances, a return value may be explicitly specified within code of the task. In other instances, such as where no return value is specified within the code, a default return value may be returned (e.g., a ‘1’ indicating success). The routine800then ends at block814. FIG.9is a flow diagram of an illustrative routine900that may be executed by a code execution service, such as the on-demand code execution system120. The routine900may be used to dynamically concatenate or otherwise combine multiple data objects or portions thereof at run time (“on-the-fly”) in response to a request for a data object. In some embodiments, the routine900may be used to generate a response that includes a composite of multiple data objects, portions thereof, or data derived therefrom, even if the request does not reference any or all of the multiple data objects. Aspects of the routine900will be described with additional reference toFIG.10, which is a system diagram of illustrative data flows and interactions between various components of the service provider system110. The routine900may begin in response to an event, such as when the routine illustrated inFIG.8reaches block808. The routine900may be automatically performed in response to a request from a requestor (e.g., a request for data stored in the object storage service160), without the request specifying that the routine900is to be performed prior to or during generation of a response to the request. The routine900may be a user-defined task, owner-defined function, or the like (referred to herein simply as a “function” for convenience), in the form of task code504that is performed by a VM instance150or other execution environment502generated during the routine illustrated inFIG.8. In some embodiments, the routine900or portions thereof may be implemented on multiple processors, serially or in parallel.
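Returning briefly to the iterative return of output data described at block810, the forward-and-truncate behavior might be sketched as follows (Python assumed; the file path and the send callback are illustrative):

    def forward_output(path: str, send) -> None:
        # Forward any newly written output immediately, then truncate the
        # file so the environment need not store the full output locally.
        with open(path, "r+b") as f:
            chunk = f.read()
            if chunk:
                send(chunk)  # return data without awaiting task completion
                f.truncate(0)

    with open("/tmp/task_output.dat", "wb") as f:
        f.write(b"partial output of the task execution")
    forward_output("/tmp/task_output.dat", lambda c: print("forwarded:", c))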
Although portions of the routine900are described as generating a response to a request for a data object, in some embodiments the output of the routine900may not be provided directly as the response to the request, but rather may be used by down-stream processes in preparing the response to the request. For example, the output of the routine900(also referred to herein as “function output”) may be further processed by another routine in a pipeline, or processed by the object storage service160prior to sending a response to the request. Accordingly, descriptions of generating a response may be interpreted as descriptions of generating function output, and vice versa. At block902, task code504or other functional unit of the VM instance150or other execution environment502can receive parameters associated with a request for a data object.FIG.10illustrates the execution environment502receiving parameters associated with the request at (1). In some embodiments, the parameters may include: reference data comprising a reference to a requested data object; reference data comprising a reference to an output location at which output of the function is to be stored for use by the object storage service160in responding to the request; context data regarding the request; other data; or some combination thereof. For example, the request may be a resource request, such as a GET request, for a particular data object stored in the object storage service160. The reference to the requested data object may be data that can be used by the execution environment502to access the requested data object, such as: a file descriptor; a file handle; a pointer; or some other data representing an address or identifier of the requested data object. The reference to the output location for responding to the request may be data that can be used by the execution environment502to write, store, or otherwise provide function output data, such as: a file descriptor; a file handle; a pointer; or some other data representing an address or identifier of a location for providing output of the function. The context data may include data regarding the context of the request, such as: an identifier of a user, account or other source of the request; an identifier of an access or security profile under which the request is being made; data representing the access or security rights under which the request is to be processed; an identifier of a location associated with the request; an identifier of a language associated with the request; or data representing preferences or tendencies of a source of the request. At block904, task code504or other functional unit of the VM instance150or other execution environment502can determine that a response (or function output, if the current instance of the routine900is part of a pipeline) is to be generated using one or more additional data objects stored in the object storage service160. In some embodiments, the determination may be based on context data and/or the requested data object. For example, data objects in a particular collection may be required to be concatenated with or otherwise combined with one or more additional data objects when requested. If the requested data object is in the particular collection, then one or more additional data objects may be combined with the requested data object to produce function output.
As another example, configuration data such as a record stored in the object storage service160or some other data store may identify the additional data object(s) to be combined with the requested data object. The execution environment502may access the record during the routine900to determine whether to perform a combination and which additional data object(s) to combine with the requested data object. In this way, the identity of the additional data object(s) can easily be changed without requiring programming changes to the task code504executed by the execution environment502. As a further example, the execution environment502may test one or more items of context data against one or more criteria to determine whether to perform a combination and which additional data object(s) to combine with the requested data object. If an item of context data satisfies one or more criteria (e.g., a source or language of the request has a particular identity, a location associated with the request is in a particular region, etc.), then the execution environment502can determine that the requested data object is to be combined with one or more additional data objects, and also determine the identity of the additional object(s). In one specific, non-limiting embodiment, the requested data object may be a media file, such as a video file, audio file, or the like. The media file may belong to a collection of media files, such as a bucket owned or managed by an entity. The entity may specify that an additional media file, such as an introduction, preview, or advertisement, may be required to be combined with, or otherwise included in a response with, each media file in the collection. The execution environment502may determine that the requested data object is a media file in the collection and, based on this property of the media file, the execution environment502may determine that the additional media file is to be included in the response. The identity of the additional media file (or files) may be specified by the code used to perform the determination, or it may be determined dynamically at run time (e.g., by accessing configuration data in a data store). In another specific, non-limiting embodiment, the requested data object may be a data file, such as a spreadsheet, delimited file, or other collection of data records. The data records may form a subset of the data records that are to be returned in response to a request for the data object. The execution environment502may determine that a response to the request is to be generated using one or more additional data objects, such as additional data files comprising additional subsets of data records. The specific additional data object(s) may be dynamically determined based on context associated with the request, a property of the requested data object (e.g., the bucket in which the requested data object is stored), etc. For example, a subset of regional data records from one or more additional data objects may be identified based on a location associated with the request, and may be combined with the requested data object when responding to the request. The example combinations discussed herein may be performed even in cases where the request for the requested data object (e.g., the GET resource request) references the requested data object and does not reference the additional data object(s). At block906, task code504or other functional unit of the VM instance150or other execution environment502can obtain a reference to the additional data object(s). 
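The run-time determination at block904might be sketched as follows (Python assumed; the configuration record and context criteria are illustrative assumptions):

    # Hypothetical configuration record identifying additional data objects
    # to combine with requested objects, keyed by collection.
    CONFIG = {
        "videos": ["intro.mp4"],  # prepend an introduction to every video
    }

    REGIONAL_RECORDS = {
        "eu": "eu-records.csv",
        "us": "us-records.csv",
    }

    def additional_objects(collection: str, context: dict) -> list:
        extras = list(CONFIG.get(collection, []))
        # Context criterion: include regional records when the location
        # associated with the request matches a known region.
        if collection == "records" and context.get("region") in REGIONAL_RECORDS:
            extras.append(REGIONAL_RECORDS[context["region"]])
        return extras

    print(additional_objects("videos", {}))                 # ['intro.mp4']
    print(additional_objects("records", {"region": "eu"}))  # ['eu-records.csv']

Because the record is consulted at run time, the identity of the additional object(s) can be changed without programming changes to the task code, as noted above.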
In some embodiments, the execution environment502may request, receive, or otherwise have access to a mechanism by which the execution environment502can communicate with the object storage service160to request data dynamically determined during execution of the routine900(e.g., after the execution environment502has been staged and provided with access to the requested data object). For example, the execution environment502may receive a reference to a network socket (e.g., a control plane handle) which the execution environment502can use to make additional requests to the object storage service160. Using this mechanism, the execution environment502can request and receive a reference (e.g., file handle, pointer, etc.) for the additional data object(s).FIG.10illustrates the execution environment502obtaining the reference(s) to the additional data object(s) at (2). At block908, task code504or other functional unit of the VM instance150or other execution environment502can obtain an initial data object to be used in responding to the request. The initial data object is “initial” in the sense that it is obtained and/or used prior to one or more subsequent data objects. The initial data object may be the requested data object or an additional data object, depending upon how the response is to be structured. For example, if an additional data object such as an introduction or preview is to be provided before the requested data object, the execution environment502can use the reference to the additional data object to access the additional data object. As another example, if an additional data object is to be inserted into or provided after the requested data object, the execution environment502can use the reference to the requested data object to access the requested data object.FIG.10illustrates the execution environment502obtaining the initial data object at (3). In some embodiments, the initial data object may not be obtained from the object storage service160at block908, but may be provided to the execution environment502previously. For example, during staging of the execution environment, the initial data object (e.g., the requested data object) may be obtained and stored on a computing device of the execution environment502. As another example, when reference data for the additional data object is obtained, the additional data object may be obtained and stored on a computing device of the execution environment502at a location indicated by the reference data. A brief sketch of blocks906and908follows.
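A minimal Python sketch of blocks906and908is shown below; request_reference stands in for whatever control-plane mechanism the service actually exposes, and the prepend flag is an assumption used to model whether the additional object precedes the requested object.

    # Illustrative sketch only: obtain references for additional objects
    # (block906) and choose which object begins the response (block908).

    def stage_objects(request_reference, input_ref, additional_keys, prepend=True):
        # Block906: ask the service for a reference (e.g., a file handle)
        # for each dynamically determined additional data object.
        additional_refs = [request_reference(key) for key in additional_keys]
        # Block908: if an introduction or preview precedes the requested
        # object, an additional object is the "initial" data object.
        ordered_refs = (
            additional_refs + [input_ref]
            if prepend
            else [input_ref] + additional_refs
        )
        return ordered_refs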
At block910, task code504or other functional unit of the VM instance150or other execution environment502can prepare the initial part of the response using the initial data object obtained above.FIG.10illustrates the execution environment502preparing the initial part of the response at (4). Preparing the initial part of the response may involve writing at least a portion of the initial data object using the reference to the output location for responding to the request. For example, execution environment502can determine whether to include the entire initial data object, or a portion thereof, in the response. As another example, the execution environment502may determine whether to modify the initial data object or a portion thereof, such as by removing data, adding data, altering data, changing the format of the initial data object, changing metadata associated with the data object, or the like. Illustratively, the execution environment502may add or modify a header for the initial data object, adjust the formatting of the initial data object to be compatible with subsequent data objects, etc. These determinations may be based on parameters received above (e.g., a property of the requested data object, context data, etc.). The execution environment502can then place the determined data at the output location. In some embodiments, the execution environment502may first store the initial data in a temporary internal storage location for later placement in the output location. At decision block912, task code504or other functional unit of the VM instance150or other execution environment502can determine whether there is additional data to be included in the response. As discussed above, the response may be based on the requested data object and one or more additional data objects. The execution environment502can determine whether all data has been included. If not, the routine900may proceed to block914. Otherwise, if all data to be included in the response has been included, the routine900may proceed to block918.FIG.10illustrates the execution environment502determining that additional data is to be included in the response at (5). At block914, task code504or other functional unit of the VM instance150or other execution environment502can obtain a subsequent data object to be used in responding to the request. As with the initial data object discussed above, the subsequent data object may be the requested data object or an additional data object, depending upon how the response is to be structured. The subsequent data object is “subsequent” in the sense that it is obtained or used after the initial data object. For example, if an additional data object such as an introduction or preview was accessed and included in the initial part of the response as discussed above, the execution environment502can use the reference to the requested data object to access the requested data object for inclusion in a subsequent part of the response.FIG.10illustrates the execution environment502obtaining the subsequent data object at (6). In some embodiments, the subsequent data object may not be obtained from the object storage service160at block914, but may be provided to the execution environment502previously. For example, during staging of the execution environment, the subsequent data object (e.g., the requested data object) may be obtained and stored on a computing device of the execution environment502. As another example, when reference data for the additional data object is obtained, the additional data object may be obtained and stored on a computing device of the execution environment502at a location indicated by the reference data. At block916, task code504or other functional unit of the VM instance150or other execution environment502can prepare the subsequent part of the response using the subsequent data object obtained above.FIG.10illustrates the execution environment502preparing the subsequent part of the response at (7). Preparing the subsequent part of the response may involve writing at least a portion of the subsequent data object using the reference to the output location for responding to the request. For example, execution environment502can determine whether to include the entire subsequent data object, or a portion thereof, in the response.
As another example, the execution environment502may determine whether to modify the subsequent data object or a portion thereof, such as by removing data, adding data, altering data, changing the format of the subsequent data object, changing metadata associated with the data object, or the like. Illustratively, the execution environment502may remove a header from subsequent data objects, adjust the formatting of subsequent data objects to be compatible with the initial data object, etc. These determinations may be based on parameters received above (e.g., a property of the requested data object, context data, etc.). The execution environment502can then place the determined data at the output location. In some embodiments, the execution environment502may first store the subsequent data in a temporary internal storage location for later placement in the output location. The routine900may return to decision block912to determine whether additional data is to be included in the response. At block918, task code504or other functional unit of the VM instance150or other execution environment502can finalize the output of the function. Finalizing output of the function may include closing an output stream or file identified by the reference to the output location and/or providing a return value (e.g., indicating success, failure, or some other characteristic of function execution) to the object storage service160. In some embodiments, additional processing may be performed prior to closing the output stream. For example, the execution environment502may generate and write metadata describing properties of the output, such as the size of the output or header information for use by a device consuming the output. The routine may terminate at block920. In some embodiments, output of the function may be cached so that the function does not need to retrieve and process the requested data object(s) and/or additional data object(s) each time the objects are to be used. Instead, the function may determine whether the function output has been cached and, if so, whether the cached output has expired. If the cached output has not expired, the function may obtain the cached function output and provide it as the output of the function, or derive current function output from the cached function output. The function output may be cached locally within the execution environment (e.g., on the server machine on which the task code504or other functional unit of the VM instance150is running), or in a network-accessible data store (e.g., a high-speed dedicated cache server, a cache portion of the object storage service160, etc.). In some embodiments, cached function output may be tagged or otherwise associated with the context data that was used to determine which data objects to combine to produce the output. In this way, the function may analyze the associated context data to determine which cached output, if any, is appropriate for use in responding to a subsequent request based on the context data associated with the subsequent request. In some embodiments, data objects provided as input to the function or otherwise accessed by the function during execution may be cached so that they do not need to be obtained from the object storage service160each time the function is executed. An end-to-end sketch of this combination loop follows.
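The sketch below strings blocks908through918together as a simple copy loop; the file-like objects, the header-stripping helper, and the return value are all assumptions made for illustration, not the service's actual behavior.

    # Illustrative sketch only: stream each data object to the output
    # location in order (blocks908-916), then finalize (block918).

    def combine_objects(ordered_files, output_file, strip_headers_after_first=True):
        for index, part in enumerate(ordered_files):
            data = part.read()
            # Blocks910/916: subsequent parts may have their headers
            # removed so that formats remain compatible with the first part.
            if index > 0 and strip_headers_after_first:
                data = remove_header(data)
            output_file.write(data)
        # Block918: close the output stream and report a return value.
        output_file.close()
        return {"status": "success", "parts": len(ordered_files)}

    def remove_header(data, header_length=0):
        # Placeholder: real header handling depends on the data format.
        return data[header_length:]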
In some embodiments, the data object that is requested and provided by the routine900may not be a data object (or portion thereof) stored as such in the object storage service160. Instead, the routine900may dynamically generate a composite object definition, such as a manifest, that references one or more stored data objects or portions thereof, or that includes data derived from one or more stored data objects. For example, the requested data object may be media content that corresponds to a data object in the object storage service160, and an additional data object such as an introduction may be required to be presented prior to the data object. The routine900may generate a manifest that can be used by a computing device to submit follow-up requests for individual data objects in the correct sequence as dynamically determined during execution of the routine900. In this example, the initial data object is the additional data object, and preparation of the initial portion of the response includes referencing the additional data object in the manifest. The subsequent data object is the data object for the requested media content, and preparation of the subsequent portion of the response includes referencing the data object in the manifest. The requested data object, and the output produced by the function, is the manifest, which is dynamically generated using data regarding data objects stored in the object storage service160. Thus, the requested data object may not be a data object that is actually stored in the object storage service160. In some embodiments, the request may include or reference a manifest of data objects (or portions thereof) stored in the object storage service160. Instead of obtaining the referenced data objects or portions and returning them in combined form (either in a single data stream, or as a combination of multiple data streams), the routine900may determine to add and/or remove data objects or portions thereof to and/or from those listed in the manifest. For example, the routine900may use any of the methods described above for determining which additional data object or objects—not specifically requested—are to be included in a response to a request. The routine900may then provide output that is a combination of the dynamically determined set of data objects or portions, either in a single data stream or as a combination of multiple data streams. A minimal sketch of such manifest generation follows.
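A small Python sketch of the manifest embodiment is shown below; the JSON layout is assumed purely for illustration, since a real client or media player would expect a specific manifest format.

    # Illustrative sketch only: emit a manifest that sequences an additional
    # object (e.g., an introduction) before the requested media object.

    import json

    def build_manifest(additional_keys, requested_key):
        entries = [{"object": key} for key in additional_keys]
        entries.append({"object": requested_key})
        # The recipient submits follow-up requests for each entry in order.
        return json.dumps({"sequence": entries}, indent=2)

    # Example: build_manifest(["intro.mp4"], "feature.mp4") produces a
    # two-entry sequence presenting the introduction before the feature.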
FIG.11is a flow diagram of an illustrative routine1100that may be executed by a code execution service, such as the on-demand code execution system120, to dynamically mask, scramble, obscure, or otherwise render unintelligible (collectively referred to herein as “obfuscate” for convenience) portions of a requested data object at run time in response to a request for the data object. Although portions of the routine1100are described as generating a response to a request for a data object, in some embodiments the output of the routine1100may not be provided directly as the response to the request, but rather may be used by down-stream processes in preparing the response to the request. For example, the function output may be further processed by another routine in a pipeline, or processed by the object storage service160prior to sending a response to the request. Accordingly, descriptions of generating a response may be interpreted as descriptions of generating function output, and vice versa. Aspects of the routine1100will be described with reference toFIG.12, which is a system diagram of illustrative data flows and interactions between various components of the service provider system110. The routine1100may begin in response to an event, such as when the routine illustrated inFIG.8reaches block808. The routine1100may be automatically performed in response to a request from a requestor (e.g., a request for data stored in the object storage service160), without the request specifying that the routine1100is to be performed prior to or during generation of a response to the request. For example, the routine1100may be an owner-defined function, also referred to as a user-defined task, that is performed by a VM instance150or other execution environment502generated during the routine illustrated inFIG.8. In some embodiments, the routine1100or portions thereof may be implemented on multiple processors, serially or in parallel. At block1102, task code504or other functional unit of the VM instance150or other execution environment502can receive parameters associated with a request for a data object.FIG.12illustrates the execution environment502receiving the parameters associated with the request at (1). In some embodiments, the parameters may include: reference data comprising a reference to a requested data object; a reference to an output location at which output of the function is to be stored for use by the object storage service160in responding to the request; context data regarding the request; other data; or some combination thereof. For example, the request may be a resource request, such as a GET or SELECT request, for a particular dataset or other data object stored in the object storage service160. The reference to the requested data object may be data that can be used by the execution environment502to access the requested data object, such as: a file descriptor; a file handle; a pointer; or some other data representing an address or identifier of the requested data object. The reference to the output location for responding to the request may be data that can be used by the execution environment502to write, store, or otherwise provide output data, such as: a file descriptor; a file handle; a pointer; or some other data representing an address or identifier of a location for providing output of the function. The context data may include data regarding the context of the request, such as: an identifier of a user, account or other source of the request; an identifier of an access or security profile under which the request is being made; data representing the access or security rights under which the request is to be processed; an identifier of a location associated with the request; an identifier of a language associated with the request; or data representing preferences or tendencies of a source of the request. At block1104, task code504or other functional unit of the VM instance150or other execution environment502can obtain the requested data object using the reference data. The requested data object may be obtained in un-obfuscated or substantially un-obfuscated form.FIG.12illustrates the execution environment502obtaining the requested data object at (2). In some embodiments, the requested data object may not be obtained from the object storage service160at block1104, but may be provided to the execution environment502previously. For example, during staging of the execution environment, the requested data object may be obtained and stored on a computing device of the execution environment502at a location indicated by the reference data.
At block1106, task code504or other functional unit of the VM instance150or other execution environment502can determine that one or more portions of the requested data object are to be obfuscated. In some embodiments, the determination may be based on context data and/or the requested data object. The execution environment502may test one or more items of context data against one or more criteria to determine whether to perform an obfuscation and which portion(s) of the requested data object to obfuscate. If an item of context data satisfies one or more criteria, then the execution environment502can determine that one or more portions of the requested data object are to be obfuscated such that a recipient of the response to the request is unable to understand the obfuscated portion(s). A different request for the same data object, but associated with different context data or other properties, may lead to a different result when testing the criteria and determining whether to obfuscate portions of the data object. In some embodiments, different portions of a requested data object may be associated with different criteria for un-obfuscated access. In such cases, the execution environment502may test the criteria for each of the associated portions.FIG.12illustrates the execution environment502determining to obfuscate portions of the requested data object at (3). Testing the context data against the criteria may include: determining that a source of the request is prohibited from accessing the portion in un-obfuscated form, determining that a location associated with the request is prohibited from accessing the portion in un-obfuscated form, or determining that an access right or security profile associated with the request is prohibited from accessing the portion in un-obfuscated form. In some embodiments, the testing of context data against the criteria may be performed to determine that a portion of the requested data object is permitted to be accessed in un-obfuscated form, rather than determining that the portion is prohibited from being accessed in un-obfuscated form. For example, testing the context data against the criteria may include: determining that a source of the request is permitted to access the portion in un-obfuscated form, determining that a location associated with the request is permitted to access the portion in un-obfuscated form, or determining that an access right or security profile associated with the request is permitted to access the portion in un-obfuscated form. In one specific, non-limiting embodiment, the requested data object may be a data file, such as a spreadsheet, delimited file, or other collection of data records. Some portions of the data file, such as collections of records, collections of columns or data fields, or the like may only be permitted to be accessed in un-obfuscated form if the request satisfies one or more criteria. The execution environment502may determine that properties of the request indicated by the context data or otherwise associated with the request fail to satisfy the criteria for particular records, columns, and/or fields of the requested data object. The execution environment502may determine, based on this failure to satisfy the criteria, that the particular records, columns, and/or fields of the requested data object are to be obfuscated prior to being provided as output of the function. A minimal sketch of this criteria testing, together with one obfuscation method of the kind applied at block1108described below, follows.
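The following Python sketch combines the criteria testing of block1106with one possible obfuscation method of block1108; the field names, the criteria table, and the choice of a one-way hash are assumptions made for illustration only.

    # Illustrative sketch only: test context data against per-field access
    # criteria (block1106) and hash any field that fails (block1108).

    import hashlib

    FIELD_CRITERIA = {
        # field name -> access rights permitted to see it un-obfuscated
        "ssn": {"admin"},
        "salary": {"admin", "hr"},
        "name": {"admin", "hr", "viewer"},
    }

    def obfuscate_record(record, context):
        rights = set(context.get("access_rights", []))
        output = {}
        for field, value in record.items():
            allowed = FIELD_CRITERIA.get(field, set())
            if rights & allowed:
                output[field] = value  # criteria satisfied: pass through
            else:
                # One possible method: replace the content with a one-way
                # hash so the recipient cannot understand the portion.
                output[field] = hashlib.sha256(str(value).encode()).hexdigest()
        return output

In keeping with the description above, a different request for the same record, carrying different access rights in its context data, would yield a different mix of clear and obfuscated fields.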
At block1108, task code504or other functional unit of the VM instance150or other execution environment502can selectively apply obfuscation to portions of the requested data object determined above.FIG.12illustrates the execution environment502obfuscating portions of the requested data object at (4). Obfuscating the content of a portion of the requested data object may involve the use of one or more obfuscation methods, such as scrambling the content using a pseudorandom method, generating a hash of the content, replacing the content with a token, or the like. For example, when replacing the content with a token, the task code504may identify a token mapped to the content in a data store such as a key-value database, a relational database, the object storage service160, or another network-accessible data store. In some embodiments, different obfuscation methods may be used for different portions of a data object, different data objects, different context data criteria, or the like. In some embodiments, the obfuscation method may be specified by an entity that owns or is responsible for the data object. For example, an entity may specify that a particular type of obfuscation (e.g., an industry standard obfuscation method in the medical field) is to be used for a data object or bucket of data objects, while another entity may specify that a different type of obfuscation (e.g., tokenization using a mapping of tokens to data) is to be used for a different data object or bucket of data objects. If no obfuscation method is specified, the execution environment502may apply a default obfuscation method. At block1110, task code504or other functional unit of the VM instance150or other execution environment502can provide the selectively-obfuscated requested data object as output of the function. For example, the execution environment502can place the selectively-obfuscated requested data object at the output location indicated by the reference data, and finalize the output. Finalizing output of the function may include closing the output stream or file identified by the reference to the output location and/or providing a return value (e.g., indicating success, failure, or some other characteristics of function execution) to the object storage service160.FIG.12illustrates the execution environment502providing the selectively-obfuscated requested data object as output at (5). The routine1100may terminate at block1112. In some embodiments, output of the function may be cached so that the function does not need to retrieve and process requested data objects to generate selectively-obfuscated data objects each time the objects are to be used. Instead, the function may determine whether the function output has been cached and, if so, whether the cached output has expired. If the cached output has not expired, the function may obtain the cached function output and provide it as the output of the function, or derive current function output from the cached function output. The function output may be cached locally within the execution environment (e.g., on the server machine on which the task code504or other functional unit of the VM instance150is running), or in a network-accessible data store (e.g., a high-speed dedicated cache server, a cache portion of the object storage service160, etc.). In some embodiments, cached function output may be tagged or otherwise associated with the context data that was used to determine which portions of the requested data object to selectively obfuscate.
In this way, the function may analyze the associated context data to determine which cached output, if any, is appropriate for use in responding to a subsequent request based on the context data associated with the subsequent request. In some embodiments, data objects provided as input to the function or otherwise accessed by the function during execution may be cached so that they do not need to be obtained from the object storage service160each time the function is executed. FIG.13is a flow diagram of an illustrative routine1300that may be executed by a code execution service, such as the on-demand code execution system120, to dynamically determine at run time a filtered subset of a requested data object to provide in response to a request for the data object. Although portions of the routine1300are described as generating a response to a request for a data object, in some embodiments the output of the routine1300may not be provided directly as the response to the request, but rather may be used by down-stream processes in preparing the response to the request. For example, the function output may be further processed by another routine in a pipeline, or processed by the object storage service160prior to sending a response to the request. Accordingly, descriptions of generating a response may be interpreted as descriptions of generating function output, and vice versa. Aspects of the routine1300will be described with reference toFIG.14, which is a system diagram of illustrative data flows and interactions between various components of the service provider system110. The routine1300may begin in response to an event, such as when the routine illustrated inFIG.8reaches block808. The routine1300may be automatically performed in response to a request from a requestor (e.g., a request for data stored in the object storage service160), without the request specifying that the routine1300is to be performed prior to or during generation of a response to the request. For example, the routine1300may be an owner-defined function, also referred to as a user-defined task, that is performed by a VM instance150or other execution environment502generated during the routine illustrated inFIG.8. In some embodiments, the routine1300or portions thereof may be implemented on multiple processors, serially or in parallel. At block1302, task code504or other functional unit of the VM instance150or other execution environment502can receive parameters associated with a request for a data object.FIG.14illustrates the execution environment502receiving the parameters associated with the request at (1). In some embodiments, the parameters may include: reference data comprising a reference to a requested data object; a reference to an output location at which output of the function is to be stored for use by the object storage service160in responding to the request; context data regarding the request; other data; or some combination thereof. For example, the request may be a resource request, such as a GET request, for a particular data object stored in the object storage service160. The reference to the requested data object may be data that can be used by the execution environment502to access the requested data object, such as: a file descriptor; a file handle; a pointer; or some other data representing an address or identifier of the requested data object. 
The reference to the output location for responding to the request may be data that can be used by the execution environment502to write, store, or otherwise provide output data, such as: a file descriptor; a file handle; a pointer; or some other data representing an address or identifier of a location for providing output of the function. The context data may include data regarding the context of the request, such as: an identifier of a user, account or other source of the request; an identifier of an access or security profile under which the request is being made; data representing the access or security rights under which the request is to be processed; an identifier of a location associated with the request; an identifier of a language associated with the request; or data representing preferences or tendencies of a source of the request. At block1304, task code504or other functional unit of the VM instance150or other execution environment502can obtain the requested data object using the reference data.FIG.14illustrates the execution environment502obtaining the requested data object at (2). In some embodiments, the requested data object may not be obtained from the object storage service160at block1304, but may be provided to the execution environment502previously. For example, during staging of the execution environment, the requested data object may be obtained and stored on a computing device of the execution environment502at a location indicated by the reference data. At block1306, task code504or other functional unit of the VM instance150or other execution environment502can determine that one or more portions of the requested data object are to be excluded from the output of the function and thus not provided to a requesting device in response to the request. In some embodiments, the determination may be based on context data and/or the requested data object. For example, the execution environment502may test one or more items of context data against one or more criteria to determine whether to exclude a portion or portions of the requested data object, and to determine which portion(s) of the requested data object to exclude. If an item of context data satisfies one or more criteria, then the execution environment502can determine that one or more portions of the requested data object are to be excluded from output of the function. A different request for the same data object, but associated with different context data or other properties, may lead to a different result when testing the criteria and determining whether to exclude portions of the data object. In some embodiments, different portions of a requested data object may be associated with different criteria for exclusion. In such cases, the execution environment502may test the criteria for each of the associated portions. Testing the context data against the criteria may include: determining that a source of the request is prohibited from accessing the portion, determining that a location associated with the request is prohibited from accessing the portion, or determining that an access right or security profile associated with the request is prohibited from accessing the portion. In some embodiments, the testing of context data against the criteria may be performed to determine that a portion of the requested data object is permitted to be accessed, rather than determining that the portion is prohibited from being accessed.
For example, testing the context data against the criteria may include: determining that a source of the request is permitted to access the portion, determining that a location associated with the request is permitted to access the portion, or determining that an access right or security profile associated with the request is permitted to access the portion. FIG.14illustrates the execution environment502determining to exclude portions of the requested data object at (3). In some embodiments, as shown, there may be multiple request sources1402and1404. Requests from these request sources1402and1404may be handled differently by the execution environment502such that outputs of the function, and the responses ultimately returned to the respective request sources1402and1404, may be different even if the same data object is requested by both request sources1402and1404. The difference in the way the requests are handled may be based on different users using the different request sources1402and1404, the different request sources1402and1404being in different geographic regions, or the different access permissions assigned to the request sources1402and1404themselves. For example, an owner of a bucket of data objects stored on the object storage service160may configure multiple distinct request sources or “portals” (e.g., servers providing interfaces to the object storage service160) for accessing the data objects in the bucket. The owner may then assign different access permissions to the different portals. Thereafter, the owner may direct users to use different portals depending upon the access permissions desired for the users. In one specific, non-limiting embodiment, the requested data object may be a data file, such as a spreadsheet, delimited file, tabular data file, structured data file, or other collection of data records. Some portions of the data file, such as subsets of records, subsets of columns, subsets of data fields or classes (e.g., those storing personally identifiable information or “PII”) and the like may only be permitted to be accessed if the request satisfies one or more criteria. For example, portions may only be accessed if the request is associated with certain access rights. As another example, portions may only be accessed if a source of the request is associated with a particular location or region. As a further example, portions may only be accessed if the request is received from a particular source or subset of sources (e.g., portals, endpoints, etc.). The execution environment502may determine that properties of the request indicated by the context data or otherwise associated with the request satisfy criteria for particular portions of the requested data object to be excluded from the response (or, alternatively, fail to satisfy the criteria for particular portions of the requested data object to be included in the response). The execution environment502may determine, based on this test with respect to one or more criteria, that the particular portions of the requested data object are to be excluded from output of the function. In some embodiments, different portions of a requested data object may be associated with different access criteria. In such cases, the execution environment502may test the criteria for each of the associated portions. A minimal sketch of such per-portal filtering follows.
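The sketch below models the per-portal determination just described for a tabular data object; the portal names and their column grants are assumptions made for illustration.

    # Illustrative sketch only: each request source ("portal") is granted a
    # set of columns; columns outside the grant are excluded (block1306).

    PORTAL_COLUMNS = {
        "internal-portal": {"id", "name", "email", "ssn"},
        "partner-portal": {"id", "name"},  # PII columns excluded
    }

    def filter_columns(rows, context):
        allowed = PORTAL_COLUMNS.get(context.get("portal"), set())
        # Two portals requesting the same data object receive different
        # projections, mirroring the per-portal permissions above.
        return [
            {key: value for key, value in row.items() if key in allowed}
            for row in rows
        ]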
In another specific, non-limiting embodiment, the requested data object may have metadata, such as data representing an author, editor, creation date, modification date, size, format, location, version, image capture or encoding properties, audio capture or encoding properties, video capture or encoding properties, camera properties, hardware capabilities, software capabilities, and the like. The metadata may be embedded within the data object (e.g., in a header or reserved portion of the data object), or externally associated with the data object (e.g., in a directory). Some portions of the metadata, such as individual items of metadata, predefined groupings thereof, or dynamically determined groupings thereof, may only be permitted to be accessed if the request satisfies one or more criteria (or, alternatively, may be prohibited from being accessed if the request satisfies one or more criteria). For example, portions may only be accessed if the request is associated with certain access rights. As another example, portions may only be accessed if a source of the request is associated with a particular location or region. As a further example, portions may only be accessed if the request is received from a particular source or subset of sources (e.g., portals, endpoints, etc.). The execution environment502may determine that properties of the request indicated by the context data or otherwise associated with the request fail to satisfy the criteria for particular portions of the requested data object. The execution environment502may determine, based on this failure to satisfy the criteria, that the particular portions of the metadata of the requested data object are to be excluded from output of the function such that they are not accessible in the response to the request or by other downstream functions in a pipeline. In some embodiments, different portions of metadata for a requested data object may be associated with different access criteria. In such cases, the execution environment502may test the criteria for each of the associated portions. In a further specific, non-limiting embodiment, the requested data objects that may be processed using this function are not limited to data objects stored as such on the object storage service160. Alternatively, or in addition, a requested data object may be a dynamically-generated data object, such as a data object comprising data regarding other data objects stored on the object storage service160. For example, a resource request such as the LIST request is not a request for a pre-existing data object stored on the object storage service160, but rather a request for information regarding data objects stored on the object storage service160, such as a list of data objects in a particular data object group (e.g., a bucket or directory) of the object storage service160, information regarding the data object groups of the object storage service160, information regarding data objects used to represent users or groups of users of the object storage service160, etc. The requested information may be identifiers, summaries, directory information, metadata, or the like. It may be desirable to limit the data objects identified in response to the LIST function, such as by limiting LIST to only those data objects that satisfy one or more criteria (or, alternatively, by excluding from LIST data regarding those objects that satisfy one or more criteria). For example, some data objects may only be identified if the request is associated with certain access rights.
As another example, some data objects may only be identified if a source of the request is associated with a particular location or region. As a further example, some data objects may only be identified if the request is received from a particular source or subset of sources (e.g., portals, endpoints, etc.). The execution environment502may determine that properties of the request indicated by the context data or otherwise associated with the request fail to satisfy the criteria for particular data objects that would otherwise be identified. The execution environment502may determine, based on this failure to satisfy the criteria, that the particular data objects are not to be identified in output of the function such that they are not identified in the response to the request or by other downstream functions in a pipeline. In some embodiments, different data objects may be associated with different access criteria. In such cases, the execution environment502may test the criteria for each of the associated data objects. In another specific, non-limiting embodiment, the requested data object that may be processed using this function may be transformed instead of, or in addition to, having portions of the data object excluded from output of the function. The transformations may include modifications to data, modifications to formatting, application of encryption, etc. For example, the execution environment502may determine, for a resource request such as a GET request for a media file, to modify the media file by applying a watermark, changing the resolution or bitrate, incorporating a copyright notice, and the like. As another example, the execution environment502may apply encryption to the data object. The application of these transformations may be dynamically determined based on criteria associated with context data, criteria associated with the requested data object itself, etc. The execution environment502may determine that properties of the request indicated by the context data or otherwise associated with the request satisfy or fail to satisfy particular criteria. For example, different levels of access rights for the source of the request may cause the execution environment502to apply a watermark, downscale resolution or bitrate, provide an alternate data object with a watermark or different resolution or bitrate, etc. As another example, different levels of encryption available to be decrypted by the source of the request (as indicated by context data) may cause the execution environment502to dynamically select an encryption method based on the encryption that the source is configured to decrypt. At block1308, task code504or other functional unit of the VM instance150or other execution environment502can selectively exclude portions of the requested data object and/or otherwise apply transformations to the requested data object as determined above.FIG.14illustrates the execution environment502selectively excluding portions of the requested data object at (4). Selectively excluding the content of a portion of the requested data object may involve generating an output version of the data object that does not include the portions determined to be excluded.
For example, the execution environment502may read the content of the data object from an input file or stream (e.g., using reference data such as a file descriptor for the requested data object), and write the non-excluded portions to an output file or stream (e.g., using reference data such as a file descriptor for the function output), while not writing the portions to be excluded from the function output. Thus, to a recipient of a data object that has had portions selectively excluded, the data object may appear to have never included those portions. The execution environment502may also apply one or more transformations to ensure that the data object retains a valid format or configuration. For example, excluding certain data from the output of a LIST function, where the excluded data identifies a particular data object, may involve not only excluding the identifying data but also excluding or modifying structural or formatting data (e.g., markup tags, field definitions, etc.) that would otherwise appear in the function output as an empty object or null value. At block1310, task code504or other functional unit of the VM instance150or other execution environment502can provide the selectively-filtered requested data object—from which certain portions have been excluded—as output of the function. For example, the execution environment502can place the requested data object at the output location indicated by the reference data, and finalize the output. Finalizing output of the function may include closing the output stream or file identified by the reference to the output location and/or providing a return value (e.g., indicating success, failure, or some other characteristics of function execution) to the object storage service160.FIG.14illustrates the execution environment502providing the selectively-filtered requested data object as output at (5). The routine may terminate at block1312. In some embodiments, output of the function may be cached so that the function does not need to retrieve and process requested data objects to generate selectively-filtered requested data objects each time the objects are to be used. Instead, the function may determine whether the function output has been cached and, if so, whether the cached output has expired. If the cached output has not expired, the function may obtain the cached function output and provide it as the output of the function, or derive current function output from the cached function output. The function output may be cached locally within the execution environment (e.g., on the server machine on which the task code504or other functional unit of the VM instance150is running), or in a network-accessible data store (e.g., a high-speed dedicated cache server, a cache portion of the object storage service160, etc.). In some embodiments, cached function output may be tagged or otherwise associated with the context data that was used to determine which portions of the requested data object to selectively exclude. In this way, the function may analyze the associated context data to determine which cached output, if any, is appropriate for use in responding to a subsequent request based on the context data associated with the subsequent request. In some embodiments, data objects provided as input to the function or otherwise accessed by the function during execution may be cached so that they do not need to be obtained from the object storage service160each time the function is executed. 
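As a non-limiting illustration of the streaming exclusion described at block1308above, the Python sketch below reads records from the input reference and writes only the surviving records to the output reference; the line-delimited record format and the exclusion predicate are assumptions made for this sketch.

    # Illustrative sketch only: selectively exclude records while streaming
    # from the input reference to the output reference (blocks1308-1310).

    def stream_filter(input_file, output_file, is_excluded):
        for line in input_file:
            record = line.rstrip("\n")
            if is_excluded(record):
                # Write nothing for excluded records, so no empty object or
                # null value remains in the function output.
                continue
            output_file.write(record + "\n")
        # Finalize: close the stream and report a return value.
        output_file.close()
        return {"status": "success"}

To a recipient, records removed this way appear never to have existed in the data object, consistent with the description above.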
In some embodiments, the execution environment502may perform transformations on data stored in the object storage service160in response to a request for a data object. For example, requirements for content, formatting, and/or retention of data objects may change over time, or due dates for such changes may be reached. Rather than actively performing transformations to the data objects to reflect the current content, formatting, and/or retention requirements to the data objects when the requirements change or when the due dates are reached, the data objects may remain in the object storage service160unchanged or substantially unchanged until the next time they are to be accessed. When a subsequent request is received for a data object, the object storage service160and/or execution environment502may determine that a transformation is to be applied, and may apply the transformation prior to responding to the request for the data object. The transformation may be applied even if the request is a request only to receive the data object, and is not a request to modify or delete the data object. This “just-in-time” transformation may be desirable in certain cases to reduce the computational expense of applying the transformations to all data objects immediately upon changes to requirements. For example, if a large amount of data would need to be transformed, or when subsequent requests for affected data objects are expected to be rare, a bucket owner or other entity may prefer to postpone applying the transformations until the affected data objects are accessed. FIG.14illustrates the execution environment502applying a just-in-time transformation at (2A) to a data object stored in the object storage service160in response to receiving a request to receive the data object. Although the just-in-time transformation is shown as occurring in connection with operations of routine1300for selective exclusion of data object portions, just-in-time transformations may be performed in connection with any of the other routines described herein, with any other owner-defined function or user-defined task, in a pipeline with multiple functions, etc. All of the methods and processes described above may be embodied in, and fully automated via, software code modules executed by one or more computers or processors. The code modules may be stored in any type of non-transitory computer-readable medium or other computer storage device. Some or all of the methods may alternatively be embodied in specialized computer hardware. Conditional language such as, among others, “can,” “could,” “might” or “may,” unless specifically stated otherwise, are otherwise understood within the context as used in general to present that certain embodiments include, while other embodiments do not include, certain features, elements or steps. Thus, such conditional language is not generally intended to imply that features, elements or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements or steps are included or are to be performed in any particular embodiment. Disjunctive language such as the phrase “at least one of X, Y or Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y or Z, or any combination thereof (e.g., X, Y or Z). 
Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y or at least one of Z to each be present. Unless otherwise explicitly stated, articles such as ‘a’ or ‘an’ should generally be interpreted to include one or more described items. Accordingly, phrases such as “a device configured to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B and C” can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C. The term “or” should generally be understood to be inclusive, rather than exclusive. Accordingly, a set containing “a, b, or c” should be construed to encompass a set including a combination of a, b, and c. Any routine descriptions, elements or blocks in the flow diagrams described herein or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or elements in the routine. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted, or executed out of order from that shown or discussed, including substantially synchronously or in reverse order, depending on the functionality involved as would be understood by those skilled in the art. It should be emphasized that many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
For simplicity and clarity of illustration, the drawing figures illustrate the general manner of construction, and descriptions and details of well-known features and techniques may be omitted to avoid unnecessarily obscuring the invention. Additionally, elements in the drawing figures are not necessarily drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of embodiments of the present invention. The same reference numerals in different figures denote the same elements. The terms “first,” “second,” “third,” “fourth,” and the like in the description and in the claims, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms “include,” and “have,” and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, device, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, system, article, device, or apparatus. The terms “left,” “right,” “front,” “back,” “top,” “bottom,” “over,” “under,” and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in other orientations than those illustrated or otherwise described herein. The terms “couple,” “coupled,” “couples,” “coupling,” and the like should be broadly understood and refer to connecting two or more elements or signals, electrically, mechanically and/or otherwise. Two or more electrical elements may be electrically coupled together, but not be mechanically or otherwise coupled together; two or more mechanical elements may be mechanically coupled together, but not be electrically or otherwise coupled together; two or more electrical elements may be mechanically coupled together, but not be electrically or otherwise coupled together. Coupling may be for any length of time, e.g., permanent or semi-permanent or only for an instant. “Electrical coupling” and the like should be broadly understood and include coupling involving any electrical signal, whether a power signal, a data signal, and/or other types or combinations of electrical signals. “Mechanical coupling” and the like should be broadly understood and include mechanical coupling of all types. The absence of the word “removably,” “removable,” and the like near the word “coupled,” and the like does not mean that the coupling, etc. in question is or is not removable. As defined herein, “approximately” can, in some embodiments, mean within plus or minus ten percent of the stated value. In other embodiments, “approximately” can mean within plus or minus five percent of the stated value. In further embodiments, “approximately” can mean within plus or minus three percent of the stated value. 
In yet other embodiments, “approximately” can mean within plus or minus one percent of the stated value. DETAILED DESCRIPTION OF EXAMPLES OF EMBODIMENTS Turning to the drawings,FIG.1illustrates an exemplary embodiment of a computer system100, all of which or a portion of which can be suitable for (i) implementing part or all of one or more embodiments of the techniques, methods, and systems and/or (ii) implementing and/or operating part or all of one or more embodiments of the memory storage devices described herein. For example, in some embodiments, all or a portion of computer system100can be suitable for implementing part or all of one or more embodiments of the techniques, methods, and/or systems described herein. Furthermore, one or more elements of computer system100(e.g., a refreshing monitor106, a keyboard104, and/or a mouse110, etc.) also can be appropriate for implementing part or all of one or more embodiments of the techniques, methods, and/or systems described herein. In many embodiments, computer system100can comprise chassis102containing one or more circuit boards (not shown), a Universal Serial Bus (USB) port112, a hard drive114, and an optical disc drive116. Meanwhile, for example, optical disc drive116can comprise a Compact Disc Read-Only Memory (CD-ROM) drive, a Digital Video Disc (DVD) drive, or a Blu-ray drive. Still, in other embodiments, a different or separate one of a chassis102(and its internal components) can be suitable for implementing part or all of one or more embodiments of the techniques, methods, and/or systems described herein. Turning ahead in the drawings,FIG.2illustrates a representative block diagram of exemplary elements included on the circuit boards inside chassis102(FIG.1). For example, a central processing unit (CPU)210is coupled to a system bus214. In various embodiments, the architecture of CPU210can be compliant with any of a variety of commercially distributed architecture families. In many embodiments, system bus214also is coupled to a memory storage unit208, where memory storage unit208can comprise (i) non-volatile memory, such as, for example, read only memory (ROM) and/or (ii) volatile memory, such as, for example, random access memory (RAM). The non-volatile memory can be removable and/or non-removable non-volatile memory. Meanwhile, RAM can include dynamic RAM (DRAM), static RAM (SRAM), etc. Further, ROM can include mask-programmed ROM, programmable ROM (PROM), one-time programmable ROM (OTP), erasable programmable read-only memory (EPROM), electrically erasable programmable ROM (EEPROM) (e.g., electrically alterable ROM (EAROM) and/or flash memory), etc. In these or other embodiments, memory storage unit208can comprise (i) non-transitory memory and/or (ii) transitory memory. The memory storage device(s) of the various embodiments disclosed herein can comprise memory storage unit208, an external memory storage drive (not shown), such as, for example, a USB-equipped electronic memory storage drive coupled to universal serial bus (USB) port112(FIGS.1&2), hard drive114(FIGS.1&2), optical disc drive116(FIGS.1&2), a floppy disk drive (not shown), etc. As used herein, non-volatile and/or non-transitory memory storage device(s) refer to the portions of the memory storage device(s) that are non-volatile and/or non-transitory memory.
In various examples, portions of the memory storage device(s) of the various embodiments disclosed herein (e.g., portions of the non-volatile memory storage device(s)) can be encoded with a boot code sequence suitable for restoring computer system100(FIG.1) to a functional state after a system reset. In addition, portions of the memory storage device(s) of the various embodiments disclosed herein (e.g., portions of the non-volatile memory storage device(s)) can comprise microcode such as a Basic Input-Output System (BIOS) or Unified Extensible Firmware Interface (UEFI) operable with computer system100(FIG.1). In the same or different examples, portions of the memory storage device(s) of the various embodiments disclosed herein (e.g., portions of the non-volatile memory storage device(s)) can comprise an operating system, which can be a software program that manages the hardware and software resources of a computer and/or a computer network. Meanwhile, the operating system can perform basic tasks such as, for example, controlling and allocating memory, prioritizing the processing of instructions, controlling input and output devices, facilitating networking, and managing files. Exemplary operating systems can comprise (i) Microsoft® Windows® operating system (OS) by Microsoft Corp. of Redmond, Washington, United States of America, (ii) Mac® OS by Apple Inc. of Cupertino, California, United States of America, (iii) UNIX® OS, and (iv) Linux® OS. Further exemplary operating systems can comprise (i) iOS™ by Apple Inc. of Cupertino, California, United States of America, (ii) the Blackberry® OS by Research In Motion (RIM) of Waterloo, Ontario, Canada, (iii) the Android™ OS developed by the Open Handset Alliance, or (iv) the Windows Mobile™ OS by Microsoft Corp. of Redmond, Washington, United States of America. Further, as used herein, the term “computer network” can refer to a collection of computers and devices interconnected by communications channels that facilitate communications among users and allow users to share resources (e.g., an internet connection, an Ethernet connection, etc.). The computers and devices can be interconnected according to any conventional network topology (e.g., bus, star, tree, linear, ring, mesh, etc.). As used herein, the term “processor” means any type of computational circuit, such as but not limited to a microprocessor, a microcontroller, a controller, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a graphics processor, a digital signal processor, or any other type of processor or processing circuit capable of performing the desired functions. In some examples, the one or more processors of the various embodiments disclosed herein can comprise CPU210. In the depicted embodiment ofFIG.2, various I/O devices such as a disk controller204, a graphics adapter224, a video controller202, a keyboard adapter226, a mouse adapter206, a network adapter220, and other I/O devices222can be coupled to system bus214. Keyboard adapter226and mouse adapter206are coupled to keyboard104(FIGS.1&2) and mouse110(FIGS.1&2), respectively, of computer system100(FIG.1). While graphics adapter224and video controller202are indicated as distinct units inFIG.2, video controller202can be integrated into graphics adapter224, or vice versa in other embodiments. 
Video controller202is suitable for refreshing monitor106(FIGS.1&2) to display images on a screen108(FIG.1) of computer system100(FIG.1). Disk controller204can control hard drive114(FIGS.1&2), USB port112(FIGS.1&2), and CD-ROM drive116(FIGS.1&2). In other embodiments, distinct units can be used to control each of these devices separately. Network adapter220can be suitable to connect computer system100(FIG.1) to a computer network by wired communication (e.g., a wired network adapter) and/or wireless communication (e.g., a wireless network adapter). In some embodiments, network adapter220can be plugged or coupled to an expansion port (not shown) in computer system100(FIG.1). In other embodiments, network adapter220can be built into computer system100(FIG.1). For example, network adapter220can be built into computer system100(FIG.1) by being integrated into the motherboard chipset (not shown), or implemented via one or more dedicated communication chips (not shown), connected through a PCI (peripheral component interconnect) bus or a PCI express bus of computer system100(FIG.1) or USB port112(FIG.1). Returning now toFIG.1, although many other components of computer system100are not shown, such components and their interconnection are well known to those of ordinary skill in the art. Accordingly, further details concerning the construction and composition of computer system100and the circuit boards inside chassis102are not discussed herein. Meanwhile, when computer system100is running, program instructions (e.g., computer instructions) stored on one or more of the memory storage device(s) of the various embodiments disclosed herein can be executed by CPU210(FIG.2). At least a portion of the program instructions, stored on these devices, can be suitable for carrying out at least part of the techniques, methods, and activities of the methods described herein. In various embodiments, computer system100can be reprogrammed with one or more systems, applications, and/or databases to convert computer system100from a general purpose computer to a special purpose computer. Further, although computer system100is illustrated as a desktop computer inFIG.1, in many examples, system100can have a different form factor while still having functional elements similar to those described for computer system100. In some embodiments, computer system100may comprise a single computer, a single server, or a cluster or collection of computers or servers, or a cloud of computers or servers. Typically, a cluster or collection of servers can be used when the demand on computer system100exceeds the reasonable capability of a single server or computer. In certain embodiments, computer system100may comprise a mobile device. In certain additional embodiments, computer system100may comprise an embedded system. As used herein, the term “mobile device” can refer to a portable electronic device (e.g., an electronic device easily conveyable by hand by a person of average size) with the capability to present audio and/or visual data (e.g., text, images, videos, music, etc.).
For example, a mobile device can comprise at least one of a digital media player, a cellular telephone (e.g., a smartphone), a personal digital assistant, a handheld digital computer device (e.g., a tablet personal computer device), a laptop computer device (e.g., a notebook computer device, a netbook computer device), a wearable computer device, or another portable computer device with the capability to present audio and/or visual data (e.g., text, images, videos, music, etc.). In many examples, a mobile device can comprise a volume and/or weight sufficiently small as to permit the mobile device to be easily conveyable by hand. For example, in some embodiments, a mobile device can occupy a volume of less than or equal to approximately 189 cubic centimeters, 244 cubic centimeters, 1790 cubic centimeters, 2434 cubic centimeters, 2876 cubic centimeters, 4016 cubic centimeters, and/or 5752 cubic centimeters. Further, in these embodiments, a mobile device can weigh less than or equal to 3.24 Newtons, 4.35 Newtons, 15.6 Newtons, 17.8 Newtons, 22.3 Newtons, 31.2 Newtons, and/or 44.5 Newtons. Exemplary mobile devices can comprise, but are not limited to, one of the following: (i) an iPod®, iPhone®, iPod Touch®, iPad®, MacBook® or similar product by Apple Inc. of Cupertino, California, United States of America, (ii) a Blackberry® or similar product by Research in Motion (RIM) of Waterloo, Ontario, Canada, (iii) a Lumia®, Surface Pro™, or similar product by the Microsoft Corporation of Redmond, Washington, United States of America, and/or (iv) a Galaxy™, Galaxy Tab™, Note™, or similar product by the Samsung Group of Samsung Town, Seoul, South Korea. Further, in the same or different embodiments, a mobile device can comprise an electronic device configured to implement one or more of (i) iOS™ by Apple Inc. of Cupertino, California, United States of America, (ii) Blackberry® OS by Research In Motion (RIM) of Waterloo, Ontario, Canada, (iii) Android™ OS developed by the Open Handset Alliance, or (iv) Windows Mobile™ OS by Microsoft Corp. of Redmond, Washington, United States of America. Skipping ahead now in the drawings,FIG.3illustrates a representative block diagram of a system300, according to an embodiment. In many embodiments, system300can comprise a computer system. In some embodiments, system300can be implemented to perform part or all of one or more methods (e.g., method700(FIG.7)). System300is merely exemplary and embodiments of the system are not limited to the embodiments presented herein. System300can be employed in many different embodiments or examples not specifically depicted or described herein. In some embodiments, certain elements of system300can perform various methods and/or activities of those methods. In these or other embodiments, the methods and/or the activities of the methods can be performed by other suitable elements of system300. 
As explained in greater detail below, in many embodiments, system300is operable to identify from a gross population of case individuals one or more sub-populations of the case individuals for which (i) the case individuals of the sub-population(s) are associated with one or more sub-population features corresponding to the particular sub-population(s), and (ii) when the case individuals of the sub-population(s) are provided with content selected according to an incumbent statistical model and one or more alternative statistical models, an average feedback of the case individuals of the sub-population(s) to content selected according to at least one of the alternative statistical model(s) exceeds an average feedback of the case individuals of the sub-population(s) to content selected according to the incumbent statistical model, and the difference between these two average feedbacks is statistically significant. For example, the incumbent statistical model can involve a level of significance l, where l is a configurable parameter in the model. Further, for each of the sub-population(s) identified from the gross population, system300can associate the sub-population feature(s) that correspond to a particular sub-population with the alternative statistical model that results in the most statistically significant difference in average feedback for that particular sub-population. As a result, system300advantageously can learn one or more sets of sub-population feature(s) from which a content provider, which may be the operator of system300or another party, can determine whether or not to select content to provide to one or more applied individuals according to the incumbent statistical model or one or more of the alternative statistical model(s). In some embodiments, system300can confirm, for applied individuals, whether or not a particular applied individual is associated with a set of the sub-population feature(s) learned by system300, and can select content to provide to the applied individual according to the incumbent statistical model or one of the alternative statistical model(s) based on whether or not the applied individual is associated with a set of the sub-population feature(s) learned by system300. Further, as used herein, the term “individual” can refer to a person or a person performing an action, and the term “individuals” can refer to people or people performing a same action. The action can be any suitable action, such as, for example, visiting a type of website, selecting a type of content filter, purchasing a type of item, etc. Meanwhile, the terms “case” and “applied,” when used herein to modify the terms “individual” or “individuals,” are used for purposes of clarifying how content is being provided to the individual or individuals, with “case” being used for an individual or individuals while system300is identifying the sub-population(s), and with “applied” being used for an individual or individuals when system300has identified the sub-population(s) and is presenting content in view of the sub-population feature(s) of the sub-population(s).
The terms “case” and “applied” should not otherwise be construed as limiting of the terms “individual” or “individuals.” In many embodiments, implementing system300can permit personalization of content provided to applied individual(s) to be limited to instances where personalization of content will result in a statistically significant difference in engagement with the content. Limiting personalization of content when providing content to applied individual(s) to instances where personalization will result in a statistically significant difference in engagement with the content may be advantageous, such as, for example, where a content provider prefers to otherwise provide specific content to other applied individual(s) and/or where a content provider wants to continue refining the sub-populations with respect to other applied individual(s), in which case the other applied individual(s) can be treated as case individual(s). Further, limiting personalization of content when providing content to applied individual(s) to instances where personalization will result in a statistically significant difference in engagement with the content may be advantageous to free up computational resources of system300for other purposes, such as, for example, further refining the sub-populations. In many embodiments, the different content with which system300is implemented can comprise different versions of a website, and implementing system300can help to improve engagement with the website. In many embodiments, implementing system300can be advantageous to allow a content provider to personalize content in an unconventional manner. For example, rather than relying on subjectively identified sub-population(s) of a gross population of individuals to target with personalized content, implementing system300can permit a content provider to identify sub-populations of the gross population of individuals to target with personalized content based on statistical significance. Meanwhile, system300can use objective rules (e.g., a probability value, etc.) to identify relevant sub-populations of the gross population of individuals. Moreover, system300can be helpful to make content personalization more effective by increasing a likelihood of engagement by content consumers. Generally, therefore, system300can be implemented with hardware and/or software, as described herein. In some embodiments, at least part of the hardware and/or software can be conventional, while in these or other embodiments, part or all of the hardware and/or software can be customized (e.g., optimized) for implementing part or all of the functionality of system300described herein. Specifically, system300can comprise a central computer system301. In many embodiments, central computer system301can be similar or identical to computer system100(FIG.1). Accordingly, central computer system301can comprise one or more processors and one or more memory storage devices (e.g., one or more non-transitory memory storage devices). In these or other embodiments, the processor(s) and/or the memory storage device(s) can be similar or identical to the processor(s) and/or memory storage device(s) (e.g., non-transitory memory storage devices) described above with respect to computer system100(FIG.1). In some embodiments, central computer system301can comprise a single computer or server, but in many embodiments, central computer system301comprises a cluster or collection of computers or servers and/or a cloud of computers or servers. 
Meanwhile, central computer system301can comprise one or more input devices (e.g., one or more keyboards, one or more keypads, one or more pointing devices such as a computer mouse or computer mice, one or more touchscreen displays, etc.), and/or can comprise one or more output devices (e.g., one or more monitors, one or more touch screen displays, one or more speakers, etc.). Accordingly, the input device(s) can comprise one or more devices configured to receive one or more inputs, and/or the output device(s) can comprise one or more devices configured to provide (e.g., present, display, emit, etc.) one or more outputs. For example, in these or other embodiments, one or more of the input device(s) can be similar or identical to keyboard104(FIG.1) and/or a mouse110(FIG.1). Further, one or more of the output device(s) can be similar or identical to refreshing monitor106(FIG.1) and/or screen108(FIG.1). The input device(s) and the output device(s) can be coupled to the processor(s) and/or the memory storage device(s) of central computer system301in a wired manner and/or a wireless manner, and the coupling can be direct and/or indirect, as well as locally and/or remotely. As an example of an indirect manner (which may or may not also be a remote manner), a keyboard-video-mouse (KVM) switch can be used to couple the input device(s) and the output device(s) to the processor(s) and/or the memory storage device(s). In some embodiments, the KVM switch also can be part of central computer system301. In a similar manner, the processor(s) and the memory storage device(s) can be local and/or remote to each other. In many embodiments, central computer system301is configured to communicate with contact computer systems303of multiple individuals (e.g., multiple case individuals and/or one or more applied individuals). For example, the individual(s) can interface (e.g., interact) with central computer system301, and vice versa, via contact computer systems303. In these or other embodiments, contact computer systems303can comprise contact computer system309. In some embodiments, system300can comprise one or more of contact computer systems303. In many embodiments, central computer system301can refer to a back end of system300operated by an operator and/or administrator of system300. In these or other embodiments, the operator and/or administrator of system300can manage central computer system301, the processor(s) of central computer system301, and/or the memory storage device(s) of central computer system301using the input device(s) and/or output device(s) of central computer system301. Like central computer system301, contact computer systems303each can be similar or identical to computer system100(FIG.1), and in many embodiments, multiple or all of contact computer systems303can be similar or identical to each other. In many embodiments, contact computer systems303can comprise one or more desktop computer devices and/or one or more mobile devices, etc. At least part of central computer system301can be located remotely from contact computer systems303. Meanwhile, in many embodiments, for reasons explained later herein, central computer system301also can be configured to communicate with one or more databases302(e.g., one or more feature databases501(FIG.5), one or more identification databases502(FIG.5), etc.). 
Database(s)302can be stored on one or more memory storage devices (e.g., non-transitory memory storage device(s)), which can be similar or identical to the one or more memory storage device(s) (e.g., non-transitory memory storage device(s)) described above with respect to computer system100(FIG.1). Also, in some embodiments, for any particular database of database(s)302, that particular database can be stored on a single memory storage device of the memory storage device(s) and/or the non-transitory memory storage device(s) storing database(s)302or it can be spread across multiple of the memory storage device(s) and/or non-transitory memory storage device(s) storing database(s)302, depending on the size of the particular database and/or the storage capacity of the memory storage device(s) and/or non-transitory memory storage device(s). In these or other embodiments, the memory storage device(s) of central computer system301can comprise some or all of the memory storage device(s) storing database(s)302. In further embodiments, some of the memory storage device(s) storing database(s)302can be part of one or more of contact computer systems303and/or one or more third-party computer systems (i.e., other than central computer system301and/or contact computer systems303), and in still further embodiments, all of the memory storage device(s) storing database(s)302can be part of one or more of contact computer systems303and/or one or more of the third-party computer system(s). Like central computer system301and/or contact computer systems303, when applicable, each of the third-party computer system(s) can be similar or identical to computer system100(FIG.1). Notably, the third-party computer systems are not shown atFIG.3in order to avoid unduly cluttering the illustration ofFIG.3, and database(s)302are illustrated atFIG.3apart from central computer system301and contact computer systems303to better illustrate that database(s)302can be stored at memory storage device(s) of central computer system301, contact computer systems303, and/or the third-party computer system(s), depending on the manner in which system300is implemented. Database(s)302each can comprise a structured (e.g., indexed) collection of data and can be managed by any suitable database management systems configured to define, create, query, organize, update, and manage database(s). Exemplary database management systems can include MySQL (Structured Query Language) Database, PostgreSQL Database, Microsoft SQL Server Database, Oracle Database, SAP (Systems, Applications, & Products) Database and IBM DB2 Database. Meanwhile, communication between central computer system301, contact computer systems303, the third-party computer system(s), and/or database(s)302can be implemented using any suitable manner of wired and/or wireless communication. Accordingly, system300can comprise any software and/or hardware components configured to implement the wired and/or wireless communication. Further, the wired and/or wireless communication can be implemented using any one or any combination of wired and/or wireless communication network topologies (e.g., ring, line, tree, bus, mesh, star, daisy chain, hybrid, etc.) and/or protocols (e.g., personal area network (PAN) protocol(s), local area network (LAN) protocol(s), wide area network (WAN) protocol(s), cellular network protocol(s), Powerline network protocol(s), etc.). Exemplary PAN protocol(s) can comprise Bluetooth, Zigbee, Wireless Universal Serial Bus (USB), Z-Wave, etc. 
Exemplary LAN and/or WAN protocol(s) can comprise Data Over Cable Service Interface Specification (DOCSIS), Institute of Electrical and Electronic Engineers (IEEE) 802.3 (also known as Ethernet), IEEE 802.11 (also known as WiFi), etc. Exemplary wireless cellular network protocol(s) can comprise Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/Time Division Multiple Access (TDMA)), Integrated Digital Enhanced Network (iDEN), Evolved High-Speed Packet Access (HSPA+), Long-Term Evolution (LTE), WiMAX, etc. The specific communication software and/or hardware implemented can depend on the network topologies and/or protocols implemented, and vice versa. In many embodiments, exemplary communication hardware can comprise wired communication hardware including, for example, one or more data buses, such as, for example, universal serial bus(es), one or more networking cables, such as, for example, coaxial cable(s), optical fiber cable(s), and/or twisted pair cable(s), any other suitable data cable, etc. Further exemplary communication hardware can comprise wireless communication hardware including, for example, one or more radio transceivers, one or more infrared transceivers, etc. Additional exemplary communication hardware can comprise one or more networking components (e.g., modulator-demodulator components, gateway components, etc.). Turning ahead now in the drawings,FIG.4illustrates a representative block diagram of central computer system301, according to the embodiment ofFIG.3; andFIG.5illustrates a representative block diagram of database(s)302, according to the embodiment ofFIG.3. Referring toFIG.4, in many embodiments, central computer system301can comprise one or more processors401and one or more memory storage devices402. Further, memory storage device(s)402can comprise one or more non-transitory memory storage devices403. Meanwhile, in these or other embodiments, central computer system301comprises a communication system404, a feature learning system405, and a personalization system406. In these or other embodiments, part or all of at least one or more of communication system404, feature learning system405, and personalization system406can be part of at least one or more others of communication system404, feature learning system405, and personalization system406, and vice versa. In these or other embodiments, at least one or more of communication system404, feature learning system405, and personalization system406can be separate server systems apart and independent from the central computer system301. In these or other embodiments, communication system404, feature learning system405, and/or personalization system406can be located spatially apart from each other, and/or located separately from central computer system301. In these or other embodiments, communication system404can communicate with feature learning system405and/or can communicate with personalization system406, and/or vice versa. Similarly, feature learning system405can communicate with personalization system406, and/or vice versa. In these or other embodiments, communication system404, feature learning system405, and/or personalization system406can communicate with central computer system301, and/or vice versa. 
In many embodiments, processor(s)401can be similar or identical to the processor(s) described above with respect to computer system100(FIG.1) and/or central computer system301(FIG.3); memory storage device(s)402can be similar or identical to the memory storage device(s) described above with respect to computer system100(FIG.1) and/or central computer system301(FIG.3); and/or non-transitory memory storage device(s)403can be similar or identical to the non-transitory memory storage device(s) described above with respect to computer system100(FIG.1) and/or central computer system301(FIG.3). Further, communication system404, feature learning system405, and personalization system406can be implemented with hardware and/or software, as desirable. Although communication system404, feature learning system405, and personalization system406are shown atFIG.4as being separate from processor(s)401, memory storage device(s)402, and/or non-transitory memory storage device(s)403, in many embodiments, part or all of communication system404, feature learning system405, and personalization system406can be stored at memory storage device(s)402and/or non-transitory memory storage device(s)403and can be called and run at processor(s)401, such as, for example, when the part or all of communication system404, feature learning system405, and personalization system406are implemented as software. Communication System404 Communication system404can provide and manage communication between the various elements of central computer system301(e.g., processor(s)401, memory storage device(s)402, non-transitory memory storage device(s)403, communication system404, feature learning system405, and personalization system406, etc.) and manage incoming and outgoing communications between central computer system301(FIG.3) and contact computer systems303(FIG.3), the third party computer system(s), and/or database(s)302(FIG.3). Like the communications between central computer system301(FIG.3), contact computer systems303(FIG.3), the third party computer system(s), and/or database(s)302(FIG.3), communication system404can be implemented using any suitable manner of wired and/or wireless communication, and/or using any one or any combination of wired and/or wireless communication network topologies and/or protocols, as described above with respect to the central computer system301(FIG.3), contact computer systems303(FIG.3), the third party computer system(s), and/or database(s)302(FIG.3). In many embodiments, communication system404can be part of hardware and/or software implemented for communications between central computer system301(FIG.3), contact computer systems303(FIG.3), the third party computer system(s), and/or database(s)302(FIG.3). For example, as applicable, communication system404can permit processor(s)401to call (i) software (e.g., at least part of feature learning system405, personalization system406, etc.) stored at memory storage device(s)402and/or non-transitory memory storage device(s)403, and/or (ii) data stored at memory storage device(s)402, at non-transitory memory storage device(s)403, and/or in database(s)302(FIG.3). Feature Learning System405 Feature learning system405can identify from a gross population of case individuals one or more sub-populations of the case individuals. The case individuals of the sub-population(s) can be associated with one or more sub-population features corresponding to the particular sub-population(s). 
Further, when the case individuals of the sub-population(s) are provided with content selected according to an incumbent statistical model and one or more alternative statistical models, an average feedback of the case individuals of the sub-population(s) to content selected according to at least one of the alternative statistical model(s) can exceed an average feedback of the case individuals of the sub-population(s) to content selected according to the incumbent statistical model, and the difference between these two average feedbacks can be statistically significant. For example, the incumbent statistical model can involve a level of significance l, where l is a configurable parameter of the system. In many embodiments, the feature learning system can identify a customized (e.g., optimal) arrangement for a given instance based on its context. A feature learning system also could be leveraged to target users for future applications, as well as to personalize the choice of optimal systems for content. In some embodiments, the techniques described herein can provide several technological improvements to feature learning approaches. For example, the techniques described herein can reduce the network load on the system. In particular, users can run fewer search queries when the users are presented with more responsive, relevant content, which can beneficially reduce the amount of computing resources required to service the search queries, and/or can advantageously mitigate problems with available bandwidth, reduce network traffic, and/or improve management of cache memory. In many embodiments, the case individuals of the gross population of case individuals can access or receive the content provided by feature learning system405at one or more of contact computer system(s)303(FIG.3) via communication system404. In these or other embodiments, feature learning system405can receive feedback to the content provided by the case individuals from the one or more of contact computer system(s)303via communication system404. In many embodiments, in order to identify the one or more sub-populations of case individuals from the gross population of case individuals, feature learning system405can identify a control sub-population of case individuals of the gross population associated with a set of one or more sub-population features and a test sub-population of case individuals of the gross population associated with the set of sub-population features. Applying an A/B testing methodology to the control sub-population and the test sub-population, feature learning system405can present content selected according to an incumbent statistical model (i.e., control content) to case individuals of the control sub-population and content selected according to an alternative or different statistical model (i.e., test content) to case individuals of the test sub-population.
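By way of a non-limiting illustration only, the following minimal Python sketch shows one plausible way the random control/test assignment described above could be implemented. The function name split_control_test and the record layout (a "features" mapping per case individual) are hypothetical and are not prescribed by the embodiments described herein.

```python
import random
from typing import Dict, List, Tuple

def split_control_test(individuals: List[dict],
                       feature_set: Dict[str, object],
                       seed: int = 0) -> Tuple[List[dict], List[dict]]:
    """Randomly split the case individuals associated with a given set of
    sub-population features into a control group (to be shown control
    content selected by the incumbent model) and a test group (to be shown
    test content selected by an alternative model)."""
    # Keep only case individuals associated with every feature in the set.
    matching = [ind for ind in individuals
                if all(ind["features"].get(name) == value
                       for name, value in feature_set.items())]
    rng = random.Random(seed)  # seeded for a reproducible split
    rng.shuffle(matching)
    half = len(matching) // 2
    return matching[:half], matching[half:]
```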
Meanwhile, feature learning system405can measure an average feedback metric of the case individuals of the control group provided in response to the control content and an average feedback metric of the case individuals of the test group provided in response to the test content to determine whether the average feedback metric of the case individuals of the test group exceeds the average feedback metric of the case individuals of the control group and whether a difference in the average feedback metrics is statistically significant. In many embodiments, the difference in the average feedback metrics can be statistically significant when a probability value of the difference in the average feedback metrics is less than a predetermined significance level value. Meanwhile, the probability value can refer to a probability that the difference in the average feedback metrics is caused by random chance. If the difference in the average feedback metrics is statistically significant (e.g., the probability value of the difference in the average feedback metrics is less than the predetermined significance level value; that is, under the null hypothesis that the difference in average feedback metrics is purely due to random chance, the probability of observing a difference in the average feedback metrics larger in absolute value than the observed difference in average user feedback is less than the predetermined significance level value l, a configurable parameter in the system), feature learning system405can recognize the case individuals of the control sub-population and the test sub-population together as forming one of the sub-populations of case individuals identified from the gross population of case individuals. Further, feature learning system405can perform this methodology with one or more other pairs of control and test sub-populations of the case individuals of the gross population that are associated with one or more other sets of sub-population features, and in many embodiments, can do so for all sets of sub-population features associated with the case individuals of the gross population, recognizing the case individuals of the pairs of control and test sub-populations for which the average feedback metrics of the test sub-populations exceed the average feedback metrics of the control sub-populations and for which the differences in the average feedback metrics are statistically significant, if any, as other sub-populations of case individuals identified from the gross population of case individuals. Meanwhile, sub-populations of case individuals of pairs of control and test sub-populations of the case individuals of the gross population for which the average feedback metric of the test sub-population does not exceed the average feedback metric of the control sub-population and/or for which the differences in the average feedback metrics are not statistically significant (e.g., that result in differences in average feedback metrics for which the probability value of the differences in the average feedback metrics is greater than or equal to the predetermined significance level value; that is, for which,
under the null hypothesis that the difference in average feedback metrics is purely due to random chance, the probability of observing a difference in the average feedback metrics larger than the observed difference in average user feedback is greater than or equal to the predetermined significance level value l), if any, can be excluded by feature learning system405from the sub-population(s) of case individuals identified from the gross population of case individuals. In some embodiments, feature learning system405can provide several technological improvements. The approach described herein is different from conventional approaches, which applied subjective human manual determinations to determine the personalization method that would work better for certain sets of users and resulted in setting up elaborate A/B test designs to test a random number of subjective human guesses. By contrast, this data-driven approach advantageously removes the subjective element of human guesses and instead uses computer rules. The approach can derive personalization methods that match certain sets of users using a single A/B test set-up. In some embodiments, content presented by feature learning system405to the case individuals of a pair of a control sub-population and a test sub-population can comprise any suitable form of content for which the case individuals can provide a feedback metric. For example, in many embodiments, content presented by feature learning system405to the case individuals of a control sub-population and a test sub-population can comprise different versions of a website. For example, content presented to the case individuals of a control sub-population can comprise a version of a website selected according to an incumbent statistical model. Meanwhile, content presented to the case individuals of a test sub-population can comprise a version of the website selected according to an alternative statistical model. In some embodiments, the feedback metric measured by feature learning system405can comprise any suitable metric by which feedback of case individuals of a pair of a control sub-population and a test sub-population of the case individuals of the gross population can be measured. For example, in many embodiments, the feedback metric can comprise a click-through rate, orders per session, or revenue per session, such as, for example, when content presented by feature learning system405to the case individuals of a control sub-population and a test sub-population comprises different versions of a website. In some embodiments, the predetermined significance level value against which feature learning system405compares the probability value of the difference in feedback metrics provided by case individuals of a pair of a control sub-population and a test sub-population of the case individuals of the gross population can comprise any suitable value below which the difference in average feedback metrics provided by case individuals of the control sub-population and the test sub-population is unlikely to have been caused by random chance. For example, in many embodiments, the predetermined significance level value can be 0.01 (1 percent) or 0.05 (5 percent). In some embodiments, feature learning system405can evaluate any suitable sub-population feature or sub-population features that can be associated with a case individual of the gross population when feature learning system405is identifying the one or more sub-populations of case individuals from the gross population of case individuals.
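For illustration, a minimal Python sketch of the significance check described above follows, using a two-sample z-test (a normal approximation) to compare the average feedback metrics. The embodiments described herein do not prescribe a particular test statistic, and the function name ab_significance is hypothetical.

```python
import math
from typing import Sequence

def ab_significance(test_feedback: Sequence[float],
                    control_feedback: Sequence[float],
                    significance_level: float = 0.05) -> bool:
    """Return True when the test group's average feedback metric exceeds
    the control group's and the difference is statistically significant
    at the given predetermined significance level value."""
    n_t, n_c = len(test_feedback), len(control_feedback)
    if n_t < 2 or n_c < 2:
        return False  # insufficient data to conclude anything
    mean_t = sum(test_feedback) / n_t
    mean_c = sum(control_feedback) / n_c
    # Unbiased sample variances of the feedback metric in each group.
    var_t = sum((y - mean_t) ** 2 for y in test_feedback) / (n_t - 1)
    var_c = sum((y - mean_c) ** 2 for y in control_feedback) / (n_c - 1)
    z = (mean_t - mean_c) / math.sqrt(var_t / n_t + var_c / n_c)
    # Two-sided probability value under the null hypothesis that the
    # observed difference is purely due to random chance.
    p_value = math.erfc(abs(z) / math.sqrt(2.0))
    return mean_t > mean_c and p_value < significance_level
```

With significance_level set to, for example, 0.05, a pair of control and test sub-populations for which ab_significance returns True would be eligible to be recognized as one of the identified sub-populations.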
In many embodiments, the sub-population feature(s) associated with the respective case individuals of the gross population of case individuals can be stored in feature database501(FIG.5), and feature learning system405can query feature database501(FIG.5) to determine relevant case individuals of the gross population of case individuals when identifying the case individuals of the pair(s) of control and test sub-populations from the gross population of case individuals. For example, in many embodiments, the sub-population feature(s) can comprise a gender of the case individual and/or an age of the case individual. In these or other embodiments, the sub-population feature(s) can comprise one or more browsing acts of the case individual and/or a purchase history of the case individual, such as, for example, when content presented by feature learning system405to the case individuals of the gross population comprises different versions of a website. Exemplary browsing acts can comprise conducting a search, applying a filter, changing a view, selecting a link, etc. In many embodiments, feature learning system405can limit the sub-population feature(s) evaluated to a finite number of real values. For example, when evaluating age as a sub-population feature, feature learning system405can define age by individual years or by ranges of years (e.g., ages 11-20, ages 21-30, etc.). Limiting the sub-population feature(s) evaluated by feature learning system405to a finite number of real values can reduce computational demands on feature learning system405. In some embodiments, the number of real values to which the sub-population feature(s) evaluated by feature learning system405is limited can depend on the computational capacity of processor(s)401. In many embodiments, in order to identify a control sub-population and a test sub-population from the case individuals of the gross population, feature learning system405can randomly select a first group of the case individuals of the gross population that are associated with a set of sub-population feature(s) to form the control sub-population and a second group of the case individuals of the gross population that are associated with the set of sub-population feature(s) to form the test sub-population. The first group of the case individuals of the gross population can be exclusive of the second group of the case individuals of the gross population. Feature learning system405can perform this methodology for each of the pairs of control and test sub-populations evaluated by feature learning system405. In some embodiments, case individuals of different pairs of control and test sub-populations can overlap, and in other embodiments, case individuals of different pairs of control and test sub-populations can be exclusive of each other. By randomly selecting the case individuals of which the control sub-populations and the test sub-populations are comprised, feature learning system405can beneficially identify the one or more sub-populations of case individuals from the gross population of case individuals with reduced or no sampling bias. In some embodiments, sub-populations of case individuals of pairs of control and test sub-populations of the case individuals of the gross population having quantities of case individuals falling below a sub-population case individual quantity threshold value also can be excluded by feature learning system405from the sub-populations of case individuals identified from the gross population of case individuals.
For example, in some embodiments, the sub-population case individual quantity threshold value can be approximately 500 or 1000 case individuals. In some embodiments, permitting case individuals of different pairs of control and test sub-populations to overlap can permit quantities of case individuals of pairs of control and test sub-populations of the case individuals of the gross population to be greater, which may permit feature learning system405to identify the sub-population(s) of case individuals from the gross population of case individuals with greater accuracy. In many embodiments, the sub-population case individual quantity threshold value can be determined by an operator of system300(FIG.3). In some embodiments, sub-populations of case individuals of pairs of control and test sub-populations of the case individuals of the gross population for which an average feedback metric for the test sub-population of case individuals does not exceed an average feedback metric for the control sub-population by an average feedback metric variance threshold value also can be excluded by feature learning system405from the sub-populations of case individuals identified from the gross population of case individuals. In many embodiments, the average feedback metric variance threshold value can be determined by an operator of system300(FIG.3). In some embodiments, feature learning system405can limit the sub-population(s) of case individuals identified from the gross population of case individuals to a predetermined number of sub-populations. In many embodiments, feature learning system405can limit the sub-population(s) of case individuals identified from the gross population of case individuals to a predetermined number of sub-population(s) having a greatest statistical significance (e.g., having probability values below the predetermined significance level value by one or more greatest margins) and/or having one or more greatest quantities of case individuals. In some embodiments, limiting the sub-population(s) of case individuals identified from the gross population of case individuals to a predetermined number of sub-population(s) can reduce computational demands on feature learning system405. In some embodiments, the predetermined number of sub-populations to which feature learning system405limits the sub-population(s) of case individuals identified from the gross population of case individuals can depend on the computational capacity of processor(s)401. In some embodiments, the statistical models based upon which feature learning system405selects content to present to a control sub-population and a test sub-population of the case individuals of the gross population can comprise any suitable different statistical models. Exemplary statistical models can comprise a linear regression, a logistic regression, a Poisson regression, a hierarchical tree-based regression, etc. In some embodiments, when feature learning system405is identifying the sub-population(s) of case individuals from the gross population of case individuals, feature learning system405can evaluate differences in the average feedback metrics for statistical significance for test content selected according to multiple alternative statistical models. In these embodiments, the incumbent statistical model based upon which the control content is selected can remain the same.
Evaluating differences in the average feedback metrics for statistical significance for test content selected according to multiple alternative statistical models can be advantageous where two or more sets of sub-population feature(s) evaluated by feature learning system405return statistically significant differences in average feedback metrics for different and/or multiple alternative statistical models. For example, the statistical models can have a level of significance l, where l is a configurable parameter in the model. For example, one set of sub-population feature(s) comprising at least one first sub-population feature may return a statistically significant difference in average feedback metrics for a first alternative statistical model while another set of sub-population feature(s) comprising at least one of the first sub-population feature(s) and at least one other sub-population feature may return a statistically significant difference in average feedback metrics for a different alternative statistical model even though the other set of sub-population feature(s) includes at least one of the first sub-population feature(s). In some embodiments, when feature learning system405is identifying the sub-population(s) of case individuals from the gross population of case individuals, and when feature learning system405evaluates differences in the average feedback metrics for statistical significance for test content selected according to multiple alternative statistical models, feature learning system405can evaluate differences in the average feedback metrics for statistical significance for test content selected according to multiple alternative statistical models for each set of sub-population feature(s) evaluated by feature learning system405. In other embodiments, when feature learning system405is identifying the sub-population(s) of case individuals from the gross population of case individuals, and when feature learning system405evaluates differences in the average feedback metrics for statistical significance for test content selected according to multiple alternative statistical models, feature learning system405can evaluate differences in the average feedback metrics for statistical significance for test content selected according to multiple alternative statistical models only for sets of sub-population feature(s) evaluated by feature learning system405for which at least one difference in the average feedback metrics is statistically significant. For example, in these embodiments, feature learning system405can compare a difference in the average feedback metrics for statistical significance for test content selected according to a first alternative statistical model for each set of sub-population feature(s) evaluated by feature learning system405. Then, feature learning system405can compare a difference in the average feedback metrics for statistical significance for test content selected according to another alternative statistical model for the sub-population feature(s) that are determined to be statistically significant relative to the first alternative statistical model. In some of these embodiments, feature learning system405can continue this process for more alternative statistical models, limiting the sets of sub-population feature(s) evaluated as applicable, until a next statistical model no longer returns a statistically significant difference in the average feedback metrics for the sub-population feature(s) being evaluated. 
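As a rough sketch of the sequential filtering just described, the loop below evaluates alternative models one at a time and retains only the sets of sub-population features that remain statistically significant. This is an illustration under stated assumptions, not the prescribed implementation: significance_test is any check with the shape of the hypothetical ab_significance helper sketched earlier, and collect_feedback is a placeholder for running the A/B test and gathering the feedback metrics for a feature set under a given model.

```python
def filter_feature_sets(feature_sets, alternative_models, collect_feedback,
                        significance_test, significance_level=0.05):
    """Keep, for each alternative model in turn, only the feature sets whose
    test-vs-control difference in average feedback remains statistically
    significant; stop once no feature set survives a model."""
    # Feature sets must be hashable (e.g., tuples of (name, value) pairs).
    surviving = {fs: None for fs in feature_sets}
    for model in alternative_models:
        next_surviving = {}
        for fs in surviving:
            test_fb, control_fb = collect_feedback(fs, model)
            if significance_test(test_fb, control_fb, significance_level):
                # Remember the most recent alternative model that was
                # significant for this set of sub-population features.
                next_surviving[fs] = model
        if not next_surviving:
            break  # the next model returned no significant differences
        surviving = next_surviving
    return surviving
```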
In some embodiments, when feature learning system405is identifying the sub-population(s) of case individuals from the gross population of case individuals, and when feature learning system405evaluates differences in the average feedback metrics for statistical significance for test content selected according to multiple alternative statistical models, feature learning system405can reuse the same control sub-population and test sub-population for each set of sub-population feature(s) evaluated by feature learning system405. However, in other embodiments, feature learning system405can identify a new control sub-population and a new test sub-population for each alternative statistical model used to select test content. In many embodiments, in order to identify the sub-population(s) of case individuals from the gross population of case individuals, as described above, feature learning system405can use an optimization algorithm. When implementing the optimization algorithm, feature learning system405can assume each sub-population feature can only take a finite number of real values. For sub-population features not directly satisfying this assumption, feature learning system405can merge values or discretize to satisfy this assumption. Feature learning system405can let $x_i=(x_{i,1},x_{i,2},\ldots,x_{i,F})$ be the i-th case individual and $x_{i,j}$ be the value of the j-th sub-population feature of the i-th case individual. Further, feature learning system405can assume the total number of sub-population features used to represent a case individual for the gross population of case individuals is F and each case individual assumes a value for each of the sub-population features, that is, a real value of $x_{i,j}$ is available for each case individual i and each sub-population feature j. Further, feature learning system405can let $y_i$ be the feedback metric for the i-th case individual. In some embodiments, feature learning system405can consider one feedback metric, making $y_i$ a scalar, and in other embodiments, feature learning system405can consider several feedback metrics together, making $y_i$ a vector. In some embodiments, feature learning system405can assume the feedback metrics $y_i$ for the case individuals are mutually independent (in the probabilistic sense). Feature learning system405can use T to denote a test sub-population of case individuals of a gross population of case individuals P and can use C to denote a control sub-population of case individuals of the gross population of case individuals P. Accordingly, feature learning system405can let $x_i^T=(x_{i,1}^T,x_{i,2}^T,\ldots,x_{i,F}^T)$ be a sub-population feature vector for the i-th case individual of the test sub-population of case individuals T and can let $x_j^C=(x_{j,1}^C,x_{j,2}^C,\ldots,x_{j,F}^C)$ be a sub-population feature vector of the j-th case individual of the control sub-population of case individuals C. Further, feature learning system405can let $y_i^T$ be the feedback metric for the i-th case individual of the test sub-population of case individuals T and $y_j^C$ be the feedback metric for the j-th case individual in the control sub-population of case individuals C. Feature learning system405can establish a criterion for establishing whether an impact of test content on a sub-population is statistically significant. For example, membership of the case individuals from the gross population of case individuals P in a test sub-population of case individuals T or a control sub-population of case individuals C can be decided randomly, independent of x.
Accordingly, comparison of feedback metrics for test and control sub-populations restricted to any sub-population of case individuals based on x, say the sub-population of case individuals $\{h(x)=\bar{v}\}$ where $h(\cdot):\mathbb{R}^F\to\mathbb{R}^K$ is a measurable function and $\bar{v}\in\mathbb{R}^K$, can yield a proper measurement of the impact of the test content in consideration for the sub-population of case individuals. Here, x represents the features characterizing the instances; h represents a suitable transformation of the features that can be used to define a sub-population; $\bar{v}$ is one of the possible values the transformed features can take, and for each possible value $\bar{v}$, the set of instances for which the features transformed by $h(\cdot)$ take the value $\bar{v}$ forms a sub-population; $\mathbb{R}$ represents the set of all real numbers; K represents the dimension of the transformed feature vectors; and H represents a special case of $h(\cdot)$ in which the transformation is linear and $h(x)=H\cdot x$, where x is the feature vector of an instance. In some embodiments, feature learning system405can use linear representations of sub-population features such that $h(x)=H\cdot x$, where H is a $K\times F$ matrix. In order to find sub-populations of case individuals where the average feedback metric for the test sub-population exceeds the average feedback metric for the control sub-population and a difference in the average feedback metrics for the test and control sub-populations of case individuals is statistically significant, feature learning system405can find H and $\bar{v}$ satisfying the following Relationship (1):

$$\frac{\left|\bar{y}^{T}[H\cdot x=\bar{v}]-\bar{y}^{C}[H\cdot x=\bar{v}]\right|}{\sqrt{\frac{\operatorname{Var}[y^{T}\mid H\cdot x=\bar{v}]}{|T\cap\{H\cdot x=\bar{v}\}|}+\frac{\operatorname{Var}[y^{C}\mid H\cdot x=\bar{v}]}{|C\cap\{H\cdot x=\bar{v}\}|}}}>q$$

$$\Longleftrightarrow\quad\frac{|T\cap\{H\cdot x=\bar{v}\}|\cdot|C\cap\{H\cdot x=\bar{v}\}|}{|\{H\cdot x=\bar{v}\}|}\cdot\frac{\left(\bar{y}^{T}[H\cdot x=\bar{v}]-\bar{y}^{C}[H\cdot x=\bar{v}]\right)^{2}}{\frac{\operatorname{Var}[y^{T}\mid H\cdot x=\bar{v}]\cdot|C\cap\{H\cdot x=\bar{v}\}|+\operatorname{Var}[y^{C}\mid H\cdot x=\bar{v}]\cdot|T\cap\{H\cdot x=\bar{v}\}|}{|\{H\cdot x=\bar{v}\}|}}>q^{2},\tag{1}$$

where $\bar{y}^{T}[H\cdot x=\bar{v}]$ and $\bar{y}^{C}[H\cdot x=\bar{v}]$ are the average feedback metrics from the test and control sub-populations of case individuals, respectively, restricted to the sub-population of case individuals $\{H\cdot x=\bar{v}\}$; $\operatorname{Var}[y^{T}\mid H\cdot x=\bar{v}]$ and $\operatorname{Var}[y^{C}\mid H\cdot x=\bar{v}]$ are the empirical variances of the feedback metric in the test and control sub-populations of case individuals, respectively, restricted to the sub-population $\{H\cdot x=\bar{v}\}$; $q^{2}$ is an appropriate quantile of the distribution of the quantity on the left hand side of Relationship (1); and $|\cdot|$ equals the size of the set in its argument. The second form of Relationship (1) follows from squaring the first form and multiplying its numerator and denominator by $|T\cap\{H\cdot x=\bar{v}\}|\cdot|C\cap\{H\cdot x=\bar{v}\}|\,/\,|\{H\cdot x=\bar{v}\}|$. Under the assumption that the test content has no impact relative to the control content, the distribution of the quantity on the left hand side of Relationship (1) can be approximated by the square of a variable following a standard normal distribution. Feature learning system405can call a sub-population of case individuals $\{H\cdot x=\bar{v}\}$ eligible to be included in the sub-populations identified from the gross population of case individuals if it satisfies Relationship (1). A sub-population of case individuals $\{H\cdot x=\bar{v}\}$ may not be eligible according to Relationship (1) for one of two reasons: first, there is little or no impact of the test content in consideration; second, there is insufficient data to conclude anything; or both.
These two reasons can complement each other. For example, where a larger number of case individuals are considered, feature learning system405can view even tiny impacts of test content as being statistically significant, whereas if a smaller number of case individuals are considered, the impact of the test content may need to be larger in order for feature learning system405to be able to measure the impact of the test content in a statistically significant way. On the other hand, if an impact of test content in consideration for the sub-population of case individuals is larger, feature learning system405can rely on a smaller number of case individuals to measure the impact of the test content in a statistically significant way, and if an impact of test content in consideration for the sub-population of individuals is smaller, feature learning system405can rely on a larger number of case individuals to measure the impact of the test content in a statistically significant way. The threshold for case individual sufficiency (i.e., the sub-population case individual quantity threshold value) and the threshold for impact of the test content on the sub-population (i.e., the average feedback metric variance threshold value) can be subjectively determined for feature learning system405by an operator of system300(FIG.3). Because solving Relationship (1) in its full generality may be impractical, feature learning system405instead can use Relationship (2), which relies on making specific assumptions on the dependence of feedback metric $y_i$ and case individual $x_i$. For Relationship (2), even if the sub-population(s) of case individuals are defined in terms of linear sub-population feature representations, feature learning system405may not assume that the relationship between feedback metric $y_i$ and case individual $x_i$ can be expressed by a known function. Instead, feature learning system405can aim to identify sub-population(s) of case individuals which have many case individuals, so that when feature learning system405applies Relationship (1) to measure the impact of test content in consideration, even relatively smaller impacts can be measured in a statistically significant way. For example, the model can involve a level of significance l, where l is a configurable parameter in the model. For example, focusing on the first term on the left hand side of Relationship (1), which is a quantification of the amount of data for a sub-population of case individuals $\{H\cdot x=\bar{v}\}$, feature learning system405can revise this term as $|\{H\cdot x=\bar{v}\}|\cdot w_T\cdot(1-w_T)$, where

$$w_T=\frac{|T\cap\{H\cdot x=\bar{v}\}|}{|\{H\cdot x=\bar{v}\}|}$$

is the fraction of case individuals of the sub-population of case individuals that belong to the test sub-population of case individuals; that is, $w_T$ represents the fraction of test instances in the sub-population defined as $\{H\cdot x=\bar{v}\}$. The revised first term on the left hand side of Relationship (1) can depend on the quantity of case individuals in the sub-population of case individuals $|\{H\cdot x=\bar{v}_h\}|$ as well as the fractions of case individuals of the test sub-population of case individuals and control sub-population of case individuals relative to the case individuals of the sub-population of case individuals, given by $w_T$ and $(1-w_T)$, respectively.
Feature learning system 405 can find H so as to maximize the first term on the left hand side of Relationship (1) for all sub-population(s) of case individuals created by H, videlicet {H·x = v̄_h}, where H represents the linear transformation applied to the features of the instances that helps define the sub-populations, and where h = 1, 2, . . . , H. Because achieving that for all sub-population(s) of case individuals created by H together might be impractical, feature learning system 405 can maximize the expected value of the first term on the left hand side of Relationship (1) over all sub-population(s) of case individuals created by H. The expected value of the first term on the left hand side of Relationship (1) over all sub-population(s) of case individuals created by H, where each sub-population of case individuals is weighted by its relative size, can be simplified to Relationship (2) as follows:

$$\mathbb{E}_{\bar{v}_h}\!\left[\frac{|T\cap\{H\cdot x=\bar{v}_h\}| \times |C\cap\{H\cdot x=\bar{v}_h\}|}{|\{H\cdot x=\bar{v}_h\}|}\right] = \frac{1}{|P|}\sum_{i\in T}\sum_{j\in C} \mathbb{1}\{H\cdot(x_i^T-x_j^C)=\bar{0}\}, \tag{2}$$

where 1_C = 1 if condition C is true and 1_C = 0 otherwise (an indicator function indicating whether condition C holds), and |P| is the total number of instances used in the A/B test. Feature learning system 405 can search for the H which maximizes the right hand side of Relationship (2). To do so, feature learning system 405 can define a matrix Z corresponding to the feature differences of all possible pairs of two instances, where one instance belongs to the test group and one instance belongs to the control group, such that the columns of the matrix Z are of the form (x_i^T − x_j^C), where i ∈ T and j ∈ C. Thus, matrix Z can have dimensions F × |T|·|C|. As indicated previously, F represents the total number of sub-population features corresponding to case individuals in the gross population P, and |T| and |C| represent the quantities of case individuals in the test and control sub-populations, respectively. Letting Z_k be the k-th column of matrix Z, feature learning system 405 can search for H using Relationship (3) as follows:

$$\max_{H,\;\{a_k,\,k=1,2,\ldots,|T|\cdot|C|\}} \sum_{k=1}^{|T|\cdot|C|} a_k \quad \text{s.t.} \quad a_k\,H\cdot Z_k=\bar{0},\;\; a_k\in\{0,1\},\quad \forall k=1,2,\ldots,|T|\cdot|C|. \tag{3}$$

For Relationship (3), the variables a_k act like slack variables, where a_k is the optimization variable corresponding to the k-th column of Z and takes only the two possible values 0 and 1, in the sense that if H·Z_k ≠ 0̄, then a_k must be 0 in order to satisfy the linear constraint in Relationship (3). If H·Z_k = 0̄, the corresponding slack variable a_k must assume the value 1 in order to maximize the objective function Σ_{k=1}^{|T|·|C|} a_k of Relationship (3). Therefore, if H, {a_k, k = 1, 2, . . . , |T|·|C|} is a solution of Relationship (3), then H and {a_k, k = 1, 2, . . . , |T|·|C|} satisfy the condition Σ_{k=1}^{|T|·|C|} a_k = Σ_{i∈T} Σ_{j∈C} 1(H·(x_i^T − x_j^C) = 0̄), which is equal to |P| times the right hand side of Relationship (2), where |P| represents the total size of the population on which the A/B test is run.
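To make the objects of Relationships (2) and (3) concrete, the sketch below builds the pair-difference matrix Z and counts the columns annihilated by a candidate H. The vectorized construction and the numerical tolerance are assumptions for illustration.

```python
import numpy as np

def pair_difference_matrix(X_test, X_control):
    """Columns of Z are x_i^T - x_j^C for all i in T, j in C.

    X_test: (|T|, F) feature rows of test individuals;
    X_control: (|C|, F) feature rows of control individuals.
    Returns Z of shape (F, |T|*|C|)."""
    diffs = X_test[:, None, :] - X_control[None, :, :]   # (|T|, |C|, F)
    return diffs.reshape(-1, X_test.shape[1]).T

def annihilated_pair_count(H, Z, tol=1e-8):
    """Objective of Relationship (3): the number of columns Z_k with
    H @ Z_k = 0 (up to tol); this equals |P| times the right hand side
    of Relationship (2)."""
    return int(np.sum(np.all(np.abs(H @ Z) < tol, axis=0)))
```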
Accordingly, it follows that the H obtained as a solution of Relationship (3) also maximizes the left hand side of Relationship (2). Feature learning system 405 can reduce the search space of H by eliminating some redundancies and imposing some structure on H in Relationship (3). For example, feature learning system 405 can demand that the rows of H be orthonormal without changing the set of sub-population(s) under consideration, with the help of the following two propositions: (i) if H does not have full row rank, the set of sub-population(s) generated by H, videlicet {{H·x = v̄_h}, h = 1, 2, . . . , H}, also can be generated by a lower dimensional matrix with fewer rows (Proposition 1); and (ii) if H has full row rank, the set of sub-population(s) generated by H, videlicet {{H·x = v̄_h}, h = 1, 2, . . . , H}, also can be generated by a matrix with the same dimensions as H whose rows are orthonormal (Proposition 2). Feature learning system 405 can continue solving Relationship (3) for the next best solution. When feature learning system 405 has found a set of matrices {H_1, H_2, . . . , H_n} and starts searching for H_(n+1), feature learning system 405 can impose additional restrictions on Relationship (3) to search for the next best solution, where, for the set of matrices, each H_i is a different transformation applied to the features of the instances to define a new set of sub-populations. The following proposition suggests that the row space of H_(n+1) must not be a subset of the row space of H_i for i = 1, 2, . . . , n. Feature learning system 405 can denote the row space of H_i by ℛ(H_i), where ℛ(H_i) is defined as the vector space spanned by the rows of H_i, and apply the proposition that if ℛ(H_(n+1)) ⊆ ℛ(H_i) for some i = 1, 2, . . . , n, then the set of sub-population(s) generated by H_(n+1) is the same as the set of sub-population(s) generated by H_i (Proposition 3). Applying this proposition, feature learning system 405 can add the restriction that ℛ(H_(n+1)) ⊄ ℛ(H_i) for each H_i, i = 1, 2, . . . , n. The condition ℛ(H_(n+1)) ⊄ ℛ(H_i) is equivalent to the condition Σ_{j=1}^{K} H_(n+1),j (𝕀 − H_i^T H_i) H_(n+1),j^T > 0, where H_(n+1),j is the j-th row of H_(n+1). The equivalence holds since H_i^T H_i is the projection matrix for ℛ(H_i), and hence (𝕀 − H_i^T H_i) is an idempotent matrix. So, putting everything together, having found {H_i, i = 1, 2, . . . , n}, to find the (n+1)-th H matrix H_(n+1), feature learning system 405 can solve Relationship (4) as follows:

$$\begin{aligned} \max_{H_{(n+1)},\;\{a_k,\,k=1,2,\ldots,|T|\cdot|C|\}} \;& \sum_{k=1}^{|T|\cdot|C|} a_k \\ \text{s.t.}\;\; & a_k\,H_{(n+1)}\cdot Z_k=\bar{0}, && \forall k=1,2,\ldots,|T|\cdot|C|, \\ & H_{(n+1)}\cdot H_{(n+1)}^{T}=\mathbb{I}, \\ & \sum_{j=1}^{K} H_{(n+1),j}\,(\mathbb{I}-H_i^{T}H_i)\,H_{(n+1),j}^{T} > D, && \forall i=1,2,\ldots,n, \\ & a_k\in\{0,1\}, && \forall k=1,2,\ldots,|T|\cdot|C|. \end{aligned} \tag{4}$$

The parameter D > 0 can be chosen as an appropriate tuning parameter when solving Relationship (4). When searching for the first matrix H_1, that is, when n = 0, the set of row-space constraints in Relationship (4) disappears. Each run of Relationship (4) can give feature learning system 405 a sub-population feature representation H_(n+1), and with increasing n, the optimal value of Relationship (4) drops. Feature learning system 405 can stop solving Relationship (4) when n exceeds a preset threshold or the optimal value drops below a preset threshold.
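The row-space constraint of Relationship (4) can be checked directly, since H_i^T·H_i is the projection onto ℛ(H_i) when H_i has orthonormal rows. The following sketch (illustrative names, NumPy assumed) evaluates the left hand side of that constraint, written compactly as a trace.

```python
import numpy as np

def row_space_excess(H_new, H_prev):
    """Left hand side of the row-space constraint in Relationship (4):
    sum_j H_new[j] @ (I - H_prev.T @ H_prev) @ H_new[j].T.

    H_prev is assumed to have orthonormal rows, so the value is positive
    exactly when some row of H_new leaves the row space of H_prev."""
    P_perp = np.eye(H_prev.shape[1]) - H_prev.T @ H_prev
    return float(np.trace(H_new @ P_perp @ H_new.T))

# Relationship (4) requires row_space_excess(H_next, H_i) > D for each
# previously found H_i, i = 1, 2, ..., n.
```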
Feature learning system 405 can start from K = 1 (the dimension of H_(n+1) is K×F) and then continue increasing K, thus increasing the granularity of the sub-population(s). The magnitude of impact of the test content required to satisfy Relationship (1) goes up as a consequence, which in turn reduces the likelihood of Relationship (1) being satisfied for those sub-population(s). Thus, feature learning system 405 can keep K much lower than F. To solve Relationship (4), feature learning system 405 can use a Lagrangian relaxation of Relationship (4) given by Relationship (5), as follows:

$$\max_{H_{(n+1)},\;\{a_k\in\{0,1\},\,k=1,2,\ldots,|T|\cdot|C|\}} L=\sum_{k=1}^{|T|\cdot|C|} a_k+\sum_{j=1}^{K}\sum_{k=1}^{|T|\cdot|C|}\lambda_{j,k}\,a_k\,H_{(n+1),j}\cdot Z_k+\sum_{i=1}^{n}\mu_i\!\left(\sum_{j=1}^{K}H_{(n+1),j}\,(\mathbb{I}-H_i^{T}H_i)\,H_{(n+1),j}^{T}-D\right)\!, \quad \text{s.t.}\;\; H_{(n+1)}\cdot H_{(n+1)}^{T}=\mathbb{I}, \tag{5}$$

where H_(n+1),j is the j-th row of H_(n+1), and μ_i and λ_{j,k} are penalty constants. Feature learning system 405 can take a greedy approach and solve Relationship (5) by updating H_(n+1) and {a_k ∈ {0,1}, k = 1, 2, . . . , |T|·|C|} in sequence. Feature learning system 405 can choose μ_i = ½ for all i = 1, 2, . . . , n, and at each update, feature learning system 405 can set the constant λ_{j,k} as λ_{j,k} = −sign(H_(n+1),j·Z_k), where the H_(n+1) from the last step is used for computation of the constant. Feature learning system 405 can update H_(n+1) by gradient descent, moving H_(n+1) slightly in the direction of the derivative of L given in Relationship (5) with respect to H_(n+1). Also, leveraging Proposition 3, feature learning system 405 can consider only the update vector projected onto the orthogonal complement of the row space of the current H_(n+1). Accordingly, the update to the matrix H_(n+1) can be, for a small ε > 0, Relationship (6), as follows:

$$H_{(n+1)}^{\text{updated}}=H_{(n+1)}+\epsilon\,\frac{\dfrac{\partial L}{\partial H_{(n+1)}}\left(\mathbb{I}-H_{(n+1)}^{T}H_{(n+1)}\right)}{\left\|\dfrac{\partial L}{\partial H_{(n+1)}}\left(\mathbb{I}-H_{(n+1)}^{T}H_{(n+1)}\right)\right\|}. \tag{6}$$

Next, feature learning system 405 can update {a_k ∈ {0,1}, k = 1, 2, . . . , |T|·|C|}. To do so, feature learning system 405 can compute MAX = max_{k=1,...,|T|·|C|} max_{i=1,...,K} |H_(n+1),i·Z_k|. Then, feature learning system 405 can update each a_k so that if max_{i=1,...,K} |H_(n+1),i·Z_k| > θ·MAX for some value of 0 < θ < 1, feature learning system 405 sets a_k = 0, and otherwise sets a_k = 1. The parameter θ can be tuned for the speed of convergence of Relationship (6). Feature learning system 405 can select initializations of the variables. The optimal value of the slack variable a_k takes the value 1 if, and only if, the sub-population feature(s) of the corresponding pair are equal in value once premultiplied by H_(n+1). For an initial choice of {a_k ∈ {0,1}, k = 1, 2, . . . , |T|·|C|}, feature learning system 405 can look for an appropriate H_(n+1) for which the sub-population feature(s) of the corresponding pair are equal in value once premultiplied by H_(n+1) for every pair, and choose a_k = 1 for k = 1, 2, . . . , |T|·|C|. For the initial choice of H_(n+1), feature learning system 405 can perturb the last found solution a little, as shown by Relationship (7), as follows:

$$H_{(n+1)}^{\text{initial}}=H_{n}+\epsilon\,\frac{\dfrac{\partial L}{\partial H_{n}}\left(\mathbb{I}-H_{n}^{T}H_{n}\right)}{\left\|\dfrac{\partial L}{\partial H_{n}}\left(\mathbb{I}-H_{n}^{T}H_{n}\right)\right\|}. \tag{7}$$

Note that H_n is the optimal solution of Relationship (4) with n replaced by (n−1). So, H_n satisfies all the constraints on H_(n+1) except for the additional constraint imposed when n is incremented by 1 in Relationship (4), that is, the constraint that ℛ(H_(n+1)) not be contained in ℛ(H_n).
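The projected, normalized gradient step of Relationship (6) can be sketched as follows; the Frobenius norm in the denominator is an assumption consistent with the normalized update shown above, and the function name is illustrative.

```python
import numpy as np

def projected_gradient_step(H, grad_L, eps=1e-3):
    """Relationship (6): move H_(n+1) slightly along the gradient of the
    Lagrangian L, projected onto the orthogonal complement of the current
    row space of H_(n+1) and normalized so eps controls the step size."""
    proj = np.eye(H.shape[1]) - H.T @ H          # projector off the row space
    direction = grad_L @ proj
    norm = np.linalg.norm(direction)             # Frobenius norm (assumed)
    if norm < 1e-12:                             # gradient lies in the row space
        return H
    return H + eps * direction / norm
```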
Ideally, the perturbation in Relationship (7) can satisfy all constraints on H_(n+1). For initialization of H_1, feature learning system 405 can start with the K×K identity matrix appended by a zero matrix of dimensions K×(F−K). Table 1 below outlines a procedure by which feature learning system 405 can select H_(n+1):

TABLE 1
1: procedure SEARCH FOR H_(n+1) (start with H_(n+1) = H_(n+1)^initial as in Relationship (7))
2: Start with a_k = 1 ∀ k = 1, 2, . . . , |T|·|C|.
3: Compute Δ_i = Σ_{k=1}^{|T|·|C|} −sign(H_(n+1),i·Z_k) a_k Z_k^T ∀ i = 1, 2, . . . , K, and MAX = max_{k=1,...,|T|·|C|} max_{i=1,...,K} |H_(n+1),i·Z_k|.
4: If MAX < γ, STOP.
5: If ∂L/∂H_(n+1)·(𝕀 − H_(n+1)^T H_(n+1)) = [Δ + H_(n+1)(Σ_{i=1}^{n} (𝕀 − H_i^T H_i))](𝕀 − H_(n+1)^T H_(n+1)) ≠ 0̄, update H_(n+1) as in Relationship (6); else, if the a_k have been updated at least once, STOP; else, try a different initial H_(n+1), say, by changing ε in Relationship (7).
6: Orthonormalize the rows of H_(n+1) following the Gram-Schmidt algorithm.
7: Set a_k = 0 if max_{i=1,...,K} |H_(n+1),i·Z_k| > θ·MAX; otherwise, set a_k = 1.
8: Go back to step 3.
9: end procedure

In some embodiments, a case individual may be associated with multiple of the set(s) of sub-population feature(s) identified from the gross population by feature learning system 405 using the optimization algorithm. While selecting content according to an alternative statistical model may increase a likelihood of feedback by the case individual for one or more sub-population features associated with the case individual, selecting content according to the same alternative statistical model may decrease a likelihood of feedback by the case individual for one or more other of the sub-population feature(s) associated with the case individual. Accordingly, in many embodiments, feature learning system 405 can limit the set(s) of sub-population feature(s) identified from the gross population to those set(s) of sub-population feature(s) having the most impact on engagement by the case individual. For example, for a set of sub-population(s) S of the form S = {H·x = v̄} which satisfy Relationship (1), feature learning system 405 can derive the valuation of a case individual x according to Relationship (8), as follows:

$$v(x)=\sum_{\{S\,:\,x\in S \text{ and } S \text{ satisfies } (1)\}} w_{S,x}\,v(S), \tag{8}$$

where the weights w_{S,x} are described below (w_{S,x} represents the weight of sub-population S in determining the value of the intervention for instance x), and v(S) is the average valuation for the sub-population of case individuals S as found from the randomized experiment, given by v(S) = E[ȳT | S] − E[ȳC | S], where E[ȳT | S] and E[ȳC | S] represent the average metrics from the test and control sub-populations of case individuals, respectively, restricted to the sub-population of case individuals S. If no sub-population of case individuals satisfies the condition in the sum on the right hand side of Relationship (8), the value v(x) is the empty sum, which is 0. The weights w_{S,x} can have an inverse relationship with the volatility of the average feedback metric v(S), as higher volatility means less confidence in the estimate of the average valuation v(S) for the sub-population of case individuals S. Also, the weights w_{S,x} can penalize bigger sub-populations of case individuals, as bigger sub-populations reduce the volatility of v(S) by adding more case individuals, and thus the individual valuations of the case individuals of sub-population S are not necessarily close to the average valuation v(S) of the sub-population of case individuals S.
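Reading Table 1 end to end, one plausible rendering of the whole search loop is sketched below. The parameter names eps, theta, and gamma mirror ε, θ, and γ above, QR factorization stands in for the Gram-Schmidt step, and the stopping logic is simplified, so this is an illustration rather than the embodiments' implementation.

```python
import numpy as np

def search_next_H(H_init, Z, prev_Hs, eps=1e-3, theta=0.5, gamma=1e-6,
                  max_iter=500):
    """Sketch of the Table 1 procedure for finding H_(n+1).

    Z: (F, |T|*|C|) pair-difference matrix; prev_Hs: previously found
    H_i matrices with orthonormal rows (mu_i = 1/2 as in the text)."""
    H = H_init.copy()
    F = H.shape[1]
    a = np.ones(Z.shape[1])
    for _ in range(max_iter):
        HZ = H @ Z
        MAX = np.max(np.abs(HZ))
        if MAX < gamma:                              # step 4
            break
        Delta = -(np.sign(HZ) * a) @ Z.T             # step 3
        push = sum(H @ (np.eye(F) - Hi.T @ Hi) for Hi in prev_Hs)
        direction = (Delta + push) @ (np.eye(F) - H.T @ H)
        norm = np.linalg.norm(direction)
        if norm < 1e-12:                             # step 5: projected gradient vanished
            break
        H = H + eps * direction / norm               # Relationship (6)
        H = np.linalg.qr(H.T)[0].T                   # step 6: orthonormal rows
        a = (np.max(np.abs(H @ Z), axis=0) <= theta * MAX).astype(float)  # step 7
    return H, a
```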
Feature learning system 405 can compute the weights by solving Relationship (9), as follows:

$$\sum_{\{S\,:\,x\in S \text{ and } S \text{ satisfies } (1)\}} w_{S,x}=1, \qquad w_{S,x}\propto\frac{1}{\sigma[v(S)]}\cdot\frac{1}{|S|}, \tag{9}$$

where σ[v(S)] is the volatility of v(S), and |S| represents the size of sub-population S. The case individual, represented by its feature vector x, plays a role in defining the weights through Relationship (8), where the summands are determined by x. In many embodiments, when feature learning system 405 has identified the sub-population(s) of case individuals from the gross population of case individuals, feature learning system 405 can associate the set(s) of sub-population feature(s) associated with the sub-population(s) with the alternative statistical model(s) used to select the test content. Accordingly, the set(s) of sub-population features and the alternative statistical model(s) can be stored in feature database 501 (FIG. 5). In further embodiments, when feature learning system 405 evaluates differences in the average feedback metrics for statistical significance for test content selected according to multiple alternative statistical models, the sub-population feature(s) of sub-population(s) of case individuals identified from the gross population of case individuals that return statistically significant differences in average feedback metrics for multiple alternative statistical models can be associated with the alternative statistical model that results in the most statistically significant difference in average feedback metrics. For example, in some embodiments, the models can involve a level of significance l, in which l is a configurable parameter in the model. In many embodiments, the set(s) of sub-population feature(s) and the associated alternative statistical model(s) stored in feature database 501 (FIG. 5) can be used by a content provider to determine whether to select content to provide to one or more applied individuals according to the incumbent statistical model or one of the alternative statistical model(s). For example, where an applied individual is associated with one of the set(s) of sub-population feature(s) stored in feature database 501 (FIG. 5), a content provider can select content to provide to the applied individual according to the alternative statistical model associated with that set of sub-population feature(s). Meanwhile, where an applied individual is not associated with one of the set(s) of sub-population feature(s) stored in feature database 501 (FIG. 5), the content provider can select content to provide to the applied individual according to the incumbent statistical model. In some embodiments, the content provider can be the operator of system 300 (FIG. 3). However, in other embodiments, the content provider can be a third party.

Personalization System 406

Personalization system 406 can receive a request from an applied individual to access or receive content. For example, the content can comprise a website, and the applied individual may be requesting to visit the website. In many embodiments, personalization system 406 can receive the request from the applied individual to access or receive the content from one of contact computer systems 303 of FIG. 3 (e.g., contact computer system 309 (FIG. 3)) via communication system 404 (FIG. 4). Further, personalization system 406 can provide content to the one of contact computer systems 303 of FIG. 3 (e.g., contact computer system 309 (FIG. 3)) in response to receiving the request from the applied individual to access or receive content.
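Referring back to Relationships (8) and (9), a small sketch follows. It takes, for one case individual x, the list of eligible sub-populations containing x together with their average valuations v(S), volatilities σ[v(S)], and sizes |S| (all assumed precomputed), and returns the valuation v(x); the data layout is an illustrative assumption.

```python
def individual_valuation(eligible_subpops):
    """Relationships (8) and (9) for one case individual x.

    eligible_subpops: list of (v_S, sigma_S, size_S) tuples, one per
    eligible sub-population S with x in S. Weights are proportional to
    1 / (sigma_S * size_S) and normalized to sum to 1; with no eligible
    sub-populations, the empty sum yields 0."""
    if not eligible_subpops:
        return 0.0
    raw = [1.0 / (sigma * size) for _, sigma, size in eligible_subpops]
    total = sum(raw)
    return sum((w / total) * v for w, (v, _, _) in zip(raw, eligible_subpops))
```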
In many embodiments, the techniques described herein provide one or more technical improvements. For example, the techniques described herein provide more finely tuned personalization or customization of webpage content than using any combination of one or more single personalization methods, due to the ability to combine several methods of personalization together. In the absence of the techniques described in this disclosure, other attempts would rely on human guesswork as to which personalization method could be superior to other personalization methods for one or more sets of users, which would involve numerous experiments to test the numerous hypotheses generated based on that human guesswork. The number of possible subsets of users grows exponentially as the number of users increases, which in turn involves an exponential increase in A/B testing. The techniques described herein can advantageously provide an artificially intelligent system that removes the guesswork previously relied upon, in which the set-up involves a single A/B test to understand which method of personalization performs better for which set of users, to more accurately determine the preferred personalization method for a set of users. In some embodiments, before providing content to the one of contact computer systems 303 of FIG. 3 (e.g., contact computer system 309 (FIG. 3)) being used by the applied individual, personalization system 406 can identify the applied individual. Personalization system 406 can use any suitable methodology for identifying the applied individual. For example, personalization system 406 can request and receive identifying information (e.g., user name, password, name, mailing address, telephone number, and/or email address, etc.) about the applied individual. In some embodiments, personalization system 406 can request the identifying information at the one of contact computer systems 303 of FIG. 3 (e.g., contact computer system 309 (FIG. 3)) being used by the applied individual, and the applied individual can use the one of contact computer systems 303 of FIG. 3 (e.g., contact computer system 309 (FIG. 3)) to provide the identifying information to personalization system 406 via communication system 404. In further embodiments, personalization system 406 can generate a user profile for the applied individual, including the identifying information, and store the user profile at identification database(s) 502 (FIG. 5). When the applied individual is already associated with a user profile, personalization system 406 can request and receive part of the identifying information (e.g., user name and password) and retrieve the remaining identifying information, as needed, from identification database(s) 502 (FIG. 5). In many embodiments, identifying the applied individual can comprise determining that the applied individual is associated with one or more sub-population features. Personalization system 406 can use any suitable methodology to determine that the applied individual is associated with the sub-population feature(s). In many embodiments, personalization system 406 can request and receive at least part of the sub-population feature(s) associated with the applied individual when requesting and receiving identifying information for the applied individual. In these or other embodiments, personalization system 406 can determine at least part of the sub-population feature(s) associated with the applied individual by tracking the behavior of the applied individual, such as, for example, using a hypertext transfer protocol cookie or the like.
In some embodiments, personalization system 406 can store the sub-population feature(s) associated with the applied individual at identification database(s) 502 (FIG. 5), and can retrieve the sub-population feature(s) associated with the applied individual in conjunction with receiving the identifying information for the applied individual. In many embodiments, personalization system 406 can reference feature database(s) 501 (FIG. 5) in view of the sub-population feature(s) associated with the applied individual to determine whether to select content to provide to the applied individual according to an incumbent statistical model or one of the alternative statistical model(s); a minimal illustrative sketch of this lookup appears below. For example, when the sub-population feature(s) associated with the applied individual match a set of sub-population feature(s) stored at feature database(s) 501 (FIG. 5), personalization system 406 can provide content to the applied individual (e.g., a version of a website) formatted according to the alternative statistical model associated with that set of sub-population feature(s), rather than, for example, providing content to the applied individual (e.g., a version of a website) formatted according to the incumbent statistical model. For simplicity, the functionality of personalization system 406 generally is described herein as it relates particularly to contact computer system 309 of contact computer system(s) 303 and a single applied individual, but in many embodiments, the functionality of personalization system 406 can be extended to multiple applied individuals and multiple of contact computer system(s) 303, at the same or at different times. Turning ahead now in the drawings, FIG. 6 illustrates a flow chart for an embodiment of a method 600 of providing (e.g., manufacturing) a system. Method 600 is merely exemplary and is not limited to the embodiments presented herein. Method 600 can be employed in many different embodiments or examples not specifically depicted or described herein. In some embodiments, the activities of method 600 can be performed in the order presented. In other embodiments, the activities of method 600 can be performed in any other suitable order. In still other embodiments, one or more of the activities in method 600 can be combined or skipped. In many embodiments, the system can be similar or identical to system 300 (FIG. 3). In many embodiments, method 600 can comprise activity 601 of providing (e.g., manufacturing) a central computer system. For example, the central computer system can be similar or identical to central computer system 301 (FIG. 3). In many embodiments, method 600 can comprise activity 602 of providing (e.g., manufacturing and/or programming) a feature learning system. The feature learning system can be similar or identical to feature learning system 405 (FIG. 4). In some embodiments, activity 602 can be part of activity 601. In many embodiments, method 600 can comprise activity 603 of providing (e.g., manufacturing and/or programming) a personalization system. The personalization system can be similar or identical to personalization system 406 (FIG. 4). In some embodiments, activity 603 can be part of activity 601. In further embodiments, activity 603 can be part of activity 602, and vice versa. In some embodiments, method 600 can comprise activity 604 of providing one or more feature databases. The feature database(s) can be similar or identical to feature database(s) 501 (FIG. 5).
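Referring back to the feature database(s) 501 lookup described above, the following minimal sketch pictures it; representing stored sub-population feature sets as frozensets mapped to alternative models is an assumed data layout, not the embodiments' schema.

```python
def select_statistical_model(user_features, feature_db, incumbent_model):
    """Return the alternative model associated with a stored sub-population
    feature set that the applied individual matches; otherwise fall back to
    the incumbent model.

    user_features: set of the applied individual's sub-population features.
    feature_db: dict mapping frozenset(feature set) -> alternative model."""
    for feature_set, alternative_model in feature_db.items():
        if feature_set <= user_features:     # every stored feature matches
            return alternative_model
    return incumbent_model
```

Content for the applied individual (e.g., the version of a website) would then be selected and formatted according to the returned model.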
In some embodiments, method 600 can comprise activity 605 of providing one or more identification databases. The identification database(s) can be similar or identical to identification database(s) 502 (FIG. 5). In other embodiments, activity 605 can be omitted. In further embodiments, activity 605 can be part of activity 604, and vice versa. In some embodiments, method 600 can comprise activity 606 of providing one or more contact computer systems. The contact computer system(s) can be similar or identical to contact computer system(s) 303 (FIG. 3). In other embodiments, activity 606 can be omitted. Turning ahead now in the drawings, FIG. 7 illustrates a flow chart for an embodiment of a method 700. Method 700 is merely exemplary and is not limited to the embodiments presented herein. Method 700 can be employed in many different embodiments or examples not specifically depicted or described herein. In some embodiments, the activities of method 700 can be performed in the order presented. In other embodiments, the activities of method 700 can be performed in any other suitable order. In still other embodiments, one or more of the activities in method 700 can be combined or skipped. In many embodiments, method 700 can comprise activity 701 of identifying one or more sub-populations of case individuals from a gross population of case individuals. In many embodiments, performing activity 701 can be similar or identical to identifying one or more sub-populations of case individuals from a gross population of case individuals as described above with respect to system 300 (FIG. 3) and feature learning system 405 (FIG. 4). Further, the gross population of case individuals can be similar or identical to the gross population of case individuals described above with respect to system 300 (FIG. 3) and feature learning system 405 (FIG. 4), the sub-population(s) of case individuals can be similar or identical to the sub-population(s) of case individuals described above with respect to system 300 (FIG. 3) and feature learning system 405 (FIG. 4), and the case individuals can be similar or identical to the case individuals described above with respect to system 300 (FIG. 3) and/or feature learning system 405 (FIG. 4). FIG. 8 illustrates an exemplary activity 701, according to the embodiment of FIG. 7. In many embodiments, activity 701 can comprise activity 801 of identifying a first sub-population of case individuals from the gross population of case individuals. In many embodiments, performing activity 801 can be similar or identical to identifying a first sub-population of case individuals from the gross population of case individuals as described above with respect to system 300 (FIG. 3) and feature learning system 405 (FIG. 4). For example, case individuals of the first sub-population of case individuals can be associated with at least one first sub-population feature. Further, the first sub-population feature(s) can be similar or identical to one or more of the sub-population feature(s) described above with respect to system 300 (FIG. 3) and feature learning system 405 (FIG. 4). FIG. 9 illustrates an exemplary activity 801, according to the embodiment of FIG. 7. In many embodiments, activity 801 can comprise activity 901 of identifying a first control sub-population of case individuals from the gross population of case individuals.
In many embodiments, performing activity 901 can be similar or identical to identifying a first control sub-population of case individuals from the gross population of case individuals as described above with respect to system 300 (FIG. 3) and feature learning system 405 (FIG. 4). Further, the first control sub-population of case individuals can be similar or identical to one of the control sub-populations of case individuals described above with respect to system 300 (FIG. 3) and feature learning system 405 (FIG. 4). In many embodiments, activity 801 can comprise activity 902 of identifying a first test sub-population of case individuals from the gross population of case individuals. In many embodiments, performing activity 902 can be similar or identical to identifying a first test sub-population of case individuals from the gross population of case individuals as described above with respect to system 300 (FIG. 3) and feature learning system 405 (FIG. 4). Further, the first test sub-population of case individuals can be similar or identical to one of the test sub-populations of case individuals described above with respect to system 300 (FIG. 3) and feature learning system 405 (FIG. 4). In many embodiments, activity 801 can comprise activity 903 of presenting first control content to case individuals of the first control sub-population. In many embodiments, performing activity 903 can be similar or identical to presenting first control content to case individuals of the first control sub-population as described above with respect to system 300 (FIG. 3) and feature learning system 405 (FIG. 4). Further, the first control content can be similar or identical to the control content described above with respect to system 300 (FIG. 3) and feature learning system 405 (FIG. 4). In many embodiments, activity 801 can comprise activity 904 of measuring an average feedback metric of the case individuals of the first control sub-population provided in response to being presented the first control content. In many embodiments, performing activity 904 can be similar or identical to measuring an average feedback metric of the case individuals of the first control sub-population provided in response to being presented the first control content as described above with respect to system 300 (FIG. 3) and feature learning system 405 (FIG. 4). Further, the feedback metric can be similar or identical to the feedback metric described above with respect to system 300 (FIG. 3) and feature learning system 405 (FIG. 4). In many embodiments, activity 801 can comprise activity 905 of presenting first test content to case individuals of the first test sub-population. In many embodiments, performing activity 905 can be similar or identical to presenting first test content to case individuals of the first test sub-population as described above with respect to system 300 (FIG. 3) and feature learning system 405 (FIG. 4). Further, the first test content can be similar or identical to the test content described above with respect to system 300 (FIG. 3) and feature learning system 405 (FIG. 4). In many embodiments, activity 801 can comprise activity 906 of measuring an average feedback metric of the case individuals of the first test sub-population provided in response to being presented the first test content.
In many embodiments, performing activity 906 can be similar or identical to measuring an average feedback metric of the case individuals of the first test sub-population provided in response to being presented the first test content as described above with respect to system 300 (FIG. 3) and feature learning system 405 (FIG. 4). In many embodiments, activity 801 can comprise activity 907 of determining that the average feedback metric of the case individuals of the first test sub-population exceeds the average feedback metric of the case individuals of the first control sub-population and that a probability value for a difference of the average feedback metric of the case individuals of the first test sub-population and the average feedback metric of the case individuals of the first control sub-population is less than a predetermined significance level value. In many embodiments, performing activity 907 can be similar or identical to determining that the average feedback metric of the case individuals of the first test sub-population exceeds the average feedback metric of the case individuals of the first control sub-population and that a probability value for a difference of the average feedback metric of the case individuals of the first test sub-population and the average feedback metric of the case individuals of the first control sub-population is less than a predetermined significance level value, as described above with respect to system 300 (FIG. 3) and feature learning system 405 (FIG. 4). Further, the predetermined significance level value can be similar or identical to the predetermined significance level value described above with respect to system 300 (FIG. 3) and feature learning system 405 (FIG. 4). Referring now back to FIG. 8, in many embodiments, activity 801 can be repeated one or more times to identify one or more other sub-populations of case individuals from the gross population of case individuals, as described above with respect to system 300 (FIG. 3) and feature learning system 405 (FIG. 4). The sub-population feature(s) associated with each of the first sub-population of case individuals and the other sub-population(s) of case individuals can differ by at least one sub-population feature. In some embodiments, activity 801 can be performed one or more times with different test content for activities 905-907, as described above with respect to system 300 (FIG. 3) and feature learning system 405 (FIG. 4). Further, when activity 801 is repeated one or more times to identify other sub-population(s) of case individuals, in some embodiments, different control sub-populations of case individuals and test sub-populations of case individuals can be used, and in other embodiments, the same control sub-populations of case individuals and test sub-populations of case individuals can be used. In many embodiments, each repetition of activity 801 can be performed serially, while in other embodiments, the repetitions of activity 801 can be performed in parallel with each other. Referring now back to FIG. 7, in many embodiments, method 700 can comprise activity 702 of receiving a request to receive or access content from an applied individual. In many embodiments, performing activity 702 can be similar or identical to receiving a request to receive or access content from an applied individual as described above with respect to system 300 (FIG. 3) and personalization system 406 (FIG. 4). In many embodiments, activity 702 can be performed after activity 701. In some embodiments, activity 702 can be omitted.
In many embodiments, method 700 can comprise activity 703 of identifying the applied individual. In many embodiments, performing activity 703 can be similar or identical to identifying the applied individual as described above with respect to system 300 (FIG. 3) and personalization system 406 (FIG. 4). In many embodiments, activity 703 can be performed after activity 701 and activity 702. In some embodiments, activity 703 can be omitted. In many embodiments, method 700 can comprise activity 704 of presenting a second version of the content to the applied individual instead of a first version of the content. In many embodiments, performing activity 704 can be similar or identical to presenting a second version of the content to the applied individual instead of a first version of the content as described above with respect to system 300 (FIG. 3) and personalization system 406 (FIG. 4). In many embodiments, activity 704 can be performed after activity 701, activity 702, and activity 703. In some embodiments, activity 704 can be performed in response to activity 703. In further embodiments, activity 704 can be omitted. In several embodiments, the systems described herein advantageously can transform a traditionally subjective process performed by humans, which applied subjective human manual determinations or guesses, into a streamlined process that uses data from a single A/B test to determine which personalization method is working better for a particular subset of users. By incorporating the rules described in this disclosure, the systems described herein provide an improvement over the conventional approaches by reducing the number of A/B tests performed in order to understand which personalization method to select for a set of users, and by removing human guesswork in determining whether a group of users prefers a different method of personalization. The techniques described herein are rooted in computer technologies that overcome existing problems in database systems, and can increase available bandwidth, reduce network traffic, and efficiently manage databases. Conventional database systems cannot handle massive amounts of network traffic or database requests while keeping latency to an acceptable level and/or avoiding server crashes. The techniques described herein can provide a technical solution, such as one that utilizes databases in a novel arrangement. This technology-based solution marks an improvement over existing computing capabilities and functionalities related to database systems by improving bandwidth, reducing network traffic, and permitting greater database efficiency (e.g., by processing combined read/delete requests). The systems can improve the way databases store, retrieve, delete, and/or transmit data. In many embodiments, the techniques described herein can provide several technological improvements. Specifically, the techniques described herein can reduce network load by enabling users to find relevant information faster. By reducing the network load, the systems and methods described herein can help to improve performance of the CPU, memory, and cache for recommendation systems. This improvement can directly reduce the number of service calls per second and/or can translate into better usage of various system components like the CPU, memory, hard disk, etc.
As described above, the methods and systems described herein can process huge amounts of data efficiently and allow recommendation systems to isolate or filter for one or more sets of users that have different preferences for content than other sets of users, in the technical field of customizing webpage content based on user preferences and user information. Once the webpage content preferred by a set of users is determined, the set of users can be presented with content that is relevant, as opposed to a generalized pool of items, hence reducing the number of pages the users would browse in order to reach the content in which they are interested. This approach is different from conventional approaches, which applied subjective, manual human determinations. In many embodiments, the methods described herein cover the identification itself of different sets of users, and thus can advantageously be applied with any approach to identifying content recommendations. This level of personalization for transmitting content recommendations does not exist in conventional approaches to targeting content to particular groups or sets of users. Because these described methods cover the identification of different sets of users with similar content preferences itself, in some embodiments, any approach of identifying content recommendations can be used. The level of personalization in the timing of when the user reviews the customized content does not exist in conventional approaches, which typically transmit recommendations to each user at a preset time after a certain action, or to all of a group of users at the same time after a certain event. A number of embodiments include a system. The system can include one or more processors and one or more non-transitory memory storage devices storing computer instructions configured to run on the one or more processors and perform identifying one or more sub-populations of case individuals from a gross population of case individuals. Identifying the one or more sub-populations of case individuals from the gross population of case individuals can comprise identifying a first sub-population of case individuals from the gross population of case individuals. The one or more sub-populations of case individuals can comprise the first sub-population of case individuals. Case individuals of the first sub-population of case individuals can be associated with at least one first sub-population feature. Identifying the first sub-population of case individuals from the gross population of case individuals can comprise identifying a first control sub-population of case individuals from the gross population of case individuals. Identifying the first control sub-population of case individuals from the gross population of case individuals can comprise grouping together first case individuals randomly selected from case individuals of the gross population of case individuals to form the first control sub-population of case individuals. The first case individuals can be associated with the at least one first sub-population feature. Identifying the first sub-population of case individuals from the gross population of case individuals also can comprise identifying a first test sub-population of case individuals from the gross population of case individuals.
Identifying the first test sub-population of case individuals from the gross population of case individuals can comprise grouping together second case individuals randomly selected from the case individuals of the gross population of case individuals to form the first test sub-population of case individuals. The first case individuals can be exclusive from the second case individuals. The second case individuals can be associated with the at least one first sub-population feature. The first case individuals and the second case individuals together can comprise the case individuals of the first sub-population of case individuals. Identifying the first sub-population of case individuals from the gross population of case individuals further can comprise presenting first control content to case individuals of the first control sub-population. The first control content can be selected according to a first statistical model. Identifying the first sub-population of case individuals from the gross population of case individuals additionally can comprise measuring an average feedback metric of the case individuals of the first control sub-population provided in response to being presented the first control content. Identifying the first sub-population of case individuals from the gross population of case individuals further can comprise presenting first test content to case individuals of the first test sub-population. The first test content can be selected according to a second statistical model different than the first statistical model. Identifying the first sub-population of case individuals from the gross population of case individuals additionally can comprise measuring an average feedback metric of the case individuals of the first test sub-population provided in response to being presented the first test content. Identifying the first sub-population of case individuals from the gross population of case individuals further can comprise determining that the average feedback metric of the case individuals of the first test sub-population exceeds the average feedback metric of the case individuals of the first control sub-population and that a probability value for a difference of the average feedback metric of the case individuals of the first test sub-population and the average feedback metric of the case individuals of the first control sub-population is less than a predetermined significance level value. Various embodiments include a method. The method can be implemented via execution of computer instructions configured to run at one or more processors and configured to be stored at one or more non-transitory memory storage devices. The method can include identifying one or more sub-populations of case individuals from a gross population of case individuals. Identifying the one or more sub-populations of case individuals from the gross population of case individuals can comprise identifying a first sub-population of case individuals from the gross population of case individuals, where the one or more sub-populations of case individuals comprise the first sub-population of case individuals. Case individuals of the first sub-population of case individuals are associated with at least one first sub-population feature. Identifying the first sub-population of case individuals from the gross population of case individuals comprises identifying a first control sub-population of case individuals from the gross population of case individuals.
Identifying the first control sub-population of case individuals from the gross population of case individuals comprises grouping together first case individuals randomly selected from case individuals of the gross population of case individuals to form the first control sub-population of case individuals. The first case individuals are associated with the at least one first sub-population feature. Identifying the first sub-population of case individuals from the gross population of case individuals also can comprise identifying a first test sub-population of case individuals from the gross population of case individuals, where identifying the first test sub-population of case individuals from the gross population of case individuals comprises grouping together second case individuals randomly selected from the case individuals of the gross population of case individuals to form the first test sub-population of case individuals. The first case individuals are exclusive from the second case individuals. The second case individuals are associated with the at least one first sub-population feature. The first case individuals and the second case individuals together comprise the case individuals of the first sub-population of case individuals. The method also can include presenting first control content to case individuals of the first control sub-population, where the first control content is selected according to a first statistical model; measuring an average feedback metric of the case individuals of the first control sub-population provided in response to being presented the first control content; and presenting first test content to case individuals of the first test sub-population, where the first test content is selected according to a second statistical model different than the first statistical model. The method further can include measuring an average feedback metric of the case individuals of the first test sub-population provided in response to being presented the first test content; determining that the average feedback metric of the case individuals of the first test sub-population exceeds the average feedback metric of the case individuals of the first control sub-population; and determining that a probability value for a difference of the average feedback metric of the case individuals of the first test sub-population and the average feedback metric of the case individuals of the first control sub-population is less than a predetermined significance level value. Many embodiments can include a system. The system can comprise one or more processors and one or more non-transitory memory storage devices storing computer instructions configured to run on the one or more processors and perform identifying one or more sub-populations of case individuals from a gross population of case individuals. Identifying the one or more sub-populations of case individuals from the gross population of case individuals comprises identifying a first sub-population of case individuals from the gross population of case individuals. The one or more sub-populations of case individuals comprise the first sub-population of case individuals. Case individuals of the first sub-population of case individuals are associated with at least one first sub-population feature. Identifying the first sub-population of case individuals from the gross population of case individuals comprises identifying a first control sub-population of case individuals from the gross population of case individuals.
Identifying the first control sub-population of case individuals from the gross population of case individuals comprises grouping together first case individuals randomly selected from case individuals of the gross population of case individuals to form the first control sub-population of case individuals. The first case individuals are associated with the at least one first sub-population feature. Identifying the first sub-population of case individuals from the gross population of case individuals also comprises identifying a first test sub-population of case individuals from the gross population of case individuals. Identifying the first test sub-population of case individuals from the gross population of case individuals comprises grouping together second case individuals randomly selected from the case individuals of the gross population of case individuals to form the first test sub-population of case individuals. The first case individuals are exclusive from the second case individuals. The second case individuals are associated with the at least one first sub-population feature. The first case individuals and the second case individuals together comprise the case individuals of the first sub-population of case individuals. The computer instructions also perform presenting first control content to case individuals of the first control sub-population, where the first control content is selected according to a first statistical model; measuring an average feedback metric of the case individuals of the first control sub-population provided in response to being presented the first control content; and presenting first test content to case individuals of the first test sub-population, where the first test content is selected according to a second statistical model different than the first statistical model. The computer instructions further perform measuring an average feedback metric of the case individuals of the first test sub-population provided in response to being presented the first test content, and determining that the average feedback metric of the case individuals of the first test sub-population exceeds the average feedback metric of the case individuals of the first control sub-population and that a probability value for a difference of the average feedback metric of the case individuals of the first test sub-population and the average feedback metric of the case individuals of the first control sub-population is less than a predetermined significance level value. The computer instructions additionally perform, after identifying the one or more sub-populations of case individuals from the gross population of case individuals, identifying a first applied individual visiting a website. Identifying the first applied individual visiting the website comprises determining that the first applied individual is associated with the at least one first sub-population feature, and presenting a second version of the website to the first applied individual instead of a first version of the website in response to determining that the first applied individual is associated with the at least one first sub-population feature. The first version of the website is selected according to the first statistical model, and the second version of the website is selected according to the second statistical model. A number of embodiments can include a system. The system can comprise one or more processors and one or more non-transitory media storing computer instructions configured to run on the one or more processors and perform certain acts. The acts can include identifying a first sub-population of case individuals from a gross population of the case individuals.
The first sub-population of the case individuals can be associated with at least one first sub-population feature. The acts also can include presenting first test content to a first test sub-population of the case individuals of the first sub-population of the case individuals. The first test content can be selected according to a first statistical model. The first statistical model can include measuring a first test sub-population average feedback metric based on first test content feedback provided from the first test sub-population of the case individuals in response to being presented the first test content. The first statistical model also can include determining that the first test sub-population average feedback metric exceeds a first control population average feedback metric of a first control population of the case individuals. The first control population of the case individuals can be distinct from the first test sub-population of the case individuals. The first statistical model further can include determining that a probability value for a difference between the first test sub-population average feedback metric and the first control population average feedback metric can be less than a predetermined significance level value. Various embodiments can include a method. The method can be implemented via execution of computer instructions configured to run at one or more processors and configured to be stored at one or more non-transitory memory storage devices. The method can include identifying a first sub-population of case individuals from a gross population of the case individuals. The first sub-population of the case individuals can be associated with at least one first sub-population feature. The method also can include presenting first test content to a first test sub-population of the case individuals of the first sub-population of the case individuals. The first test content can be selected according to a first statistical model. The first statistical model can include measuring a first test sub-population average feedback metric based on first test content feedback provided from the first test sub-population of the case individuals in response to being presented the first test content. The first statistical model also can include determining that the first test sub-population average feedback metric exceeds a first control population average feedback metric of a first control population of the case individuals. The first control population of the case individuals can be distinct from the first test sub-population of the case individuals. The first statistical model further can include determining that a probability value for a difference between the first test sub-population average feedback metric and the first control population average feedback metric can be less than a predetermined significance level value. Although the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes may be made without departing from the spirit or scope of the disclosure. Accordingly, the disclosure of embodiments is intended to be illustrative of the scope of the disclosure and is not intended to be limiting. It is intended that the scope of the disclosure shall be limited only to the extent required by the appended claims.
For example, to one of ordinary skill in the art, it will be readily apparent that any element of FIGS. 1-9 may be modified, and that the foregoing discussion of certain of these embodiments does not necessarily represent a complete description of all possible embodiments. For example, one or more of the activities of the methods described herein may include different activities and be performed by many different elements, in many different orders. As another example, the elements within central computer system 301 and/or contact computer system(s) 303 in FIG. 3 can be interchanged or otherwise modified. Generally, replacement of one or more claimed elements constitutes reconstruction and not repair. Additionally, benefits, other advantages, and solutions to problems have been described with regard to specific embodiments. The benefits, advantages, solutions to problems, and any element or elements that may cause any benefit, advantage, or solution to occur or become more pronounced, however, are not to be construed as critical, required, or essential features or elements of any or all of the claims, unless such benefits, advantages, solutions, or elements are stated in such claim. Moreover, embodiments and limitations disclosed herein are not dedicated to the public under the doctrine of dedication if the embodiments and/or limitations: (1) are not expressly claimed in the claims; and (2) are, or are potentially, equivalents of express elements and/or limitations in the claims under the doctrine of equivalents.
124,230
11860881
DETAILED DESCRIPTION OF THE INVENTION

Throughout the specification and claims, the following terms take the meanings explicitly associated herein, unless the context clearly dictates otherwise. The phrase “in one embodiment” as used herein does not necessarily refer to the same embodiment, though it may. Furthermore, the phrase “in another embodiment” as used herein does not necessarily refer to a different embodiment, although it may. Thus, as described below, various embodiments of the invention may be readily combined, without departing from the scope or spirit of the invention. In addition, as used herein, the term “or” is an inclusive “or” operator, and is equivalent to the term “and/or,” unless the context clearly dictates otherwise. The term “based on” is not exclusive and allows for being based on additional factors not described, unless the context clearly dictates otherwise. In addition, throughout the specification, the meaning of “a,” “an,” and “the” include plural references. The meaning of “in” includes “in” and “on.” As used herein, the term “event data” refers to computing data that is collected about a computing system, including, for example, an action, characteristic, condition (or state), or state change of the computing system. For example, such events may be about a computing system's performance, actions taken by the computing system, or the like. Event data may be obtained from various computing log files generated by the computer's operating system and/or other monitoring applications. However, event data is not restricted by a file format or structure from which the event data is obtained. As used herein, an event record refers to data associated with a single event. As used herein, the term “report” refers to one or more visualizations of search query results. For example, a report may include a table of data, a timeline, a chart, a “field picker,” or the like. In one embodiment, the report is interactive, enabling a user to selectively view pieces of raw data used to generate the report. For example, if the report lists users sorted based on the number of times each user has logged into the system, each user is selectable to view detailed records of that user's login events. Briefly described is a mechanism for generating a report derived from data, such as event data, stored on a plurality of distributed nodes. In one embodiment, the analysis is generated using a “divide and conquer” algorithm, such that each distributed node analyzes locally stored event data while an aggregating node combines these analysis results to generate the report. In one embodiment, each distributed node also transmits a list of event data references associated with the analysis result to the aggregating node. The aggregating node may then generate a global ordered list of data references based on the list of event data references received from each distributed node. Subsequently, in response to a user selection of a range of global event data, the report may dynamically retrieve event data from one or more distributed nodes for display according to the global order.

Illustrative Operating Environment

FIG. 1 shows components of one embodiment of an environment in which the invention may be practiced. Not all the components may be required to practice the invention, and variations in the arrangement and type of the components may be made without departing from the spirit or scope of the invention.
As shown, system100ofFIG.1includes local area networks (“LANs”)/wide area networks (“WANs”)-(network)107, client devices101-103, and distributed search server109. One embodiment of client devices101-103is described in more detail below in conjunction withFIG.2. Generally, however, client devices101-103may include virtually any computing device capable of communicating over a network to send and receive information, including a search query, analysis results of a search query, lists of event data references, collections of event data, and the like. Client devices101-103are referred to interchangeably herein as “distributed computing devices”, “distributed nodes”, or the like. In one embodiment, one or more of client devices101-103may be configured to operate within a business or other entity to perform a variety of services for the business or other entity. For example, client devices101-103may be configured to operate as a web server, an accounting server, a production server, an inventory server, or the like. However, client devices101-103are not constrained to these services and may also be employed, for example, as an end-user computing node, in other embodiments. Further, it should be recognized that more or fewer client devices may be included within a system such as described herein, and embodiments are therefore not constrained by the number or type of client devices employed. The set of such client devices101-103may include devices that typically connect using a wired or wireless communications medium such as personal computers, servers, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, or the like. In one embodiment, at least some of client devices101-103may operate over a wired and/or wireless network. In some embodiments, client devices101-103may include virtually any portable computing device capable of receiving and sending a message over a network, such as network107. Client devices101-103also may include at least one client application that is configured to capture and record event data and/or related metadata. However, the client application need not be limited to merely providing event data and related metadata, and may also provide other information, and/or provide for a variety of other services, including, for example, monitoring for events within and/or between client devices. The client application may further provide information that identifies itself, including a type, capability, name, and the like. Such information may be provided in a network packet, or the like, sent between other client devices, distributed search server109, or other computing devices. Network107is configured to couple network devices with other computing devices, including distributed search server109and client devices101-103. Network107is enabled to employ any form of computer readable media for communicating information from one electronic device to another. Also, network107can include the Internet in addition to local area networks (LANs), wide area networks (WANs), direct connections, such as through a universal serial bus (USB) port, other forms of computer-readable media, or any combination thereof. On an interconnected set of LANs, including those based on differing architectures and protocols, a router acts as a link between LANs, enabling messages to be sent from one to another. 
In addition, communication links within LANs typically include twisted wire pair or coaxial cable, while communication links between networks may utilize analog telephone lines, full or fractional dedicated digital lines including T1, T2, T3, and T4, and/or other carrier mechanisms including, for example, E-carriers, Integrated Services Digital Networks (ISDNs), Digital Subscriber Lines (DSLs), wireless links including satellite links, or other communications links known to those skilled in the art. Moreover, communication links may further employ any of a variety of digital signaling technologies, including without limit, for example, Digital Signal (DS)-0, DS-1, DS-2, DS-3, DS-4, Optical Carrier (OC)-3, OC-12, OC-48, or the like. Furthermore, remote computers and other related electronic devices could be remotely connected to either LANs or WANs via a modem and temporary telephone link. In one embodiment, network107may be configured to transport information of an Internet Protocol (IP). In essence, network107includes any communication method by which information may travel between computing devices. Additionally, communication media typically embodies computer-readable instructions, data structures, program modules, or other transport mechanism and includes any information delivery media. By way of example, communication media includes wired media such as twisted pair, coaxial cable, fiber optics, wave guides, and other wired media and wireless media such as acoustic, Radio Frequency (RF), infrared, and other wireless media. In some embodiments, network107may be further configurable as a wireless network, which may further employ a plurality of access technologies including 2nd (2G), 3rd (3G), 4th (4G) generation radio access for cellular systems, WLAN, Wireless Router (WR) mesh, and the like. In one non-limiting example, network107, when configured as a wireless network, may enable a radio connection through a radio network access such as Global System for Mobile communication (GSM), General Packet Radio Services (GPRS), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (WCDMA), and the like. Distributed search server109includes virtually any network device usable to receive a search query, distribute sub-queries of the search query among client devices101-103, synthesize the results of the sub-queries, and display a report. Distributed search server109may, for example, be configured to merge lists of event data references into a global ordered list of event data references, enabling ranges of event data to be selectively retrieved from one or more distributed nodes. Devices that may operate as distributed search server109include various network devices, including, but not limited to personal computers, desktop computers, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, server devices, network appliances, and the like. AlthoughFIG.1illustrates distributed search server109as a single computing device, the invention is not so limited. For example, one or more functions of the distributed search server109may be distributed across one or more distinct network devices. Moreover, distributed search server109is not limited to a particular configuration. Thus, in one embodiment, distributed search server109may contain a plurality of network devices to perform digest aggregation and calculation of approximate order statistics therefrom. 
Similarly, in another embodiment, distributed search server109may operate as a plurality of network devices within a cluster architecture, a peer-to-peer architecture, and/or even within a cloud architecture. Thus, the invention is not to be construed as being limited to a single environment, and other configurations and architectures are also envisaged. Illustrative Client Device FIG.2shows one embodiment of client device200that may be included in a system implementing embodiments of the invention. Client device200may include many more or fewer components than those shown inFIG.2. However, the components shown are sufficient to disclose an illustrative embodiment for practicing the present invention. Client device200may represent, for example, one embodiment of at least one of client devices101-103ofFIG.1. As shown in the figure, client device200includes processing unit (CPU)202in communication with a mass memory226via a bus234. Client device200also includes a power supply228, one or more network interfaces236, an audio interface238, a display240, and an input/output interface248. Power supply228provides power to client device200. Network interface236includes circuitry for coupling client device200to one or more networks, and is constructed for use with one or more communication protocols and technologies including, but not limited to, global system for mobile communication (GSM), code division multiple access (CDMA), time division multiple access (TDMA), user datagram protocol (UDP), transmission control protocol/Internet protocol (TCP/IP), Short Message Service (SMS), general packet radio service (GPRS), Wireless Application Protocol (WAP), ultra wide band (UWB), Institute of Electrical and Electronics Engineers (IEEE) 802.16 Worldwide Interoperability for Microwave Access (WiMax), Session Initiation Protocol (SIP)/Real-time Transport Protocol (RTP), or any of a variety of other communication protocols. Network interface236is sometimes known as a transceiver, transceiving device, or network interface card (NIC). Audio interface238is arranged to produce and receive audio signals such as the sound of a human voice. For example, audio interface238may be coupled to a speaker and microphone (not shown) to enable telecommunication with others and/or generate an audio acknowledgement for some action. Display240may be a liquid crystal display (LCD), gas plasma, light emitting diode (LED), or any other type of display used with a computing device. Display240may also include a touch sensitive screen arranged to receive input from an object such as a stylus or a digit from a human hand. Client device200also comprises input/output interface248for communicating with external devices, such as a keyboard, or other input or output devices not shown inFIG.2. Input/output interface248can utilize one or more communication technologies, such as USB, infrared, Bluetooth™, or the like. Mass memory226includes a Random Access Memory (RAM)204, a Read Only Memory (ROM)222, and other storage means. Mass memory226illustrates an example of computer readable storage media (devices) for storage of information such as computer readable instructions, data structures, program modules or other data. Mass memory226stores a basic input/output system (“BIOS”)224for controlling low-level operation of client device200. The mass memory also stores an operating system206for controlling the operation of client device200. 
It will be appreciated that this component may include a general-purpose operating system such as a version of UNIX, or LINUX™, or a specialized client communication operating system such as Windows Mobile™, or the Symbian® operating system. The operating system may include, or interface with, a Java virtual machine module that enables control of hardware components and/or operating system operations via Java application programs. Mass memory226further includes one or more data storage208, which can be utilized by client device200to store, among other things, applications214and/or other data. For example, data storage208may also be employed to store information that describes various capabilities of client device200. The information may then be provided to another device based on any of a variety of events, including being sent as part of a header during a communication, sent upon request, or the like. At least a portion of the information may also be stored on a disk drive or other computer-readable storage device230within client device200. Data storage208may further store event data and metadata210and local search results212. Such event data and metadata210and local search results212may also be stored within any of a variety of other computer-readable storage devices, including, but not limited to a hard drive, a portable storage device, or the like, such as illustrated by computer-readable storage device230. Applications214may include computer executable instructions which, when executed by client device200, transmit, receive, and/or otherwise process network data. Other examples of application programs include calendars, search programs, email clients, IM applications, SMS applications, Voice Over IP (VOIP) applications, contact managers, task managers, transcoders, database programs, word processing programs, security applications, spreadsheet programs, games, search programs, data log recording programs, and so forth. Applications214may include, for example, local search module220. Local search module220may process a sub-query, returning analysis results and a list of event data references associated with the analysis results, as described herein. Illustrative Network Device FIG.3shows one embodiment of a network device300, according to one embodiment of the invention. Network device300may include many more or fewer components than those shown. The components shown, however, are sufficient to disclose an illustrative embodiment for practicing the invention. Network device300may be configured to operate as a server, client, peer, or any other device. Network device300may represent, for example, distributed search server109ofFIG.1. Network device300includes processing unit302, an input/output interface332, video display adapter336, and a mass memory, all in communication with each other via bus326. The mass memory generally includes RAM304, ROM322, and one or more permanent mass storage devices, such as hard disk drive334, tape drive, optical drive, and/or floppy disk drive. The mass memory stores operating system306for controlling the operation of network device300. Any general-purpose operating system may be employed. Basic input/output system (“BIOS”)324is also provided for controlling the low-level operation of network device300. 
As illustrated inFIG.3, network device300also can communicate with the Internet, or some other communications network, via network interface unit330, which is constructed for use with various communication protocols including the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol. Network interface unit330is sometimes known as a transceiver, transceiving device, or network interface card (NIC). Network device300also comprises input/output interface332for communicating with external devices, such as a keyboard, or other input or output devices not shown inFIG.3. Input/output interface332can utilize one or more communication technologies, such as USB, infrared, Bluetooth™, or the like. The mass memory as described above illustrates another type of computer-readable media, namely computer-readable storage media and/or processor-readable storage medium. Computer-readable storage media (devices) may include volatile, nonvolatile, removable, and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Examples of computer readable storage media include RAM, ROM, EEPROM, flash memory or other memory technology, Compact Disc ROM (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transitory physical medium which can be used to store the desired information and which can be accessed by a computing device. As shown, data storage308may include a database, text, spreadsheet, folder, file, or the like, that may be configured to maintain and store user account identifiers, user profiles, email addresses, IM addresses, and/or other network addresses; or the like. Data storage308may further include program code, data, algorithms, and the like, for use by a processor, such as central processing unit (CPU)302to execute and perform actions. In one embodiment, at least some of data store308might also be stored on another component of network device300, including, but not limited to computer-readable storage medium328, hard disk drive334, or the like. Data storage308may further store ordered list of event data references310. Ordered list of event data references310may include a list of event data references received from a plurality of distributed nodes. In one embodiment, the ordered list of event data references is generated by sorting data references received from each distributed node according to a common field, such as a timestamp, a number, a string, or the like. In one embodiment, each element of the ordered list includes a reference to the distributed node the event data is stored on, an offset or other pointer to the event data on that distributed node, and optionally the value used to sort the ordered list. The mass memory also stores program code and data. One or more applications314are loaded into mass memory and run on operating system306. Examples of application programs may include transcoders, schedulers, calendars, database programs, word processing programs, Hyper Text Transfer Protocol (HTTP) programs, customizable user interface programs, Internet Protocol Security (IPSec) applications, encryption programs, security programs, SMS message servers, account managers, and so forth. Distributed search module318may also be included as application programs within applications314. 
Distributed search module318may be configured and arranged to receive a query, generate sub-queries for each of a specified set of distributed devices, and aggregate results of these sub-queries to generate a report, as described further herein. Generalized Operation The operation of certain aspects will now be described with respect toFIGS.4-6.FIGS.4-5provide logical flow diagrams illustrating certain aspects, whileFIG.6illustrates an example of a scalable interactive display of distributed data.FIG.4illustrates a logical flow diagram of one embodiment of a process for generating and displaying an interactive report. In one embodiment, process400may be implemented on distributed search server109. Process400begins, after a start block, at block402, where a search query (hereinafter “query”) is received. In one embodiment, the received query targets data, such as “event data” (also referred to as “events”), that is distributed across a plurality of specified computing devices, such as client devices101-103. In one embodiment, sub-queries are generated for each of the specified computing devices and submitted to each corresponding computing device for processing. For example, if the received query asks for a count of system log entries that contain the word “error”, then a sub-query is generated for each of the specified computing devices, where each sub-query counts the number of events derived from system log entries that contain the word “error” stored on that device. The received query may specify which computing devices to search in a number of ways. In one embodiment, the received query specifies particular computing devices or groups of computing devices by name, network address, or the like. In another embodiment, computing devices are specified based on attributes, such as operating system, hardware components (e.g. CPU, web cam, network adapter, etc.), form factor (e.g. laptop, desktop, server, tablet, virtual machine, smartphone, etc.), and the like. In another embodiment, a query may specify all of the plurality of computing devices. In one embodiment, the received query is received from a user, such as a system administrator. However, queries may also be automatically generated by a software agent. In one embodiment, a query may be automatically generated at periodic intervals, such as every hour or every Saturday night. In another embodiment, a query may be generated in response to an event, such as installation of a software patch, or in response to a metric crossing a threshold, such as an unusually large volume of network traffic. The distributed data to be searched may be stored on the specified computing devices in many ways. In one embodiment, the distributed data may include events, as defined herein, that have been recorded and stored by each of the specified computing devices. However, the distributed data may be generated at any time and in any manner, including partitioning a data set across specified computing devices after the query has been received. Also, while in one embodiment the distributed data comprises “events” or “event data” as defined herein, the distributed data may include any kind of data, structured or unstructured. The received query may include one or more analyses to be performed on the distributed event data by the computing devices storing that data. 
For example, an analysis may include counting the number of events that satisfy a condition, deriving statistical information about events (including distributions, histograms, Nth percentile rankings, and the like), grouping events, sorting events, and the like. That is, the analysis may be performed in response to the query. In one embodiment, the received query may also specify the order of query results. For example, a query requesting system log entries that contain the word “error” may be ordered based on the time the system log entry was generated (timestamp). A similar query may order entries based on an error severity value field in the event derived from the system log entry. Multiple orderings and nested orderings are also contemplated, such as ordering first by an error severity value and then by a timestamp. The process proceeds to block404, where sub-query results are received from each of the specified computing devices. In one embodiment, the sub-query results include analysis results corresponding to the one or more analyses specified in the received query. In one embodiment, analysis results are derived from raw event data stored on each of the specified devices, but analysis results do not include the actual raw event data. The sub-query results additionally include one or more lists of event references. In one embodiment, each event reference includes (1) an identifier that uniquely identifies the event on the computing device that generated it, and (2) a value usable to order the event (hereinafter “order value”). In one embodiment, the unique identifier includes a serial number assigned to the event as the event is created. In this example, the unique identifier is unique to a given computing device; events from different computing devices may be assigned the same unique identifier. However, globally unique identifiers, such as GUIDs, are similarly contemplated. In one embodiment, the order value of an event may be a timestamp, such as the time when an event was created. However, any value of any data type is similarly contemplated. Other examples of order values include integers, such as a severity of error, and a string, such as a username. In one embodiment, a computing device creates an event reference for each event used to generate an analysis result. For example, consider three devices A, B, and C that contain 14, 37, and 94 system log events containing the word “error,” respectively. If a query to count all of the system log events that contain the word “error” is received, device A will return a count of 14 as well as a list of 14 references, one reference for each of the 14 counted events. Similarly, device B will return a count of 37 as well as a list of 37 references, and device C will return a count of 94 and a list of 94 references. Note that at this time, none of the raw event data has been transmitted to the distributed search server. The process proceeds to block406, where a global ordered list of event references is generated based on each of the returned lists of event references. In one embodiment, each entry in the global ordered list includes the content of an event reference, as described above, as well as an identifier of the computing device that the event was found on. Continuing the example above, consider if the first 7 of device A's events were the first to be recorded, followed by the first 50 of device C's, followed by all 37 of device B's, followed by the last 44 of device C, and finally the last 7 of device A. 
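As an illustration only (the patent does not prescribe an implementation), the following Python sketch shows one way an aggregating node might merge such per-device reference lists into a global ordered list; the record layout, function name, and timestamps are assumptions chosen to mirror the device A/B/C example.

import heapq

def merge_reference_lists(per_device_lists):
    # per_device_lists maps a device identifier to a list of
    # (order_value, event_id) pairs, each list already sorted locally
    # by order_value (here, a timestamp).
    tagged = (
        [(order_value, device_id, event_id) for order_value, event_id in refs]
        for device_id, refs in per_device_lists.items()
    )
    # heapq.merge performs an n-way merge of the pre-sorted lists, so each
    # entry in the global list is fully qualified by its device identifier.
    return list(heapq.merge(*tagged))

# Devices A, B, and C return 14, 37, and 94 references, respectively; only
# references (never raw event data) cross the network at this stage. The
# timestamps reproduce the recording order described above.
global_list = merge_reference_lists({
    "A": [(t, i) for i, t in enumerate(list(range(0, 7)) + list(range(138, 145)))],
    "B": [(t, i) for i, t in enumerate(range(57, 94))],
    "C": [(t, i) for i, t in enumerate(list(range(7, 57)) + list(range(94, 138)))],
})
assert len(global_list) == 145

A request for a range of events can then be served by grouping the selected slice of this list by device identifier and fetching only those events.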
In this simple example, the global ordered list would include all 145 event references in this same order, where each event reference is fully qualified to include a computing device identifier in addition to that event's unique identifier. In this way, a user may select a range from the global ordered list of event references, and only the actual event data contained in the selected range is downloaded. The process proceeds to block408, where a request to display a range of events is received. Continuing the above example, the global ordered list includes 145 fully qualified event references. A request may be received to display the last 5 events, the second 50 events, the first event, all 145 of the events, or any other sub-range of the events. The process proceeds to block410, where event data is requested from one or more of the computing devices based on the range of event references requested from the global ordered list. For example, if the first 50 events are requested, then the first 50 entries in the global ordered list are retrieved. Continuing the example above, the first 7 events from device A would be requested and the first 43 events from device C would be requested. Thus, a total of 50 events are retrieved from two different computing devices, without retrieving any unnecessary events. In one embodiment, these requests are made in parallel; however, requests may be submitted to individual devices serially. Also, in one embodiment, a range of events may be requested from a single computing device in a single network transaction; however, requests may also be made individually. The process proceeds to block412, where the raw data is displayed. In one embodiment, event data retrieved from individual computing devices are displayed according to the global order. In one embodiment, the requested raw data is displayed with the one or more analysis results. In this way, a user may see the analysis results as well as portions of the underlying data. The process then proceeds to a return block. FIG.5illustrates a logical flow diagram generally showing one embodiment of a process that an individual computing device may perform in the course of performing a distributed search query. In one embodiment, process500is performed by one of client devices101-103. Process500begins, after a start block, at block502, where a sub-query is received from a distributed search server. The process then proceeds to block504, where data such as events are analyzed according to the received sub-query. In one embodiment, as events are analyzed, events that contribute to the requested analysis are referenced in a list of event references. The process then proceeds to block506, where the results of the analysis and the list of event references are transmitted to the computing device that submitted the sub-query. In one embodiment, this device is distributed search server109. The process then proceeds to block508, where a request for one or more pieces of event data is received. In one embodiment, the request includes a contiguous range of event data. In another embodiment, individual pieces of event data are individually requested. The process then proceeds to block510, where the requested pieces of event data are transmitted to the requestor. The process then proceeds to a return block. FIG.6illustrates one non-limiting example of an interactive report600; however, other layouts containing other types of information are similarly contemplated. 
The interactive report was generated based on a search query602of all domain name system (dns) lookups the specified clients performed “yesterday”. The report is broken into three sections: a timeline604, a field picker606, and an event data view608. The timeline includes a bar graph610depicting how many dns lookups were performed each hour. Field picker606is generally used to select fields612from all of the fields available on a given type of event. In this example, field picker606has been used to select two of the 24 fields associated with dns lookup events: client host and client IP. Thus, the event data displayed in the event data view will contain only these two fields. Finally, the event data view608displays raw event data, currently 50 results per page. A total of 562 events were gathered from 79 clients. However, only the first 50 events have been downloaded to the distributed search server at the time this display was generated. If the user were to select another range of 50 events, the distributed search server could retrieve these 50 events from one or more of the clients in real-time as discussed above in conjunction withFIGS.4and5. It will be understood that figures, and combinations of steps in the flowchart-like illustrations, can be implemented by computer program instructions. These program instructions may be provided to a processor to produce a machine, such that the instructions, which execute on the processor, create means for implementing the actions specified in the flowchart block or blocks. The computer program instructions may be executed by a processor to cause a series of operational steps to be performed by the processor to produce a computer-implemented process such that the instructions, which execute on the processor, provide steps for implementing the actions specified in the flowchart block or blocks. These program instructions may be stored on a computer readable medium or machine readable medium, such as a computer readable storage medium. Accordingly, the illustrations support combinations of means for performing the specified actions, combinations of steps for performing the specified actions, and program instruction means for performing the specified actions. It will also be understood that each block of the flowchart illustration, and combinations of blocks in the flowchart illustration, can be implemented by modules such as special purpose hardware-based systems which perform the specified actions or steps, or combinations of special purpose hardware and computer instructions. The above specification, examples, and data provide a complete description of the manufacture and use of the composition of the described embodiments. Since many embodiments can be made without departing from the spirit and scope of this description, the embodiments reside in the claims hereinafter appended.
33,898
11860882
DETAILED DESCRIPTION Search systems support identifying, for received queries, search result items (e.g., products or content) from item databases. Item databases may specifically be for content platforms or product listing platforms such as the EBAY item listing platform, developed by EBAY INC., of San Jose, California. Search systems may include search interfaces that provide search refinement functionality that is implemented to systematically browse the World Wide Web, typically for the purpose of refining search results. For example, standard search refinement features may be used in a search system to refine results provided in response to a query. In conventional search systems, search interfaces may provide standard search system refinement user interface (“search refinement interface”) features. In particular, search refinement interface features may be a predefined set of attributes (i.e., a standard set of item characteristics), a derived set of attributes (i.e., characteristics of all returned search result items), or simply values of attributes of returned search result items. For example, a predefined set of attributes (e.g., a color filter with a standard listing of colors, a size filter with a standard listing of sizes) or a derived set of attributes (i.e., entire set of attributes corresponding to a plurality of items provided as search result items) or the values (i.e., an isolated value of an attribute—size 10) may be provided as standard search refinement features. However, such standard search refinement features lead to inefficient interfaces because the refinement options provided may not be applicable to the search being performed, thus limiting how effectively a user may further refine or search via the interface. Moreover, a standard search refinement user interface may include several inapplicable attributes that may make the interface and search functionality cumbersome. For example, a user may have to scroll through several attributes and corresponding attribute values to identify which attributes are relevant to the search being performed and how to further refine existing search results. As such, an alternative approach for providing search refinement interfaces to support efficient refinement of items in an item listing database would improve computing operations for ease of performing search refinement. Embodiments of the present disclosure are directed to a search system with relevance-based search refinement. Relevance-based search refinement may be provided using a refinement user interface having selectable guidance attributes. At a high level, guidance attributes are characteristics of a plurality of items (e.g., Shoe Size) having corresponding values (e.g., size 11). The selectable guidance attributes are top-ranked characteristics of items (e.g., search result items) based on historical user interactions with the items. The selectable guidance attributes are displayed to provide additional search functionality (e.g., a guidance-attribute control that may perform embedded-value search operations including refining existing search results or executing a new search). For a selected guidance attribute, a guidance-attribute control may be provided. In particular, relevance-based search refinement may further be provided using a guidance-attribute control (i.e., a user interface control object) having embedded selectable values of a guidance attribute. 
The guidance-attribute control may have values embedded in the control such that the values are directly selectable to initiate an embedded-value search operation. In this way, the selectable guidance attributes may provide a first level of relevance-based search refinement, and the guidance-attribute control may support further relevance-based search refinement via integrated embedded-value search operation functionality using the selectable values in the control. Initially, selectable guidance attributes may be provided for relevance-based search refinement. Guidance attributes may be identified based on guidance information. For instance, guidance information may be associated with demand data for a plurality of items (e.g., items in an item listing database). Demand data may generally relate to popularity of item characteristics (e.g., attributes and associated values). Popularity of item characteristics may be based on user interactions with items returned for a search query (e.g., search result items). For example, guidance information may include an identified ranked set of characteristics of items based on historical user interactions with the items from a same/similar search query. As another example, guidance information may include top-ranked characteristics of items (e.g., search result items) based on historical user interactions with the items. Guidance information may be dynamically updated based on user interactions with search result items in relation to the search query. Guidance attributes may be identified using the guidance information in relation to a search query based on historical user interactions with items (provided for a same or similar search query). For example, in response to a search for “Running Shoes,” top-ranked characteristics of items may be identified based on historical user interactions with items responsive to a search query “Running Shoes.” Such top-ranked characteristics may include guidance attributes. In the “Running Shoes” example, guidance attributes may include, for instance, “Brand Name,” “Material Type,” and “Color.” Such top-ranked characteristics may further include guidance values. In the “Running Shoes” example, guidance values may include, for instance, “Nike,” “Leather,” and “Red.” One or more selectable guidance attributes may be provided via a refinement user interface for relevance-based search refinement based on the guidance attributes identified in relation to a search query. For example, for search query “Running Shoes,” selectable guidance attributes may include, for instance, “Brand Name,” “Material Type,” and “Color.” In some embodiments, the top four ranked guidance attributes may be provided as selectable guidance attributes. In some other embodiments, “Price” may always be presented along with the top ranked guidance attributes. Selecting a guidance attribute may initiate the presentation of a guidance-attribute control having embedded-value search operation functionality. Relevance-based search refinement may be provided using a guidance-attribute control having embedded selectable values of a guidance attribute. In particular, the guidance-attribute control may include values of a selected guidance attribute such that a user may quickly select at least one of the values for use in refining search results. A user may use the guidance-attribute control to initiate additional embedded-value search operation functionality. 
In particular, the guidance-attribute control includes values that are directly selectable to initiate an embedded-value search operation. For example, a user may select a guidance attribute (e.g., “Color”) to cause display of a guidance-attribute control corresponding to the selected guidance attribute. The guidance-attribute control may include embedded values of the guidance attribute (e.g., “Black,” “Red,” and “White”). Using the guidance-attribute control, a user may select a value (e.g., “Red”) to execute an embedded-value search operation. In some embodiments, such embedded selectable values may be top-ranked values of items (e.g., search result items) based on historical user interactions with the items. For example, for search query “Running Shoes,” when the guidance attribute “Color” is selected, embedded values may include, for instance, “Black,” “Red,” and “Blue,” etc. In some embodiments, the top twelve ranked values may be provided as selectable values. Selecting one or more values via the guidance-attribute control may initiate an embedded-value search operation. In embodiments, relevance-based search refinement may be provided based on selectively available embedded-value search operations. The selectively available embedded-value search operations may be based on the selected attribute (e.g., “Color,” “Brand,” etc.). In particular, upon selecting a guidance attribute, a guidance-attribute control may be provided. The available options for selecting values to execute the embedded-value search operations for use via the guidance-attribute control may be determined based on the selected guidance attribute. In particular, the selected guidance attribute may be analyzed to determine what embedded-value search operations are optimally available. For example, if the guidance attribute “Color” is selected, selectable values may be presented using a color wheel with selectable color values and an auto-complete text-based search box that may be used to identify different color values. As another example, if the guidance attribute “Brand” is selected, selectable values may be presented using selectable ranked values and an auto-complete text-based search box. Executing an embedded-value search operation may identify a subset of the items using a selected value or provide a dynamically updatable count of items that will be provided upon refinement using the selected value. For example, selecting a value (e.g., “Red”) may execute an embedded-value search operation to provide a subset of “Red” items (e.g., red running shoes). As another example, selecting a value (e.g., “Red”) may execute an embedded-value search operation to provide a dynamically updatable count of items that are “Red” (e.g., 10 red running shoes). Values may be selected in a variety of manners (e.g., directly selecting a value, using text-based value searching, using color-based value searching). As an example, a value may be directly selected using a pill (i.e., a button). As an example of text-based value searching, a value (e.g., “Nike”) may be selected from a list provided by typing text into a search bar. Text-based value searching may use the search bar limited to auto-complete of available search refinement terms (e.g., guidance attributes and/or values). As “N” is typed into the search bar, a list of “Nike” and “New Balance” may be presented as selectable values. As an example of color-based value searching, a color wheel or color gradient may be provided. 
For instance, a color wheel may be used to select a specific shade or color (e.g., “Burgundy”). Using a color wheel may more exactly match colors of interest during refinement. In this way, the guidance-attribute control supports relevance-based search refinement while integrating embedded-value search operation functionality using the selectable values in the control. In operation, items may be displayed on a webpage (in response to a search query) along with selectable guidance attributes. The guidance attributes are top-ranked characteristics of items based on historical user interactions with the items provided in response to the search query. Upon selection of a guidance attribute, a guidance-attribute control having embedded selectable values may be displayed. Selection of a value may execute an embedded-value search operation. The embedded-value search operation may either identify a subset of the items using the selected value or provide a dynamically updatable count of items that will be provided upon refinement using the selected value. Embodiments of the present disclosure have been described with reference to several inventive features associated with a search system with relevance-based search refinement. Inventive features described include identifying and providing selectable guidance attributes for relevance-based search refinement that trigger a guidance-attribute control that supports further relevance-based search refinement via integrated embedded-value search operation functionality using selectable values in the control. Functionality of the embodiments of the present disclosure has further been described, by way of an implementation and anecdotal examples, to demonstrate that the operations for relevance-based search refinement are an unconventional ordered combination of operations that operate with a search operations manager as a solution to a specific problem in a search technology environment to improve computing operations in search systems. Overall, these improvements result in less CPU computation, smaller memory requirements, and increased flexibility in search systems. With reference toFIG.1,FIG.1illustrates an exemplary search system100in which implementations of the present disclosure may be employed. In particular,FIG.1shows high level functionality of search system100. The search system100may provide relevance-based search refinement. The search system100may receive search queries. The search query may indicate an item or item type. For example, as depicted, the search query may be “Running Shoes.” In response to receiving a search query, the search system100may identify a set of items for the search query in an item database. For the search query, “Running Shoes,” a set of running shoes may be identified in the item database. The identified set of items may be presented (e.g., via a user interface). Guidance attributes may be presented along with the identified set of items. The guidance attributes may be provided for selection to perform relevance-based search refinement. The guidance attributes may be identified using popularity of item characteristics based on user interactions with items returned for a same or similar search query (e.g., search result items). In particular, the guidance attributes may be identified from a ranked set of characteristics of items based on historical user interactions with the items. 
For example, in response to the search for “Running Shoes,” top-ranked characteristics of items (e.g., “Brand Name,” “Material Type,” and “Color”) may be identified based on historical user interactions with items responsive to the search query “Running Shoes.” A guidance attribute may be selected to trigger the presentation of a guidance-attribute control. As depicted, the guidance-attribute control has embedded selectable values of the selected guidance attribute. The guidance-attribute control may provide further relevance-based search refinement based on selected values via the control. Selecting values may result in executing embedded-value search operations. In one example, a single value may be selected via the guidance-attribute control. This selection may result in an embedded-value search operation to identify a subset of items that have the selected value from the identified set of items (e.g., identify the subset of items 1 to n—from the set of items 1 to n). In another example, multiple values may be selected via the guidance-attribute control. This selection may result in an embedded-value search operation to identify a subset of items that have the selected values from the identified set of items (e.g., the subset of items 1 to n—from the set of items 1 to n). In an additional example, the guidance-attribute control may provide a dynamically updatable count of the items that will be returned upon refinement using a selected value. It should be appreciated that such a dynamically updatable count of items may be provided prior to executing the embedded-value search operation to refine the set of items. Similarly, the subset of items may be identified prior to the execution of the embedded-value search operation to refine the set of items. Identifying the subset of items in this manner may reduce any lag between execution of the embedded-value search operation to refine the set of items and providing the subset of items. In another example, a value may be selected via the guidance-attribute control using text-based value searching. In text-based value searching, a value (e.g., “Nike”) may be selected from a list provided by typing text into a search bar. Text-based value searching may use the search bar limited to auto-complete of available search refinement terms (e.g., guidance attributes and/or values). For instance, as “N” is typed into the search bar, a list of “Nike” and “New Balance” may be presented as selectable values. Selecting one of the text-based values may result in an embedded-value search operation to identify a subset of items that have the selected value from the identified set of items (e.g., the subset of items 1 to n—from the set of items 1 to n). As a further example, a value may be selected via the guidance-attribute control using color-based value searching. Color-based value searching may use a color wheel, as depicted, or color gradient. For instance, the color wheel may be used to select a specific shade or color (e.g., “Burgundy”). Using a color wheel may more exactly match colors of interest during refinement. Selecting a color value may result in an embedded-value search operation to identify a subset of items that have the selected value from the identified set of items (e.g., the subset of items 1 to n—from the set of items 1 to n). Such examples of relevance-based search refinement by selecting values via the guidance-attribute control may be combined in any manner. 
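By way of a hedged sketch only (the in-memory item representation and function names below are assumptions, not the patent's implementation), the two embedded-value search operations described above might be expressed as follows:

def refine(items, attribute, selected_values):
    # Return the subset of the current result set whose value for the
    # selected guidance attribute is among the selected values.
    selected = set(selected_values)
    return [item for item in items if item.get(attribute) in selected]

def pending_count(items, attribute, selected_values):
    # Dynamically updatable count: how many items a refinement would
    # return, computed before the refinement is actually executed.
    return len(refine(items, attribute, selected_values))

running_shoes = [
    {"title": "Trail Runner", "Brand": "Nike", "Color": "Red"},
    {"title": "Road Racer", "Brand": "New Balance", "Color": "Blue"},
    {"title": "Track Spike", "Brand": "Nike", "Color": "Black"},
]
assert pending_count(running_shoes, "Color", ["Red"]) == 1
assert len(refine(running_shoes, "Brand", ["Nike"])) == 2

Computing the subset eagerly while only the count is displayed matches the lag-reduction point above: the subset is ready the moment the refinement is executed.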
In this way, the guidance-attribute control supports relevance-based search refinement while integrating embedded-value search operation functionality using selectable values in the control. With reference toFIG.2A,FIG.2Aillustrates an exemplary search system200in which implementations of the present disclosure may be employed. In particular,FIG.2Ashows a high level architecture of search system200having components in accordance with implementations of the present disclosure. Among other components or engines not shown, search system200includes a computing device280. The computing device280communicates via a network270and with a search engine210. The search engine210includes search refinement engine220having guidance-attribute control222, guidance attribute224, guidance value226, and search operations manager228, item database230, supply data240, demand data242, guidance manager250, and search refinement engine client252. Each of the identified components may represent a plurality of different instances of the component, for example, guidance attribute224may be various guidance attributes and guidance value226may be various guidance values, as described below. The components of the search system200may communicate with each other over one or more networks (e.g., public network or virtual private network “VPN”) as shown with network270. The network270may include, without limitation, one or more local area networks (LANs) and/or wide area networks (WANs). The computing device280may be a client computing device that corresponds to the computing device described herein with reference toFIG.7. The components of the search system200may operate together to provide functionality for relevance-based search refinement, as described herein. In particular, the relevance-based search refinement may use a guidance-attribute control having embedded selectable values of a guidance attribute. As discussed, the search system200supports processing operation requests (e.g., search queries, search refinement, other search system requests from the computing device280). For example, query results for a search query from the search system may include identified items as well as additional relevant information, where the additional information (e.g., guidance attributes and associated values) may be identified by the guidance manager250and provided via search refinement engine220. The search engine210is responsible for supporting operations for providing search refinement as described herein. The search engine210in the search system200may access items in an item listing platform. The search engine210may be part of an item listing platform that supports access to the item database230. The items in the item database may be stored based on a data structure having a structural arrangement of items (e.g., an item category and an item classification system). For example, the item database230may be implemented with a database schema that stores item listings based on item titles. Available items in the item database230may be identified using, for example, supply data240. Supply data240may include information related to item database230. For example, the supply data240may include items and associated item information. Associated item information may comprise attributes and associated values for the items. As a non-limiting example, a pair of shoes may have attributes that include “Brand,” “Material,” and “Color,” along with associated values “Nike,” “Leather,” and “Black,” respectively. 
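Purely to make the supply-data example concrete, the following minimal sketch shows item records carrying attribute/value pairs and the derivation of the attribute values available in an item set; the schema and names are hypothetical, not drawn from the patent:

from collections import defaultdict

# Hypothetical supply-data records: each item carries attribute/value pairs.
supply_data = [
    {"item_id": 1, "Brand": "Nike", "Material": "Leather", "Color": "Black"},
    {"item_id": 2, "Brand": "Nike", "Material": "Mesh", "Color": "Red"},
    {"item_id": 3, "Brand": "Adidas", "Material": "Leather", "Color": "Red"},
]

def available_values(items):
    # Collect, per attribute, the set of values present in the item set.
    values = defaultdict(set)
    for item in items:
        for attribute, value in item.items():
            if attribute != "item_id":
                values[attribute].add(value)
    return values

assert available_values(supply_data)["Color"] == {"Black", "Red"}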
The guidance manager250manages guidance information that may be used by the search system200. Guidance information may be associated with demand data for a plurality of items (e.g., items in the item database230). Demand data may generally relate to popularity of item characteristics (e.g., attributes and associated values). Popularity of item characteristics may be based on user interactions with items returned for a search query (e.g., search result items). For example, guidance information may include an identified ranked set of characteristics of items based on historical user interactions with the items. As another example, guidance information may include top-ranked characteristics of items (e.g., search result items) based on historical user interactions with the items. Guidance information may be dynamically updated based on user interactions with search result items in relation to the search query. Guidance manager250may identify relevant guidance information in relation to a search query based on historical user interactions with items (provided for a same or similar search query). For example, in response to a search for “Running Shoes,” the guidance manager250may identify a ranked set of characteristics (i.e., top-ranked characteristics) of items based on historical user interactions with items responsive to a search query “Running Shoes.” Such ranked characteristics may indicate guidance attributes. In the “Running Shoes” example, guidance attributes may include, for instance, “Brand Name,” “Material Type,” and “Color.” Such ranked characteristics may further include guidance values. In the “Running Shoes” example, ranked values may include, for instance, “Nike,” “Leather,” and “Red.” The search refinement engine220supports relevance-based search refinement by implementing guidance attribute224in accordance with the search system200. Guidance attribute224may leverage a ranked set of characteristics (i.e., top-ranked characteristics) of items (e.g., guidance attributes) identified by the guidance manager250in relation to a search query. In particular, the guidance attribute224supports providing selectable guidance attribute(s). One or more guidance attributes may be presented based on the guidance attributes identified in relation to a search query. For example, for search query “Running Shoes,” guidance attributes may include, for instance, “Brand Name,” “Material Type,” and “Color.” In some embodiments, the top four ranked guidance attributes may be provided as selectable guidance attributes. In some other embodiments, “Price” may always be presented along with the top ranked guidance attributes. Selecting a guidance attribute may initiate the presentation of a guidance-attribute control (e.g., guidance-attribute control222) having embedded-value search operation functionality. The search refinement engine220further supports relevance-based search refinement by implementing the guidance-attribute control222in accordance with the search system200. The guidance-attribute control222supports providing embedded selectable values of a guidance attribute that initiate embedded-value search operations. The guidance-attribute control222may be presented upon selection of a guidance attribute. In this way, the guidance-attribute control222supports providing embedded selectable values of the selected guidance attribute. The embedded selectable values provided via the guidance-attribute control222may be provided by guidance value226. 
Such embedded selectable values may be top-ranked values of items (e.g., search result items) based on historical user interactions with the items. For example, for search query “Running Shoes,” when the guidance attribute “Color” is selected, guidance values may include, for instance, “Black,” “Red,” and “Blue,” etc. In some embodiments, the top twelve ranked guidance values may be provided as selectable guidance values. Selecting one or more guidance values via the guidance-attribute control222may initiate an embedded-value search operation. The search refinement engine client252operates with the search refinement engine220to provide the functionality as described herein (i.e., relevance-based search refinement). In particular, the search refinement engine client252may implement the presentation of a refinement user interface having selectable guidance attributes. Such selectable guidance attributes may be provided using the guidance attribute224. Receiving a selection of a guidance attribute may trigger a further presentation of a refinement user interface having embedded selectable values of a selected guidance attribute. Such embedded selectable values may be provided using the guidance value226in conjunction with the guidance-attribute control222. Receiving a selection of an embedded selectable value may initiate embedded-value search operations. With reference toFIG.2B,FIG.2Billustrates search refinement engine220with components that support search refinement using embedded-value search operations. The embedded-value search operations may be performed based on a selected value (i.e., value226A, value226B, . . . , and value226N). In particular, one or more values (i.e., value226A, value226B, . . . , and value226N) may be selected via guidance-attribute control222to implement an embedded-value search operation. The guidance-attribute control222is provided based on a selection of a guidance attribute (i.e., attribute224A, attribute224B, . . . , and attribute224N). The guidance attribute(s) is provided based on supply data240(e.g., indicating available items in item database230) and demand data242(indicating popularity of item characteristics based on historical interactions with the characteristics of items provided for the same or similar search queries). With reference toFIG.2C,FIG.2Cillustrates search refinement engine220with components that support search refinement using selectively available embedded-value search operations. The selectively available embedded-value search operations may be based on the selected attribute (i.e., attribute224). In particular, upon selecting a guidance attribute (i.e., attribute224), the guidance attribute control222may be provided. The available embedded-value search operations may be determined using the search operations manager228. In particular, the search operations manager228may analyze the selected guidance attribute (i.e., attribute224) to determine what embedded-value search operations are optimally available. For example, if the guidance attribute “Color” is selected, selectable values may be presented using a color wheel with selectable color values and an auto-complete text-based search box that may be used to identify different color values. As another example, if the guidance attribute “Brand” is selected, selectable values may be presented using selectable ranked values and an auto-complete text-based search box. 
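The attribute-dependent selection of controls might be sketched as follows; the widget names are placeholders standing in for the color wheel, ranked value pills, and auto-complete search box described above, and the mapping is an assumption for illustration:

def select_value_controls(attribute):
    # Choose which value-selection widgets to present for a selected
    # guidance attribute, per the examples above: color attributes get a
    # color wheel, other attributes get ranked value pills; both cases
    # also get an auto-complete text-based search box.
    if attribute == "Color":
        return ["color_wheel", "autocomplete_search_box"]
    return ["ranked_value_pills", "autocomplete_search_box"]

assert select_value_controls("Color") == ["color_wheel", "autocomplete_search_box"]
assert select_value_controls("Brand") == ["ranked_value_pills", "autocomplete_search_box"]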
With reference to FIGS. 3A-3D, example implementations are provided of a search system having relevance-based search refinement. The example implementations may be performed using the search system described herein. In embodiments, the relevance-based search refinement may be provided using selectable guidance attributes. In further embodiments, the relevance-based search refinement may be provided using a guidance-attribute control (i.e., a user interface control object) having embedded selectable values of a guidance attribute.

Turning to FIG. 3A, an example implementation is provided of a search system having relevance-based search refinement. In particular, in FIG. 3A the search system 300 may provide relevance-based search refinement using a selected attribute that triggers a guidance-attribute control, which supports further relevance-based search refinement via integrated embedded-value search operation functionality using selectable values in the control. The search system 300 may receive a search query, "Running Shoes." In response to receiving the search query, the search system 300 may identify a set of running shoes in an item database. The identified set of items may be presented via a user interface (i.e., items 1 to n). Selectable guidance attributes may be presented along with the identified set of items. A guidance attribute may be selected. Selection of the guidance attribute may result in the presentation of a guidance-attribute control. The guidance-attribute control may be presented as a partial overlay that comprises a user-interface element indicating that the embedded guidance-attribute control is active and that the items responsive to the initial search query are inactive. The guidance-attribute control may have embedded selectable values of the selected guidance attribute. The embedded selectable values may be an identified ranked set of characteristics (i.e., top-ranked values) of the selected guidance attribute (e.g., 1 to 8) based on historical user interactions with items responsive to the search query "Running Shoes." A value may be selected (i.e., 7), resulting in an embedded-value search operation to identify a subset of items from the identified set of items that have the selected value (e.g., the subset of items 1 to n from the set of items 1 to n).

Turning to FIG. 3B, an example implementation is provided of a search system having relevance-based search refinement. In particular, in FIG. 3B the search system 300 may provide relevance-based search refinement using a selected attribute that triggers a guidance-attribute control, which supports further relevance-based search refinement via integrated embedded-value search operation functionality using selectable values in the control. The search system 300 may receive a search query, "Running Shoes." In response to receiving the search query, the search system 300 may identify a set of running shoes in an item database. The identified set of items may be presented via a user interface (i.e., items 1 to n). Selectable guidance attributes may be presented along with the identified set of items. A guidance attribute may be selected. Selection of the guidance attribute may result in the presentation of a guidance-attribute control. The guidance-attribute control may have embedded selectable values of the selected guidance attribute. Multiple values may be selected, resulting in a dynamically updatable count, provided via the guidance-attribute control, of the items that will remain upon refinement using the selected values.
Upon executing an embedded-value search operation using the selected multiple values, a subset of the identified set of items that have the selected values may be identified (e.g., the subset of items 1 to n from the set of items 1 to n).

Turning to FIG. 3C, an example implementation is provided of a search system having relevance-based search refinement. In particular, in FIG. 3C the search system 300 may provide relevance-based search refinement using a selected attribute that triggers a guidance-attribute control, which supports further relevance-based search refinement via integrated embedded-value search operation functionality using selectable values in the control. The search system 300 may receive a search query, "Running Shoes." In response to receiving the search query, the search system 300 may identify a set of running shoes in an item database. The identified set of items may be presented via a user interface (i.e., items 1 to n). Selectable guidance attributes may be presented along with the identified set of items. A guidance attribute may be selected. Selection of the guidance attribute may result in the presentation of a guidance-attribute control. The guidance-attribute control may have embedded selectable values of the selected guidance attribute. In particular, a value may be selected via the guidance-attribute control using text-based value searching. As depicted, as "N" is typed into the search bar, a list including "Nike" and "New Balance" may be presented as selectable values. From these values, a value (e.g., "Nike") may be selected from the list provided by typing text into the search bar. Text-based value searching may use a search bar limited to auto-completion of available search refinement terms (e.g., guidance attributes and/or values). Selecting one of the text-based values may result in an embedded-value search operation to identify a subset of the identified set of items that have the selected value (e.g., the subset of items 1 to n from the set of items 1 to n).
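A minimal sketch of the kind of restricted auto-completion described for FIG. 3C appears below, assuming the set of available refinement values for the selected attribute is already known. Case-insensitive prefix matching is an assumption for illustration rather than a stated matching algorithm.

    def autocomplete(prefix, available_values):
        """Suggest only values that are valid refinement terms.

        The search bar is limited to auto-completion of available terms, so
        free-form text that matches no known value yields no suggestions.
        """
        p = prefix.casefold()
        return [v for v in available_values if v.casefold().startswith(p)]

    brand_values = ["Nike", "New Balance", "Adidas", "Asics"]
    print(autocomplete("N", brand_values))   # ['Nike', 'New Balance']
    print(autocomplete("Ni", brand_values))  # ['Nike']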
Turning to FIG. 3D, an example implementation is provided of a search system having relevance-based search refinement. In particular, in FIG. 3D the search system 300 may provide relevance-based search refinement using a selected attribute that triggers a guidance-attribute control, which supports further relevance-based search refinement via integrated embedded-value search operation functionality using selectable values in the control. The search system 300 may receive a search query, "Running Shoes." In response to receiving the search query, the search system 300 may identify a set of running shoes in an item database. The identified set of items may be presented via a user interface (i.e., items 1 to n). Selectable guidance attributes may be presented along with the identified set of items. A guidance attribute may be selected. Selection of the guidance attribute may result in the presentation of a guidance-attribute control. The guidance-attribute control may have embedded selectable values of the selected guidance attribute. In particular, a value may be selected via the guidance-attribute control using color-based value searching. Color-based value searching may use a color gradient, as depicted. For instance, the color wheel may be used to select a specific shade or color (e.g., "Brown"). Using a color gradient may allow colors of interest to be matched more exactly during refinement. Selecting a color value may result in an embedded-value search operation to identify a subset of the identified set of items that have the selected value (e.g., the subset of items 1 to n from the set of items 1 to n).

With reference to FIGS. 4, 5, and 6, flow diagrams are provided illustrating methods for implementing a search system that provides relevance-based search refinement. The methods may be performed using the search system described herein. In embodiments, one or more computer storage media having computer-executable instructions embodied thereon may, when executed by one or more processors, cause the one or more processors to perform the methods in the search system.

Turning to FIG. 4, a flow diagram is provided that illustrates a method 400 for implementing a search system that provides relevance-based search refinement using a guidance-attribute control having embedded selectable values of a guidance attribute. Initially, at block 410, a search query is received. The search query may indicate an item or item type. For example, the search query may be "Running Shoes." At block 420, supply data is accessed based on the search query. Such supply data may include items and associated item information of available items in an item database. At block 430, demand data is accessed based on the search query. Demand data may include information related to the popularity of item characteristics (e.g., guidance attributes and associated values). At block 440, selectable guidance attribute(s) are provided. Such selectable guidance attributes may be based on the demand data. In particular, the guidance attributes may be from a ranked set of characteristics of items based on historical user interactions with the items. At block 450, a guidance-attribute control is provided based on a selected guidance attribute. The guidance-attribute control provides embedded selectable values of the selected guidance attribute that initiate embedded-value search operations.

Turning to FIG. 5, a flow diagram is provided that illustrates a method 500 for implementing a search system for relevance-based search refinement. Initially, at block 510, a selected guidance attribute is received. At block 520, a guidance-attribute control is provided based on the selected guidance attribute. At block 530, one or more selected values from the guidance-attribute control are received. At block 540, an embedded-value search operation is executed. In some embodiments, the method 500 may progress to block 550, where a subset of items is identified.

Turning to FIG. 6, a flow diagram is provided that illustrates a method 600 for implementing a search system that provides relevance-based search refinement. Initially, at block 610, a selected guidance attribute is received. At block 620, a search operations manager is accessed. At block 630, selectively available embedded-value search operations are determined based on the selected guidance attribute. At block 640, a guidance-attribute control is provided with the selectively available embedded-value search operations.
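To make the flow of method 400 concrete, the following sketch wires blocks 410-450 together as plain Python functions over in-memory data. The data shapes and helper names are assumptions for illustration; an actual embodiment may realize these blocks across distributed components.

    # A sketch of method 400 (blocks 410-450) under assumed data shapes:
    # supply data is a list of item dicts; demand data maps a query to
    # ranked (attribute, [values]) pairs.
    SUPPLY = [
        {"name": "Trail Runner", "Brand Name": "Nike", "Color": "Red"},
        {"name": "Road Racer", "Brand Name": "New Balance", "Color": "Blue"},
    ]
    DEMAND = {
        "Running Shoes": [
            ("Brand Name", ["Nike", "New Balance"]),
            ("Color", ["Red", "Blue", "Black"]),
        ],
    }

    def handle_query(query):
        items = list(SUPPLY)                       # block 420: access supply data
        ranked = DEMAND.get(query, [])             # block 430: access demand data
        attributes = [attr for attr, _ in ranked]  # block 440: guidance attributes
        return items, attributes

    def guidance_attribute_control(query, attribute):
        """Block 450: embedded selectable values for the selected attribute."""
        for attr, values in DEMAND.get(query, []):
            if attr == attribute:
                return values
        return []

    items, attrs = handle_query("Running Shoes")   # block 410: query received
    print(attrs)                                   # ['Brand Name', 'Color']
    print(guidance_attribute_control("Running Shoes", "Color"))  # ['Red', 'Blue', 'Black']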
With reference to the search system 200, embodiments described herein support providing relevance-based search refinement for a search system. The search system components refer to integrated components that implement the search system. The integrated components refer to the hardware architecture and software framework that support functionality using the search system components. The hardware architecture refers to physical components and interrelationships thereof, and the software framework refers to software providing functionality that may be implemented with hardware operated on a device. The end-to-end software-based search system may operate within the other components to operate computer hardware to provide search system functionality. As such, the search system components may manage resources and provide services for the search system functionality. Any other variations and combinations thereof are contemplated with embodiments of the present invention.

By way of example, the search system may include an API library that includes specifications for routines, data structures, object classes, and variables that may support the interaction between the hardware architecture of the device and the software framework of the search system. These APIs include configuration specifications for the search system such that the components therein may communicate with each other for the novel functionality described herein.

With reference to FIG. 2A, FIG. 2A illustrates an exemplary search system 200 in which implementations of the present disclosure may be employed. In particular, FIG. 2A shows a high-level architecture of the search system 200 having components in accordance with implementations of the present disclosure. It should be understood that this and other arrangements described herein are set forth only as examples. In addition, a system, as used herein, refers to any device, process, or service, or combination thereof. As used herein, engine is synonymous with system unless otherwise stated. A system may be implemented using components, managers, engines, or generators as hardware, software, firmware, a special-purpose device, or any combination thereof. A system may be integrated into a single device or it may be distributed over multiple devices. The various components, managers, engines, or generators of a system may be co-located or distributed. For example, although discussed for clarity as a singular component, operations discussed may be performed in a distributed manner. The system may be formed from other systems and components thereof.

Having identified various components of the search system 200, it is noted that any number of components may be employed to achieve the desired functionality within the scope of the present disclosure. Although the various components of FIG. 2A are shown with lines for the sake of clarity, in reality, delineating various components is not so clear, and, metaphorically, the lines may more accurately be grey or fuzzy. Further, although some components of FIG. 2A are depicted as single components, the depictions are exemplary in nature and in number and are not to be construed as limiting for all implementations of the present disclosure. The search system 200 functionality may be further described based on the functionality and features of the above-listed components. Other arrangements and elements (e.g., machines, interfaces, functions, orders, and groupings of functions, etc.) may be used in addition to or instead of those shown, and some elements may be omitted altogether. Further, many of the elements described herein are functional entities that may be implemented as discrete or distributed components or in conjunction with other components, and in any suitable combination and location.
Various functions described herein as being performed by one or more entities may be carried out by hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory.

Having described an overview of embodiments of the present invention, an exemplary operating environment in which embodiments of the present invention may be implemented is described below in order to provide a general context for various aspects of the present invention. Referring initially to FIG. 7 in particular, an exemplary operating environment for implementing embodiments of the present invention is shown and designated generally as computing device 700. Computing device 700 is but one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing device 700 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated.

The invention may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program modules, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program modules, including routines, programs, objects, components, data structures, etc., refer to code that performs particular tasks or implements particular abstract data types. The invention may be practiced in a variety of system configurations, including hand-held devices, consumer electronics, general-purpose computers, more specialized computing devices, etc. The invention may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network.

With reference to FIG. 7, computing device 700 includes a bus 710 that directly or indirectly couples the following devices: memory 712, one or more processors 714, one or more presentation components 716, input/output ports 718, input/output components 720, and an illustrative power supply 722. Bus 710 represents what may be one or more busses (such as an address bus, data bus, or combination thereof). Although the various blocks of FIG. 7 are shown with lines for the sake of clarity, in reality, delineating various components is not so clear, and, metaphorically, the lines would more accurately be grey and fuzzy. For example, one may consider a presentation component such as a display device to be an I/O component. Also, processors have memory. We recognize that such is the nature of the art and reiterate that the diagram of FIG. 7 is merely illustrative of an exemplary computing device that may be used in connection with one or more embodiments of the present invention. Distinction is not made between such categories as "workstation," "server," "laptop," "hand-held device," etc., as all are contemplated within the scope of FIG. 7 and reference to "computing device."

Computing device 700 typically includes a variety of computer-readable media. Computer-readable media may be any available media that may be accessed by computing device 700 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media.
Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVDs) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information and which may be accessed by computing device 700. Computer storage media exclude signals per se.

Communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and include any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired media, such as a wired network or direct-wired connection, and wireless media, such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.

Memory 712 includes computer storage media in the form of volatile and/or nonvolatile memory. The memory may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard drives, optical-disc drives, etc. Computing device 700 includes one or more processors that read data from various entities such as memory 712 or I/O components 720. Presentation component(s) 716 present data indications to a user or other device. Exemplary presentation components include a display device, speaker, printing component, vibrating component, etc. I/O ports 718 allow computing device 700 to be logically coupled to other devices, including I/O components 720, some of which may be built in. Illustrative components include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, etc.

Embodiments described in the paragraphs above may be combined with one or more of the specifically described alternatives. In particular, an embodiment that is claimed may contain a reference, in the alternative, to more than one other embodiment. The embodiment that is claimed may specify a further limitation of the subject matter claimed.

The subject matter of embodiments of the invention is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms "step" and/or "block" may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described.
For purposes of this disclosure, the word "including" has the same broad meaning as the word "comprising," and the word "accessing" comprises "receiving," "referencing," or "retrieving." Further, the word "communicating" has the same broad meaning as the word "receiving" or "transmitting," facilitated by software or hardware-based buses, receivers, or transmitters using communication media described herein. Also, the word "initiating" has the same broad meaning as the word "executing" or "instructing," where the corresponding action may be performed to completion or interrupted based on an occurrence of another action. In addition, words such as "a" and "an," unless otherwise indicated to the contrary, include the plural as well as the singular. Thus, for example, the constraint of "a feature" is satisfied where one or more features are present. Also, the term "or" includes the conjunctive, the disjunctive, and both (a or b thus includes either a or b, as well as a and b).

For purposes of a detailed discussion above, embodiments of the present invention are described with reference to a distributed computing environment; however, the distributed computing environment depicted herein is merely exemplary. Components may be configured for performing novel aspects of embodiments, where the term "configured for" may refer to "programmed to" perform particular tasks or implement particular abstract data types using code. Further, while embodiments of the present invention may generally refer to the search system and the schematics described herein, it is understood that the techniques described may be extended to other implementation contexts.

Embodiments of the present invention have been described in relation to particular embodiments which are intended in all respects to be illustrative rather than restrictive. Alternative embodiments will become apparent to those of ordinary skill in the art to which the present invention pertains without departing from its scope.

From the foregoing, it will be seen that this invention is one well adapted to attain all the ends and objects hereinabove set forth, together with other advantages which are obvious and which are inherent to the structure. It will be understood that certain features and sub-combinations are of utility and may be employed without reference to other features or sub-combinations. This is contemplated by and is within the scope of the claims.
50,843
11860883
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT(S)

The following description is intended to convey an understanding of the present invention by providing specific embodiments and details. It is understood, however, that the present invention is not limited to these specific embodiments and details, which are exemplary only. It is further understood that one possessing ordinary skill in the art, in light of known systems and methods, would appreciate the use of the invention for its intended purposes and benefits in any number of alternative embodiments, depending upon specific design and other needs.

An embodiment of the present invention is directed to implementing a Data Usage Analysis engine that receives queries (e.g., SQL queries), tables (e.g., internal catalog tables), and/or other portions or sections of data. An embodiment of the present invention may then parse the queries and identify various data usage patterns. This may include details concerning what tables are used, how much data is queried at what intervals, and the frequency of querying, along with what attributes are used in the queries and/or other usage details at various levels of granularity.

An embodiment of the present invention may ascertain data usage patterns based on the queries that are executed. This information may then be used to determine portions of data that are not frequently used, portions of data that are frequently used, and the specifics concerning the data usage (e.g., this type of data has not been used in the last X months or other time period). An embodiment of the present invention may further analyze or dissect queries and the associated data. For example, an embodiment of the present invention may identify that a user or type of user executes a query that relates to transaction data and further determine how the data is being used and for what purpose, e.g., whether the data is used to generate reports from a table, whether the user is accessing costs from the table, etc.

An embodiment of the present invention is directed to identifying users or types of users and data usage patterns (e.g., query data, how the data is being used, the frequency at which the data is queried, etc.) and then categorizing the data based on usage (e.g., Hot, Warm, Luke Warm, Cold, Frozen, etc.). This information may then be used to intelligently and efficiently store and/or manage the data in an appropriate storage service or device, with the data then made available as needed.

An embodiment of the present invention may apply a prediction feature to the data usage patterns. For example, an embodiment of the present invention may identify that a type of user (e.g., role, specific user, etc.) is predicted to access a type of dataset at a particular time period or range (e.g., the first week of June every year, etc.). For example, an embodiment of the present invention may predict when data will likely be accessed based on user data patterns and then move the data from an archival storage service to a local server or high-cost storage device for convenient and faster access. Accordingly, an embodiment of the present invention may be directed to moving or otherwise positioning data based on data usage patterns.

With an embodiment of the present invention, users and entities may save significant manual hours otherwise spent identifying data use trends, e.g., Hot, Warm, Luke Warm, Cold, and Frozen data. Based on data usage patterns, an embodiment of the present invention may offload unused data to low-cost storage devices and thereby reduce the overall cost of the platform.
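As a minimal sketch of this kind of usage-based categorization, the following Python function assigns a category from the time since a table was last queried. The category names follow the Hot/Warm/Luke Warm/Cold/Frozen vocabulary above, but the specific cut-off values are illustrative assumptions, not values specified by the invention.

    from datetime import datetime, timedelta

    def categorize_usage(last_accessed, now=None):
        """Map time since last access to a usage category (assumed thresholds)."""
        now = now or datetime.utcnow()
        age = now - last_accessed
        if age <= timedelta(days=30):
            return "Hot"
        if age <= timedelta(days=90):
            return "Warm"
        if age <= timedelta(days=180):
            return "Luke Warm"
        if age <= timedelta(days=365):
            return "Cold"
        return "Frozen"

    print(categorize_usage(datetime.utcnow() - timedelta(days=200)))  # Cold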
For example, by understanding who the users are that are accessing what data, and the frequency at which the data is accessed, an embodiment of the present invention may intelligently and efficiently store and move data between various storage services. An embodiment of the present invention may recognize that, out of 20 million requests, there are 15 million requests that were not active in the most recent time period. That information may be used to determine what data should be kept in local storage as opposed to remote storage or archival services.

An embodiment of the present invention may be directed to efficiently managing data based on data usage patterns. For example, an entity may operate with a limited amount of storage (or storage cost). Based on data usage patterns, an embodiment of the present invention may move data from high-cost short-term storage to long-term storage and back to high-cost short-term storage when needed. An embodiment of the present invention may effectively identify data for long-term storage to then free up storage for new and/or critical projects, rather than having to purchase additional high-cost short-term storage. This may further reduce operational spend and expenditure costs.

An embodiment of the present invention further recognizes that some data cannot be moved to remote storage or archival services. In such instances, an embodiment of the present invention may compress local storage and/or provide other alternatives for conserving the data and using less storage space. An embodiment of the present invention may also consider service level agreements (SLAs) and other business considerations when managing and storing data in various platforms. For example, data may be stored in an archival storage and then retrieved from the archival storage (as opposed to maintaining the data in a more costly local option) and still be compliant with SLAs and other requirements.

FIG. 1 illustrates a system that implements a data usage analytics engine for database systems, according to an embodiment of the present invention. As shown in FIG. 1, Data Usage Analytics Engine 110 may receive data input from a plurality of Data Stores 114. Data Stores may represent various types of storage components, e.g., databases, sources, etc. The Data Usage Analytics Engine may receive data access patterns 112, as well as other forms of data that relate to data access and usage. Data Usage Analytics Engine 110 may also access data object statistics 116. Data Usage Analytics Engine 110 may process the input data and identify storage utilization data flows, represented by 120. This may include frequently used data 124, less frequently used data 126, and unused data 128 in varying degrees and frequencies of access, presented via user interface 122. Other categories of data may be identified. Data Usage Analytics Engine 110 may manage data storage components, represented by Archival as a Service 130, Data Sources 140, Cloud Services 142, 144, etc. Data compression 150 may be applied to local storage components 152. An embodiment of the present invention further recognizes that some data cannot be moved to remote storage or archival services. As shown by 150, an embodiment of the present invention may compress local storage and/or provide other alternatives for conserving the data and using less storage space.

The components illustrated in FIG. 1 are merely exemplary; other devices may be represented in various applications. While a single component is illustrated, each component may represent multiple components.
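As discussed above, tier placement may weigh a usage category against SLA retrieval requirements. The sketch below illustrates one such placement rule; the tier names, retrieval-time figures, the cheapest-first ordering, and the rule that immovable data is compressed in place are all assumptions for illustration.

    # Assumed retrieval times per tier, in hours, and a simple placement rule.
    TIER_RETRIEVAL_HOURS = {"local": 0, "cloud": 1, "archival": 12}

    def choose_tier(category, sla_retrieval_hours, movable=True):
        """Pick a storage tier from a usage category and an SLA deadline."""
        if not movable:
            return "local (compressed)"  # data that cannot leave local storage
        if category == "Hot":
            return "local"               # frequently used data stays local
        # Otherwise take the cheapest tier (assumed order) that meets the SLA.
        for tier in ("archival", "cloud", "local"):
            if TIER_RETRIEVAL_HOURS[tier] <= sla_retrieval_hours:
                return tier
        return "local"

    print(choose_tier("Frozen", sla_retrieval_hours=24))              # archival
    print(choose_tier("Warm", sla_retrieval_hours=2))                 # cloud
    print(choose_tier("Hot", sla_retrieval_hours=24, movable=False))  # local (compressed)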
An entity, such as a financial institution, may host Data Usage Analytics Engine 110 according to an embodiment of the present invention. The entity may support Data Usage Analytics Engine 110 as an integrated feature or system. According to another example, Data Usage Analytics Engine 110 may be offered by a third-party service provider. Other scenarios and architectures may be implemented.

An embodiment of the present invention may send and/or receive data from various other sources represented by databases. Databases may be internal or external to a host entity. Data may be stored and managed in storage components via one or more networks. Databases may include any suitable data structure to maintain the information and allow access and retrieval of the information. The storage may be local, remote, or a combination thereof with respect to the databases. Communications with the databases may be over a network, or communications may involve a direct connection between the databases and the entity, as depicted in FIG. 1. Databases may also represent cloud or other network-based storage.

A user of an embodiment of the present invention may communicate with the Data Usage Analytics Engine via a network through a user interface or other self-service portal, such as 122, 132. Communication may be performed using any mobile or computing device, such as a laptop computer, a personal digital assistant, a smartphone, a smartwatch, smart glasses, other wearables, or other computing devices capable of sending or receiving network signals.

The system 100 of FIG. 1 may be implemented in a variety of ways. Architecture within system 100 may be implemented as hardware components (e.g., modules) within one or more network elements. It should also be appreciated that architecture within system 100 may be implemented in computer-executable software (e.g., on a tangible, non-transitory computer-readable medium) located within one or more network elements. Module functionality of architecture within system 100 may be located on a single device or distributed across a plurality of devices, including one or more centralized servers and one or more mobile units or end-user devices. The architecture depicted in system 100 is meant to be exemplary and non-limiting. For example, while connections and relationships between the elements of system 100 are depicted, it should be appreciated that other connections and relationships are possible. The system 100 described below may be used to implement the various methods herein, by way of example. Various elements of the system 100 may be referenced in explaining the exemplary methods described herein.

FIG. 2 illustrates a system that implements a data usage analytics engine for database systems, according to an embodiment of the present invention. Data Usage Analytics Engine 240 may include Horizontal Data Analysis 242 and Column Usage Analysis 244, for example. For example, data usage analysis may refer to identifying tables that are being used within a particular database. An embodiment of the present invention may identify usage patterns and further classify the tables to indicate usage, such as Hot (Used) or Cold (Unused). Column usage (or attribute usage) may refer to identifying columns and/or attributes within a table and further identifying usage categories, such as Hot (Used) or Cold (Unused). Other categories and variations may be applied. Other data and/or usage analysis may be supported.
Data Usage Analytics Engine 240 may receive data relating to data access or other usage from various sources, including data stores such as Oracle 250, Teradata 252, NoSQL 254, etc. The Data Usage Analytics Engine may store data at Data Store 230. A user may access the Data Usage Analytics Engine through an interactive user interface via Web User Interface 220 and Self Service interface 210.

FIG. 3 illustrates an exemplary flow chart of a data usage analytics engine for database systems, according to an embodiment of the present invention. At step 310, one or more queries may be identified. Queries may include SQL queries and other database-related queries. At step 312, tables may be identified. The tables may include internal catalog tables as well as other portions of data. At step 314, data may be parsed to identify usage metrics. Parsing may identify used and unused tables or columns/attributes. Metrics may refer to a measure of usage patterns for tables or columns/attributes. For example, usage patterns may be identified and collected on a periodic basis (e.g., a monthly basis) for analysis. At step 316, data usage patterns may be identified. This may include what tables are used, what data is queried, data frequency, data intervals, etc. At step 318, the patterns may be applied. This may include identifying improved storage configurations. While the process of FIG. 3 illustrates certain steps performed in a particular order, it should be understood that the embodiments of the present invention may be practiced by adding one or more steps to the processes, omitting steps within the processes, and/or altering the order in which one or more steps are performed.

FIG. 4 is an exemplary interactive user interface, according to an embodiment of the present invention. FIG. 4 provides data usage patterns for a specific production database (or group of databases). FIG. 4 illustrates schema name 410 and current selections 412. Various metrics may be provided at 420, including Count of Schemas\Tables; Count of Total\Unused Partitions; Data Size; Unused Partition Data Size; and Total Column Count, for example. Monthly Snapshots 430 may include usage metrics on monthly intervals. Other time periods (e.g., daily, weekly, quarterly, annual, user-specified time periods, etc.) may be applied for additional details and analysis. Summary data may be provided, including Table Summary 440 and Column Summary 442. FIG. 4 is merely exemplary; other metrics and calculations may be provided.

FIG. 5 is an exemplary interactive user interface, according to an embodiment of the present invention. FIG. 5 illustrates Data Usage Analysis 510 for a specific database (or group of databases). FIG. 5 may include specifics including Database Name 520, Table Name 522, and Current Selection 524. Metrics may include Data Size 530, Count of Databases 532, Count of Tables 534, and Total Column Count 536, etc. FIG. 5 may also include graphical data such as Active Period in Months 540 and Column Summary 550. Other variations may be applied. For example, the Active Period may cover other predetermined time periods.
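A rough sketch of steps 310 through 316 appears below: it scans a log of SQL queries, extracts referenced table names, and aggregates usage counts by month, in the spirit of the monthly snapshots of FIG. 4. The naive regular expression and the log format are assumptions; a production engine would rely on a proper SQL parser and the database's internal catalog tables.

    import re
    from collections import defaultdict
    from datetime import datetime

    # Naive extraction of table names after FROM/JOIN; illustrative only.
    TABLE_RE = re.compile(r"\b(?:FROM|JOIN)\s+([\w.]+)", re.IGNORECASE)

    def monthly_table_usage(query_log):
        """query_log: iterable of (timestamp, sql_text) pairs (assumed format)."""
        usage = defaultdict(int)  # (YYYY-MM, table) -> query count
        for ts, sql in query_log:
            month = ts.strftime("%Y-%m")
            for table in TABLE_RE.findall(sql):
                usage[(month, table.lower())] += 1
        return dict(usage)

    log = [
        (datetime(2023, 6, 1), "SELECT * FROM sales s JOIN costs c ON s.id = c.id"),
        (datetime(2023, 6, 15), "SELECT amount FROM sales"),
        (datetime(2023, 7, 2), "SELECT * FROM inventory"),
    ]
    print(monthly_table_usage(log))
    # {('2023-06', 'sales'): 2, ('2023-06', 'costs'): 1, ('2023-07', 'inventory'): 1}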
The foregoing examples show the various embodiments of the invention in one physical configuration; however, it is to be appreciated that the various components may be located at distant portions of a distributed network, such as a local area network, a wide area network, a telecommunications network, an intranet, and/or the Internet. Thus, it should be appreciated that the components of the various embodiments may be combined into one or more devices, collocated on a particular node of a distributed network, or distributed at various locations in a network, for example. As will be appreciated by those skilled in the art, the components of the various embodiments may be arranged at any location or locations within a distributed network without affecting the operation of the respective system.

As described above, the various embodiments of the present invention support a number of communication devices and components, each of which may include at least one programmed processor and at least one memory or storage device. The memory may store a set of instructions. The instructions may be either permanently or temporarily stored in the memory or memories of the processor. The set of instructions may include various instructions that perform a particular task or tasks, such as those tasks described above. Such a set of instructions for performing a particular task may be characterized as a program, software program, software application, app, or software.

It is appreciated that in order to practice the methods of the embodiments as described above, it is not necessary that the processors and/or the memories be physically located in the same geographical place. That is, each of the processors and the memories used in exemplary embodiments of the invention may be located in geographically distinct locations and connected so as to communicate in any suitable manner. Additionally, it is appreciated that each of the processor and/or the memory may be composed of different physical pieces of equipment. Accordingly, it is not necessary that the processor be one single piece of equipment in one location and that the memory be another single piece of equipment in another location. That is, it is contemplated that the processor may be two or more pieces of equipment in two or more different physical locations. The two distinct pieces of equipment may be connected in any suitable manner. Additionally, the memory may include two or more portions of memory in two or more physical locations.

As described above, a set of instructions is used in the processing of various embodiments of the invention. The servers may include software or computer programs stored in the memory (e.g., a non-transitory computer-readable medium containing program code instructions executed by the processor) for executing the methods described herein. The set of instructions may be in the form of a program or software or app. The software may be in the form of system software or application software, for example. The software might also be in the form of a collection of separate programs, a program module within a larger program, or a portion of a program module, for example. The software used might also include modular programming in the form of object-oriented programming. The software tells the processor what to do with the data being processed.

Further, it is appreciated that the instructions or set of instructions used in the implementation and operation of the invention may be in a suitable form such that the processor may read the instructions. For example, the instructions that form a program may be in the form of a suitable programming language, which is converted to machine language or object code to allow the processor or processors to read the instructions.
That is, written lines of programming code or source code, in a particular programming language, are converted to machine language using a compiler, assembler, or interpreter. The machine language is binary coded machine instructions that are specific to a particular type of processor, i.e., to a particular type of computer, for example. Any suitable programming language may be used in accordance with the various embodiments of the invention. For example, the programming language used may include assembly language, Ada, APL, Basic, C, C++, COBOL, dBase, Forth, Fortran, Java, Modula-2, Pascal, Prolog, REXX, Visual Basic, JavaScript, and/or Python. Further, it is not necessary that a single type of instructions or a single programming language be utilized in conjunction with the operation of the system and method of the invention. Rather, any number of different programming languages may be utilized as is necessary or desirable.

Also, the instructions and/or data used in the practice of various embodiments of the invention may utilize any compression or encryption technique or algorithm, as may be desired. An encryption module might be used to encrypt data. Further, files or other data may be decrypted using a suitable decryption module, for example.

In the system and method of exemplary embodiments of the invention, a variety of "user interfaces" may be utilized to allow a user to interface with the mobile devices or other personal computing device. As used herein, a user interface may include any hardware, software, or combination of hardware and software used by the processor that allows a user to interact with the processor of the communication device. A user interface may be in the form of a dialogue screen provided by an app, for example. A user interface may also include any of a touch screen, keyboard, voice reader, voice recognizer, dialogue screen, menu box, list, checkbox, toggle switch, pushbutton, a virtual environment (e.g., Virtual Machine (VM)/cloud), or any other device that allows a user to receive information regarding the operation of the processor as it processes a set of instructions and/or to provide the processor with information. Accordingly, the user interface may be any system that provides communication between a user and a processor. The information provided by the user to the processor through the user interface may be in the form of a command, a selection of data, or some other input, for example.

The software, hardware, and services described herein may be provided utilizing one or more cloud service models, such as Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS), and/or using one or more deployment models such as public cloud, private cloud, hybrid cloud, and/or community cloud models.

Although the embodiments of the present invention have been described herein in the context of a particular implementation in a particular environment for a particular purpose, those skilled in the art will recognize that its usefulness is not limited thereto and that the embodiments of the present invention can be beneficially implemented in other related environments for similar purposes.
20,582
11860884
DETAILED DESCRIPTION

Examples described herein are directed to assembling a database for use in modifying (augmenting or adjusting) queries for retrieving desired content. Modifying queries prior to searching using such a database provides more intuitive query results during entry of a target query. Processing of logs including prior queries yields a query processing layer (QPL) database including target queries and relevant subqueries (letter/symbol combinations entered during generation of the target queries). The QPL database structure operates in a query processing layer positioned between the text entry field of a user device and a search engine. Subsequent subqueries are compared to the relevant subqueries in the QPL database, and identification of a matching relevant subquery results in the associated target query being sent for searching (instead of or in addition to the associated subquery). Additionally, the QPL database may correct spelling and supplement emoji subqueries with relevant text (also referred to as emoji understanding).

As used herein, a target query refers to the complete word, phrase, symbol(s), or combination thereof that a user intends to enter for searching (e.g., heart). As used herein, a subquery refers to a string of one or more letters/symbols the user actually enters in the process of entering a desired target query (e.g., for the target query "heart," subqueries may be "h," "he," "hea," "hear," and "heart"). Additionally, mistakes (e.g., "heat") may form part of the subquery if a user makes a mistake during the query entry process.

The description that follows includes systems, methods, techniques, instruction sequences, and computing machine program products illustrative of examples of the disclosure. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various examples of the disclosed subject matter. It will be evident, however, to those skilled in the art, that examples of the disclosed subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail.

In accordance with one example, a method is provided for assembling a database for query generation. The method includes receiving a query history log, the query history log including target queries and a mapping between each of the target queries and associated subqueries for each of the target queries; selecting one or more of the associated subqueries for a first target query based on a conditional probability exceeding a threshold for the associated subqueries of the first target query compared to the matching associated subqueries of the other target queries; and including the first target query and the selected one or more associated subqueries for the first target query in an in-memory data structure store for query generation.

In accordance with another example, a system is provided for assembling a database for query generation. The system includes a receiving port, a selection engine, and a generation engine. The receiving port is configured to receive a query history log, the query history log including target queries and a mapping between each of the target queries and associated subqueries for each of the target queries.
The selection engine is configured to select one or more of the associated subqueries for a first target query based on a conditional probability exceeding a threshold for the associated subqueries of the first target query compared to the matching associated subqueries of the other target queries. The generation engine is configured to include the first target query and the selected one or more associated subqueries for the first target query in the in-memory data structure store for query generation.

In accordance with another example, a non-transitory processor-readable storage medium is provided for assembling a database, the medium storing processor-executable instructions that, when executed by a processor of a machine, cause the machine to perform operations. The operations performed by the machine include receiving a query history log, the query history log including target queries and a mapping between each of the target queries and associated subqueries for each of the target queries; selecting one or more of the associated subqueries for a first target query based on a conditional probability exceeding a threshold for the associated subqueries of the first target query compared to the matching associated subqueries of the other target queries; and including the first target query and the selected one or more associated subqueries for the first target query in a database for query generation.

Examples described herein are useful for addressing one or more of the challenges faced by existing searching techniques. One challenge is providing search results for mobile-first platforms (e.g., platforms where most users engage with an application on a mobile device using a keyboard presented on a relatively small device screen (e.g., less than 10 inches by 5 inches)). Typing on a mobile keyboard is tiring and error-prone, and the inventors have discovered that the frequency and variation of typing errors identified in the data is substantial. A second challenge is addressing short queries where, for example, users, on average, make a selection after just over 4 keystroke actions. Traditional natural language processing (NLP) query understanding and complex semantic analyses yield little benefit under these conditions. A third challenge is localization where the content is, for example, visual in nature. Such content transcends linguistic and social borders and is, for the most part, globally understood and appreciated. But it is unlikely that someone searching in Spanish will be able to find a dancing hotdog image/overlay that is tagged in English, unless its tagging keywords ("hotdog", "dancing", etc.) are explicitly translated into Spanish and included in the index of the image/overlay, which is expensive, time-consuming, and inefficient. A fourth challenge is visually searching for visual content. For example, searching using emojis (e.g., using an emoji keyboard) is convenient and takes only one character. Traditional search engines, however, are unable to provide content tagged with a term such as the text "camel" in response to a "camel" emoji character unless the emoji is also included in the index of the content.
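The subquery-selection criterion described above can be read as keeping a subquery for a target query only when the conditional probability P(target | subquery), estimated from the query history log, exceeds a threshold relative to the same subquery's occurrences under other targets. The sketch below computes that estimate from co-occurrence counts; the log format and the 0.5 threshold are illustrative assumptions.

    from collections import Counter, defaultdict

    def select_subqueries(history, threshold=0.5):
        """history: iterable of (subquery, target_query) pairs (assumed format).

        Keep (target, subquery) when P(target | subquery) > threshold, i.e.,
        when the subquery predicts its target strongly compared to the same
        subquery's occurrences under all other targets.
        """
        pair_counts = Counter(history)               # (subquery, target) occurrences
        sub_counts = Counter(s for s, _ in history)  # subquery occurrences overall
        selected = defaultdict(list)
        for (sub, target), n in pair_counts.items():
            if n / sub_counts[sub] > threshold:
                selected[target].append(sub)
        return dict(selected)

    history = [
        ("h", "heart"), ("he", "heart"), ("hea", "heart"), ("heart", "heart"),
        ("h", "hotdog"), ("ho", "hotdog"), ("hot", "hotdog"),
    ]
    print(select_subqueries(history))
    # {'heart': ['he', 'hea', 'heart'], 'hotdog': ['ho', 'hot']}

Note that the ambiguous subquery "h" is excluded for both targets because it predicts neither with probability above the threshold.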
FIG. 1 is a block diagram illustrating a system 100, according to some examples, configured to automatically process query logs (including target queries and the associated subqueries entered during the development of target queries) to create a QPL database for modifying subsequent subqueries in order to provide more intuitive query results during entry of the subqueries.

The system 100 includes one or more client devices such as client device 110. The client device 110 includes, but is not limited to, a mobile phone, desktop computer, laptop, portable digital assistant (PDA), smart phone, tablet, ultrabook, netbook, multi-processor system, microprocessor-based or programmable consumer electronics, game console, set-top box, computer in a vehicle, or any other communication device that a user may utilize to access the system 100. In some examples, the client device 110 includes a display module (not shown) to display information (e.g., in the form of user interfaces). In further examples, the client device 110 includes one or more of touch screens, accelerometers, gyroscopes, cameras, microphones, global positioning system (GPS) devices, and so forth.

The client device 110 may be a device of a user that is used to access and utilize an online social platform. For example, client device 110 is a device of a user who is searching for content maintained by an online social platform. Client device 110 accesses a website on the social platform (e.g., hosted by server system 108) directly or through one or more third-party servers 128 (e.g., utilizing one or more third-party applications 130). Application server 104 records target queries and the associated subqueries received from a client device 110 in database 126. The application server 104 produces a QPL database (e.g., an in-memory database) by analyzing the target queries and associated subqueries, using techniques disclosed herein, for use in modifying future subqueries to provide more intuitive search results as a query is being entered.

One or more users may be a person, a machine, or other means of interacting with the client device 110. In examples, the user may not be part of the system 100 but may interact with the system 100 via the client device 110 or other means. For instance, the user may provide input (e.g., touch screen input or alphanumeric input) to the client device 110, and the input may be communicated to other entities in the system 100 (e.g., third-party servers 130, server system 108, etc.) via the network 104. In this instance, the other entities in the system 100, in response to receiving the input from the user, may communicate information to the client device 110 via the network 104 to be presented to the user. In this way, the user interacts with the various entities in the system 100 using the client device 110.

The system 100 further includes a network 104. One or more portions of network 104 may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the public switched telephone network (PSTN), a cellular telephone network, a wireless network, a WiFi network, another type of network, or a combination of two or more such networks.

The client device 110 may access the various data and applications provided by other entities in the system 100 via web client 112 (e.g., a browser) or one or more client applications 114. The client device 110 may include one or more client application(s) 114 (also referred to as "apps") such as, but not limited to, a web browser, messaging application, electronic mail (email) application, an e-commerce site application, a mapping or location application, an online home buying and selling application, a real estate application, and the like.
In some examples, one or more client application(s) 114 are included in a given one of the client devices 110 and configured to locally provide the user interface and at least some of the functionalities, with the client application(s) 114 configured to communicate with other entities in the system 100 (e.g., third-party server(s) 128, server system 108, etc.), on an as-needed basis, for data processing capabilities not locally available (e.g., to access location information, to authenticate a user, to provide search results, etc.). Conversely, one or more client application(s) 114 may not be included in the client device 110, and then the client device 110 may use its web browser to access the one or more third-party applications 130 hosted on other entities in the system 100 (e.g., third-party server(s) 128, server system 108, etc.).

A server system 108 provides server-side functionality via the network 104 (e.g., the Internet or a wide area network (WAN)) to one or more third-party server(s) 128 and one or more client devices 110. The server system 108 includes an application program interface (API) server 120, a web server 122, and a query processing system 124, which may be communicatively coupled with one or more database(s) 126. The one or more database(s) 126 may be storage devices that store data (e.g., in a dataset) related to users of the server system 108, applications associated with the server system 108, cloud services, housing market data, and so forth. The one or more database(s) 126 may further store information related to third-party server(s) 128, third-party application(s) 130, client device 110, client application(s) 114, users, and so forth. In one example, the one or more database(s) 126 may be cloud-based storage. The server system 108 may be a cloud computing environment, according to some examples. The server system 108, and any servers associated with the server system 108, may be associated with a cloud-based application.

In one example, the server system 108 includes a query processing system 124. The query processing system 124 may include one or more servers and may be associated with cloud-based application(s). The query processing system 124 may receive search queries and user information (e.g., user ID and session ID), store the received queries and information in the database 126, process the queries and information to create a QPL database, and access the QPL database to provide more intuitive search results during subsequent query entries. The details of the query processing system 124 are provided below in connection with FIGS. 2A, 2B, and 2C.

The system 100 further includes one or more third-party server(s) 128. The one or more third-party server(s) 128 may include one or more third-party application(s) 130. The one or more third-party application(s) 130, executing on the third-party server(s) 128, may interact with the server system 108 via a programmatic interface provided by the API server 120. For example, one or more of the third-party applications 130 may request and utilize information from the server system 108 via the API server 120 to support one or more features or functions on a website hosted by the third party or an application hosted by the third party. The third-party application(s) 130, for example, may provide search functionality and software version analysis functionality that is supported by relevant functionality and data in the server system 108.

FIG. 2A is a block diagram illustrating an example query processing system 124.
The illustrated query processing system124includes a query generation system202and a query modification system204. The query generation system202is an offline component that processes query logs of past queries (e.g., from the last seven days) to develop a QPL database including target queries and associated subqueries. The query modification system204is an online component that compares a current query being entered by a user in a search field of their device to subqueries in the QPL database to identify a match and modifies the current query to include the associated target query (either by replacing or supplementing the current query) when a match is identified. It will be understood by one of skill in the art that the query generation system202and the query modification system204may operate in conjunction with one another or may be separate systems. As shown inFIG.2B, the query generation system202includes a selection engine210and a generation engine212. The selection engine210implements instructions to select desirable subqueries associated with each target query for inclusion in the QPL database. The generation engine212builds the QPL database from the target queries and selected subqueries. It will be understood by one of skill in the art that the selection engine210and the generation engine212may operate in conjunction with one another or may be separate systems. As shown inFIG.2C, the query modification system204includes a search engine interface220and a QPL database222(e.g., an in-memory database that resides in a memory of the server system). The search engine interface220provides an interface to a search engine (e.g., using an API available from the search engine) through which queries are sent and responses to the queries are received. The QPL database222stores the target queries and selected subqueries (e.g., in database126) for use in processing a subquery received from a user device110. It will be understood by one of skill in the art that the search engine interface220and the QPL database222may operate in conjunction with one another or may be separate systems. FIG.3is a diagram depicting example offline QPL database generation and online QPL database use. At block302, query history logs are assembled and stored (e.g., in database126). The query history logs include target queries and associated subqueries from actual queries (e.g., by a social media app user for content maintained by a social media provider). The query history logs may include logs that are for a predefined recent period of time (e.g., a rolling seven-day period) so the query history logs remain current. At block304, spell correction occurs. In an example, the spelling of target queries is checked and corrected using a dictionary (and associated correction mappings) developed from the query history logs using techniques described herein. The spelling of subqueries may not be checked or corrected as the subqueries represent actual entries of users, which may contain common misspellings and typos useful in selecting intended target entries for others that make those same mistakes. At block306, query completion occurs.
Query completion includes associating text corresponding to a symbol with target queries containing that symbol (e.g., by looking in a database including a list of symbols and associated text stored in database126) and, vice versa, associating text or symbols corresponding to related text or symbols in a query (e.g., “heart” associated with “love” in a database including a list of associated terms/symbols stored in the database126), or a combination thereof. At block308, translation occurs. Translation includes associating a translation (e.g., in English) corresponding to a target query containing corresponding foreign language text (e.g., by looking in a database including a list of foreign language text and associated English text stored in database126). In an example, the database for translation is developed by sending a foreign language target query (e.g., identified based on a locale provided by the client device110or determined by the server system108) to a translation engine (e.g., Google Translate available from Google of Mountain View, Calif., USA). If an English translation is returned, the English translation is associated with the corresponding foreign target query (e.g., a mapping in the database126). Although an implementation with English as a base language is described, one of skill in the art will understand how to apply the teachings herein to different base languages. In one example, a Russian user (location ID “ru”) enters the term “арбуз” (which is the Russian word for watermelon) in a search field of a client application running on their client device. After each character entry, the client application sends the current string of characters (along with the user ID, session ID, and locale) as a search query to an application server for the client application, which routes the string of characters and locale to the query processing system124in the query processing layer310. The QPL310identifies a match for “арбуз” (i.e., watermelon) and modifies the search query to additionally include the translated term. QPL310receives the target queries and subqueries (along with associated corrections, completions, and translations) developed from the query history for future online queries from users. The modified search query is sent by the QPL310to the search engine312. The search engine312identifies results based on the modified search query and the results are returned to the user via the application server and client application for display on the client device. A suitable search engine312is Elasticsearch (ES; available from Elastic NV of Mountain View, Calif., USA), which may be queried in real time for user search terms. For example, when a user types “black and white”, this query is modified as described herein and sent to ES, and a list of matching documents with corresponding BM25 or TF-IDF scores is returned (in some cases LTR rankings are applied). Each document has a series of “tags” or “words” associated with it which, depending on the particular application, are either manually generated tags or any free-form text associated with the document (such as names, etc.). FIG.4depicts an example query system400with a QPL310implemented as a mesh service402. An endpoint414of the application server104receives the search query (i.e., request). The application server104routes the search query to the QPL310for modification, if applicable. The application server104then routes the search query (as modified, if applicable) to a search retrieval and result assembly system410.
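The modification step that the QPL310applies before this routing may be illustrated with a brief software sketch. The sketch below is illustrative only, not the patented implementation: the mapping contents, the (locale, subquery) key, the lowercase normalization, and the helper name modify_query are assumptions, and the in-memory database is reduced to a plain Python dictionary.

    # Minimal sketch of the online lookup performed by the QPL310: the
    # current (sub)query is matched against a mapping assembled offline
    # from the query history logs and, on a hit, the query is supplemented
    # with the associated target query before being sent to the search
    # engine. All names and mapping contents here are illustrative.
    QPL_MAPPING = {
        ("ru", "арбуз"): "watermelon",           # translation mapping
        ("en", "happy b"): "happy birthday",     # completion mapping
        ("en", "good norning"): "good morning",  # spell-correction mapping
    }

    def modify_query(query: str, locale: str) -> str:
        target = QPL_MAPPING.get((locale, query.lower()))
        if target is None:
            return query               # no match: forward the query unchanged
        return query + " " + target    # match: retain the original, add the target

    assert modify_query("арбуз", "ru") == "арбуз watermelon"

The retrieval and assembly of the results for the (possibly modified) query then proceeds as follows.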
A retrieval engine422retrieves search results for content producers424(e.g., via a search engine312such as Elasticsearch), where different producers424(e.g., different content delivery aspects of an application, such as overlays, messaging, and content development) get different documents. The search engine312selects the results from indexed documents412. A ranking engine426then ranks the results according to ranking rules428and filters430(e.g., by applying machine learning). A blending engine432puts the results together according to rendering rules434and the rendered results are sent to the user. The mesh service402has access to a database404including query logs302that store historical queries for processing to build a query table408for current query modification. FIG.5depicts an example query generation system and query modification system useful for illustrating process flow. Queries508to a search front end412are stored in the search logs302. Modules process the queries in the search logs302. The spell correction and translation module304/308corrects spelling errors in the target queries and translates foreign language target queries to a base native language (e.g., English). The query expansion module306associates text corresponding to a symbol in a query with target queries containing that symbol and associates text or symbols related to the text or symbols in the query. A subquery and target query differentiation module502identifies target queries (e.g., the longest string of characters during a search session) and subqueries associated with those target queries. The emoji understanding engine504includes text associated with each of one or more symbols (e.g., for use by the query expansion module306). The processed queries are added to a remote dictionary server408(e.g., an in-memory database such as a redis cluster available from Redis Labs of Mountain View, Calif., USA). The remote dictionary server408stores the processed queries in indexes506(e.g., in database126). In an example, the processed queries are periodically refreshed (e.g., daily). In use, a query508(e.g., “Corazon”) is sent by the search front end414to the remote dictionary server408in addition to being logged in the search logs302for processing in order that future queries may benefit from the current search. The remote dictionary server408scans the indexes506for a match. If a matching term/symbol is found (e.g., the English language “Heart” corresponding to “Corazon”), the query is modified to include the matching term/symbol prior to sending the query to a search engine312(FIG.3) for processing. FIGS.6,7, and8depict flow charts600/700/800illustrating example methods for query database generation, dictionary generation, and query modification, respectively. Although the flowcharts may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed. A process may correspond to a method, a procedure, etc. The steps of a method may be performed in whole or in part, may be performed in conjunction with some or all of the steps in other methods, and/or may be performed by any number of different systems, such as the systems described inFIGS.1-5and9-11. FIG.6is a flow diagram illustrating an example method600for query processing, e.g., using the query processing system124.
Although the below description of the method600refers to the query processing system124, other systems for query processing will be understood from the description herein. At block602, the query processing system124receives one or more query history logs302. Query generation system202receives the query history logs302on a periodic basis (e.g., daily). In an example, the query history logs include queries along with a corresponding user ID, session ID, and locale. The user ID and session ID enable grouping of the subqueries and the identification of an associated target query gathered from streams of queries from multiple users and multiple sessions. During a search, a user may start typing their query (e.g., “Heart”) in a search field of a GUI the application server104serves to their client device, interact with results when the right ones show up, and then erase the query to start another search (e.g., “Love”). Table 1 shows a hypothetical search session with two queries (“Heart” and “Love”) where each row in the table represents a separate user action in a sequence, such as character addition or deletion.

TABLE 1
H
He
Hea
Hear
Heart
Hear
Hea
He
H
L
Lo
Lov
Love

In this example, the user intends to search for “Heart” first and then attempts another search for “Love”. The query processing system124differentiates between target queries (e.g., “Heart” and “Love”) and the other queries (referred to herein as subqueries) that led to them. At the end, it creates a mapping between all subqueries and the target query for each target query.

TABLE 2
h -> heart
he -> heart
hea -> heart
hear -> heart
heart -> heart
l -> love
lo -> love
lov -> love
love -> love

In one example, differentiation between target queries and subqueries is achieved by partitioning query events by user ID and session ID. An additional constraint such as breaking up sessions by an empty string (“ ”) may be used to detect multiple searches within a single session. Within each query session, the query processing system124identifies the longest query by the number of characters as the target query and creates a mapping between all subqueries and that target query. Duplicate entries may be removed by applying a dedup algorithm. At block604, the query processing system124selects useful subqueries for identifying target queries. In one example, search events and corresponding search results are generated after every user keystroke. Subqueries unlikely to provide suitable results are eliminated, e.g., for signal-to-noise and privacy reasons. The query processing system124identifies useful subqueries by computing an empirical estimate of the conditional probability of each subquery to target query mapping, for example, “hea” and “heart”, i.e., P(“heart” | “hea”). The probability is compared to a threshold (e.g., of 50%) that promotes a candidate mapping into the next round. In an example, a subquery may also be a target query (e.g., subquery “love” → target query “love”), which is used for translation. In one example, the query processing system124generates a subset of subquery to target query mappings over a period of time (e.g., the last 28 days of search events) with the following criteria:
1. Do not consider queries that resulted in friending related actions (e.g., those entered to identify a specific username or display name friend), which are unlikely to be generally useful to a broad user base.
2. A threshold number of users (e.g., at least 10 unique users) must establish a particular subquery to target query mapping per locale, which is useful for controlling the size of the matching database.
3. The probability that the user will select the target query X given the subquery Y must be greater than a threshold (e.g., at least 50%), which ensures that each subquery Y maps to a single, unique target query X in the final mapping.
4. A similarity coefficient (e.g., a Jaccard similarity coefficient) must be greater than a threshold (e.g., 0.5) to avoid abuse by removing associations that are distant in character space, such as “love”→“hate”.
This produces a map of subqueries to target queries where all subqueries are unique and target queries are exactly how they have been entered by users. This is because the subqueries should match what the users are actually entering. Note that the subqueries often are, but not always, prefix subqueries. In one example, the target queries are corrected and the subqueries are not. For example, the user may wander around a bit on the way to the final query (adding and removing characters) and, if enough users visit a particular state, it eventually makes it into the query completion mapping. At block606, the query processing system124identifies target query misspellings. The query processing system124may check the spelling of target queries using a dictionary developed from the query history logs (e.g., using techniques described herein), which includes mappings between correctly spelled target queries and common misspellings. In one example, a target query misspelling is identified when a match with a common misspelling is identified in the dictionary. The spelling of subqueries may not be checked or corrected as the subqueries represent actual entries of users, which may contain common misspellings and typos useful in selecting intended target entries for others that make those same mistakes. In this example, the query processing system124maintains common misspelling patterns in the subquery space (such as “hes” in Table 3), but target queries ideally match the tagging keywords in the index and, therefore, misspellings in those are highly undesirable.

TABLE 3
h
he
hes
he
hea
hear
heart
hear
hea
he
h
l
lo
lov
love

At block608, the query processing system124corrects target query misspellings. The query processing system124may correct an identified misspelling using the dictionary. For a target query matching a common misspelling in the dictionary, the query processing system augments the target query with the corresponding correctly spelled target query from the dictionary. At block610, the query processing system124identifies target query matches. After spell correction, the query processing system124identifies target query matches and combines matching target queries and their associated subqueries into a single target query. At block612, the query processing system124identifies unique subqueries. After target query matching, the query processing system124removes duplicate subqueries (e.g., by applying a conventional dedup algorithm) such that unique subqueries remain. At block614, the query processing system124includes the remaining target queries and selected/unique subqueries in a database of the QPL310; a simplified software sketch of this selection pipeline is provided below. At block616, the query processing system124identifies emojis. Search queries may include emojis on their own without any additional characters. For example, searches like “” may be input.
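The selection pipeline of blocks602through614may be sketched in software as follows. This is a simplified model under stated assumptions: the friending-action filter of criterion 1 is omitted, query events are assumed to be already grouped into per-user sessions, the character-set Jaccard measure is only one possible reading of criterion 4, and all helper names are illustrative.

    from collections import Counter, defaultdict

    def jaccard(a: str, b: str) -> float:
        # Character-set Jaccard similarity; one possible reading of
        # "distant in character space" (criterion 4).
        sa, sb = set(a), set(b)
        return len(sa & sb) / len(sa | sb)

    def build_mapping(sessions, min_users=10, min_prob=0.5, min_jaccard=0.5):
        # sessions: iterable of (user_id, [query states typed in one session]).
        # The longest state in a session is taken as the target query; the
        # thresholds mirror criteria 2 through 4 above.
        pair_users = defaultdict(set)     # (subquery, target) -> distinct users
        targets = defaultdict(Counter)    # subquery -> Counter of targets
        for user_id, states in sessions:
            target = max(states, key=len)
            for sub in set(states):       # dedup subqueries within a session
                targets[sub][target] += 1
                pair_users[(sub, target)].add(user_id)
        mapping = {}
        for sub, counts in targets.items():
            target, hits = counts.most_common(1)[0]
            prob = hits / sum(counts.values())      # empirical P(target | sub)
            if (len(pair_users[(sub, target)]) >= min_users
                    and prob > min_prob
                    and jaccard(sub, target) >= min_jaccard):
                mapping[sub] = target               # one unique target per subquery
        return mapping

    sessions = [("user%d" % u, ["h", "he", "hea", "hear", "heart"]) for u in range(12)]
    print(sorted(build_mapping(sessions).items()))
    # [('hea', 'heart'), ('hear', 'heart'), ('heart', 'heart')]

Note that with this toy similarity measure the shortest prefixes are filtered out; the actual criteria and measure may differ. The emoji-only searches introduced above are addressed next.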
In conventional search systems, such searches would not return any meaningful results besides exact matches in tag substrings or usernames. To address such situations, the query processing system124converts them to their text versions. At block618, the query processing system124identifies text associated with the emoji(s). For example, the search “” could be converted to “face relieved not done hourglass,” e.g., by comparing the individual emojis to entries in an emoji database including emojis and corresponding text for each emoji. The query processing system124identifies associated text when there is a match in the emoji database. At block620, the query processing system124includes the associated text in a database of the QPL310. The query processing system124may supplement the emoji(s) with the associated text or may replace the emoji(s). At block622, the query processing system124sends target queries to a machine translation engine. In an example, spell-corrected and emoji-expanded target queries are sent for translation to English using a third-party translation engine such as Google Translate, using the Google Translate APIs available from Google of Mountain View, Calif., USA. At block624, the query processing system124receives the translation (if available) and a corresponding language identifier. In an example, the translation engine returns a translation (if available) along with the detected language for each query. At block626, the query processing system124includes the translation in a database of the QPL310. When an available translation is returned, the query processing system124adds the translation into a mapping to the associated target query. FIG.7is a flow diagram illustrating an example method700for building a dictionary. The method is language agnostic and automatically detects desired spelling corrections from the data in the query logs. Spell correction may be dynamic, being built on the go relative to the current state of the dictionary at any given time. Although the below description of the method700refers to the query processing system124, other systems for query processing will be understood from the description herein. At block702, the query processing system124places the target queries in order. In one example, the query processing system124orders target queries in decreasing order of their relative frequencies of occurrence in the user queries. For example, the top target queries may be the cake emoji “” followed by the term “heart”. At block704, the query processing system124adds the first target query to the dictionary to start building the dictionary. In an example, the query processing system124adds the first most common search term (in full) to the dictionary with a concatenated locale to distinguish the same words in different languages, such as “bald” in English and German. At block706, the query processing system124selects the next target query. In an example, the query processing system124selects the second most common word, followed by the third, etc. At block708, the query processing system124determines if a spell correction of the next target query is within a predefined edit distance (e.g., an edit distance of one, where any different letter in the same position is directly next to the correct letter on a known keyboard such as a QWERTY keyboard) of a word already in the dictionary. If the spell correction is within the predefined edit distance, processing proceeds at block710. Otherwise, processing proceeds at block714.
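One possible reading of the block708test, an edit distance of one in which the substituted letter must neighbor the correct letter on a QWERTY keyboard, is sketched below; the adjacency table is deliberately abbreviated and the function name is hypothetical.

    # Sketch of the block708check: a candidate is treated as a possible
    # misspelling of a dictionary word if the two differ by exactly one
    # substituted letter and the typed key neighbors the correct key on a
    # QWERTY keyboard. A real table would cover all keys.
    QWERTY_NEIGHBORS = {
        "m": set("njk"),   # keys adjacent to "m"
        "o": set("iklp"),  # keys adjacent to "o"
    }

    def is_adjacent_misspelling(candidate: str, word: str) -> bool:
        if len(candidate) != len(word):
            return False
        diffs = [(c, w) for c, w in zip(candidate, word) if c != w]
        if len(diffs) != 1:
            return False               # exactly one substitution allowed
        typed, correct = diffs[0]
        return typed in QWERTY_NEIGHBORS.get(correct, set())

    assert is_adjacent_misspelling("norning", "morning")  # "n" neighbors "m"
    assert is_adjacent_misspelling("jkhn", "john")        # "k" neighbors "o"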
At block710, the query processing system124determines if the relative frequency of occurrence is greater than a predefined threshold (e.g., 1.0 percent). This is because it is likely that a misspelled term would occur less frequently than the correct version. If the relative frequency is greater than the predefined threshold, processing proceeds at block714. Otherwise, processing proceeds at block712. At block712, the query processing system124adds the current target query being processed to a correction map. At block714, the query processing system124adds the current target query to the dictionary. Thus, in blocks708and710, the second most common word is compared with the first most common word in the dictionary being built. If its edit distance is, for example, less than or equal to 1 and its frequency is less than 1% of the first word, the query processing system124considers the second word to be a misspelling of the first. It is then replaced everywhere in the target mapping with the correctly spelled version and is not added to the dictionary. The third most common word is then checked against all words added to the dictionary, followed by the fourth, etc. At each step, if a misspelling is identified, it is fixed in the target mapping; otherwise, the word is added to the dictionary with the appropriate frequency count. The dictionary grows as the process is repeated for all queries and produces a language-specific vocabulary that reflects the intended usage of the search platform. It contains, for example, “good morning” and “ttyl”, even though these words may not be found in a conventional dictionary. FIG.8is a flow diagram illustrating an example method800for query processing, e.g., using the query processing system124. Although the below description of the method800refers to the query processing system124, other systems for query processing will be understood from the description herein. At block802, the query processing system124receives the search query. The query processing system124receives the search query from a client device110. At block804, the query processing system124monitors the locale of the search query. The query processing system124detects a locale associated with the search query. The client application114on the client device110may add the locale to the search query, e.g., based on parameters gathered during device set up or gathered from sensors such as GPS sensors. At block806, the query processing system124compares the search query to mappings in a database of the QPL310. The query processing system124compares the current search query received from the client device110to spell correction mappings, translation mappings, expansion mappings, or a combination thereof. At block808, the query processing system124modifies the search query responsive to a match in the mappings in a database of the QPL310. The search query may be modified by replacing the search query (e.g., replacing a misspelled word with the correct word) or supplementing the original search query (e.g., adding the English version of a foreign word or adding text associated with an emoji) while retaining the original search query. At block810, the query processing system124sends the modified search query to the search engine. In an example, the query processing system124sends the modified search query to a third-party search engine such as Elasticsearch. At block812, the query processing system124receives a result for the modified search query from the search engine.
In an example, the third-party search engine returns the search results for the modified search query to the query processing system124. At block814, the query processing system124returns the received result to the client device. In an example, the query processing system124returns the results to the client application114for display by the client device110. At block816, the query processing system124processes the original search query for use. The query processing system124sends the original search query to the search logs for subsequent processing and mapping as described herein. At block818, the query processing system124modifies the database of the QPL310responsive to the processed search query. FIG.9is a diagrammatic representation of a machine900within which instructions908(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine900to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions908may cause the machine900to execute any one or more of the methods described herein. The instructions908transform the general, non-programmed machine900into a particular machine900programmed to carry out the described and illustrated functions in the manner described. The machine900may operate as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine900may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine900may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a PDA, an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions908, sequentially or otherwise, that specify actions to be taken by the machine900. Further, while only a single machine900is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions908to perform any one or more of the methodologies discussed herein. The machine900may include processors902, memory904, and I/O components942, which may be configured to communicate with each other via a bus944. In an example, the processors902(e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an ASIC, a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor906and a processor910that execute the instructions908. The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. AlthoughFIG.9shows multiple processors902, the machine900may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof.
The memory904includes a main memory912, a static memory914, and a storage unit916, each accessible to the processors902via the bus944. The main memory912, the static memory914, and the storage unit916store the instructions908embodying any one or more of the methodologies or functions described herein. The instructions908may also reside, completely or partially, within the main memory912, within the static memory914, within machine-readable medium918(e.g., a non-transitory machine-readable storage medium) within the storage unit916, within at least one of the processors902(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine900. Furthermore, the machine-readable medium918is non-transitory (in other words, not having any transitory signals) in that it does not embody a propagating signal. However, labeling the machine-readable medium918“non-transitory” should not be construed to mean that the medium is incapable of movement; the medium should be considered as being transportable from one physical location to another. Additionally, since the machine-readable medium918is tangible, the medium may be a machine-readable device. The I/O components942may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components942that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones may include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components942may include many other components that are not shown inFIG.9. In various examples, the I/O components942may include output components928and input components930. The output components928may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components930may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location, force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further examples, the I/O components942may include biometric components932, motion components934, environmental components936, or position components938, among a wide array of other components. For example, the biometric components932include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like.
The motion components934include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environmental components936include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components938include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components942further include communication components940operable to couple the machine900to a network920or devices922via a coupling924and a coupling926, respectively. For example, the communication components940may include a network interface component or another suitable device to interface with the network920. In further examples, the communication components940may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), WiFi® components, and other communication components to provide communication via other modalities. The devices922may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). Moreover, the communication components940may detect identifiers or include components operable to detect identifiers. For example, the communication components940may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components940, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. The various memories (e.g., memory904, main memory912, static memory914, memory of the processors902) and the storage unit916may store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions908), when executed by processors902, cause various operations to implement the disclosed examples.
The instructions908may be transmitted or received over the network920, using a transmission medium, via a network interface device (e.g., a network interface component included in the communication components940) and using any one of a number of well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions908may be transmitted or received using a transmission medium via the coupling926(e.g., a peer-to-peer coupling) to the devices922. FIG.10is a block diagram1000illustrating a software architecture1004, which can be installed on any one or more of the devices described herein. The software architecture1004is supported by hardware such as a machine1002that includes processors1020, memory1026, and I/O components1038. In this example, the software architecture1004can be conceptualized as a stack of layers, where each layer provides a particular functionality. The software architecture1004includes layers such as an operating system1012, libraries1010, frameworks1008, and applications1006. Operationally, the applications1006invoke API calls1050through the software stack and receive messages1052in response to the API calls1050. The operating system1012manages hardware resources and provides common services. The operating system1012includes, for example, a kernel1014, services1016, and drivers1022. The kernel1014acts as an abstraction layer between the hardware and the other software layers. For example, the kernel1014provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services1016can provide other common services for the other software layers. The drivers1022are responsible for controlling or interfacing with the underlying hardware. For instance, the drivers1022can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth. The libraries1010provide a low-level common infrastructure used by the applications1006. The libraries1010can include system libraries1018(e.g., C standard library) that provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries1010can include API libraries1024such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in two dimensions (2D) and three dimensions (3D) in a graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries1010can also include a wide variety of other libraries1028to provide many other APIs to the applications1006. The frameworks1008provide a high-level common infrastructure that is used by the applications1006. For example, the frameworks1008provide various graphical user interface (GUI) functions, high-level resource management, and high-level location services. 
The frameworks1008can provide a broad spectrum of other APIs that can be used by the applications1006, some of which may be specific to a particular operating system or platform. In an example, the applications1006may include a home application1036, a contacts application1030, a browser application1032, a book reader application1034, a location application1042, a media application1044, a messaging application1046, a game application1048, and a broad assortment of other applications such as a third-party application1040. The applications1006are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications1006, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third-party application1040(e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application1040can invoke the API calls1050provided by the operating system1012to facilitate functionality described herein. FIG.11Ais a GUI1100depicting an example search result utilizing query completions. Because it is not easy to type on a mobile keyboard, the QPL310(FIG.3) is proactive in anticipating a user's intended search. When someone types “” (just three characters), 63% of the time they will finish with “”, which is “orange” in Russian. Since “” does not really mean anything on its own, it is likely prudent to return results for “” after just three typed characters. The GUI1100includes a search field1102and a results field1104. In the illustrated example, a user has entered a query1106(i.e., “happy b”) into the search field1102, which is sent to the application server104(FIG.1). The QPL310(FIG.3) of the query processing system124on the application server104includes a matching subquery (i.e., “happy b”) that is mapped to a target query (e.g., “happy birthday”). The QPL310modifies the query1106by replacing or adding the target query before sending it to the search engine312. The search engine312returns results1108related to the modified search query that the search engine may have otherwise missed if it had only based the search on the original query (i.e., “happy b”). FIG.11Bis a GUI1120depicting an example search result utilizing spell correction. Due to the small mobile keyboard, there are several persistent misspelling variations of many common words, such as “John,” e.g., “Jkhn”, “Jlhn” and “Nohn”. In all cases, it is the neighboring keys on the keyboard that get replaced. A search engine such as Elasticsearch does not return results for “John” given these misspelled variants of “John,” though there is a high probability that that is what a user intended to type. Similarly, mistyping “norning” results in a completely different set of results. The GUI1120includes a search field1102and a results field1104. In the illustrated example, a user has entered a query1126(i.e., “good norning”) into the search field1102, which is sent to the application server104(FIG.1).
The QPL310(FIG.3) of the query processing system124on the application server104includes a matching subquery (i.e., “good norning”) that is mapped to a target query (e.g., “Good Morning”). The QPL310modifies the query1126by replacing or adding the target query before sending it to the search engine312. The search engine312returns results1128related to the modified search query that the search engine may have otherwise missed if it had only based the search on the original query (i.e., “good norning”). FIG.11Cis a GUI1140depicting an example search result utilizing query expansion. “Visual” communication is very popular, so it makes sense that users would like to search for content by typing in emojis and potentially other forms of non-text queries. For example, if a user types a “” in the search field, the system should return results for the term “camel.” The GUI1140includes a search field1102and a results field1104. In the illustrated example, a user has entered a query1146(i.e., “”) into the search field1102, which is sent to the application server104(FIG.1). The QPL310(FIG.3) of the query processing system124on the application server104includes a matching subquery (i.e., “”) that is mapped to a target query (e.g., “birthday cake”). The QPL310modifies the query1146by replacing or adding the target query before sending it to the search engine312. The search engine312returns results1148related to the modified search query that the search engine may have otherwise missed if it had only based the search on the original query (i.e., “”). FIG.11Dis a GUI1160depicting an example search result utilizing query translations. When someone searches for a non-English word such as “corazon,” which is “heart” in Spanish, Elasticsearch does not match the English “heart” tags with “corazon”. Rather, it needs to be explicitly tagged with “corazon” to work. The QPL310can address this deficiency. The GUI1160includes a search field1102and a results field1104. In the illustrated example, a user has entered a query1166(i.e., “арбуз”) into the search field1102, which is sent to the application server104(FIG.1) along with the locale (i.e., “RU”). The QPL310(FIG.3) of the query processing system124on the application server104includes a matching subquery (i.e., “арбуз”) that is mapped to a target query (e.g., “Watermelon”). The QPL310modifies the query1166by replacing or adding the target query before sending it to the search engine312. The search engine312returns results1168related to the modified search query that the search engine may have otherwise missed if it had only based the search on the original query (i.e., “арбуз”). It will be understood that the terms and expressions used herein have the ordinary meaning as is accorded to such terms and expressions with respect to their corresponding respective areas of inquiry and study except where specific meanings have otherwise been set forth herein. Relational terms such as first and second and the like may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The terms “comprises,” “comprising,” “includes,” “including,” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises or includes a list of elements or steps does not include only those elements or steps but may include other elements or steps not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “a” or “an” does not, without further constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element. Unless otherwise stated, any and all measurements, values, ratings, positions, magnitudes, sizes, and other specifications that are set forth in this specification, including in the claims that follow, are approximate, not exact. Such amounts are intended to have a reasonable range that is consistent with the functions to which they relate and with what is customary in the art to which they pertain. For example, unless expressly stated otherwise, a parameter value or the like, whether or not qualified by a term of degree (e.g., approximate, substantially, or about), may vary by as much as ±10% from the recited amount. The examples illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other examples may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various examples is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.
11860885
It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. DETAILED DESCRIPTION OF THE PRESENT INVENTION In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the present invention. Applicant has realized that selecting an item by scanning the entire dataset, starting from the first item all over again for each item in the set, is not efficient, as the complexity is proportional to the dataset size. As the dataset grows, the average time to fetch or pick or select an item will increase, and the response time may worsen. Applicant has further realized that associative memory devices may be used to store large datasets and may provide an efficient in-memory system that may perform the “item select” method with a constant computation complexity, O(1), regardless of the size of the dataset. Memory devices that may provide such constant complexity are described in U.S. Pat. No. 8,238,173 (entitled “USING STORAGE CELLS TO PERFORM COMPUTATION”), issued on Aug. 7, 2012; U.S. Pat. No. 10,832,746 (entitled “NON-VOLATILE IN-MEMORY COMPUTING DEVICE”), issued on Nov. 10, 2020; U.S. Pat. No. 9,859,005 (entitled “MEMORY DEVICE”), issued on Jan. 2, 2018; U.S. Pat. No. 9,418,719 (entitled “IN-MEMORY COMPUTATIONAL DEVICE”), issued on Aug. 16, 2016; and U.S. Pat. No. 9,558,812 (entitled “SRAM MULTI-CELL OPERATIONS”), issued on Jan. 31, 2017, all assigned to the common assignee of the present invention and incorporated herein by reference. FIG.1, to which reference is now made, schematically illustrates an item select system100, constructed and operative in accordance with a preferred embodiment of the present invention. Item select system100comprises an item selector110and an associative memory array120that may store the dataset and any related information. Item selector110may select an item from the set according to one of the selection methods described hereinbelow. Item selector110further comprises a found and selected (FS) vector112, an “extreme item selector” (EIS)114that selects an item having the highest/lowest index in the set, and a “next index selector” (NIS)116that selects the next item in a linked list of items, described in detail hereinbelow. Item selector110may add an indication of the selected item to FS112, which may be used to fetch or pick the selected item. Item selector110may remove the indication from FS112after the item is fetched. It may be appreciated that in each fetch operation, a single indication is present in FS vector112. Item selector110may further remove the item from the set after it has been fetched. Associative memory array120may be a memory device comprising numerous sections122, constructed and operative in accordance with a preferred embodiment of the present invention and as shown in more detail inFIGS.2A and2B, to which reference is now made.
Section122may be arranged in rows210and columns220of memory units230, of which three columns are labeled LJ, J and RJ. Each memory unit230may store its own data232, and may have access to data232stored in adjacent memory units on each side. MU-J may read data232stored in memory unit230to its left, MU-LJ, as shown by dashed arrow241, or may read data232stored in the memory unit230to its right, MU-RJ, as shown by dashed arrow242. FIG.2Bschematically illustrates circuitry200associated with column J of section122(ofFIG.2A), constructed and operative in accordance with a preferred embodiment of the present invention. Circuitry200may enable memory unit MU-J to access adjacent memory units MU-RJ and MU-LJ and optionally perform Boolean operations between data232stored therein and data232of MU-J. Circuitry200comprises a multiplexer (mux)260, a logic unit280, and wires. The wires of circuitry200provide connectivity between memory units230of column J and the elements of circuitry200. A wire250-J may provide connectivity between memory units230of column J and logic280. A wire250-LJ may provide connectivity between memory units230of column LJ and mux260. A wire250-RJ may provide connectivity between memory units230of column RJ and mux260. It may be appreciated that a wire (250-LJ,250-J,250-RJ) between a column (LJ, J, RJ) and an element in circuitry200may read data232stored in any memory unit230in that column. Additionally or alternatively, wires250-LJ,250-J, and250-RJ may provide the result of a Boolean operation performed between data232stored in several memory units230in columns LJ, J and RJ, respectively. A wire270may provide connectivity between mux260and logic280and a wire290may provide connectivity between logic280and MU-J. Mux260may select to read data232from MU-LJ or from MU-RJ. Logic280may read data232from MU-J and may receive data from mux260. Logic280may perform Boolean operations between data232read from MU-J and data received from mux260. Logic280may write the outcome to a memory unit230of column J such as MU-J. It may be appreciated that using circuitry200, memory unit MU-J may replace its own stored data with data of an adjacent memory unit. Memory unit MU-J may alternatively perform a Boolean operation between its own stored data232and data from an adjacent memory unit and may replace its own data with the result of the executed Boolean operation. It may be appreciated that similar circuitry may be associated with each column of each section122(ofFIG.2A) of associative memory array120(ofFIG.1). FIG.3, to which reference is now made, illustrates the data and the way it is stored in associative memory array120. Associative memory array120may store a dataset310, an Index320having a unique value associated with each item, and a Marker vector330. Each item of dataset310may be stored in a dedicated column220, spanning several rows. The index related to a specific item may be stored in the same column220as the specific item. The indices may form a vector320stored in several rows of memory section122. An indication in Marker vector330, stored in a row210(FIG.2A) of section122, may be stored in the same column as the specific item and may indicate whether the item stored in the column is part of the set. In one embodiment of the present invention, the value 1 in a cell of Marker vector330may indicate that the item stored in the column is in the set (accordingly, the value 0 in a cell of Marker vector330may indicate that the item stored in the column is not in the set).
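As a software mental model only (the actual device operates on memory cells, with circuitry200providing the neighbor access described above), the column-per-item layout ofFIG.3may be pictured as parallel vectors, one entry per column; all values below are illustrative.

    # Software model of the FIG. 3 layout: one item per column, with the
    # column's unique index and a marker bit. In the device, Boolean steps
    # are applied to all columns simultaneously; the comprehension below
    # stands in for that column parallelism.
    data   = ["data-0", "data-1", "data-2", "data-3", "data-4"]
    index  = list(range(len(data)))    # Index320: unique value per column
    marker = [0, 1, 1, 0, 1]           # Marker vector330: 1 = item is in the set

    elected = [d for d, m in zip(data, marker) if m == 1]
    print(elected)                     # ['data-1', 'data-2', 'data-4']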
As illustrated inFIG.3, the actual storage and most of the computations are done vertically, in columns. It may be appreciated that the logical operations are performed in parallel on all columns220, i.e. concurrently on data related to all items stored in the dataset. FIG.3provides an example dataset310stored in associative memory array120. Dataset310stores data items: Data-0, Data-1 . . . Data-n. In this example, out of the entire dataset, three items are in the elected set, namely those having the value 1 in Marker vector330; it may be appreciated that, in this example, data items Data-1, Data-2 and Data-x are elected and should be read. Item selector110may use any appropriate “item select” method. As described hereinabove, EIS114may select the item having the highest/lowest index in the set, and NIS116may select the next item in a linked list of items, starting with the first item in the list, provided that a linked list of the items in the set is built and is accessible in advance. Both EIS114and NIS116are described in detail hereinbelow. In one embodiment, item selector110may choose to use a method according to the density of Marker vector330. As illustrated inFIG.4, to which reference is now made, item selector110may check (step410) the density of Marker vector330. The check may be done by counting the number of markers in Marker vector330and dividing the result by the number of items in the entire dataset. If the ratio is smaller than a predefined value (such as 5%, 10%, 15%, etc.), Marker vector330may be considered sparse, and dense otherwise. Alternatively, the density may be determined by comparing the number of items in the set to a predefined value, and if the number of items in the set is smaller than the predefined value, Marker vector330may be considered sparse. It may be appreciated that the density of the marker may be evaluated in any other way. When Marker vector330is sparse, as indicated in step420, item selector110may use EIS114, while when Marker vector330is dense, as indicated in step430, item selector110may use NIS116. It may further be appreciated that item selector110may select EIS114or NIS116according to considerations other than the density of Marker vector330. Item selector110may use only EIS114, only NIS116, or any mixture of EIS114and NIS116. EIS114may consider the index associated with a data item stored in the dataset, and the marker associated with the data item (both stored in the same column), as one composed value (CV) associated with the data item, where the marker bit is the most significant bit (MSB) of the composed value and the index provides the rest of the bits. Considering the CV this way, it is guaranteed that the elected items, marked with “1” in Marker vector330, will have larger values than non-elected items, marked with “0”, as binary numbers having a 1 in their MSB are larger than binary numbers having a 0 in their MSB. It is also guaranteed that a single item will eventually be picked: since each item has a unique index, the CV is unique, and there will be a single extreme CV. It may be appreciated that EIS114may also consider the index and the inverse of the marker bit (NOT-marker) as the MSB of the CV, and find the lowest index in the set. Considering the CV this way, it is guaranteed that the elected items, marked with “1” in Marker vector330, will have a 0 (NOT 1) as the MSB of their CV, ensuring that they have smaller values than non-elected items, which have a 1 (NOT 0) in their MSB.
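A software sketch of the EIS114selection loop follows. It abstracts the constant-time in-memory extreme search, discussed immediately below, as an ordinary max() call; the 4-bit index width and the function name are illustrative assumptions, and the marker placement mirrors the example ofFIG.5.

    # Sketch of EIS114: the marker bit is prepended as the MSB of each
    # index to form the composed value (CV), so every elected item
    # outranks every non-elected one, and the unique index guarantees a
    # single extreme CV. The device's constant-time maximum search is
    # abstracted here as max().
    INDEX_BITS = 4                     # enough for indices 0..15

    def fetch_all_by_extreme(marker, index):
        fetched = []
        while any(marker):
            cv = [(m << INDEX_BITS) | i for m, i in zip(marker, index)]
            col = cv.index(max(cv))    # the FS vector would hold this single hit
            fetched.append(col)        # read/fetch the item in this column
            marker[col] = 0            # zero the marker: remove item from the set
        return fetched

    marker = [1 if i in (2, 4, 7, 8, 10, 14) else 0 for i in range(16)]
    print(fetch_all_by_extreme(marker, list(range(16))))  # [14, 10, 8, 7, 4, 2]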
EIS114may utilize any search method to find a maximum or a minimum value among the CVs. It may be appreciated that U.S. patent application Ser. No. 14/594,434, incorporated herein by reference, describes a method for finding a maximum value in a dataset, with constant computation complexity regardless of its size, which may be used by EIS114. FIG.5, to which reference is now made, illustrates exemplary data stored in system100and its usage by the EIS114method. The different data items stored in system100are presented by several horizontal vectors: Data510storing the actual data; Index520storing the indices of the stored items; and Marker vector530storing the elected set of items. Additional information used by item selector110is a CV vector540and an FS vector112having an indication of the item selected by item selector110in a specific fetch request. It may be appreciated that CV540may not be actually stored in item select system100and is illustrated inFIG.5for clarity. It may be appreciated that, for clarity, a value stored in location x of a vector is represented herein as vector[x]. In the example ofFIG.5, Marker vector530includes an indication in Marker[2], Marker[4], Marker[7], Marker[8], Marker[10] and Marker[14], indicating that items whose indices are 2, 4, 7, 8, 10 and 14 respectively are in the set of elected items. It may be appreciated that the data items located in those indices are data-2, data-4, data-7, data-8, data-10 and data-14 respectively. CV540may be created by referring to the bit of Marker vector530as the MSB of a value created by the marker and the index of a data item. It may further be appreciated that the largest values in CV540are 18, 20, 23, 24, 26, 30, which are associated with items data-2, data-4, data-7, data-8, data-10 and data-14, as expected. Using the method described in U.S. patent application Ser. No. 14/594,434, item selector110may find the largest number in CV540. It may be appreciated that data item Data-14 is associated with 30, which is the largest value in CV540. Item selector110may set the value of FS[14] to 1 in FS112, as shown, and may then read the item associated with FS[14]. After reading/fetching the item, item selector110may zero Marker[14], the bit associated with the relevant read data item, essentially removing the read item from the set. After zeroing the marker bit, CV[14] may be recalculated and FS[14] may be zeroed. The original values are indicated by column220A and the modified values are indicated by column220B. Item selector110may now be ready for another operation of EIS114. It may be appreciated that these steps, of finding the largest value and nulling the entry in Marker vector530, are repeated until there are no more marked objects in Marker vector530, i.e. the set is empty. As already mentioned hereinabove, when the number of items in the set is large, NIS116may be more efficient, as a linked list is built once and, once built, all relevant items may be directly accessed (instead of performing a maximum or a minimum search operation per item). NIS116may create a linked list of indices of marked items by computing the “deltas” between the markers. A delta is the number of cells in the marker vector having a value of “0” between each two consecutive cells having a value of “1”. Each computed delta may be added to the index of the current elected item to get the index of the next elected item. FIG.6, to which reference is now made, illustrates exemplary data stored in system100and its usage by NIS116.
As already mentioned hereinabove, when the number of items in the set is large, NIS116may be more efficient, as a linked list is built once and, once built, all relevant items may be directly accessed (instead of performing a maximum or a minimum search operation for each item). NIS116may create a linked list of indices of marked items by computing the "deltas" between the markers. A delta is the number of cells in the marker vector having a value of "0" between each two consecutive cells having a value of "1". Each computed delta may be added to the index of the current elected item, to get the index of the next elected item.

FIG.6, to which reference is now made, illustrates exemplary data stored in system100and its usage by NIS116. The different data items stored in system100are presented by several horizontal vectors: Index610containing the indices of the stored items; Data620containing the actual data; and Marker vector630containing the elected set of items. NIS116may use additional information: a temporary vector, Temp635, used for intermediate computations; a Delta vector640; a List650; and an FS vector112having an indication of one item currently selected by item selector110.

In the example ofFIG.6, Marker vector630includes an indication in Marker[3], Marker[5], Marker[10], Marker[19], Marker[20], Marker[23], Marker[25] and Marker[27], which indicates that items whose indices are the values of Index[3], Index[5], Index[10], Index[19], Index[20], Index[23], Index[25] and Index[27] respectively, i.e. data-3, data-5, data-10, data-19, data-20, data-23, data-25 and data-27, are in the set of elected items, and linked list650should include these indices.

NIS116may first concurrently compute the deltas between markers in Marker vector630, where the delta is the number of zeros between each two consecutive ones in Marker vector630. The first item in the linked list may be the value of the first delta. In the example, the first marker bit is associated with the item whose index is 3, so the first value in List650(i.e. the value located in index 0) is 3 (List[0]=3). The next items in List650may be computed by adding the delta (to the next marker) to the index of the current marked item, plus one, which accounts for the cell occupied by the current marked item. The formula for computing the values of the items in List650is defined by Equation 1.

List[x]=Index[x]+Delta[x+1]+1  Equation 1

In the example, the next item in List650is List[3]. The value to store in List[3] is computed by adding the relevant value from Delta640, Delta[3+1], which in this example is 1, and adding another 1, i.e. List[3]=3+1+1=5. Arrow651visually points to the column pointed to by the first item in List650. The next item in List650is stored in the location pointed to by the previous item, i.e. in List[3]. Arrow652visually points to the item pointed to by List[3], and arrow653visually points to the next item in the list. It may be appreciated that the value of the last item in List650may be invalid, as it may point outside the linked list. In the example, and as illustrated by arrow654, the last bit set in Marker630is Marker[27], and it "points" to 29, which is outside the list. The detailed mechanism for building the linked list and for using it is described hereinbelow.

FIGS.7,8and9, to which reference is now made, include a flow chart700(inFIG.7) describing the steps performed by NIS116for building a linked list, together with the data of Delta640and List650while the steps of flow700are performed. In step710, NIS116may create Delta640and List650and may initialize them to 0. In addition, NIS116may create Temp635to store a copy of Marker630, to be used and manipulated during computation while keeping the original values of Marker630untouched for later use. FIG.8illustrates an example of values stored in Index610, Marker630, Temp635, Delta640and List650in step710ofFIG.7.

Returning toFIG.7, step720is a loop that may repeat the computation of step730K times. K is a predefined value that may be defined by any heuristic, such as a fixed number, the size of the dataset divided by the number of marks, the size of the dataset divided by the number of marks times 2, or any other heuristic. Step730describes how the delta is concurrently computed in the entire vector.
The value Delta[i], in each location i of Delta640(i=0 . . . N, where N is the size of the dataset), is incremented as long as there is a "0" in the same location in the temporary vector, i.e. in Temp[i]. In each iteration, the value in each cell i of Temp vector635may be calculated as the result of a Boolean OR of the value of cell i and the value of cell i+1, which is the next cell in the vector, as defined in Equation 2.

Temp[i]=Temp[i] OR Temp[i+1]  Equation 2

It may be appreciated that the effect of the Boolean OR operation on the entries of Temp635is essentially to copy entries having the value "1" to the left, at most K times. It may be appreciated that only the value 1 is copied to the left, while a value of 0 is not copied. As long as the value of an entry in Temp635is "0", the value of Delta640is incremented, i.e. in each iteration, the value of each cell Delta[i] in Delta vector640may be calculated as Delta[i] plus the inverse of the value stored in the respective cell, Temp[i], in Temp635, as defined by Equation 3.

Delta[i]=Delta[i]+NOT(Temp[i])  Equation 3

The result is that, over the iterations, each Delta[i] will be incremented by 1 until Temp[i] becomes 1.

FIG.9, to which reference is now made, illustrates the values of Delta640and Temp635after several iterations. Temp1 illustrates the values of Temp635after the first iteration, Temp2 illustrates the value of Temp635after the second iteration, and TempK illustrates the value of Temp635after the last (Kth) iteration, where the value of each cell in Temp635is calculated using Equation 2. It may be appreciated that the value of the last cell of Temp635does not change, as there is no cell after the last. Note that the value of a cell in Temp635may be changed from 0 to 1 but not vice versa, from 1 to 0. It may be appreciated that, after the last iteration, DeltaK[0] holds 3, the distance to the nearest set marker (in Marker630); DeltaK[1] holds 2, the distance to the nearest marker; DeltaK[2] holds 1; and DeltaK[3] is at the actual location of the marker, thus the distance to the nearest marker is 0. Delta1 illustrates the values of Delta640after the first iteration, Delta2 illustrates the value of Delta640after the next iteration, and DeltaK illustrates the value of Delta640after the last (Kth) iteration, where the value of each cell in Delta640is computed using Equation 3. Note that the value of a cell increases to represent the number of zeroes encountered in Temp[i] in the past iterations, and eventually the number of zeros between an item and the next marked item.

Returning toFIG.7, step740describes the calculation of List650after the last iteration of step730. List650may be created from the deltas stored in Delta640and from Index610. The first item in List650may be the value of the first delta, which may be associated with the first item in the dataset, having the index 0. All the items of List650, except the first one, may be calculated as a function of the delta and the index of the item "pointing" to the next one, as defined in Equation 1 hereinabove.

FIG.10, to which reference is now made, illustrates the value of List650calculated using the values stored in Index610and in Delta640, as shown. It may be appreciated that the first item in List650points to the first marked item, i.e. List[0]=3. The next item in List650is List[3], which points to 5. The entries of List650are pointed to by arrows655, and the other entries of the list have meaningless values, which are not used by NIS116.
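Equations 1-3 and theFIG.9walkthrough can be reproduced with plain loops. The sketch below is an interpretation under stated assumptions: the Delta update of Equation 3 is applied before the Temp update of Equation 2 within each iteration (which matches theFIG.9values), the helper names are invented, and the inner loops stand in for operations the associative array performs on all columns at once. The list walk anticipates flow1100described below.

```python
# Sketch of NIS116: K iterations of Equations 2 and 3 build Delta, then
# Equation 1 builds List, and the list is walked to fetch elected items.

def nis_build_list(marker: list[int], K: int) -> list[int]:
    n = len(marker)
    temp = marker[:]                 # Temp635: scratch copy of Marker630
    delta = [0] * n                  # Delta640, initialized to 0
    for _ in range(K):
        for i in range(n):           # Equation 3: Delta[i] += NOT(Temp[i])
            delta[i] += 1 - temp[i]
        for i in range(n - 1):       # Equation 2: Temp[i] |= Temp[i+1]
            temp[i] |= temp[i + 1]   # copies 1s one cell to the left
    # Equation 1 at every position; the final cell gets an out-of-range guard.
    lst = [i + delta[i + 1] + 1 for i in range(n - 1)] + [n]
    lst[0] = delta[0]                # first entry points at the first marker
    return lst                       # (a marker at index 0 would need care)

def nis_fetch_all(data: list, marker: list[int], lst: list[int]) -> list:
    """Flow 1100: follow pointers, fetching only marked cells; entries at
    unmarked cells act as the intermediate pointers described below."""
    out, index = [], lst[0]
    while index < len(data):
        if marker[index]:
            out.append(data[index])
        index = lst[index]
    return out

marker = [0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0]   # elected: 3, 5, 10
lst = nis_build_list(marker, K=8)
print(nis_fetch_all([f"data-{i}" for i in range(12)], marker, lst))
# ['data-3', 'data-5', 'data-10']
```

Setting K below the largest gap (for instance K=2 here) leaves some deltas saturated at K, which is exactly what produces the intermediate pointers discussed next; the walk still terminates because every hop moves strictly to the right.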
It should be noted that K, the number of iterations, is selected without any connection to the actual deltas between markers in an actual marker vector. Thus, it is possible that there may be a delta which is larger than K. In this case, the large delta cannot be computed in only K iterations and, thus, it will not be listed in List650. Instead, List650may contain entries List[x] that are intermediate pointers that point to an unmarked item, i.e. the value of Marker[x] may be 0. Nevertheless, List[x] may point to another entry in List650that may eventually be associated with a marked entry.

Once List650is ready, it can be used for reading the items. The item fetching flow, implemented by NIS116, is described inFIG.11, to which reference is now made. Flow1100describes in step1110that the first item to read may be the one with the index located in the first item of the list, i.e. index=List[0]. Step1130indicates that if the marker is set, i.e. Marker[index]==1, NIS116(ofFIG.1) may fetch Data[index], and in step1140, NIS116may read the index of the next item to fetch from the current location in the list, i.e. index=List[index]. Step1145checks that the index is valid, i.e. its value is less than N, the size of the dataset. If the index is not smaller than N, the flow terminates in step1150.

It may be appreciated by the person skilled in the art that the steps shown in flow700and flow1100are not intended to be limiting and that both flows may be practiced with more or fewer steps, or with a different sequence of steps, or any combination thereof.

It may be appreciated that item select system100may reduce the computation time for fetching elected items from a database. The computation steps may be performed concurrently on all cells of the vectors Temp635and Delta640. The abilities of the associative memory device, that is, concurrent Boolean operations on all columns and concurrent access to data stored in adjacent cells in the same row, may provide a computation complexity that does not depend on the vector size for both methods of item select. For a set of size P, the computation complexity of EIS114is O(P) and the computation complexity of NIS116is O(K).

While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those of ordinary skill in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.
DETAILED DESCRIPTION The present disclosure is generally described in detail with reference to embodiments illustrated in the drawings. However, other embodiments may be used and/or other changes may be made without departing from the spirit or scope of the present disclosure. The illustrative embodiments described in the detailed description are not meant to be limiting of the subject matter presented herein. Reference will now be made to the exemplary embodiments illustrated in the drawings, and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended. Alterations and further modifications of the inventive features illustrated herein, and additional applications of the principles of the inventions as illustrated herein, which would occur to one skilled in the relevant art and having possession of this disclosure, are to be considered within the scope of the invention.

Embodiments of the present disclosure relate to methods, systems, and computer storage media having computer-executable instructions embodied therein that, when executed, perform methods in accordance with embodiments hereof, for creating and sharing bots. The disclosed system and method reduce the labor overhead that burdens the processing units that run the program and process user inputs, and improve the ability of the processing units to display information and interact with the user. This may be accomplished in some embodiments by reducing the data gathering required to accomplish specific tasks desired by the user, in a manner that reduces the labor overhead burdening the processing units operating in accordance with the disclosed system or improves the ability of the processing units to display information and interact with the user. For instance, the system adaptably arranges structured data in a user interface, ameliorating visual interference among aspects of structured data being displayed.

In various embodiments, a system is provided for automating or otherwise streamlining data manipulation so as to capture unstructured data and, responsive to inputs, provide structured data, improving the efficiency of system operation and managing processor and network load. The system may monitor and direct user activities so as to allow the user to spend less time managing activities that may be automated by the system. This provides the benefit of reducing user input to the system, which reduces the processing labor overhead that would be required if the user were otherwise performing the activities himself. Through the use of bots, the system is also capable of streamlining the user's experience by reducing the input required from the user, so that the user may accomplish tasks or goals with minimal input, and structured data output by bots may be arranged in a non-interfering, adaptable way. In addition to reducing the processing labor overhead, this also improves the ability of the processors to display information and interact with the user, because the user is able to accomplish more tasks with less input than would otherwise be possible without the system.

In various example embodiments, the system may serve three communities, such as customers, creators, and suppliers. As discussed herein, a customer may be a person who wants to use products and services generated within the system.
A creator may be a person who wants to embark on a journey of creating products and services, for instance, by utilizing bots. A supplier may be a person who fulfills the delivery of products and services to customers, again, for instance, by using bots or by taking directives from bots via a user device. For example, one scenario used as an example in this disclosure is a customer purchasing a pair of shoes in the system, which is designed by another system user and fulfilled through integration with third-party APIs (such as a Nike or Amazon API) or fulfilled through supply chain partners or other system users. A creator embarks on a journey of creating her own shoe. Her profile is created with random, unstructured data. Once she reaches a certain level, the system will automatically structure her data to create a stronger profile. The system then pulls her profile and begins to design her shoe (with user input); it sends the shoe to a supplier who 3D prints it and ships it to the customer who purchased it.

Moreover, certain concepts relate advantageously to the aspects of the system disclosed herein. For instance, the system may be sensor driven, meaning an apparatus attached to the user will capture the user's location, environment noise level and digital interactions. The system may include randomizing of user data, particularly for new users, meaning that the system will automatically update the user's profile with random data. This builds the user's profile based on random assumptions. The user may be provided with a map of experiences, such as depicted inFIGS.20A-C, wherein users can see past locations, environment noise levels and digital interactions via a map built into the system. Moreover, via the interface ofFIGS.20A-C, the system enables user experience input, meaning users can adjust their locations, environment noise levels and digital interactions to change random assumptions.

The system may provide structured data, which may include a structured product or a structured service. This means the system will automatically develop custom products and services (BOT) based on the user's profile. As used herein, a bot may include an automated action. Furthermore, patternless decisions may be provided, which include a set of randomized reversible automated actions. The system may intake unstructured data. This may include information automatically collected and stored in random fields. The system may analyze unstructured data. Thus, the system may automatically structure the unstructured data once triggered by the user's profile ranking. The system may also create data relationships. This means the system will automatically generate new structured databases once triggered. The system may determine the user's path. In other words, the system will use newly generated structured databases to create a path for users. Also, a user path or user's path may include an industry identified by the system based on the structured database and a real-time trajectory along that path, meaning the system will display the structured database to the user as it is being formed. As mentioned, a user profile may exist, which is an editable but automatically generated user identity. Also, the system may structure products and services to provide automatic real-time fulfillment of needs and wants. Among those structured data outputs provided to a user, a project task list and schedule may include a logistical requirement for structured product or service fulfillment.
Finally, communications among entities and modules discussed herein may include bidirectional communication, meaning that data can be automatically generated and structured by the system for a user, and a user may also manipulate their own data to generate the profile of their choice, including offering their own personal data for sale. The system also contemplates bidirectional communication to suppliers, meaning that structured products or services can be automatically generated, and suppliers can also request modifications to structured products or services, for instance, those the supplier is called on by the system to fulfill. Aspects of the systems, for instance, various system actors such as users or a system operator, may be autonomous business organizations. Similarly, various structured products and services may be associated with third-party autonomous business organizations, meaning an organization operating without boards of directors or management teams. Finally, reference may be made to user progress or similar concepts, meaning a measurement of user production (number of user profile updates, number of launched products and services, number of fulfilled products and services, etc.). A user may also provide input, so that the concept of monitoring user input includes actively monitoring user location, context environments and digital interactions.

Having briefly described an overview of embodiments of the present disclosure, an exemplary operating environment implementing various aspects of the present disclosure is described below. Referring to the drawings in general, and initially toFIG.1Ain particular, an exemplary operating environment for implementing embodiments of the present disclosure is shown and designated generally as computing device100. Computing device100is but one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the present disclosure. Neither should the computing environment100be interpreted as having any dependency or requirement relating to any one or combination of components illustrated.

The present disclosure may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program components, being executed by a computer or other machine, such as a personal data assistant, smartphone, tablet, or other such devices. Generally, program components, including routines, programs, objects, components, data structures, and the like, refer to code that performs particular tasks or implements particular abstract data types. Embodiments of the present disclosure may be practiced in a variety of system configurations, including hand-held devices, consumer electronics, general-purpose computers, specialty computing devices, etc. Embodiments of the present disclosure may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network.

With continued reference toFIG.1A, computing device100includes a bus110that directly or indirectly couples the following devices: memory112, one or more processors114, one or more presentation components116, input/output (I/O) ports118, I/O components120, and a power supply122. Bus110represents what may be one or more busses (such as an address bus, data bus, or combination thereof).
Although the various blocks ofFIG.1Aare shown with lines for the sake of clarity, in reality, delineating various components is not so clear. That is, the various components ofFIG.1Amay be integrated into a single component, or the various components ofFIG.1Amay be parsed into any number of additional components. For example, in some circumstances a presentation component116, such as a display device, may be an I/O component120. Likewise, in some instances, a processor114may comprise memory112. As such, it should be appreciated that the diagram ofFIG.1Ais merely illustrative of an exemplary computing device that can be used in connection with one or more embodiments of the present disclosure. Distinction is not made between such categories as "workstation," "server," "laptop," "hand-held device," "smartphone," etc., as all are contemplated within the scope ofFIG.1and reference to "computer," "computer device," "computing device," and other similar terms known to one of ordinary skill in the art.

Computing device100typically includes a variety of computer-readable media. By way of example, and not limitation, computer-readable media may comprise Random Access Memory (RAM); Read Only Memory (ROM); Electronically Erasable Programmable Read Only Memory (EEPROM); flash memory or other memory technologies; CDROM, digital versatile disks (DVDs) or other optical or holographic media; magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices; or any other medium that can be used to encode desired information and be accessed by, for example, computing device100.

Memory112(also referred to herein as a database) includes computer storage media in the form of volatile and/or nonvolatile memory. The memory may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard drives, optical-disc drives, etc. Computing device100includes one or more processors that read data from various entities such as, for example, memory112or I/O components120. Presentation components116present data indications to a user or other device. Exemplary presentation components116may include a display device (also referred to as a visual display), speaker, printing component, and haptic-feedback component, among others. I/O ports118allow computing device100to be logically coupled to other devices including I/O components120, some of which may be built-in. Illustrative components include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, mouse, keyboard, and touchscreen components, among others.

With reference toFIG.1B, processor114may comprise various logical modules. For instance, a plurality of modules may be interconnected by a bus151to a bus controller150which may direct communication among the modules and among external resources such as via an I/O (input/output) module152. For example, a processor114may comprise a directive transceiver162. A directive transceiver162may transmit or receive instructions between a user and aspects of processor114, as further described herein. The directive transceiver162may transmit instructions to a user, via a GUI present on a user device, to execute one or more tasks. For example, a directive transceiver162may instruct a user to meet another user.
Moreover, a directive transceiver162may interoperate with an I/O module152to structure elements on a GUI of multiple users, for instance, instructing two or more users to meet, and then providing corresponding queries for the users to complete.

A processor114may comprise a smart data extraction engine163. A smart data extraction engine163may access data resources, for instance, sensor data provided by a sensor overseer153, or public-facing data from third-party resources, for instance, the Internet, or credit reporting agencies, or transaction data, or location data and/or the like. The smart data extraction engine163may retrieve such data in an unstructured form and provide it via the bus151to other aspects of the processor114for structuring and further processing.

A processor114may comprise an alignment translator164. An alignment translator164may interoperate with an I/O module152to, responsive to the structuring of data by aspects of the processor114, arrange the structured data in GUI elements (for instance, as depicted inFIGS.10,17,18, and21A-C) for non-interfering display. Because the structured data may change and may require differing amounts of space for display, or different categories of structured data may be displayed at different times (for instance, charts, drawings, radio buttons, textual items), the alignment translator164may translate GUI elements in any direction, scale them, or selectively include or omit elements based on data received at the I/O module152, such as data indicative of device type, priority of data to be displayed in view of the context environment, and/or the like.

A processor114may comprise a user path mask engine165. A user path mask engine165may ingest unstructured data and may establish and/or modify a user path by masking available user paths from user path repository159to select a particular ("selected") path for a particular user.

A processor114may comprise a profile randomizer166. For example, each user may have a profile comprising all structured and unstructured data regarding that user stored in the platform user database158and/or provided by third-party resources via the I/O module152. However, in various embodiments, little or no data is available for a user, or the system (such as for a new user) disregards available information and instantiates a profile for the user comprising randomly selected data not specifically filtered according to data regarding that user, or comprising randomly selected subsets of data regarding that user.

A processor114may comprise a profile sentinel167. A profile sentinel167may, in coordination with the I/O module152, monitor unstructured data, for instance, user inputs, and monitor structured data, for instance, a user's editing of the user's profile, and may update records, such as those stored in or provided to the user path repository159, the user path mask engine165, and the directive transceiver162, in response to the user's editing of the user's profile.

A processor114may comprise an evolution controller168. An evolution controller168may direct the profile sentinel167to cause automatic updates to the user's profile based on changes to data regarding the user. For instance, the platform user database158may comprise a database of all interactions between a user and the system as well as between a user and other users on the system. The evolution controller168thus interoperates with the platform user database158to direct the profile sentinel167to update a user profile.
In response, a user path mask engine165may further amend a user path, and vice versa.

A processor114may comprise a bot creation engine169. A bot comprises a script, a combination of scripts, a template, and/or the like configured to ingest unstructured data and output structured data. The bot creation engine169interoperates with an I/O module152so a user may actively create new bots and modify existing bots.

A processor114may comprise a bot utilization engine170. A bot utilization engine170may load a bot and may interoperate with any of the other modules disclosed herein in order to effectuate the transformation of the unstructured data into structured data by the bot. The bot utilization engine170may interoperate with the I/O module152to display a first bot to a user corresponding to the user profile and hide a second bot from the user not corresponding to the user profile.

A processor may comprise a bot repository160, a user path repository159, and a platform user database158, though in further embodiments, the processor may be separated from these aspects by a network161, so that the connection is provided logically with the assistance of an I/O module152. A bot repository160may comprise a database storing all bots. A user path repository159may comprise a database storing all potential user path elements which may comprise a user path, and storing masked user paths provided by the user path mask engine165that are specific to specific users. A platform user database158may comprise a database of all users and/or all structured and/or unstructured data regarding a user.

A processor may comprise a public/private facing data integrity engine156. In various embodiments, private data not for disclosure to other users is processed to provide structured data for display by an alignment translator164. The public/private facing data integrity engine156may oversee the transfer of data from users to system aspects, and the transfer of data from system aspects to a GUI, and maintain the relative privacy of private-facing data and the relative publicity of public-facing data. For instance, the public/private facing data integrity engine156may set, change, and monitor flags associated with data, or may read the content of data to determine if private data is mixed with public data, such as by machine learning, string recognition, image recognition, and/or the like.

The processor may comprise a notification generator155. In various instances, the system may provide alerts to a user, such as to direct the user to interact with other users, machines, network devices, or other aspects of a context environment in which a user is operating. A context environment comprises the surrounding conditions, location, individuals, and tasks proximate to a user, as well as the relevant characteristics of the user related to interaction with those surrounding conditions, location, individuals, and tasks, as well as networked devices that the user may interact with and the specific interactions that the user makes with the networked devices. Such networked devices may include smartphones, tablets, browser sessions, and/or the like. For instance, a user may be directed to communicate with a subject matter expert having specific knowledge related to an element of a user path set by a user path mask engine165. The user may be notified by a notification generator155of a time and place to engage in the communication.
Subsequently, the notification generator155may provide a notification to the user path mask engine165and the evolution controller168directing an automatic revision to a user profile so that a new and/or different bot, such as the hidden second bot, is unhidden, and/or an element of a user path may be selected from the user path repository159by the user path mask engine165, and the alignment translator164may display an element on a GUI indicative of the notification and/or user path element.

The processor may comprise a behavior assessment and compliance engine154. A behavior assessment and compliance engine154may actively monitor interactions of a user with the system; for instance, interactions with the context environment, such as with other users having other networked devices communicative with the user's own networked devices, interactions with the notification generator155, interactions with the profile sentinel167, and interactions with the sensors monitored by the sensor overseer153. In response to a failure to complete aspects of a user path, or in response to behavior harmful to other users or deleterious to the system, the behavior assessment and compliance engine154may terminate or limit access to aspects of the system by the user.

Finally, the processor may comprise a sensor overseer153. A sensor overseer153may interface various sensors to the processor114and may control operation of sensors and may relay sensor data to other modules such as the evolution controller168and user path mask engine165to characterize a context environment. For example, a vision sensor may be emplaced and may detect and identify interactions between the user and other users, such as by facial recognition. A sound sensor may be emplaced and may detect and identify patterns of life in a user, such as sleep/wake cycles, types of locations (coffee shop, movie theater, gymnasium) based on sensed sounds, and entertainment preferences of a user.

While various aspects of a processor114of a system have been discussed, attention is now directed toFIG.1Cand an associated discussion of different system180components which may comprise devices containing a processor114. For instance, a platform183may reside in a server and may have one or more processors114. The platform183may enable communication among other system aspects and may ingest data185, structured and unstructured, as well as may provide data185that is structured. For instance, a creator187may comprise a user who in certain context environments adopts a role of creating and/or modifying bots186stored in a bot repository160(FIG.1B). A supplier184may be a user who in certain context environments adopts a role of following instructions provided by a bot186; for instance, a processor114(FIG.1B) with a behavior assessment and compliance engine154(FIG.1B) may monitor notifications provided by a notification generator155(FIG.1B) and a user path stored in the platform user database158(FIG.1B) and alterable by a user path mask engine165(FIG.1B). In this manner, a bot186may direct a supplier184to accomplish tasks off-system to effectuate operation of a bot created by a creator187. Moreover, the platform183may interface with external supply chains181that are outside system182and interconnected by communication links such as the Internet, to further accomplish tasks off-system to effectuate operation of a bot186created by a creator187(FIG.1B).
For instance, a bot186may require the provision of a product for modification (for instance, shoes for customization), and the supply chain181may provide the shoes and/or customization hardware or services.

Reference is now made toFIGS.1A-1C, as well asFIG.2, which illustrates an exemplary embodiment of a networked device environment200which may include one or more systems180. The networked device environment200may include, as a logical aspect therein, the system180(FIG.1B), which may include processors114(FIG.1A). The networked device environment200is merely an example of one suitable environment and is not intended to suggest any limitation as to the scope of use or functionality of the present disclosure. Neither should the networked device environment200be interpreted as necessarily having any dependency or requirement related to any single component or combination of components illustrated therein.

The networked device environment200includes one or more servers, such as server210. The one or more servers may host databases storing data related to various aspects of the networked device environment200, including data for interfacing with users of the networked device environment200. In the embodiment illustrated inFIG.2, the server210is a user server for performing user data mining (as described below), setting up user accounts, receiving user input, processing user input, and other tasks. In some embodiments, the data stored in the databases202may include user data, which may include any data related to a user of the networked device environment200, and one or more bots, which comprise executable code for performing tasks, including automated tasks. In the embodiment illustrated inFIG.2, the bots are stored in a bot database204, which is also known as the bot repository160. Although the bot database204is shown as being hosted on server220, it should be appreciated that the bot database204may be hosted in other locations and may be considered logically an aspect of a processor114as shown inFIG.1B.

The user data is information relevant to a user of the networked device environment200. A user may include an individual person, or a company or vendor. The user data may include information about the user such as, for example, the user's address, age, height, or any other information that is related to the user. This other information may include data associated with various accounts pertaining to a user (e.g., FACEBOOK accounts, TWITTER accounts, etc.), or any other information relevant to the user. In some embodiments, a user manually inputs the user data. In some embodiments, a user grants access to their various accounts, and the networked device environment200mines data associated with the various accounts to populate the user data with information obtained from the data mining. In further embodiments, the networked device environment200mines data associated with the user from third-party sources not associated with user accounts, such as by mining public records, network traffic, and the like. The user data may also include information such as user preferences, user activities, or any other information that may be determined from a user's activities, including patterns and inferences drawn from the user's activities or patterns. The user data may be used to produce a user profile for interfacing with the networked device environment200.
In some embodiments, the networked device environment200includes additional servers for performing tasks associated with operating the networked device environment200. Example servers may include a bot server220(for example, a bot creation engine169) for generating bots, executing bots (for example, a bot utilization engine170), storing bots in the bot database204(for example, bot repository160), storing bot templates in a template database206(for example, bot repository160), determining relevant bots (e.g., bots that are trending among users, and bots that were recently used by a particular user), and for handling other tasks related to building, executing, and/or managing bots such as, for example, allowing particular bots to be duplicated and/or revised. Another example server includes a messenger server225for operating messenger programs. The messenger programs provide a platform for users to communicate through the networked device environment200. Another example server includes server227for managing an Artificial Intelligence Personal Assistant (AIPA) that is capable of serving as an interface between a user and the networked device environment200. The AIPA server227may, in some embodiments, provide automated interaction with a user. For example, the AIPA server227is capable of requesting information from a user, assisting a user with searching for a particular bot, assisting a user with solving a problem or accomplishing a goal, making recommendations to a user, and using artificial intelligence to interact with a user in various other ways. In various aspects, the AIPA is a logical representation of aspects of the processor114. The networked device environment200also includes a miscellaneous server229for performing other tasks such as generating graphical user interfaces (GUI), displaying GUIs to users, operating Internet browsers, or performing any other tasks or operations disclosed herein. It should be appreciated that various operations described herein with respect to a particular server may alternatively (or additionally) be performed by other servers or components comprising the networked device environment200, including aspects of processor114disclosed inFIG.1B.

The networked device environment200also includes a user computing device230for interfacing with the networked device environment200, wherein the user computing device230may be any type of computing device, such as device100described above with reference toFIG.1A-C. By way of example only and not limitation, the user computing device230may be a personal computer, desktop computer, laptop computer, handheld device, cellular phone, digital phone, smartphone, PDA, or the like, having a processor114. It should be noted that embodiments are not limited to implementation on such computing devices.

The networked device environment200also includes a network250for providing a data connection between each of the components of the networked device environment200, wherein the data connection may be wired or wireless. The network250may include any computer network or combination thereof. Examples of computer networks configurable to operate as network250include, without limitation, a wireless network, landline, cable line, digital subscriber line (DSL), fiber-optic line, local area network (LAN), wide area network (WAN), metropolitan area network (MAN), or the like. The network250is not limited, however, to connections coupling separate computer units.
Rather, the network250may also include subsystems that transfer data between servers or computing devices. For example, the network250may also include a point-to-point connection, the Internet, an Ethernet, an electrical bus, a neural network, or other internal system. In an embodiment where the network250comprises a LAN networking environment, components may be connected to the LAN through a network interface or adapter. In an embodiment where the network250comprises a WAN networking environment, components may use a modem, or other means for establishing communications over the WAN, to communicate. In embodiments where the network250comprises a MAN networking environment, components may be connected to the MAN using wireless interfaces or optical fiber connections. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may also be used. Furthermore, the network250may also include various components necessary to facilitate communication with a mobile phone (e.g., cellular phone, Smartphone, Blackberry®). Such components may include, without limitation, switching stations, cell sites, Public Switched Telephone Network interconnections, hybrid fiber coaxial cables, or the like. Components of the servers210,220,225,227, and229may include, without limitation, a processing unit, internal system memory, and a suitable system bus for coupling various system components, including one or more databases for storing information (e.g., files and metadata associated therewith). Each server may also include, or be given access to, a variety of computer-readable media. By way of example, and not limitation, computer-readable media may include computer-storage media and communication media. In general, communication media enables each server to exchange data via network250. More specifically, communication media may embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information-delivery media. As used herein, the term “modulated data signal” refers to a signal that has one or more of its attributes set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above also may be included within the scope of computer-readable media. It will be understood by those of ordinary skill in the art that the networked device environment200is merely exemplary. While the servers210,220,225,227, and229are illustrated as single boxes, one skilled in the art will appreciate that they may be scalable. For example, the servers210,220,225,227, and229may in actuality include multiple boxes in communication and/or may be combined as elements of a single box, such as a processor114. The depictions are meant for clarity, not to limit the scope of embodiments in any form. The user computing device230may comprise a web browser, which is one or more software applications enabling a user to display and interact with information located on a web page, which may be a password-protected web page or online portal for users only. 
In an embodiment, the web browsers may communicate with servers210,220,225,227, and229, and other components accessible over the network250. The web browser may locate web pages by sending a transfer protocol and the URL. The web browser may use various URL types and protocols, such as hypertext transfer protocol (HTTP), file transfer protocol (FTP), real-time streaming protocol (RTSP), etc. The web browser may also understand a number of file formats (such as HTML, graphics interchange format (GIF), tagged image file format (TIFF), portable document format (PDF), or joint photographic experts group (JPEG) file format, and the like), the wealth of which can be extended by downloaded plug-ins. Additionally, the web browser may be any browser capable of navigating the Web, such as Internet Explorer®, Netscape Navigator, Mozilla, Firefox, etc.

In operation, a user may access a web page using the web browser on the user computing device230. The web page may be stored on a server such as, for example, the servers210,220,225,227, or229, which are configured to transmit the HTML and other content associated with the web page to the user computing device230. The web browser may be configured to render the web page and display it to the user.

In essence, the networked device environment200provides a platform that serves as an interface between users of the networked device environment200. The networked device environment200may be used to automate or otherwise streamline user activities such as paying bills, ordering goods or services, locating information, or performing other tasks so as to allow the user to spend less time managing activities that may be automated by the networked device environment200. This also provides the benefit of reducing user input to the networked device environment200, which reduces the processing labor overhead that would be required if the user were otherwise performing the activities themselves. Through the use of bots, the networked device environment200is also capable of streamlining the user's experience by reducing the input required from the user, so that the user may accomplish tasks or goals with minimal input. In addition to reducing the processing labor overhead, this also improves the ability of the processors to display information and interact with the user because the user is able to accomplish more with less input than would otherwise be required without the networked device environment200and, more specifically, the bots.

A bot may accomplish various other interactions within a networked device environment200, for instance, to transform unstructured data into structured data. For instance, a bot may ingest sensor data from a sensor overseer153regarding user behaviors. Moreover, a bot may allow a user to change the sensor data. The bot further coordinates activities related to unstructured data, for instance, identifying the people, APIs, and other machines or servers with which interactions are needed and to which unstructured data is provided and/or for which it is structured for provision. In this manner, the network operation is improved by the transaction of only relevant data among the various entities, rather than requiring the retransmission of unstructured data among all entities, as well as by reducing processor overhead by centralizing the transformation of unstructured data into structured data, eliminating duplicative processing within the networked device environment200.
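As a toy illustration of the unstructured-to-structured transformation just described, a bot can be modeled as a template that keeps only the fields it declares. Nothing below is taken from the disclosure; the dataclass shape, field names, and coercion rule are all assumptions.

```python
# Illustrative model of a bot as a transformer from unstructured input
# (arbitrary key/value data) to structured output (typed, declared fields).
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Bot:
    name: str
    template: dict[str, type]                  # declared structured fields
    records: list[dict[str, Any]] = field(default_factory=list)

    def ingest(self, unstructured: dict[str, Any]) -> dict[str, Any]:
        """Keep only template fields, coerced to declared types; the rest
        of the raw input (e.g. stray sensor readings) is dropped."""
        structured = {k: t(unstructured[k])
                      for k, t in self.template.items() if k in unstructured}
        self.records.append(structured)        # centralized, done once
        return structured

shoe_bot = Bot("shoe-order", {"size": int, "color": str})
print(shoe_bot.ingest({"size": "10", "color": "red", "noise_level": 42}))
# {'size': 10, 'color': 'red'}
```

Centralizing this step in one place is what lets only the structured result, rather than the full raw input, travel between entities.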
Moreover, the bot may generate a user profile based on the unstructured data and further based on the structured data and interactions with other networked device environment components. A bot may be created by users, for instance, a creator187(FIG.2), or by system administrators. A bot may be created in at least three ways: creating a new bot from a template, modifying an existing bot, or providing a bot through an application programming interface (API). Bots may be updated, duplicated, or deleted from the bot repository160(FIG.2) by the user who created the bot or by an administrator of the networked device environment200.

For example, reference is now made toFIGS.3-9.FIG.3provides an example flow chart of a method for creating a bot300, andFIGS.4-9provide example screenshots corresponding to the disclosed example method for creating a bot. The method ofFIG.3and screenshots ofFIGS.4-9correspond to an example for creating a bot for buying a ticket to an event. At step301ofFIG.3, the networked device environment200receives a user input indicative of the user's desire to create a bot. This may be facilitated by a user's selection of the "bot generator" button410illustrated inFIG.4. At step302, the bot server220generates a GUI that is displayed on the user device230prompting the user to build a bot from a template. InFIG.5, the bot templates are listed in a menu option501. At step303, the bot server220receives the user selection of an "event" bot502. At step304, the bot server220retrieves the event template601from the template database206and generates a GUI that is displayed on the user device230to present the event template601to the user as shown inFIGS.6and7. The event template601includes data fields for the event name602, event time603, event date604, event location605, cost of admission606, a description of the event607, and an image associated with the event. As shown inFIG.8, a user may select an image801from the user device230to complete the data field701. Once this information is entered into the data fields by the user, the bot server220receives the user data to build the bot at step305. As shown inFIG.9, a preview of the bot901may be presented to the user. Once the user approves the bot by selecting the "publish" button902, the bot is stored in the bot database204at step306. In some embodiments, the user may save the bot by selecting the save button903, or can edit the bot by selecting the edit button904.
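The event template of steps301-306can be pictured as a simple record whose fields mirror elements602-607and the image field701, with publication writing it to the bot database. This is a hedged sketch; the class, field names, and in-memory database are illustrative stand-ins, not the disclosed implementation.

```python
# Assumed shape for the event template of FIGS.6-8 and the store of step 306.
from dataclasses import dataclass

@dataclass
class EventBotTemplate:
    event_name: str           # field 602
    event_time: str           # field 603
    event_date: str           # field 604
    event_location: str       # field 605
    cost_of_admission: float  # field 606
    description: str          # field 607
    image_path: str           # field 701

BOT_DATABASE: dict[str, EventBotTemplate] = {}  # stand-in for database 204

def publish_bot(bot: EventBotTemplate) -> None:
    """Step 306: the approved bot is stored for later selection and sharing."""
    BOT_DATABASE[bot.event_name] = bot

publish_bot(EventBotTemplate("Spring Concert", "20:00", "2024-06-01",
                             "Main Hall", 25.0, "Live show", "poster.png"))
```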
Reference is now made toFIG.10, which illustrates a sample screenshot of an embodiment of a GUI1000presented on a user device230. As shown inFIG.10, the GUI1000includes a button1001for generating a bot, a button1002for accessing the AIPA, and a listing1003of bots1003A,1003B, and1003C. In the embodiment illustrated inFIG.10, the bots listed are those that are trending among users. In other words, the trending bots are those that are currently the most popular bots being used by other users of the networked device environment200. The trending bots are shown by selecting the trending button1004. Similarly, the user's recently used bots may be displayed when the user selects the "Recent" button1005. The messenger application may be accessed when the user selects the "Messenger" button1006, and the user's profile is accessed when the user selects the "Me" button1007. An example illustration of the messenger application is provided inFIG.11, wherein a bot1101is provided in a message1102sent using the messenger application.

In some embodiments, when a user selects a bot (e.g., by selecting one of the bots displayed to the user), the bot server220processes the bot selection in accordance with the method illustrated inFIG.12and the screenshots provided inFIGS.13-16. The method inFIG.12and the screenshots inFIGS.13-16correspond to an example of a user selecting a bot to purchase shoes. At step1201ofFIG.12, the bot server220receives a user input indicative of bot selection. A user may select or access a bot by clicking on a bot presented in the messenger application (seeFIG.11) or browser (seeFIG.18), performing a search in the messenger or browser, receiving a bot from the AI Personal Assistant (in messenger or email form), or the completion of a triggering event (e.g., a user wearing a smart backpack walks into a room, and a Bluetooth connection is established between a local network and the smart backpack, thereby serving as a triggering event). InFIG.13, the user selects the "<KD35>" bot1301, and the bot server220receives the user selection at step1201ofFIG.12. At step1202, the networked device environment200then retrieves user data such as, for example, the user's location data, from the user data stored in the user database202. This information is used to customize the selection of the product to be sold via the bot so that the product corresponds with the user's preferences as determined from analyzing the user data. At step1203, a server (e.g., server210or server220) analyzes the user data (e.g., the user's "likes," user preferences, etc. from user activity and the user data in the database202) to narrow down a selection of bot options that are likely to coincide with the current context environment and/or the user's preferences by matching keywords from user data, matching other user data, matching business data, and/or matching data from sensors to data, such as keywords, which describes or relates to the item to be sold via the selected bot. At step1204, the narrowed option(s) are presented to the user as shown inFIG.14. As shown inFIG.14, a shoe1401is presented to the user having a size10and a color of red. This may be determined by analyzing the user data to determine that the user wears a size10shoe and prefers the color red (for example, from analyzing data pertaining to previous shoe purchases). Moreover, the shoe1401may have been designed by another user while following a user path, as discussed elsewhere herein. At step1205, the user selects the shoe option by selecting the "Pay" button1405, and the bot server220receives the user's selection of the option. At step1206, the server210or220then retrieves user payment and shipping information from the user data, and populates payment and shipping fields1501for completing the purchase as shown inFIG.15. At step1207, the networked device environment200presents the user with the option to correct or confirm payment and shipping data as shown inFIG.15. If the user confirms the payment and shipping data by selecting the confirm button1505inFIG.15, the networked device environment200processes the purchase with the vendor/user and displays purchase confirmation1601to the user at step1208, and as shown inFIG.16. If the user changes the payment or shipping data, then the server220updates the payment/shipping data at step1209, and returns to step1207.
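The selection-and-purchase flow of steps1201-1209reduces, in outline, to filtering options against stored preferences and pre-filling payment and shipping data. The function below is a rough sketch under assumed data shapes; it is not the patented method, and confirmation (step1207) is simply assumed to succeed.

```python
# Outline of FIG.12: narrow options by user data (1202-1203), present and
# select (1204-1205), pre-populate payment/shipping (1206), confirm (1207-1208).
def process_bot_purchase(user_data: dict, options: list[dict]) -> dict:
    prefs = user_data.get("preferences", {})
    narrowed = [o for o in options
                if all(o.get(k) == v for k, v in prefs.items())] or options
    choice = narrowed[0]  # stand-in for the user's on-screen selection
    return {"item": choice,
            "payment": user_data["payment"],    # retrieved, not re-entered
            "shipping": user_data["shipping"]}  # retrieved, not re-entered

user = {"preferences": {"size": 10, "color": "red"},
        "payment": "card-on-file", "shipping": "home-address"}
shoes = [{"model": "KD35", "size": 9, "color": "blue"},
         {"model": "KD35", "size": 10, "color": "red"}]
print(process_bot_purchase(user, shoes))
# {'item': {'model': 'KD35', 'size': 10, 'color': 'red'}, ...}
```

The point of the sketch is the reduction in required input: the only user actions left are selecting the item and confirming data already on file.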
In some embodiments, the bots may serve as a shortcut for accomplishing a particular task by accessing data stored in the system. For example, if a user wishes to order a pizza, the user traditionally searches for a pizza vendor, downloads a specific app for a particular pizza vendor, creates a user account for the vendor, specifies the pizza to be made, enters payment information, enters delivery information, then completes the order. With a bot, however, the user simply accesses a bot for ordering a pizza. The system then accesses the database to retrieve a listing of pizza vendors. Alternatively, the system may, in some embodiments, perform an Internet search for relevant interfaces for ordering pizza (for example, the computer may execute an Internet search for "pizza" then store the search results in memory as available pizza vendors). The system presents the listing of vendors to the user on a display. The system then receives a user input indicating selection of the user's preferred vendor. The selected vendor may, in some embodiments, be stored in memory and associated with the user data as a preferred pizza vendor of that particular user. The system then accesses information from the vendor (either from an application programming interface (API) of the user/vendor or from the user/vendor's user data) to obtain options for ordering a pizza. These options are then presented to the user on the display so the user can select the pizza to be ordered. The user's selection is then received by the system, and the order is transmitted to the vendor. The system then accesses the user data to retrieve the user's location (as provided by, for example, the user's current GPS location, which is contained in the user data) to provide a delivery address, and to retrieve the payment data associated with the user (e.g., credit card information, billing address, etc.) to be used to complete the purchase of the pizza. This information is then provided by the system to the vendor to complete the pizza order. In some embodiments, the user's actions with respect to accessing this particular bot, and any data that is generated from accessing the bot, are stored in a database (e.g., user database202) and associated with the user data and associated with the user's recent activity. In some embodiments, the system may present the user's recent activity to the user so that the user can conveniently access the recently used bot in the future.

As demonstrated by the foregoing example, the system streamlines the ordering process for the user to reduce the input needed from the user for ordering a pizza, thereby reducing the computational labor overhead burdening the processors. Additionally, this process improves the system's ability to display information to and interact with the user by reducing the data gathering required to accomplish the task desired by the user. Specifically, regarding user input, the above example demonstrates a streamlined process whereby the user only enters data to select a vendor and a particular pizza. The disclosed system eliminates the user input related to the steps of searching for a pizza vendor, downloading a specific app for a particular pizza vendor, creating a user account for the vendor, entering payment information, and entering delivery information.

Various embodiments of the disclosed networked device environment200may include data to facilitate the user interface for creating and sharing bots. For example, the system may use metadata tags, which are names of the bots. The metadata tags can be displayed to a user based on their user data, current activity, trends, or based on other user data.
The user data for a user may include business data such as APIs for vendors/members. The APIs may serve as bots that are stored in the bot database 204. In some embodiments, the networked device environment 200 includes an open source portion, which serves as a platform for allowing the networked device environment 200 to communicate with members/vendors. Because some vendor APIs are generally public or otherwise available, the networked device environment 200 can retrieve these APIs without input from the vendor. Users/vendors can access the networked device environment 200 through the open source platform to develop/build bots. In some embodiments, the system includes an artificial intelligence server 227. The artificial intelligence server 227 determines, in some embodiments, which bots are relevant to users based on user data, activities, trends, recent activity data, recently used bots, etc. The AI server 227 analyzes data in the entire networked device environment 200 to determine trending bots and other relevant bots. In some embodiments, the AI server 227 is capable of serving as an interface between a user and the networked device environment 200. The AIPA server 227 may, in some embodiments, provide automated interaction with a user. For example, the AIPA server 227 is capable of requesting information from a user, assisting a user with searching for a particular bot, assisting a user with solving a problem or accomplishing a goal, making recommendations to a user, and using artificial intelligence to interact with a user in various other ways. For example, in some embodiments, a user can contact the AIPA server 227 by, for example, using the messenger application or sending an email to the AIPA. The AIPA is able to read and understand the content of the message/email, and can respond to the user in an effort to help the user solve a problem or accomplish a goal. In some embodiments, the user data includes payment data, such as credit card data, banking information, shipping information, and any other information that is used to purchase goods or services. In some embodiments, the networked device environment 200 may include a database of various forms for use by vendors/users to accomplish certain tasks. For example, if the user wants to schedule a dentist appointment, the dentist's office may require the user to complete forms. These forms may be stored in the forms database and then presented to users who want to use the dentist's office. In some embodiments, the networked device environment 200 may include a subscription server that is used to manage the user accounts, charge membership fees for user access to the system, manage user subscriptions to the system, or perform other such tasks. In some embodiments, the networked device environment 200 includes a messenger server 225 for operating a messenger application. The messenger application provides a platform for users to communicate with each other via messages sent through the networked device environment 200 to other user devices 230. The messenger application also allows users to share bots through messages as shown in FIG. 11. Bots are automated tasks or executable processes performed by the computer processors associated with the networked device environment 200. The bots streamline processes that, if performed by a user, would require additional, unnecessary computing labor and bandwidth consumption by the processors, as well as cause GUI items to be displayed in an overlapping and potentially unintelligible arrangement.
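Returning to the relevance determination performed by the AI server 227, the description does not specify a scoring method. One plausible reading, sketched here with hypothetical names only for illustration, is a keyword-overlap score boosted by trend and recency signals:

def rankBots(bots, user, recentlyUsed, trending) {
    bots.collect { bot ->
        // Base score: overlap between the bot's metadata tags and the user's data
        def overlap = bot.tags.count { tag -> user.keywords.contains(tag) }
        // Boosts: trending across the environment, or recently used by this user
        def boost = (trending.contains(bot) ? 1 : 0) + (recentlyUsed.contains(bot) ? 1 : 0)
        [bot: bot, score: overlap + boost]
    }.sort { -it.score }
}

Under this reading, the top-scoring entries would feed the recommended, trending, and recently used bot listings of the browser GUI described below with reference to FIG. 18.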
Sending a bot via the messenger application is simpler than sending links, which are generally very long and inconvenient. Additionally, a bot is different from a hashtag because the bot executes an action (for example, purchasing an item), whereas a hashtag does not. With additional reference to FIG. 20, in some embodiments, a bot may be generated using the bot server 220 according to a bot handling method 2000. The bot server 220 includes a bot template database 206 that stores bot templates for use with specific categories or industries for bots. Examples of bot templates include, but are not limited to, the following: education industry templates, media industry templates, food industry templates, government templates, health industry templates, gaming industry templates, sports industry templates, retail/wholesale templates, activism templates, and family templates. In other words, bots may be created for virtually any industry or activity. In some embodiments, a processor associated with the bot server 220 or user server 210 may execute code to communicate with a user when creating a bot. For example, in some embodiments, the processor receives input from a user indicating the user's desire to create a bot (step 2002). In response, the processor prompts the user to build a bot using one of the templates, receives data from the user, such as in association with metadata tags (step 2003), receives external data from other sources both within the system and outside the system and both structured and unstructured (collectively, "business data") (step 2004), and creates the bot by applying a bot template (step 2001). The bot handling method 2000 may go on to provide open source outputs, such as providing the data of step 2004 for open source release (step 2005), and may further apply artificial intelligence (step 2006) to structure the user data and the business data to improve processing time and reduce data traffic. The bot handling method 2000 may continue, following the application of machine learning, by displaying aspects of the business data according to the directives of an alignment translator 164 (step 2007) so that further manual user input may be accepted (step 2008). This manual input may be from a variety of system users. For instance, a creator may be involved in steps 2002-2006, and a customer may be involved in steps 2008 and 2009 in order to purchase a product or service provided in association with the bot. For instance, step 2009 may include interaction of the bot formed according to the bot template with a payment system to accept and/or process a payment (step 2009). Step 2009 may also draw on a form database, as mentioned (step 2010), and a legal services/document database (step 2010). Moreover, FIG. 20 also shows that a decision is made (step 2014) so that data is displayed in a user readable, non-interfering way (step 2013). Notably, FIG. 20 also shows various asynchronous aspects of the bot handling method 2000 which also participate in the decision made so that the data is displayed in a user readable, non-interfering way. Thus, for the discussion of asynchronous aspects, specific apparatus elements will be noted, rather than steps. These apparatus elements feed data to decision step 2014 and further influence the making of the decision of how to arrange the data in a user readable, non-interfering way. For instance, step 2014 may further include drawing data from a subscription system 2012 which allows user devices to access the method 2000 in accordance with the terms of a subscription service.
A corporate headquarters 2016 and a tech support organization 2015 may further provide rules for the decision step 2014 and data to influence the decision step 2014. Furthermore, a smart hive 2019 may be a memory or other data store, or one of the repository/database/memory aspects discussed with reference to other figures, or may be a logical combination thereof, wherein data relevant to a user and/or user device and actions of the same may be stored. Moreover, the decision outcome of the decision step 2014 may be stored so that future similar decisions regarding the display of input data (step 2013) may proceed with reduced processing overhead, based on the loading of the historical display of input data at a previous point in time. A "CODED < >" code repository 2018, wherein bots and/or data regarding bots may be stored, may further provide various bots for display, and decision step 2014 may determine a user readable, non-interfering arrangement thereof, or may determine to exclude some or all bots from display. Finally, a membership browser 2017 may interact with the subscription system 2012 to permit users to add, remove, update, or otherwise manage their relationship with the subscription system 2012 and, specifically, the rights of their user device(s) to participate in the method 2000 as a display device for the display of input data at step 2013. In some embodiments, the networked device environment 200 may include a browser application that is capable of providing an Internet browser that is integrated with the networked device environment 200. In some embodiments, the user may log in to the browser to gain unrestricted access to websites for which the user database 202 contains the user's login information. For example, the user database 202 may store a user's bank account information, including login information. When the user visits their bank webpage, the browser pulls the bank account login information and completes the login process for the user automatically. In this way, websites that a user visits using the browser are automatically accessible without the user having to log in to those websites. Example browser GUIs are illustrated in FIGS. 17 and 18. As shown in FIG. 17, the browser includes a user login screen where the user enters their username in the username data field 1701 and their password in the password data field 1702. The browser GUI in FIG. 18 includes a search field 1801, a listing of recommended bots 1802, trending bots 1803, and recently used bots 1804. The single login functionality disclosed herein reduces computational labor overhead by automating tasks and consolidating the data used to complete such tasks. Further example browser GUIs are illustrated in FIGS. 21A-C. A browser may include a user dashboard 2100 wherein a user may interact with bots and manage data in the system. The user may, for example, be running bots to facilitate education and entrepreneurship in the shoe industry; thus the user dashboard 2100 may be adaptively arranged and configured to depict elements relevant to the shoe industry in a non-interfering arrangement. For instance, a log of stores visited by a user 2103 may in various instances be provided in a location proximate to other indicia of collected data, such as location data 2101 and sensor data 2102, that is located so as to not overlap services 2104-2109 based on the user data. The user dashboard 2100 may display services that may be provided by bots to the user based on the user data.
For example, the user interested in shoe industry entrepreneurship may be provided a link to customize shoes for sale to other users on the system 2104, or to create customized fashion items 2106, or the user may be presented bots that allow the user to provide services to other users' bots, for instance, to work as a fashion item delivery service 2109 or a food delivery service 2108, or to perform various life tasks such as shopping 2105 and visiting restaurants 2107. Thus, a plethora of bots are presented based on the user data, allowing the user to operate both as a creator 187 (FIG. 1C) and as a supplier 184 (FIG. 1C). Within the user dashboard 2100, collected user data may include location data 2101 and sensor data 2102 such as noise levels. The user may control the sensors, such as by setting which sensor(s) are active at a sensor control interface 2114. The user dashboard 2100 may allow the user to manage the data within a user profile 2110, for instance, a profile picture 2112, user interests 2111, and user friends 2113. The user dashboard 2100 may display statistics regarding the performance of a user's bots. For instance, a user running a bot to create customized fashion items 2106 may be shown statistics relevant to that bot. In this manner, the elements shown on the screen are adaptively selected, arranged, and displayed in a user readable, non-interfering manner depending on which bots are active. For instance, a new order graph 2115 may be displayed showing historical trends related to order frequency, a total savings graph 2116 may be displayed showing the user's accumulation of financial savings over time, and accomplished purchases charts 2117 may show quantities and trends of purchases over time. The systems and methods discussed herein may be implemented to, with the aid of bots, synthesize structured deliverable data from unstructured data. For instance, with reference to FIGS. 19A-B, a method of structuring data 1900 by a data structuring sub-unit 1950 of a processor 114 is disclosed. Notably, the data structuring sub-unit 1950 may comprise a logical data bus 1952 to allow communication among modules under the control of a bus controller 1951. Moreover, as a logical unit of the processor 114, the data structuring sub-unit 1950 may further communicate with modules on bus 151 under the control of bus controller 150 (see FIG. 1B). The method 1900 may comprise gathering unstructured data and creating a profile (step 1910). Aspects of the processor 114 discussed in FIG. 1B may be logically arranged to interoperate with those according to FIG. 19B. For instance, an I/O module 152 of a processor 113 may receive unstructured data from any source discussed herein, such as third-party repositories, sensors, etc., and also receive structured user data from a platform user database 158 via a network 161, and these may be communicated to aspects of FIG. 19B, such as an unstructured data collection module 1970. Such a step may include substeps, for instance, collecting a user (step 1911), then collecting an interest from that user corresponding to unstructured data, a user selection, and/or a random assignment (step 1912), for instance, the shoe industry, or party planning, or the like. The method may continue with ingesting mask data (step 1920).
For instance, mask data may comprise a filter configured to select or deselect discrete elements of unstructured data in order to assign a subset of discrete elements in a sequence to a user to create an assigned "user path," which comprises a workflow involving tasks to educate the user about the interest and facilitate the user's engagement in commercial transactions as a creator 187 (FIG. 1C) and/or as a supplier 184 (FIG. 1C) within that interest. Again, aspects of the processor 114 discussed in FIG. 1B may be logically arranged to interoperate with those according to FIG. 19B. For instance, a user path mask engine 165 may establish a user path by masking path elements drawn from a user path repository 159 according to user data and business data within a user profile, third-party data sources, and data from a sensor overseer 153 and an I/O module 152, and communicate them to a mask data ingesting engine 1980 comprising a mask repository submodule 1981 (which may share data with the user path repository 159, or may be another logical descriptor of the same data store). A path element may be an option to create a bot, an option to use a bot, an instruction to engage in a transaction, an instruction to engage in a meeting, such as a meetup or hackathon, an instruction to go to a location, and/or any other aspect that potentially may form a portion of a user path. Such a step may include substeps, for instance, accessing a mask repository to retrieve path elements drawn from the user path repository 159 (step 1921), evolving a user profile to update the user path by masking path elements drawn from the user path repository 159 (step 1922), and further evolving the mask applied for the user so that the user path can also evolve as the user profile evolves due to interactions of the user with the system and other users as well as unstructured and structured data, such as provided by the sensor overseer 153 monitoring a sensor (step 1923). The method may continue with executing a user path (step 1930) by a path execution engine 1990. For instance, unstructured data (and structured data) may be implemented, or alternatively a random assignment may be made, to create for the user a user path (also called a "trajectory") (step 1931). For instance, a user may be shown bots related to a specific industry and work or educational opportunities in that industry. The user may be provided notifications via a notification generator 155 revealing elements of the user path via a graphical user interface. For instance, the notifications may include instructing the user to engage in events such as "meetup" events to learn and "hackathon" events to test the user's skills. The user may follow the user path, or may deviate from it. The system may alter the user path based on the deviations or based on the user following the user path. More specifically, data linkages may be created (step 1932) associating unstructured data with the user path to characterize the user's behavior in following the path or deviating from it. Notably, this step may further lead to further evolving of the user profile in response to changes in the data linkages (adding, deleting, etc.) (step 1922). Moreover, the user path as well as the data linkages may be evolved as the user interacts with the system along the user path or deviates from the user path (step 1933). These aspects may be shared from bus 1952 to bus 151 for use by modules of FIG. 1B.
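A mask, as described above, can be modeled as a predicate plus an ordering over path elements. The following sketch is purely illustrative; admits, orderOf, and maskFor are hypothetical methods standing in for whatever masking logic the user path mask engine 165 applies:

def buildUserPath(userProfile, pathRepository, mask) {
    // Select or deselect discrete path elements according to the mask...
    def selected = pathRepository.pathElements.findAll { element ->
        mask.admits(element, userProfile)
    }
    // ...and assign the surviving subset a sequence, forming the user path
    return selected.sort { element -> mask.orderOf(element, userProfile) }
}

// Steps 1922-1923: as the profile evolves, the mask and the path evolve with it
def evolveUserPath(userProfile, pathRepository, event) {
    userProfile.update(event)
    return buildUserPath(userProfile, pathRepository, maskFor(userProfile))
}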
By creating data linkages associating specific data with specific aspects of a user path, the network is improved because data retransmission is reduced, and the GUI is further improved because data not having proper linkages may be removed from or prevented from being displayed. For example, data indicating that a user attended a meetup, such as location data from a sensor that is provided by a sensor overseer 153 (FIG. 1B), may be linked to an aspect of the user path instructing a user to attend a meetup and gain domain specific knowledge and skills before being allowed to create a bot in a certain domain, for instance, shoe design. The association of the linkage with the aspect of the user path captures the authorization of the user to proceed to create a bot in that domain, without needing the retransmission of sensor data, thereby improving network operation by reducing traffic. Furthermore, the GUI may be actively recreated at the direction of an I/O module 152 (FIG. 1B) operating under the influence of an evolution controller 168 (FIG. 1B) so that an alignment translator 164 (FIG. 1B) populates the bot creation controls onto the user's screen following the successful data linkage, and effectuates the population in such a manner as to avoid interference with other displayed information. The method may further continue with converting the myriad unstructured data as well as the evolutionary profile, mask, linkages, and user path into a synthesized structured deliverable (step 1940) by a deliverable synthesis module 1960. In various embodiments, the deliverable comprises a display of data that is tailored to the user and displayed on a GUI in a non-interfering arrangement. Thus, as with the other modules discussed, one may appreciate that the deliverable synthesis module 1960 may be considered a logical grouping of aspects presented in FIG. 1B, and/or may comprise an additional module configured to interoperate with the logical grouping of aspects presented in FIG. 1B. For instance, the user profile may be transmitted from a platform user database 158 to bus 1952 for availability to other aspects of the system. Notably, step 1940 of creating a synthesized structured deliverable may include substeps. For instance, the user profile may be transmitted to the bus 1952 (step 1941) and received by a mask data ingesting engine 1980; a mask may be launched by the path execution engine 1990 to reconcile the content of the user profile (step 1942) with a user path to evaluate how closely the user is following the user path or deviating from it; and, in response, recommendations may be published to the GUI display by the deliverable synthesis module 1960, recommending one or more bots to the user for the user to interact with (step 1943). In this manner, the GUI display is further enhanced by limiting the publication of bots to only that subset potentially relevant to the user and not displaying the entire library of bots. The deliverable synthesis module may comprise aspects to more effectively effectuate step 1943.
For instance, the deliverable synthesis module may comprise a screen element selector 1963 configured to select which aspects of the structured data (a first subset of the structured data) to display and which aspects of the structured data (a second subset of the structured data) to withhold from display due to network congestion, graphical user interface size limitations, an arrangement of extant elements on the graphical user interface, and/or the like, thus improving network performance and adaptably creating the GUI arrangement as needed. The deliverable synthesis module may also comprise a visual interference detector 1962 configured to detect actual or potential interference among one or more elements arranged on the GUI (for instance, aspects of FIG. 20) and direct the screen element selector 1963 to select or deselect elements to ameliorate interference, or to move elements around the screen to ameliorate interference. Finally, the deliverable synthesis module 1960 includes an evolution controller 1961 (for example, an evolution controller 168 depicted in FIG. 1B) configured to ingest user interaction with GUI elements and flow amendments back to the user profile and/or user path. The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order presented. The steps in the foregoing embodiments may be performed in any order. Words such as "then," "next," etc., are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of the methods. Although process flow diagrams may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination may correspond to a return of the function to the calling function or the main function. The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. Embodiments implemented in computer software may be implemented in software, firmware, middleware, microcode, hardware description languages, or the like, or any combination thereof. A code segment or machine-executable instructions may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents.
Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc. The actual software code or specialized control hardware used to implement these systems and methods is not limiting of the invention. Thus, the operation and behavior of the systems and methods were described without reference to the specific software code, it being understood that software and control hardware can be designed to implement the systems and methods based on the description herein. When implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable or processor-readable storage medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module which may reside on a computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable media include both computer storage media and tangible storage media that facilitate the transfer of a computer program from one place to another. A non-transitory, processor-readable storage medium may be any available medium that may be accessed by a computer. By way of example, and not limitation, such non-transitory, processor-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible storage medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer or processor. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory, processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product. The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein. While various aspects and embodiments have been disclosed, other aspects and embodiments are contemplated. The various aspects and embodiments disclosed are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
DETAILED DESCRIPTION OF SOME EMBODIMENTS Some embodiments will now be described for the handling of big data. Some embodiments will be described in the context of handling game data. However, it should be appreciated that embodiments may be used to handle any type of big data and the invention is not limited to the handling of game data. For example, some embodiments may be applied to scenarios where a user's interaction with one or more websites or social media platforms is tracked. Other embodiments may be applied in environments where a large number of transactions or events occur. For example, some embodiments may be applied to share transactions. Some embodiments may be applied to vehicular traffic scenarios or weather monitoring applications. FIG. 1 schematically shows a system 300 of some embodiments. The system 300 comprises a server 320 which may store databases of game players' details, profiles, high scores and so on. In practice, one or more databases may be provided. Where more than one server is provided, the database(s) may be provided in one server or across two or more servers 320. Where more than one server is provided, different servers may be provided in different locations from other servers. The server 320 may also have a games data function. This may comprise a memory to store the computer game program and a processor to run the games program. In some embodiments, the database function may be provided by different entities to those providing the game or other supported function. The server may communicate via, for instance, the internet 310 with one or more user devices 305 and may further provide connections to a social network 330 such as Facebook™. It should be appreciated that any other network may alternatively or additionally be used instead of or in addition to the internet. It should be appreciated that embodiments may be deployed in different game system architectures. For example, the computer game may be implemented as a computer game that is stored in the memory of the user device 200 and is run on the processor of the user device 200. However, the server 320 may handle some elements of the game in some embodiments. By way of example only, a game applet may be provided to the user device 200 and the locally running applet will generate, for example, the graphics, sounds, and user interaction for the game play on the user device 200. Some data may be fed back to the server 320 to allow interaction with other user devices 305. The data which is fed back may also allow scoring and/or cross platform synchronization. In some embodiments, the game may be implemented as a computer program that is stored in a memory of the system, for example the server 320, and which runs on a processor of the game server. Data streams or updates are supplied to the user device 200 to allow the user device 200 to render and display graphics and sounds in a browser of the user device 200. Such an approach is sometimes referred to as a web services approach. It should be appreciated, however, that such an approach does not necessarily require the use of the Internet. It should be appreciated that in other embodiments, the server may have a different, non-game function, depending on the application supported by the system. Reference is made to FIG. 2 which schematically shows a data pipeline. In this Figure, an arrangement is shown where game events may be stored, for example in a data warehouse. The data which is stored in the data warehouse can be analysed.
The pipeline comprises game servers 510, TSV (tab separated value) log files 520, a log server 530 and a data warehouse 540. At the data warehouse, data is processed from raw data to a dimensional model which may be used to provide reports (or provided directly to data scientists). An extract, transform, load (ETL) process may be used to transform the raw data to the dimensional model. Reports may be provided from the raw data and/or the dimensional model. Reference is now made to FIG. 3 which shows a backend architecture. This architecture again has an arrangement where game events may be stored, for example in a data warehouse. The data which is stored in the data warehouse can be analysed using analysis tools 350. User devices 200 such as described in relation to FIG. 1 are provided. The user devices 200 communicate with game servers 340 via the internet 310 or other suitable network. The game servers 340 may be any suitable servers. The game servers provide game services. The game servers may listen to requests or tracking calls from the clients on the user devices. One or more game data servers 342 are arranged to store the player's current progress and other associated states. The servers may be sharded database servers or any other suitable server or servers. In some embodiments, these one or more servers may be relational database management systems. In some embodiments, the data in the game data servers may comprise data that is only used by the actual game. The game data format may in some embodiments be dependent on the associated game. In other embodiments, the data format may be the same across two or more games. The incoming events are stored in a database cluster 344, and may also be written to files in a data warehouse and business infrastructure 346. The data warehouse and business infrastructure may be a distributed file system. Each event, or at least some events, are mapped to a table. The table may be provided in a data cube 348. The use of tables may make it simpler to compute aggregates over the data and/or do more complex batch analysis. Some embodiments relate to a rule based event aggregator RBEA. In some embodiments, RBEA provides a scalable real-time analytics platform. This platform may be used for stream analytics. The platform may be implemented by computer executable code running on one or more processors. The one or more processors may be provided in one or more servers and/or one or more computing devices. This may be run on, for example, the data which is generated by the game servers. Of course, in other embodiments, the data which is generated or provided will depend on the functionality supported. This analysis is "real time" as opposed to the examples discussed in relation to FIG. 2 or FIG. 3, where the analysis is carried out on the data which is stored in the data warehouse. Stream analytics may use events which may alternatively be referred to as data records or data. These events may be analysed in real time or after they have been received. The events may be provided in one or more streams. Data from a single stream or data from two or more streams may be used. In some embodiments, the analytics may compare two or more streams or compare one or more streams with historical values and/or models. Depending on the analytics, anomalies may be detected or an alert may be triggered if a specific condition occurs. The condition may be an error condition or any other suitable condition. It should be appreciated that analytics may be used to detect anomalies in some embodiments.
However, this is by way of example and other types of functions may alternatively or additionally be supported which, for example, allow data to be collected and aggregated, trends to be identified, and/or any other analytics to be supported. Some embodiments may provide aggregated data as an output. An output may be provided for a user. This output may be displayed, for example on a dashboard. The output may be provided as an input to a further computational process supported by one or more processors. The processors may for example be in one or more computers or servers. Some embodiments may use a framework for distributed big data analytics. The framework may use a distributed streaming dataflow engine. The framework may execute dataflow programs in a data-parallel and pipelined manner. The framework may have a pipelined runtime system which may allow execution of bulk/batch and/or stream processing programs. The execution of iterative algorithms may be supported natively. Programs may be compiled into dataflow programs that can be executed in a database cluster environment. A central or distributed data storage system may be used. Data may be provided from queues or in any other suitable way. To give some context to the issues of big data, the applicant has over 390 million monthly unique users and over 30 billion events received every day from the different games and systems. It should be appreciated that these numbers are by way of example, and embodiments may be used with more or fewer than these example numbers of events. It should be appreciated that embodiments may have application to much smaller data sets as well as in the context of big data. With big data, any stream analytics use-case becomes a real technical challenge. It is desirable to have computer implemented tools for data analysts that can handle these massive data streams while keeping flexibility for their applications. Generally, complex data stream analytics have required specialist knowledge. The approach provided by some embodiments simplifies complex data stream analytics so the requirement for specialist knowledge is reduced. It should be appreciated that some embodiments may alternatively be used with relatively small streams of data. In some embodiments, for analysis and/or other data needs outside of the core game, event data is used. To explain some example embodiments, the example event data is game data. However, it should be appreciated that the data may be any other suitable data, depending on the functionality supported. In some embodiments, the event data may be a simple text log with a fixed schema (tab delimited text) that defines what happened in the game. It should be appreciated that the data may be any other suitable format, depending on the functionality supported. An example event describing a game start is as follows:

10005 | SagaGameStart2core | userId (long) | episode (int) | level (int) | gameRoundId (long) | Indicates that a saga subgame has started, now including a game round id.

The first field provides an event number, the second field describes the event that has occurred, the third field defines the user identity, the fourth field describes the episode in which the event occurred, the fifth field describes the level in which the event occurred, and the sixth field describes the game round in which the game event occurred. Some games may have one or more episodes or chapters which each comprise one or more levels. Some games may have only levels.
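Given that fixed schema, decoding a logged event is a straightforward split on tabs. The sketch below assumes the six fields arrive in the order just described; as the raw example that follows shows, production log lines may carry additional prefixes such as a timestamp, so the offsets here are illustrative only:

def parseGameStart(String line) {
    def f = line.split("\t")
    // Field order as described above; offsets are illustrative
    [eventNumber: f[0] as long,
     eventName  : f[1],
     userId     : f[2] as long,
     episode    : f[3] as int,
     level      : f[4] as int,
     gameRoundId: f[5] as long]
}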
An example of the received raw event data is as follows: 20131017T113040.393+0200 17 10005 1006627249 7 12 1382002240393 It should be appreciated that in other embodiments any other suitable format may be used for the event data. A subset of the data may be loaded to a database cluster. This may support faster ad hoc querying and/or better support complex database queries. In some embodiments, real-time aggregates may be computed over the events by aggregating data from all the streams into a database/database cluster, which provides a data source for release monitoring and/or real-time dashboards. Data warehouse engineers and data scientists usually work with relational data and the tools associated with it. Event stream data has a rather different nature when it comes to complex analysis. A number of challenges may be addressed using basic aggregates and/or some simplifications. Typically a query language may be used. However, at least some events may be related to other events by, for example, time and/or the context in which they occurred. For questions such as what the user did before a game start or how they navigated through a game (funnels, sessions, etc.), a basic database query language is limited. Currently proposed options for dealing with these issues of relating different events are as follows:
1. Require a game developer to add the context wanted in a game, such as placement and a relational key. However, this may complicate the development work. This also requires the game developer to understand in advance what data might be required.
2. Select from the event tables in which there is interest, sort the events on player/time and run them through computer implemented code that associates the data, such as a custom reducer. This may be relatively inefficient in the daily processing. The events are stored with one table per event and immediately followed up with a plurality of different queries that put them back in the order they happened with different constellations of events. That data may only be seen when the daily batch has run.
3. Make a simplified model that can run, for example, in a basic database language. This is not always possible.
Accordingly, some embodiments aim to provide an RBEA such that it is possible to perform the analysis in real time. Accordingly, the RBEA is able to support connecting events or data in time and/or storing contextual information for the events or data in a scalable way, while providing results directly from the live streams. The RBEA may be widely accessible with easy to use web interfaces. In some embodiments, RBEA is a platform designed to make large-scale complex streaming analytics accessible for users. RBEA may be such that object-oriented programming language scripts can be simply deployed. The object-oriented programming language may be any suitable object-oriented programming language. The interface which is displayed may be a web interface or any other suitable interface. The scripts may be deployed using a few "clicks" or any other suitable user interaction with the user interface. In some embodiments, a script may be deployed while one or more other scripts are running. The RBEA may be arranged to provide instantaneous results without requiring the user to have details of the deployment. This architecture may relieve data analysts or other users of the burden of managing large streaming clusters and deployments.
RBEA scripts may run on a hardware cluster and may deliver substantially real-time results from the live event streams. In some embodiments, the scripts may alternatively or additionally be run using stored data. Using RBEA, easy access may be provided to one or more stream analytics tools for defining and updating user states, writing outputs to one or a plurality of different output formats, and/or creating global aggregators across all the users or a subset of users. The RBEA API (application program interface) is configured such that stream analytics tasks may be easy to write without requiring any knowledge of the underlying streaming engine while still achieving good performance at scale. An example of a simple RBEA script will now be provided. A script is a user defined program to be executed by the RBEA. The following script, which has been annotated for ease of understanding, counts all the finished games in 1-minute windows, while also writing the game end events to a text file:

// Counter for the number of people finishing a game in a given minute.
// Defining the method to receive live events
def processEvent(event, context) {
    // Collect output data from "context", assign to variable in memory "output"
    def output = context.getOutput()
    // Collect aggregator variables from "context", assign to variable in memory "agg"
    def agg = context.getAggregators()
    // Create a counter with a window size of 1 minute (60,000 ms)
    def gameEndCounter = agg.getCounter("GameEnds", 60000)
    // Determine if the event passed to this function is a game end event
    if (isGameEnd(event)) {
        // If this is a game end, increment the counter
        gameEndCounter.increment()
        // Write the event/result to storage
        output.writeToFile("GameEndEvents", event)
    }
}

A process event (processEvent) method is defined that will receive the live events one-by-one. The output object is obtained from the context. A counter is created called GameEnds with a window size of 1 minute (i.e., 60,000 milliseconds). For every incoming event it is checked whether this is a game end, and if so, the counter is incremented and the event is written to a text file named GameEndEvents. The script may be saved as FinishedGames. Reference is made to FIG. 4 which shows a web interface. A list of saved scripts is shown along with options to deploy the script, edit the script or delete the script. If the deploy option is selected, the interface will show which script(s) are running. The output of a script can be displayed using a display option. In this example the RBEA created a table for the aggregator output that can be simply accessed by selecting the show button to provide instant data exploration. In this regard, reference is made to FIG. 5 which schematically shows two formats in which the data may be displayed. The game end events written to the text file can also be accessed as expected and downloaded on demand from the servers. In particular, the events recorded for five one-minute periods are shown in table form and also graphically represented in the example shown in FIG. 5. It should be appreciated that once the data has been collected, it may be presented or output in any suitable format. The data can of course be further manipulated, in some embodiments. In some real-world applications analysts would like to work with state that they compute for the users, such as the current session or current game. Computing state for the hundreds of millions of users is a challenge in analytics applications.
Previous solutions were such that real-time applications could only access stale user state (for example pre-computed by batch jobs) which often did not meet the application requirements. In RBEA, developers are able to create and update user states in real time. This uses hardware and/or computer software which support state handling capabilities. The RBEA provides a simple abstraction, referred to as a field, that allows users to define arbitrary user-state in a way that is transparent to the system. New fields can be registered by passing them to a registerField(field) method of the registry in the initialize method of the script. Fields are defined by specifying one or more of the following attributes:
1. Field name: This is a string reference for accessing the value from the state data (StateData).
2. Update function: Defines how the Field will be updated for each incoming event. The update function may come in two flavors: (State, Event)→State and (Context, Event)→State.
3. Initializer: By default states are initialized to null, but it is possible to define an initializer function (UserID→State) or an initial state value.
The availability of fields lends itself to a clean pattern for stateful streaming programs:
1. Define any state used by the application as fields in the initialize method.
2. For each event or data received, access the state for the current user, current user device or other identifier from the state data.
3. Enrich the current input and do the processing.
Some embodiments allow for the computing of total transactions per level. In other words, some embodiments allow for the determining of a number of events associated with a particular state. Consider the example where it is desired to compute total revenue per level in a game every half hour. From the process event method's perspective, every time there is a transaction, it would be desirable to add the amount to an aggregator for the current level. The problem is that transaction events do not contain information about the current level. Whenever a player starts a new game, there is a game start event which contains the level information, and subsequent transactions should belong to that level. To solve this use case in the framework of some embodiments, it is desirable to keep track of the current level for each player as a state.
This is the type of stateful application that Fields can be used for:

// Compute total revenue per level in a given game every 30 minutes
// Define method
def processEvent(event, ctx) {
    // Collect aggregator variables from "ctx" (context), assign to variable in memory "agg"
    def agg = ctx.getAggregators()
    // Collect state data from "ctx", assign to variable in memory "state"
    def state = ctx.getStateData()
    // Define sum aggregator with 30 minute window size
    def amountPerLevel = agg.getSumAggregator("Amount", 30*60*1000)
    // The aggregated values (amountPerLevel) are written to a text file instead of a relational database management system
    amountPerLevel.writeTo(OutputType.FILE)
    // Determine if the event passed to this function is a transaction
    if (isTransaction(event)) {
        // Retrieve the current level from state data
        Integer currentLevel = state.get("CURRENT_LEVEL")
        // Increment counter for current level, each level having its own counter
        amountPerLevel.setDimensions(currentLevel).add(getAmount(event))
    }
}

// New method to register the current level the user is playing
def initialize(registry) {
    // Define current level state, initialized to a null value (-1)
    def currentLevel = Field.create("CURRENT_LEVEL", {
        // Update the level for each new game start
        Integer prevLevel, Event e -> isGameStart(e) ? getLevel(e) : prevLevel
    }).initializedTo(-1)
    // The state is registered (current level) for this job so it is computed automatically
    registry.registerField(currentLevel)
}

The current level field automatically keeps track of which level each user is currently playing. This information can be easily accessed for the current player (based on the event) from the state data, as can be seen in the process event method. This state data can be used in one or more different scripts. It should be appreciated that in this example, the state is the level. The state can be any other suitable parameter. In some embodiments, the parameter may be provided in one set of event data but is required in conjunction with different event data which does not include that parameter. Some embodiments may require two or more state conditions to be part of the script or method. In some embodiments, information which is used as state information may simply be provided by received events. In some embodiments, updating the state information may require some processing. For example, the currently stored state information may be modified by the received information. For example, the received information in the stream may indicate an increment or decrement amount. Of course, any other processing may be performed. In some embodiments, the state information may need to be determined from received information. That determination may require processing of the received data, optionally with one or more other and/or previous data. In some embodiments, the stored state may be updated using information about a new event and the previously stored information about an event to create a new state value that is stored. For example, a level may be changed in response to receiving a level complete event. Thus the current level is the current state, the new event would be level completed, and the new current level would be determined therefrom. Another example could be to track whether a user has crushed 100 red candies by tracking successful game end events. For example, an event relating to a successful game end comprising information that 20 red candies were crushed is received.
On receiving a subsequent event indicating 10 red candies crushed, a total of 30 red candies is going to be stored, i.e., the currently stored 20 candies plus the new 10 candies. Game events are given by way of example only, and the events in question will depend on the context in which embodiments are deployed. Reference is made to FIG. 6 which shows the image displayed when the application is deployed. The application is executed by the RBEA backend job. The backend is an instantiation of the RBEA system. A stream processing job runs on a suitable framework that serves as the backend for the RBEA. The text file contains the aggregated amounts per level which can be accessed through the GUI (graphical user interface). By selecting the show option the aggregated amounts per level are shown. It should be appreciated that in some embodiments, any other suitable user interface may be provided alternatively or in addition. In some embodiments the aggregated amount information may be provided in the alternative or in addition to another computer implemented process. The RBEA interfaces may be configured to abstract away at least some or all of the stream processing internals from the users. For example, one or more of the following may be abstracted away from the users: reading event streams; parallelizing script execution; creating global windowed aggregators; creating and updating user states; writing output to one or more target formats; and fault-tolerance and consistency. Executing these abstractions in a way that will scale to many parallel RBEA jobs, on the billions of events and millions of users, may require a streaming dataflow engine with one or more of the following properties: highly scalable state abstractions; support for custom windowing logic; support for cyclic data flows; and exactly-once processing guarantees. It should be appreciated that differing scales of events and/or users may allow different criteria to be used in selecting an appropriate dataflow engine or platform. Only one deployed and continuously running job may serve as a backend for all running RBEA scripts. However, in other embodiments, the function may be provided by two or more jobs. The scripts may be running in operators (as described later) sharing the cluster resources in an efficient way. Scripts deployed on the web frontend are sent to the already running job, and the lifecycle management (adding/removing scripts, handling failures etc.) of the RBEA scripts is handled by the operators themselves. Different RBEA operations (incrementing aggregators, writing output) are translated into different outputs for the operators. Reference is now made to FIG. 9. By way of example only, a use case will be considered. In this example use case, the amount of revenue associated with a particular level is to be monitored. In order to be able to monitor this, information about a current level and information about the purchases made while playing that level is required. The events which are provided from the client devices, in this example, do not have the purchase information and the game level in the same event. Rather, the game level is provided along with a user identity in one type of event. Information about purchases is provided in different events with the user identity. In FIG. 9, 904 references an event stream from a first user and 906 references an event stream from a second user.
For example, event 900 may represent a game start event for the first user and will have the user identity of the first user, an indication that a game is being started, and a game level. Event 902 may represent a game purchase event for the first user and will have the user identity of the first user, an indication of the game item being purchased, and a purchase price. Event 908 may represent a game start event for the second user and will have the user identity of the second user, an indication that a game is being started, and a game level. Event 910 may represent a game purchase event for the second user and will have the user identity of the second user, an indication of the game item being purchased, and a purchase price. Some embodiments provide an approach which allows such queries to be run on data streams. In particular, embodiments cause the events which are required for the query to be created. A query is written using the RBEA API and may do one or more of reading and/or modifying state, aggregating data, creating outputs, and anything else supported by the RBEA. In the case where the query is the amount of revenue associated with a particular level, the events which are created will have the current game level and the purchase price. In FIG. 9, a partition 915 is provided for each respective user. A partition is defined as all events belonging to the same key (in this example the key is the user id). In the example shown in FIG. 9, a first partition 915a is associated with the first respective user and a second partition 915b is associated with the second respective user. Thus embodiments may partition events by user identity. It should be appreciated that in other embodiments, different criteria may be used to partition events. The scripts which are being run for the respective queries are deployed in the partitions for each user. In the example shown, scripts S1 to S4 which are deployed with respect to the first user's data are referenced 922a. Scripts S1 to S4 which are deployed with respect to the second user's data are referenced 922b. In reality, one physical machine may for example contain millions of user partitions. In some embodiments scripts are stored once on every physical machine, so partitions share the scripts. However, in other embodiments, more than one copy of a script may be provided on a given physical machine. When a script is deployed, it is determined what state is required for the query. For example, in the case of the example query, the state will be the current game level. This state is stored in a state data store 920 for that user. The state data store for the first user is referenced 920a and the state data store for the second user is referenced 920b. This state can be used by any query. For example, another query may be the number of attempts to complete a particular level. The level state can be used in that latter query as well as in the query for the amount of revenue per level. It should be appreciated that when a particular value for a state changes, the value in the state data store is updated. The scripts, when deployed, will output the required events 930 for the associated query. Those events will be directed to the appropriate consumer of the events. In some embodiments, all events are passed to a given consumer, which will discard the unwanted events, that is, events not relevant to the consumer of the events. In other embodiments, only the events required by a consumer of the events will be provided to that consumer.
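Put in terms of the script API described earlier, the FIG. 9 use case amounts to tracking the level from game start events and joining it onto purchase events. The sketch below reuses the documented Field pattern; isGameStart, getLevel, isTransaction and getAmount are the helper names used in the earlier examples, and the "LevelRevenue" output name is assumed for illustration:

def initialize(registry) {
    // Track the current level from game start events, one value per user partition
    registry.registerField(Field.create("CURRENT_LEVEL", {
        Integer prevLevel, Event e -> isGameStart(e) ? getLevel(e) : prevLevel
    }).initializedTo(-1))
}

def processEvent(event, ctx) {
    // Purchase events lack the level; join them with the tracked state and
    // emit the combined event that the revenue-per-level consumer needs
    if (isTransaction(event)) {
        Integer level = ctx.getStateData().get("CURRENT_LEVEL")
        ctx.getOutput().writeToFile("LevelRevenue", [level, getAmount(event)])
    }
}

The emitted events 930 then carry both the level and the purchase price, so a downstream aggregator 934 can sum them without ever seeing the raw streams.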
In FIG. 9, some example consumers of events comprise an aggregator 934, an output 932 and/or any other suitable functionality. The consumer of the events will in turn run a script to provide the required output. For example, in the case of an aggregator, the data from the received events may be summed.

Scripts 925 supporting further queries may be broadcast and received by each of the user partitions 915 and thereby deployed. Those scripts may use existing state information or cause the required state information to be stored from the received information.

In this way, embodiments allow analytics scripts to be run on live streams. Conventional approaches may require a window approach where data for a given time period is stored and then several scripts are run against the stored data to achieve a single query. This can be resource intensive, particularly where a number of different queries are being run. An advantage of some embodiments is that events are read only once and different scripts are sent to the user partitions. The events thus are read once but are used by more than one script. This contrasts with other real-time approaches which may read the data independently for each deployed application (script).

Another example of a query that may be run relates to a test mode. A test mode may be allocated a test mode identifier. That test mode identifier may be stored as state information and one or more different types of events may be output with that test mode identifier. Some embodiments may thus allow state to be shared between different queries or scripts being run. The input events may comprise a time stamp. Alternatively or additionally, the output events may comprise a time stamp.

Reference is made to FIG. 7, which schematically shows in more detail how RBEA scripts are deployed/executed on an engine. The user states are updated based on the defined update function and the new received event by the update state part 700. If there is a change in the user state, one or more call backs may be triggered (if a user script is registered as a listener to these changes in state) by the trigger call backs part 702. After updating the state and triggering possible call backs, the process event methods are executed by the execute processor 704. The update state part, the trigger call backs part and the execute processor correspond to functions of the partition 915 of FIG. 9.

A web front end part 710 is configured to allow scripts to be written and deployed. The compute aggregates part 706 is configured to provide an aggregation of results and corresponds to the aggregation function 934 of FIG. 9. In some embodiments, the update state part can provide an input to the compute aggregates part 706, depending on the defined update function. One or more of the update state part 700, the trigger call backs part 702, the compute aggregates part 706 and the execute processor part 704 are configured to provide outputs to a write output part. The write output part 708 is configured to provide an output to the output part of the web interface part 710 and/or one or more outputs, e.g. a message broker output, a relational database management system output and/or a file output. The write output part and the compute aggregates part may correspond to the output 932, aggregation 934 and other functionality 936 of FIG. 9.

In some embodiments, there may be four main stages of computation:
1. Read event streams and receive newly deployed scripts.
2. Update user states, trigger user defined call-backs and run the process event methods (processEvent methods) of the deployed scripts.
3. Compute windowed aggregates as produced by the scripts.
4. Write the outputs to the selected one or more formats.

Each of these stages will now be discussed in more detail.

Reading the events and scripts—the live event streams are read with a consumer that tags events with a category or feed name indicating where they are coming from. This allows users to freely decide what category or feed names they want to listen to when running their scripts. A keyed stream may be created from the event stream, keyed by the user identity. Scripts may be received in text format from the web frontend through a message broker as simple events, and are parsed into the appropriate EventProcessor interface. New scripts may be hot-deployed inside the already running job. In particular, the scripts can be received by the user partitions and deployed whilst the system is running other scripts. When a script is received, a check is made to see if it uses any of the existing stored state(s) or if it needs some other state. If the new script needs state that is not stored, the system is configured such that this new state will be determined from received events and stored in said data store.

The new script can be received in a script stream. This is generally different to the event stream. However, in some embodiments, the events may be in the same stream as the scripts.

Embodiments may be scalable in that a machine may be provided for a first set of users and a further machine for a second set of users and so on. In embodiments, the same scripts are deployed in each partition of the same machine. In some embodiments, the same scripts are deployed in different machines. The scripts may be broadcast to the different machines and compiled locally on the machines.

In some embodiments, one or more stateless scripts may run in parallel to one or more state based scripts. These scripts can run in parallel on the same machines and/or partitions. In other embodiments, the stateless scripts may be run separately to the state scripts.

In some embodiments, the same scripts may be run not only on real time data but also on stored data. The scripts may be run at the same time and the results of the real time processing and the processing of the stored data may be compared.

In some embodiments, run time metrics associated with the running of one or more scripts may be determined. These metrics may comprise one or more of: the time taken for a script to execute; which state is being accessed; whether any state is being accessed; and any other suitable metric. These run time metrics may be used to control how a script is deployed and/or the number of users which are supported by a machine which is deploying the script. The run time metrics may be for a particular script and/or a set of scripts.

Computing states and running the scripts—user states are computed in the same operator where the scripts are executed to exploit data locality with key-value state abstractions. For this, an operator which receives both the event streams and the user scripts as events is used. The user scripts may be broadcast. For new events, the processEvent method of the already deployed RBEA scripts is called. For new scripts, these may be hot-deployed inside the operator so that they will be executed for subsequent events. The operator may be a map operator.
The following class shows a simplified implementation of the execution logic:

    // Define a new class "RBEAProcessor".
    // This class extends the standard class "RichCoFlatMapFunction". A
    // FlatMap is an operator that receives one input and may produce zero
    // or more outputs; flattening converts a list of lists to a list, e.g.
    // list(list(1,2,3), list(2,6,8)) becomes list(1,2,3,2,6,8) once
    // flattened. A CoFlatMap means that events from two streams are
    // processed and a different method (flatMap1/flatMap2) is triggered
    // based on which stream the event comes from.
    class RBEAProcessor extends RichCoFlatMapFunction<Event, DeploymentInfo, BEA> {

        // Computed fields (information) for the current user
        ValueState<Map<String, Object>> userStates;

        // Omitted details...

        // flatMap1 takes event data and updates information relating to the
        // current user (a tuple is an ordered list)
        public void flatMap1(Event event, Collector<BEA> out) {
            // Update states for the current user
            Map<String, Tuple2<?, ?>> updatedFields = updateFields(event, out);
            // Send information back up the chain if the user state has
            // changed: if any fields have changed, the update call backs are
            // triggered on those
            triggerUpdateCallbacks(updatedFields, out);
            // Execute user scripts: call the processEvent methods of the
            // user scripts
            executeScripts(event, out);
        }

        // flatMap2 handles newly deployed scripts
        public void flatMap2(DeploymentInfo info, Collector<BEA> out) {
            // Instantiate the event processor
            EventProcessor proc = info.createProcessor();
            // Add the processor to the list of processors
            addProcessor(proc);
            // Start the processor (call its initialize method)
            initializeProcessor(proc);
        }
    }

When the operator receives a new event, it retrieves the current user state from the state backend, updates the states, then executes all the scripts that listen to the current category or the like. A state backend is used to persist states, and is preferably scalable. The backend may be an embeddable persistent key value store.

During script execution, most calls to the API methods are translated directly into output elements which are collected on the output collector. For example, when the user calls output.writeToFile(fileName, myData) in their script, the operator provides an output that encodes the necessary information that the sinks will need to write the user data into the appropriate output format. Different types of API calls (aggregators, relational database management system output, message broker output, etc.) will, of course, result in different output information but generally contain enough information for downstream operators to know how to deal with them.

The operator may produce some information on the currently deployed processors, such as notification of failures. This is used for removing faulty scripts from all the subtasks. This may alternatively or additionally be used to report the error back to the frontend so that users can fix their scripts.

A co-flat map operator at the end produces three main types of output: data output, aggregation, and job information. The flat map operator applies a function to every item emitted by a channel, and returns the items so obtained as a new channel. Whenever the mapping function returns a list of items, this list is flattened so that each single item is emitted on its own. Co-operators allow the users to jointly transform two data streams of different types, providing a simple way to jointly manipulate streams with a shared state. They are designed to support joint stream transformations where union is not appropriate due to different data types, or in cases where the user needs explicit tracking of the origin of individual elements.
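The class names above (RichCoFlatMapFunction, Collector, ValueState) suggest an Apache Flink-style DataStream API. On that assumption, wiring the event stream and the script stream into this operator might look roughly as follows; the environment and the source placeholders, as well as the getUserId accessor, are illustrative assumptions rather than actual deployment code.

    // Sketch only: env, eventSource and scriptSource are assumed to be
    // defined elsewhere in a Flink-style streaming environment.
    DataStream<Event> events = env.addSource(eventSource);
    DataStream<DeploymentInfo> scripts = env.addSource(scriptSource);

    DataStream<BEA> output = events
            // Partition events by user identity so that each partition holds
            // the key-value state for exactly one set of users.
            .keyBy(event -> event.getUserId())
            // Connect the keyed event stream with the broadcast script
            // stream; flatMap1 then receives events and flatMap2 receives
            // newly deployed scripts.
            .connect(scripts.broadcast())
            .flatMap(new RBEAProcessor());

Broadcasting the script stream is what lets a script deployed once reach every parallel subtask, consistent with the description above of scripts being stored once per machine and shared between partitions.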
Computing window aggregates—windowing functionality is used to do the actual aggregation on the aggregator output coming out of the main processing operator. The information received is in the form of: (job_id, aggregator_name, output_format, window_size, value). It should be appreciated that this is by way of example only and in some embodiments one or more of the data in the information may be omitted. In some embodiments, alternatively or additionally, one or more other data may be provided. RBEA may support sum aggregators, counters, and/or custom aggregators.

Computing of the window aggregates is provided in some embodiments. The windows may be processed based on event time extracted from the events. In some embodiments, different window sizes per key are provided in the dataflow. In other embodiments, fixed size windows may be used. In some embodiments, timestamp extractors are defined for the incoming event streams which operate directly on the consumed data for correct behaviour. To create different window sizes on the fly, flexible window mechanisms may be used to define the window assigner that puts each element in the correct bucket based on the user-defined aggregator window. To do this, a tumbling event time window assigner is extended:

    // Create a new class named "AggregationWindowAssigner" which extends
    // "TumblingEventTimeWindows"
    class AggregationWindowAssigner extends TumblingEventTimeWindows {

        // The public constructor calls the superclass constructor
        public AggregationWindowAssigner() {
            super(0);
        }

        // Modify the standard behaviour: assignWindows returns the
        // collection of windows the incoming element belongs to
        @Override
        public Collection<TimeWindow> assignWindows(Object in, long timestamp) {
            // Get the aggregate input object "in" in the BEA data format
            BEA aggregateInput = (BEA) in;
            // Get the window size of the aggregate input
            long size = aggregateInput.getWindowSize();
            // Calculate the start and end time of the time window
            long start = timestamp - (timestamp % size);
            long end = start + size;
            // Return the window with the calculated start and end time
            return Collections.singletonList(new TimeWindow(start, end));
        }
    }

Now that this has been done, a window reduce operation may be performed to sum the aggregator values in each window and send the result to the correct output.

Writing the outputs—the user may output to one or a plurality of different output formats in their processing scripts. Each output record generated by calling one of the output API methods will hold some metadata for the selected output format. For example:
File output: file name
Table output: table name
Message broker: category name

There may be one operator for each output format that will write the received events using the metadata attached to them. These operators may produce some information for the web frontend so that it can show the generated output to the user. For instance, when a first record for a new output file is received, the operator outputs some meta information for the web frontend so that it can display this file to the user for the running script.
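Putting the last two stages together, the window reduce and the per-format output routing might be sketched as follows, again assuming the Flink-style API suggested by the code above; the BEA accessor methods, the withValue helper, the OutputFormat enum and the FileOutputSink are illustrative assumptions.

    // Sketch only: sum aggregator values per (job, aggregator) key and
    // window; aggregatorOutput is the BEA stream from the main operator and
    // is assumed to already carry event-time timestamps.
    DataStream<BEA> summed = aggregatorOutput
            // Key by job id and aggregator name so that each aggregator is
            // summed independently.
            .keyBy(bea -> bea.getJobId() + "/" + bea.getAggregatorName())
            // Bucket elements using the per-record window size, as above.
            .window(new AggregationWindowAssigner())
            // Sum the values within each window.
            .reduce((a, b) -> a.withValue(a.getValue() + b.getValue()));

    // Route records to the sink for their selected output format; the
    // metadata attached to each record (e.g. the file name) identifies the
    // destination within that format.
    summed.filter(bea -> bea.getOutputFormat() == OutputFormat.FILE)
          .addSink(new FileOutputSink());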
Reference is made to FIG. 8, which shows a data processing pipeline. Some of the features of the data processing pipeline are configured to allow for communication with the web frontend and/or to handle script failures in a robust way.

The data processing pipeline may contain a number of data sources and functional operators. Data transitioning through the data processing pipeline may comprise at least one of event information, user information, aggregator information and iterator information. The following may correspond generally to the event and script streams of FIG. 9.

Data Source ID=3, with a source based on job deployment, may provide data to Operator ID=4. Operator ID=4 handles at least one of timestamps and watermarks. Operator ID=4 may output data to Operator ID=5. Operator ID=5 may receive data from Operator ID=4. Operator ID=5 handles read processors. Data Source ID=1, with a source based on event data, may output data to Operator ID=2. Operator ID=2 wraps events. Data Source ID=-1 provides an iteration source which may be used, for example, for counting purposes.

Operator ID=9 executes event processors. This may correspond to block 915 of FIG. 9. Operator ID=9 may receive data from at least one of Operator ID=2, Operator ID=5, and Data Source ID=-1. Operator ID=9 may interface with data that originated from at least one of Data Source ID=1, Data Source ID=3, and Data Source ID=-1. Operator ID=9 may provide a data output. Operator ID=9 may pass data to at least one of Operator ID=10 and Operator ID=11.

Operator ID=10 may filter processor information. That is to say, Operator ID=10 may selectively pass information forward in the data processing pipeline, based upon filtering criteria. The filtering criteria of Operator ID=10 may be a predetermined function. Operator ID=10 may provide data to at least one of Operator ID=34 and Operator ID=43. Operator ID=34 may filter failures. More specifically, Operator ID=34 may be used to determine errors that have occurred during the data processing in the data processing pipeline. Operator ID=34 may provide a data output. Operator ID=39 may receive data from at least one of Operator ID=34 and Operator ID=37. Operator ID=39 may operate on deployment information. Operator ID=39 may provide an output. Data Sink ID=-2 may provide an iteration sink. Data Sink ID=-2 may receive data from Operator ID=39.

Operator ID=11 may receive data from Operator ID=9. Operator ID=11 may filter data, for example it may filter BEA data. Operator ID=11 may provide data to at least one of Operator ID=15, Operator ID=28, Operator ID=32, and Data Sink ID=26. Operator ID=15 may provide aggregation. More specifically, Operator ID=15 may provide a bucket aggregator. Operator ID=28 may provide a file output. Operator ID=28 may provide a file output to Operator ID=43, wherein the file output data may contain at least event data, such as transaction data. Operator ID=32 may provide an output. Data Sink ID=26 may provide an output.

Operator ID=15 may provide data to Operator ID=36. Operator ID=36 may provide aggregates per second. Operator ID=15 may provide data to Operator ID=31. Operator ID=31 may provide an aggregator output. Operator ID=15 may provide data to Operator ID=28. Operator ID=15 may provide data to Operator ID=32. Operator ID=15 may provide data to Data Sink ID=26. Operator ID=36 may provide data to Operator ID=37. Operator ID=37 may provide an indicator if the value of AggregatesPerSec is too large. Operator ID=37 may fail if the number of aggregations per second is too large.

Operator ID=43 may receive data from at least one of Operator ID=37, Operator ID=31, Operator ID=28, and Operator ID=32. Operator ID=43 may create job information. Data Sink ID=44 may push to the frontend.
Data Sink ID=44 may receive data from Operator ID=43.

The main processing operator (Execute EventProcessor) is configured to output two types of events: actual processing events generated by the scripts; and job information about deployment/failures and so on. Information about the errors in the scripts may be shown on the web front-end for easier debugging. Output handling may happen in flat map operators which forward newly created file/table information to the web frontend. Iterative streams may be used to propagate job failures from one subtask to another. The number of events each script sends to the outputs is monitored. The scripts that generate too many events are failed to avoid crashing the system.

A communication protocol may be used between the web interface and the job to decouple the two systems. The communication protocol may be any suitable communication protocol or message brokering communication protocol.

RBEA provides a tool that can be used to do complex event processing on live streams, easily, without having to have knowledge of operational details. RBEA scripts may be managed and executed in a runtime approach where events and script deployments are handled by a single stream processing job that takes care of both the processing (script execution) and the life-cycle management of the deployed scripts.

In some embodiments, event data may be collected for a computer implemented game being played on a user device. Event data may comprise data relating to something (an event) which has occurred, such as a level having been completed, the player having started playing the game, a particular booster having been used, or the like. The event data may also comprise associated information such as a user identity, a device identity, a location of the user, the game being played and/or the like. The event data may comprise contextual information about the game at the point at which the event occurred, such as how much life a player has left when the event occurs or the like. The event data which is collected may comprise any one or more of the above data and/or any other suitable data.

The code, when run, will provide an output for the required query based on input data. The code may be run on one or more processors in conjunction with one or more memories. The code may be run on the same at least one apparatus which provides the processing and/or on at least one different apparatus. The apparatus may be at least one server or the like.

Various embodiments of methods and devices have been described in the foregoing. It should be appreciated that such methods and devices may be implemented in apparatus, where the apparatus is implemented by any suitable circuitry. Some embodiments may be implemented by at least one memory and at least one processor. The memory may be provided by memory circuitry and the processor may be provided by processor circuitry. Some embodiments may be provided by a computer program running on the at least one processor. The computer program may comprise computer implemented instructions which are stored in the at least one memory and which may be run on the at least one processor. In general, the various embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof.
Some aspects of the invention may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof. The software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media such as hard disk or floppy disks, and optical media such as for example DVD and the data variants thereof, and/or CD. The foregoing description has provided by way of exemplary and non-limiting examples a full and informative description of the exemplary embodiment of this invention. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. However, all such and similar modifications of the teachings of this invention will still fall within the scope of this invention as defined in the appended claims. Indeed there is a further embodiment comprising a combination of one or more of any of the other embodiments previously discussed.
11860888
DETAILED DESCRIPTION
The approach is illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements. It should be noted that references to "an" or "one" or "some" embodiments in this disclosure are not necessarily to the same embodiment, and such references mean at least one.

A new approach is proposed that contemplates systems and methods to identify events that are the topic of content shared and viewed by users of a social network. An event detection system is configured to access a repository that contains a collection of media content. The media content may for example include images, videos, audio clips, and the like, wherein the media content comprises features that include: tags (e.g., hashtags or other similar mechanisms to label and sort content); captions that comprise one or more words or phrases; continuous numerical values; geolocation data (e.g., geo-hash, check-in data, coordinates); as well as temporal data (e.g., timestamps).

The first step in detecting events is constructing links between similar media content. Media content items are assumed to be similar if they are created, or otherwise accessed and used, at around the same time at nearby locations. The event detection system identifies groups of similar content among the collection of media content, based on similarities between the corresponding geolocation data and temporal data associated with the content. Media content created at the same or nearly the same time, or at the same or nearly the same location, has a high likelihood of being related. The event detection system therefore groups together clusters of media content based on the corresponding geolocation and temporal data. In some embodiments, "similarity" is further defined by a designation of temporal parameters and location parameters, wherein the temporal parameters include an interval of time (t_T) and the location parameters include a maximum geolocation distance (t_L) between any two points. Thus, picking a large t_T and t_L will result in larger clusters of media content, while a small t_T and t_L will result in smaller clusters of media content.

In response to clustering the media content, the event detection system extracts features from the clusters of media content, and designates the features to corresponding feature categories. The temporal trends and geographical proximity of certain clusters of features may therefore be representative of an event. For example, a first cluster of content may comprise content that includes a first set of features. The event detection system extracts the features from the cluster, and designates a feature category to include the first set of features. As discussed above, the features of the media content include: tags (e.g., hashtags or other similar mechanisms to label and sort content); captions that comprise one or more words or phrases; continuous numerical values; geolocation data (e.g., geo-hash, check-in data, coordinates); as well as temporal data (e.g., timestamps).

The event detection system generates a graph to represent a latent three-dimensional (3D) space, wherein the graph comprises an X, Y, and Z axis. In some embodiments, the X axis represents temporal values, the Y axis represents location values, and the Z axis represents feature values.
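Before turning to the latent-space representation, the time-and-location linking rule described above can be illustrated with a short sketch. This is a minimal illustration only, assuming plain numeric metadata; the MediaItem fields, the SimilarityLinker class and the haversine distance helper are assumptions introduced here, not the patent's own API.

    // Two items are linked when their timestamps differ by at most t_T and
    // their locations are at most t_L apart.
    final class MediaItem {
        final long timestampMillis;
        final double lat, lon;
        MediaItem(long ts, double lat, double lon) {
            this.timestampMillis = ts; this.lat = lat; this.lon = lon;
        }
    }

    final class SimilarityLinker {
        private final long tT;     // temporal threshold (milliseconds)
        private final double tL;   // location threshold (kilometres)

        SimilarityLinker(long tT, double tL) { this.tT = tT; this.tL = tL; }

        boolean similar(MediaItem a, MediaItem b) {
            return Math.abs(a.timestampMillis - b.timestampMillis) <= tT
                    && haversineKm(a.lat, a.lon, b.lat, b.lon) <= tL;
        }

        // Great-circle distance between two coordinates in kilometres.
        private static double haversineKm(double lat1, double lon1,
                                          double lat2, double lon2) {
            double dLat = Math.toRadians(lat2 - lat1);
            double dLon = Math.toRadians(lon2 - lon1);
            double s = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                     + Math.cos(Math.toRadians(lat1))
                     * Math.cos(Math.toRadians(lat2))
                     * Math.sin(dLon / 2) * Math.sin(dLon / 2);
            return 6371.0 * 2 * Math.asin(Math.sqrt(s));
        }
    }

Consistent with the description above, enlarging t_T and t_L links more items and therefore yields larger clusters.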
Because clusters of media content are presumed to be similar, the associated features of the media content among the content should also have similar representations in the latent 3D space, along the Z-axis. Thus, in response to extracting the first set of features from the cluster of content, the event detection system allocates a region of the Z axis to the first set of features, and assigns the region a value. The cluster of content may thereby be plotted and represented in the 3D space based on the corresponding geolocation data, temporal data, and content features. Clusters of content may thus be identified, such that each cluster is presumed to relate to the same or a similar event. The content may also form clusters that may only be identified based on a perspective of the graph. For example, by viewing the graph from the perspective of the Y-axis and the Z-axis alone, a number of clusters may be depicted, and similarly, another set of clusters may be depicted from the perspective of the X-axis and Z-axis. Based on heuristics, two pieces of media content are assumed to be "similar" if they happen at the same time and at nearby locations. The two pieces of media content would therefore have similar representations within the 3D space.

Network Regularization
In some embodiments, the representation of media content "C" is characterized by the average of its associated features (e.g., tags, captions, continuous numerical values), as:

x = \frac{1}{|D_x|} \sum_{t \in D_x} e_t

where e_t is the vectorized representation of tag/entity t, and D_x is the set of tags/entities associated with the content C. For each content pair "i" and "j," denote their representations as x_i and x_j respectively. As per traditional network embedding models, the probability of observing an edge between i and j is modeled as p_{ij} = \mathrm{sigmoid}(x_i \cdot x_j). The absence of an edge will happen with probability 1 - p_{ij}, wherein "sigmoid" is the sigmoid function: \mathrm{sigmoid}(x) = 1/(1 + e^{-x}).

Clustering Cost
For a given cluster of media content k, the center may be defined as the average of all media content that comprises the cluster, which may be represented as:

c_k = \frac{1}{|C_k|} \sum_{i \in C_k} x_i

The cluster assignment of media content s_i (denoted as ca_i) is represented as:

ca_i = \arg\min_k \|x_i - c_k\|^2

Ideally, each cluster should be as coherent as possible, while isolated enough to be differentiated from other clusters. The average of all points of any given cluster is defined as the center of the cluster. Coherence is measured by the mean distance from every point of a cluster (where every point represents a distinct piece of content) to the center of the cluster (the "intra-cluster distance"). Denoting the center of cluster k as c_k, the distance within the cluster k is defined as:

d_k^{intra} = \sum_{i \in C_k} \|x_i - c_k\|^2

The overall intra-cluster distance is thus the summation over all clusters:

d^{intra} = \sum_{k=1}^{K} d_k^{intra} = \sum_{k=1}^{K} \sum_{i \in C_k} \|x_i - c_k\|^2

The inter-cluster distance is defined as the summation of the pairwise center distances between every pair of clusters:

d^{inter} = \sum_{i,j : i \neq j} \|c_i - c_j\|^2

The total clustering cost will be a weighted average of the two terms. Putting them together, the objective is a weighted sum of the costs above, where the weights are model hyper-parameters, indicating how much emphasis is placed on each component. In some embodiments, we seek to minimize the objective function below:

\min\; -\sum_{i,j} \log\left(p_{ij}^{Y_{ij}} \cdot (1 - p_{ij})^{1 - Y_{ij}}\right) + \mu \cdot (d^{intra} - \lambda \cdot d^{inter})

where \mu > 0 and \lambda > 0 are two hyper-parameters, and Y_{ij} = 1 if an edge is observed between i and j and 0 otherwise.
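A minimal sketch of this objective follows, transcribing the formulas above directly onto plain double[] vectors; the class and method names are illustrative assumptions.

    // Edge negative log-likelihood plus mu * (d_intra - lambda * d_inter).
    final class ClusteringObjective {

        static double dot(double[] a, double[] b) {
            double s = 0;
            for (int d = 0; d < a.length; d++) s += a[d] * b[d];
            return s;
        }

        static double sqDist(double[] a, double[] b) {
            double s = 0;
            for (int d = 0; d < a.length; d++) {
                double diff = a[d] - b[d];
                s += diff * diff;
            }
            return s;
        }

        static double sigmoid(double z) {
            return 1.0 / (1.0 + Math.exp(-z));
        }

        // d_intra: squared distance of every point x_i to its assigned
        // cluster center.
        static double intra(double[][] x, double[][] centers, int[] assignment) {
            double total = 0;
            for (int i = 0; i < x.length; i++) {
                total += sqDist(x[i], centers[assignment[i]]);
            }
            return total;
        }

        // d_inter: squared distance between every ordered pair of centers.
        static double inter(double[][] centers) {
            double total = 0;
            for (int i = 0; i < centers.length; i++) {
                for (int j = 0; j < centers.length; j++) {
                    if (i != j) total += sqDist(centers[i], centers[j]);
                }
            }
            return total;
        }

        // Negative log-likelihood of the observed adjacency y under
        // p_ij = sigmoid(x_i . x_j).
        static double edgeNll(double[][] x, int[][] y) {
            double nll = 0;
            for (int i = 0; i < x.length; i++) {
                for (int j = i + 1; j < x.length; j++) {
                    double p = sigmoid(dot(x[i], x[j]));
                    nll -= (y[i][j] == 1) ? Math.log(p) : Math.log(1 - p);
                }
            }
            return nll;
        }

        static double objective(double[][] x, int[][] y, double[][] centers,
                                int[] assignment, double mu, double lambda) {
            return edgeNll(x, y)
                    + mu * (intra(x, centers, assignment) - lambda * inter(centers));
        }
    }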
Optimization & Implementation
In some embodiments, the objective function is optimized with respect to the model parameters, i.e., the vector embeddings for tags and entities from captions. Standard iterative optimization algorithms can be applied.

Generalization to Continuous Features
The attributes of content may also be continuous numerical values. Mapping discrete attributes to their embeddings may be treated as a one-layer neural network with lookup vectors as the weight matrix and one-hot sparse encoding as features. Therefore it is natural to generalize this concept to continuous vectors, by (1) concatenating the discrete (one-hot) and continuous attributes at the raw-feature level, and forcing the weight matrix to be quasi-diagonal; or (2) concatenating the vectors at the output layer, where the discrete and continuous components are treated separately from each other. Mathematically the two will be the same.

Evaluation
Each cluster contains events of a certain type. Since specific meanings are not assigned to each latent dimension, in some embodiments we may assume that the type of the event is defined by the most frequent feature (i.e., tag, caption, etc.) in the media content that comprises a cluster. In order to reduce the signal from less informative words (e.g., a, and, the, of, etc.) we may use TF-IDF, a weighted sum of word frequencies, when counting the occurrences of keywords. Since features of media content may include free text inputs (e.g., captions), there may be a lot of noise that could negatively affect the evaluation. In some embodiments, "stop-words" as well as words that are shorter than three Unicode characters are removed from all text based features. Thus, in such embodiments, a weighted sum of features may be calculated for each cluster, and an event type may be determined for the cluster based on the most frequent feature, based on the weighted sum. The event type may thereby be assigned to the corresponding cluster.

Since the "ground truth" labels of an event type are rather subjective and sparse, we will mainly focus on case studies of the results in terms of evaluating the model. We investigate the following two aspects: 1) what is the type of the event, or as a more specific example, "is this event a concert or a protest?" The type of event is one possible output; and 2) what specific keywords or anomalies are associated with the event. For example, given that the event is a basketball game, which teams are playing? Who is winning? These are another possible output.

The first set of keywords may be identified by the major clusters. Major clusters may contain stories about the same type of event, and the event can be found by various statistics of the cluster. A weighted count of tags seems like a reasonable measure for now. Cluster density is another metric to be considered, since content about an event tends to be highly correlated both temporally and geographically. The second set of keywords may be characterized by the anomalies in the tag embedding space. The intuition is that these keywords should be highly distinguished from the background words (which stay around the origin in the embedding space). Another approach may be to find the tags (i.e., feature values) that are far away from every cluster center.

The event detection system reports the most frequent keywords (i.e., content) from each cluster in a table that represents a particular region or location. For example, a table may be generated to depict keywords related to a particular location.
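The TF-IDF weighting described above for ranking cluster keywords can be sketched as follows. This is a minimal illustration; the class, the tokenized input and the stop-word set are assumptions introduced here, while the filter for stop-words and for words shorter than three characters follows the rules stated in the text.

    import java.util.*;

    final class TfIdf {
        // idf(t) = log(N / df(t)), computed over all clusters' keyword lists.
        static Map<String, Double> inverseDocFrequency(List<List<String>> docs) {
            Map<String, Integer> df = new HashMap<>();
            for (List<String> doc : docs)
                for (String t : new HashSet<>(doc)) df.merge(t, 1, Integer::sum);
            Map<String, Double> idf = new HashMap<>();
            for (Map.Entry<String, Integer> e : df.entrySet())
                idf.put(e.getKey(), Math.log((double) docs.size() / e.getValue()));
            return idf;
        }

        // Scores the terms of one cluster by tf * idf, skipping stop-words
        // and terms shorter than three characters.
        static Map<String, Double> score(List<String> doc,
                                         Map<String, Double> idf,
                                         Set<String> stopWords) {
            Map<String, Double> tf = new HashMap<>();
            for (String t : doc) {
                if (t.length() < 3 || stopWords.contains(t)) continue;
                tf.merge(t, 1.0, Double::sum);
            }
            Map<String, Double> scores = new HashMap<>();
            for (Map.Entry<String, Double> e : tf.entrySet())
                scores.put(e.getKey(),
                           e.getValue() * idf.getOrDefault(e.getKey(), 0.0));
            return scores;
        }
    }

Scoring each cluster's keywords this way down-weights words that appear in most clusters, so the top-scoring keywords are the ones most distinctive of a single cluster.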
The most frequent keyword from each cluster located within the particular location and received during a temporal period may be displayed. By reviewing the table, an event may be inferred. For example, a table may be assembled to depict a particular city (e.g., Indianapolis, Indiana) on a particular day (e.g., May 27, 2018). The event detection system may access a content repository that includes content received from the location and during the time, and identify clusters of content based on the metadata of the content (e.g., the geolocation and temporal data). Having clustered the content based on the metadata, a table may be generated wherein the table comprises a display of the most frequent keyword of each cluster. By reviewing the table, a user may identify the most common keywords from each cluster in order to infer an event.

FIG. 1 is a block diagram showing an example messaging system 100 for exchanging data (e.g., messages and associated content) over a network. The messaging system 100 includes multiple client devices 102, each of which hosts a number of applications including a messaging client application 104. Each messaging client application 104 is communicatively coupled to other instances of the messaging client application 104 and a messaging server system 108 via a network 106 (e.g., the Internet). Accordingly, each messaging client application 104 is able to communicate and exchange data with another messaging client application 104 and with the messaging server system 108 via the network 106. The data exchanged between messaging client applications 104, and between a messaging client application 104 and the messaging server system 108, includes functions (e.g., commands to invoke functions) as well as payload data (e.g., text, audio, video or other multimedia data).

The messaging server system 108 provides server-side functionality via the network 106 to a particular messaging client application 104. While certain functions of the messaging system 100 are described herein as being performed by either a messaging client application 104 or by the messaging server system 108, it will be appreciated that the location of certain functionality either within the messaging client application 104 or the messaging server system 108 is a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the messaging server system 108, but to later migrate this technology and functionality to the messaging client application 104 where a client device 102 has sufficient processing capacity.

The messaging server system 108 supports various services and operations that are provided to the messaging client application 104. Such operations include transmitting data to, receiving data from, and processing data generated by the messaging client application 104. In some embodiments, this data includes message content (including content features), client device information, geolocation information, media annotation and overlays, message content persistence conditions, social network information, and live event information, as examples. In other embodiments, other data is used. Data exchanges within the messaging system 100 are invoked and controlled through functions available via GUIs of the messaging client application 104.

Turning now specifically to the messaging server system 108, an Application Program Interface (API) server 110 is coupled to, and provides a programmatic interface to, an application server 112.
The application server 112 is communicatively coupled to a database server 118, which facilitates access to a database 120 in which is stored data associated with messages processed by the application server 112. Dealing specifically with the Application Program Interface (API) server 110, this server receives and transmits message data (e.g., commands and message payloads) between the client device 102 and the application server 112. Specifically, the Application Program Interface (API) server 110 provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the messaging client application 104 in order to invoke functionality of the application server 112. The Application Program Interface (API) server 110 exposes various functions supported by the application server 112, including account registration, login functionality, the sending of messages, via the application server 112, from a particular messaging client application 104 to another messaging client application 104, the sending of media files (e.g., images or video) from a messaging client application 104 to the messaging server application 114, and for possible access by another messaging client application 104, the setting of a collection of media data (e.g., story), the retrieval of a list of friends of a user of a client device 102, the retrieval of such collections, the retrieval of messages and content, the adding and deletion of friends to a social graph, the location of friends within a social graph, and the opening of an application event (e.g., relating to the messaging client application 104).

The application server 112 hosts a number of applications and subsystems, including a messaging server application 114, an image processing system 116, a social network system 122, and an event detection system 124. The messaging server application 114 implements a number of message processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., textual and multimedia content) included in messages received from multiple instances of the messaging client application 104. As will be described in further detail, the text and media content from multiple sources may be aggregated into collections of content (e.g., called stories or galleries). These collections are then made available, by the messaging server application 114, to the messaging client application 104. Other processor and memory intensive processing of data may also be performed server-side by the messaging server application 114, in view of the hardware requirements for such processing.

The application server 112 also includes an image processing system 116 that is dedicated to performing various image processing operations, typically with respect to images or video received within the payload of a message at the messaging server application 114.

The social network system 122 supports various social networking functions and services, and makes these functions and services available to the messaging server application 114. To this end, the social network system 122 maintains and accesses an entity graph 304 within the database 120. Examples of functions and services supported by the social network system 122 include the identification of other users of the messaging system 100 with which a particular user has relationships or is "following," and also the identification of other entities and interests of a particular user.
The application server 112 is communicatively coupled to a database server 118, which facilitates access to a database 120 in which is stored data associated with messages processed by the messaging server application 114.

FIG. 2 is a block diagram illustrating further details regarding the messaging system 100, according to example embodiments. Specifically, the messaging system 100 is shown to comprise the messaging client application 104 and the application server 112, which in turn embody a number of subsystems, namely an ephemeral timer system 202, a collection management system 204 and an annotation system 206.

The ephemeral timer system 202 is responsible for enforcing the temporary access to content permitted by the messaging client application 104 and the messaging server application 114. To this end, the ephemeral timer system 202 incorporates a number of timers that, based on duration and display parameters associated with a message, collection of messages (e.g., a SNAPCHAT story), or graphical element, selectively display and enable access to messages and associated content via the messaging client application 104. Further details regarding the operation of the ephemeral timer system 202 are provided below.

The collection management system 204 is responsible for managing collections of media (e.g., collections of text, image, video and audio data). In some examples, a collection of content (e.g., messages, including images, video, text and audio) may be organized into an "event gallery" or an "event story." Such a collection may be made available for a specified time period, such as the duration of an event to which the content relates. For example, content relating to a music concert may be made available as a "story" for the duration of that music concert. The collection management system 204 may also be responsible for publishing an icon that provides notification of the existence of a particular collection to the user interface of the messaging client application 104.

The collection management system 204 furthermore includes a curation interface 208 that allows a collection manager to manage and curate a particular collection of content. For example, the curation interface 208 enables an event organizer to curate a collection of content relating to a specific event (e.g., delete inappropriate content or redundant messages). Additionally, the collection management system 204 employs machine vision (or image recognition technology) and content rules to automatically curate a content collection. In certain embodiments, compensation may be paid to a user for inclusion of user generated content into a collection. In such cases, the curation interface 208 operates to automatically make payments to such users for the use of their content.

The annotation system 206 provides various functions that enable a user to annotate or otherwise modify or edit media content associated with a message. For example, the annotation system 206 provides functions related to the generation and publishing of media overlays for messages processed by the messaging system 100. The annotation system 206 operatively supplies a media overlay (e.g., a SNAPCHAT filter) to the messaging client application 104 based on a geolocation of the client device 102. In another example, the annotation system 206 operatively supplies a media overlay to the messaging client application 104 based on other information, such as social network information of the user of the client device 102. A media overlay may include audio and visual content and visual effects.
Examples of audio and visual content include pictures, texts, logos, animations, and sound effects. An example of a visual effect includes color overlaying. The audio and visual content or the visual effects can be applied to a media content item (e.g., a photo) at the client device 102. For example, the media overlay may include text that can be overlaid on top of a photograph taken by the client device 102. In another example, the media overlay includes an identification of a location overlay (e.g., Venice beach), a name of a live event, or a name of a merchant overlay (e.g., Beach Coffee House). In another example, the annotation system 206 uses the geolocation of the client device 102 to identify a media overlay that includes the name of a merchant at the geolocation of the client device 102. The media overlay may include other indicia associated with the merchant. The media overlays may be stored in the database 120 and accessed through the database server 118.

In one example embodiment, the annotation system 206 provides a user-based publication platform that enables users to select a geolocation on a map, and upload content associated with the selected geolocation. The user may also specify circumstances under which a particular media overlay should be offered to other users. The annotation system 206 generates a media overlay that includes the uploaded content and associates the uploaded content with the selected geolocation.

In another example embodiment, the annotation system 206 provides a merchant-based publication platform that enables merchants to select a particular media overlay associated with a geolocation via a bidding process. For example, the annotation system 206 associates the media overlay of a highest bidding merchant with a corresponding geolocation for a predefined amount of time.

FIG. 3 is a block diagram illustrating components of the event detection system 124 that configure the event detection system 124 to: access a repository that comprises a collection of content; identify clusters of similar content within the collection of content based on temporal and geolocation data; generate a graph that comprises an X-axis, a Y-axis, and a Z-axis, wherein the X and Y axes correspond to temporal and geolocation values, and the Z axis corresponds to feature values; extract content features from each of the clusters of content; and allocate regions of the Z-axis to the extracted content features from each of the clusters of content, in order to plot vector representations of the content on the 3D graph, according to certain example embodiments. The event detection system 124 is shown as including a content module 302, a graphing module 304, an allocation module 306, and a clustering module 308, all configured to communicate with each other (e.g., via a bus, shared memory, or a switch). Any one or more of these modules may be implemented using one or more processors 310 (e.g., by configuring such one or more processors to perform functions described for that module) and hence include one or more of the processors 310.

Any one or more of the modules described may be implemented using hardware alone (e.g., one or more of the processors 310 of a machine) or a combination of hardware and software. For example, any module described of the event detection system 124 may physically include an arrangement of one or more of the processors 310 (e.g., a subset of or among the one or more processors of the machine) configured to perform the operations described herein for that module.
As another example, any module of the event detection system 124 may include software, hardware, or both, that configure an arrangement of one or more processors 310 (e.g., among the one or more processors of the machine) to perform the operations described herein for that module. Accordingly, different modules of the event detection system 124 may include and configure different arrangements of such processors 310 or a single arrangement of such processors 310 at different points in time. Moreover, any two or more modules of the event detection system 124 may be combined into a single module, and the functions described herein for a single module may be subdivided among multiple modules. Furthermore, according to various example embodiments, modules described herein as being implemented within a single machine, database, or device may be distributed across multiple machines, databases, or devices.

FIG. 4 is a flowchart illustrating a method 400 for plotting a representation of media content within a three-dimensional graph, according to certain example embodiments. Operations of the method 400 may be performed by the modules described above with respect to FIG. 3. As shown in FIG. 4, the method 400 includes one or more operations 402, 404, 406, 408, 410, and 412.

At operation 402, the content module 302 accesses a repository that comprises a collection of content, such as media content. The media content comprises metadata that includes content features, as discussed above. For example, the content features include text strings such as tags (e.g., hashtags or other similar mechanisms to label and sort content); captions that comprise one or more words or phrases; continuous numerical values; geolocation data (e.g., geo-hash, check-in data, coordinates); as well as temporal data (e.g., timestamps).

At operation 404, the content module 302 extracts the metadata that includes the geolocation data and the temporal data from the media content. The geolocation data and the temporal data may each define a geolocation value and a temporal value. At operation 406, the graphing module 304 generates a graph that comprises a first axis that represents location values, a second axis that represents temporal values, and a third axis that represents feature values. At operation 408, the graphing module 304 plots a representation of the media content at a position within the graph, wherein coordinates of the position of the representation are based on the temporal value, the geolocation value, and the content feature.

FIG. 5 is a flowchart illustrating a method 500 for detecting similarities in media content, according to certain example embodiments. Operations of the method 500 may be performed by the modules described above with respect to FIG. 3. As shown in FIG. 5, the method 500 includes one or more operations 502, 504, and 506. At operation 502, the graphing module 304 plots a first representation of a first media content at a first position within a three-dimensional graph, wherein the three-dimensional graph comprises a first axis that represents location values, a second axis that represents temporal values, and a third axis that represents feature values, and wherein coordinates of the first position are based on metadata of the first media content that includes geolocation data, temporal data, and a content feature.
At operation 504, the graphing module 304 plots a second representation of a second media content at a second position within the three-dimensional graph, wherein coordinates of the second position are based on metadata of the second media content. At operation 506, the clustering module 308 detects a similarity between the first media content and the second media content based on the first representation and the second representation. For example, as discussed with reference to FIG. 6, the clustering module 308 may receive clustering parameters that define geographical and temporal thresholds.

FIG. 6 is a flowchart illustrating a method 600 for clustering content based on clustering parameters, according to certain example embodiments. Operations of the method 600 may be performed by the modules described above with respect to FIG. 3. As shown in FIG. 6, the method 600 includes one or more operations 602, 604, and 606. At operation 602, the clustering module 308 receives clustering parameters that include a temporal threshold and a geographical threshold. At operation 604, the content module 302 extracts metadata from content accessed at a content repository. Based on the clustering parameters, the clustering module 308 may identify one or more clusters of content, wherein the geolocation data and temporal data of the content within a cluster are all within the threshold deviation from one another as defined by the clustering parameters. At operation 606, based on the clustering of the content, the allocation module 306 allocates media content to a particular content group based on the metadata and the clustering parameters.

FIG. 7 is a flowchart illustrating a method 700 for generating a table that depicts events at a location, according to certain example embodiments. Operations of the method 700 may be performed by the modules described above with respect to FIG. 3. As shown in FIG. 7, the method 700 includes one or more operations 702, 704, and 706. At operation 702, the content module 302 defines a content group based on geolocation data and temporal data. For example, in response to extracting the metadata from the first media content, as in operation 404 of the method 400, the content module 302 may define a content group based on the geolocation data and the temporal data extracted from the metadata of the media content. At operation 704, the allocation module 306 allocates content features from content received at the time and location defined by the content group to the content group. At operation 706, the graphing module 304 generates a table to depict the content group, wherein the table includes all content features assigned to the content group. Consider table 1002 of FIG. 10 as an illustrative example.

FIG. 8 is a flowchart illustrating a method 800 for allocating feature values to an axis of a graph, according to certain example embodiments. Operations of the method 800 may be performed by the modules described above with respect to FIG. 3. As shown in FIG. 8, the method 800 includes one or more operations 802, 804, 806, and 808. At operation 802, as in operation 406 of the method 400, the graphing module 304 generates a graph that comprises a first axis that represents location values, a second axis that represents temporal values, and a third axis that represents feature values. At operation 804, the content module 302 extracts metadata that includes geolocation data, temporal data, and at least a content feature from a media content. For example, the content feature may include a text string.
At operation 806, the content module 302 generates a vector value based on the text string. In some embodiments, the value of the content feature may be based on the term frequency-inverse document frequency (tf-idf) of a given content feature. The tf-idf is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus. It is often used as a weighting factor in searches in information retrieval, text mining, and user modeling. The tf-idf value increases proportionally to the number of times a word appears in the document and is offset by the frequency of the word in the corpus, which helps to adjust for the fact that some words appear more frequently in general. At operation 808, the graphing module 304 allocates a location along the third axis to the content feature, wherein the location is based on the vector value calculated in operation 806.

FIG. 9 is a diagram depicting a three-dimensional (3D) graph 900 for identifying clusters of similar content, according to certain example embodiments. As seen in FIG. 9, the 3D graph comprises a Y-axis 902, an X-axis 904, and a Z-axis 906, wherein the Y-axis 902 comprises a set of temporal values, the X-axis 904 comprises a set of location values, and the Z-axis 906 comprises a set of feature values. As seen in the 3D graph 900, a representation of media content 908 may be depicted as a point in the 3D space represented by the graph 900.

FIG. 10 is a diagram 1000 depicting a table 1002 comprising a display of content features that represent an event at a location, according to certain example embodiments. As seen in the diagram 1000, the table 1002 may include a display of content features representing clusters of content received from a particular location and time. For example, a user may provide an input to define a location and time, and in response, the event detection system 124 may perform one or more of the methods described in FIGS. 4, 5, 6, 7, and 8, and generate the table 1002. The table 1002 therefore provides a visualization of clusters of content, enabling the user to infer an event based on the most common keywords (i.e., content) displayed. Based on a review of the table 1002, a user may therefore infer that the content "INDY" and "RACE" appear most frequently in various clusters based on the corresponding geolocation and temporal data.

Software Architecture
FIG. 11 is a block diagram illustrating an example software architecture 1106, which may be used in conjunction with various hardware architectures herein described. FIG. 11 is a non-limiting example of a software architecture and it will be appreciated that many other architectures may be implemented to facilitate the functionality described herein. The software architecture 1106 may execute on hardware such as the machine 1200 of FIG. 12 that includes, among other things, processors 1204, memory 1214, and I/O components 1218. A representative hardware layer 1152 is illustrated and can represent, for example, the machine 1200 of FIG. 12. The representative hardware layer 1152 includes a processing unit 1154 having associated executable instructions 1104. Executable instructions 1104 represent the executable instructions of the software architecture 1106, including implementation of the methods, components and so forth described herein. The hardware layer 1152 also includes memory and/or storage modules (memory/storage 1156), which also have executable instructions 1104. The hardware layer 1152 may also comprise other hardware 1158.
In the example architecture ofFIG.11, the software architecture1106may be conceptualized as a stack of layers where each layer provides particular functionality. For example, the software architecture1106may include layers such as an operating system1102, libraries1120, applications1116and a presentation layer1114. Operationally, the applications1116and/or other components within the layers may invoke application programming interface (API) calls1108through the software stack and receive a response to the API calls1108. The layers illustrated are representative in nature and not all software architectures have all layers. For example, some mobile or special purpose operating systems may not provide a frameworks/middleware1118, while others may provide such a layer. Other software architectures may include additional or different layers.

The operating system1102may manage hardware resources and provide common services. The operating system1102may include, for example, a kernel1122, services1124and drivers1126. The kernel1122may act as an abstraction layer between the hardware and the other software layers. For example, the kernel1122may be responsible for memory management, processor management (e.g., scheduling), component management, networking, security settings, and so on. The services1124may provide other common services for the other software layers. The drivers1126are responsible for controlling or interfacing with the underlying hardware. For instance, the drivers1126include display drivers, camera drivers, Bluetooth® drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), Wi-Fi® drivers, audio drivers, power management drivers, and so forth depending on the hardware configuration.

The libraries1120provide a common infrastructure that is used by the applications1116and/or other components and/or layers. The libraries1120provide functionality that allows other software components to perform tasks in an easier fashion than interfacing directly with the underlying operating system1102functionality (e.g., kernel1122, services1124and/or drivers1126). The libraries1120may include system libraries1144(e.g., C standard library) that may provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. In addition, the libraries1120may include API libraries1146such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG), graphics libraries (e.g., an OpenGL framework that may be used to render 2D and 3D graphic content on a display), database libraries (e.g., SQLite that may provide various relational database functions), web libraries (e.g., WebKit that may provide web browsing functionality), and the like. The libraries1120may also include a wide variety of other libraries1148to provide many other APIs to the applications1116and other software components/modules.

The frameworks/middleware1118(also sometimes referred to as middleware) provide a higher-level common infrastructure that may be used by the applications1116and/or other software components/modules. For example, the frameworks/middleware1118may provide various graphic user interface (GUI) functions, high-level resource management, high-level location services, and so forth.
The frameworks/middleware1118may provide a broad spectrum of other APIs that may be utilized by the applications1116and/or other software components/modules, some of which may be specific to a particular operating system1102or platform.

The applications1116include built-in applications1138and/or third-party applications1140. Examples of representative built-in applications1138may include, but are not limited to, a contacts application, a browser application, a book reader application, a location application, a media application, a messaging application, and/or a game application. Third-party applications1140may include an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform, and may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or other mobile operating systems. The third-party applications1140may invoke the API calls1108provided by the mobile operating system (such as operating system1102) to facilitate functionality described herein. The applications1116may use built-in operating system functions (e.g., kernel1122, services1124and/or drivers1126), libraries1120, and frameworks/middleware1118to create user interfaces to interact with users of the system. Alternatively, or additionally, in some systems interactions with a user may occur through a presentation layer, such as presentation layer1114. In these systems, the application/component “logic” can be separated from the aspects of the application/component that interact with a user.

FIG.12is a block diagram illustrating components of a machine1200, according to some example embodiments, able to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically,FIG.12shows a diagrammatic representation of the machine1200in the example form of a computer system, within which instructions1210(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine1200to perform any one or more of the methodologies discussed herein may be executed. As such, the instructions1210may be used to implement modules or components described herein. The instructions1210transform the general, non-programmed machine1200into a particular machine1200programmed to carry out the described and illustrated functions in the manner described. In alternative embodiments, the machine1200operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine1200may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine1200may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions1210, sequentially or otherwise, that specify actions to be taken by the machine1200.
Further, while only a single machine1200is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions1210to perform any one or more of the methodologies discussed herein.

The machine1200may include processors1204, memory/storage1206, and I/O components1218, which may be configured to communicate with each other such as via a bus1202. The memory/storage1206may include a memory1214, such as a main memory, or other memory storage, and a storage unit1216, both accessible to the processors1204such as via the bus1202. The storage unit1216and memory1214store the instructions1210embodying any one or more of the methodologies or functions described herein. The instructions1210may also reside, completely or partially, within the memory1214, within the storage unit1216, within at least one of the processors1204(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine1200. Accordingly, the memory1214, the storage unit1216, and the memory of processors1204are examples of machine-readable media.

The I/O components1218may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components1218that are included in a particular machine1200will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components1218may include many other components that are not shown inFIG.12. The I/O components1218are grouped according to functionality merely for simplifying the following discussion and the grouping is in no way limiting. In various example embodiments, the I/O components1218may include output components1226and input components1228. The output components1226may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components1228may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like.

In further example embodiments, the I/O components1218may include biometric components1230, motion components1234, environment components1236, or position components1238among a wide array of other components.
For example, the biometric components1230may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram based identification), and the like. The motion components1234may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environment components1236may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components1238may include location sensor components (e.g., a Global Positioning System (GPS) receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like.

Communication may be implemented using a wide variety of technologies. The I/O components1218may include communication components1240operable to couple the machine1200to a network1232or devices1220via coupling1222and coupling1224, respectively. For example, the communication components1240may include a network interface component or other suitable device to interface with the network1232. In further examples, communication components1240may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices1220may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a Universal Serial Bus (USB)).

Moreover, the communication components1240may detect identifiers or include components operable to detect identifiers. For example, the communication components1240may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components1240, such as location via Internet Protocol (IP) geo-location, location via Wi-Fi® signal triangulation, location via detecting a NFC beacon signal that may indicate a particular location, and so forth.
Glossary

“CARRIER SIGNAL” in this context refers to any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of such instructions. Instructions may be transmitted or received over the network using a transmission medium via a network interface device and using any one of a number of well-known transfer protocols.

“CLIENT DEVICE” in this context refers to any machine that interfaces to a communications network to obtain resources from one or more server systems or other client devices. A client device may be, but is not limited to, a mobile phone, desktop computer, laptop, portable digital assistants (PDAs), smart phones, tablets, ultra books, netbooks, multi-processor systems, microprocessor-based or programmable consumer electronics, game consoles, set-top boxes, or any other communication device that a user may use to access a network.

“COMMUNICATIONS NETWORK” in this context refers to one or more portions of a network that may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, a network or a portion of a network may include a wireless or cellular network and the coupling may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other type of cellular or wireless coupling. In this example, the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard setting organizations, other long range protocols, or other data transfer technology.

“EPHEMERAL MESSAGE” in this context refers to a message that is accessible for a time-limited duration. An ephemeral message may be a text, an image, a video and the like. The access time for the ephemeral message may be set by the message sender. Alternatively, the access time may be a default setting or a setting specified by the recipient. Regardless of the setting technique, the message is transitory.

“MACHINE-READABLE MEDIUM” in this context refers to a component, device or other tangible medium able to store instructions and data temporarily or permanently and may include, but is not limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, optical media, magnetic media, cache memory, other types of storage (e.g., Electrically Erasable Programmable Read-Only Memory (EEPROM)) and/or any suitable combination thereof.
The term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions. The term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions (e.g., code) for execution by a machine, such that the instructions, when executed by one or more processors of the machine, cause the machine to perform any one or more of the methodologies described herein. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” excludes signals per se. “COMPONENT” in this context refers to a device, physical entity or logic having boundaries defined by function or subroutine calls, branch points, application program interfaces (APIs), or other technologies that provide for the partitioning or modularization of particular processing or control functions. Components may be combined via their interfaces with other components to carry out a machine process. A component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components. A “hardware component” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various example embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations as described herein. A hardware component may also be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may be a special-purpose processor, such as a Field-Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC). A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware component may include software executed by a general-purpose processor or other programmable processor. Once configured by such software, hardware components become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations. 
Accordingly, the phrase “hardware component” (or “hardware-implemented component”) should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one instance in time. For example, where a hardware component comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware components) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware component at one instance of time and to constitute a different hardware component at a different instance of time. Hardware components can provide information to, and receive information from, other hardware components. Accordingly, the described hardware components may be regarded as being communicatively coupled. Where multiple hardware components exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware components. In embodiments in which multiple hardware components are configured or instantiated at different times, communications between such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access. For example, one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented component” refers to a hardware component implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented components. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). 
For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an Application Program Interface (API)). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the processors or processor-implemented components may be distributed across a number of geographic locations. “PROCESSOR” in this context refers to any circuit or virtual circuit (a physical circuit emulated by logic executing on an actual processor) that manipulates data values according to control signals (e.g., “commands”, “op codes”, “machine code”, etc.) and which produces corresponding output signals that are applied to operate a machine. A processor may, for example, be a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC) or any combination thereof. A processor may further be a multi-core processor having two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. “TIMESTAMP” in this context refers to a sequence of characters or encoded information identifying when a certain event occurred, for example giving date and time of day, sometimes accurate to a small fraction of a second.
DETAILED DESCRIPTION

In the following description of the various embodiments, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration various embodiments in which aspects of the disclosure may be practiced. It is to be understood that other embodiments may be utilized and structural and functional modifications may be made without departing from the scope of the present disclosure. Aspects of the disclosure are capable of other embodiments and of being practiced or being carried out in various ways. In addition, it is to be understood that the phraseology and terminology used herein are for the purpose of description and should not be regarded as limiting. Rather, the phrases and terms used herein are to be given their broadest interpretation and meaning.

FIG.1shows a system100. The system100may include at least one client device110, at least one database system120, and/or at least one server system130in communication via a network140. It will be appreciated that the network connections shown are illustrative and any means of establishing a communications link between the computers may be used. The existence of any of various network protocols such as TCP/IP, Ethernet, FTP, HTTP and the like, and of various wireless communication technologies such as GSM, CDMA, WiFi, and LTE, is presumed, and the various computing devices described herein may be configured to communicate using any of these network protocols or technologies. Any of the devices and systems described herein may be implemented, in whole or in part, using one or more computing systems described with respect toFIG.2.

Client device110may access server applications and/or resources using one or more client applications (not shown) as described herein. Client device110may be a mobile device, such as a laptop, smart phone, or tablet, or a computing device, such as a desktop computer or a server. Alternatively, client device110may include other types of devices, such as game consoles, camera/video recorders, video players (e.g., incorporating DVD, Blu-ray, Red Laser, Optical, and/or streaming technologies), smart TVs, and other network-connected appliances, as applicable.

Database system120may be configured to maintain, store, retrieve, and update information for server system130. Further, database system120may provide server system130with information periodically or upon request. In this regard, database system120may be a distributed database capable of storing, maintaining, and updating large volumes of data across clusters of nodes. Database system120may provide a variety of databases including, but not limited to, relational databases, hierarchical databases, distributed databases, in-memory databases, flat file databases, XML databases, NoSQL databases, graph databases, and/or a combination thereof.

Server system130may be configured with a server application (not shown) that is capable of interfacing with the client application and database system120as described herein. In this regard, server system130may be a stand-alone server, a corporate server, or a server located in a server farm or cloud-computing environment. According to some examples, server system130may be a virtual server hosted on hardware capable of supporting a plurality of virtual servers. Network140may include any type of network.
For example, network140may include a local area network (LAN), a wide area network (WAN), a wireless telecommunications network, and/or any other communication network or combination thereof.

The data transferred to and from various computing devices in a system100may include secure and sensitive data, such as confidential documents, customer personally identifiable information, and account data. Therefore, it may be desirable to protect transmissions of such data using secure network protocols and encryption, and/or to protect the integrity of the data when stored on the various computing devices. For example, a file-based integration scheme or a service-based integration scheme may be utilized for transmitting data between the various computing devices. Data may be transmitted using various network communication protocols. Secure data transmission protocols and/or encryption may be used in file transfers to protect the integrity of the data, for example, File Transfer Protocol (FTP), Secure File Transfer Protocol (SFTP), and/or Pretty Good Privacy (PGP) encryption. In many embodiments, one or more web services may be implemented within the various computing devices. Web services may be accessed by authorized external devices and users to support input, extraction, and manipulation of data between the various computing devices in the system100. Web services built to support a personalized display system may be cross-domain and/or cross-platform, and may be built for enterprise use. Data may be transmitted using the Secure Sockets Layer (SSL) or Transport Layer Security (TLS) protocol to provide secure connections between the computing devices. Web services may be implemented using the WS-Security standard, providing for secure SOAP messages using XML encryption. Specialized hardware may be used to provide secure web services. For example, secure network appliances may include built-in features such as hardware-accelerated SSL and HTTPS, WS-Security, and/or firewalls. Such specialized hardware may be installed and configured in the system100in front of one or more computing devices such that any external devices may communicate directly with the specialized hardware.

Turning now toFIG.2, a computing device200that may be used with one or more of the computational systems is described. The computing device200may include a processor203for controlling overall operation of the computing device200and its associated components, including RAM205, ROM207, input/output device209, communication interface211, and/or memory215. A data bus may interconnect processor(s)203, RAM205, ROM207, memory215, I/O device209, and/or communication interface211. In some embodiments, computing device200may represent, be incorporated in, and/or include various devices such as a desktop computer, a computer server, a mobile device, such as a laptop computer, a tablet computer, a smart phone, any other types of mobile computing devices, and the like, and/or any other type of data processing device.
Input/output (I/O) device209may include a microphone, keypad, touch screen, and/or stylus through which a user of the computing device200may provide input, and may also include one or more of a speaker for providing audio output and a video display device for providing textual, audiovisual, and/or graphical output. Software may be stored within memory215to provide instructions to processor203allowing computing device200to perform various actions. For example, memory215may store software used by the computing device200, such as an operating system217, application programs219, and/or an associated internal database221. The various hardware memory units in memory215may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Memory215may include one or more physical persistent memory devices and/or one or more non-persistent memory devices. Memory215may include, but is not limited to, random access memory (RAM)205, read only memory (ROM)207, electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store the desired information and that may be accessed by processor203.

Communication interface211may include one or more transceivers, digital signal processors, and/or additional circuitry and software for communicating via any network, wired or wireless, using any protocol as described herein.

Processor203may include a single central processing unit (CPU), which may be a single-core or multi-core processor, or may include multiple CPUs. Processor(s)203and associated components may allow the computing device200to execute a series of computer-readable instructions to perform some or all of the processes described herein. Although not shown inFIG.2, various elements within memory215or other components in computing device200may include one or more caches, for example, CPU caches used by the processor203, page caches used by the operating system217, disk caches of a hard drive, and/or database caches used to cache content from database221. For embodiments including a CPU cache, the CPU cache may be used by one or more processors203to reduce memory latency and access time. A processor203may retrieve data from or write data to the CPU cache rather than reading/writing to memory215, which may improve the speed of these operations. In some examples, a database cache may be created in which certain data from a database221is cached in a separate smaller database in a memory separate from the database, such as in RAM205or on a separate computing device. For instance, in a multi-tiered application, a database cache on an application server may reduce data retrieval and data manipulation time by not needing to communicate over a network with a back-end database server. These types of caches and others may be included in various embodiments, and may provide potential advantages in certain implementations of devices, systems, and methods described herein, such as faster response times and less dependence on network conditions when transmitting and receiving data.
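The database cache described above can be illustrated with a short sketch. In the following Python fragment, the class, table, and query names are illustrative assumptions rather than names taken from the disclosure; query results are kept in an in-memory dictionary so that repeated reads avoid a round trip to the database.

    import sqlite3

    class ReadThroughCache:
        # Keeps query results in an in-memory dict so repeated reads are
        # served from memory instead of another trip to the database.
        def __init__(self, conn):
            self.conn = conn
            self._cache = {}

        def query(self, sql, params=()):
            key = (sql, params)
            if key not in self._cache:  # cache miss: hit the database once
                self._cache[key] = self.conn.execute(sql, params).fetchall()
            return self._cache[key]     # cache hit: no database access

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customer (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO customer VALUES (1, 'Ada')")
    cache = ReadThroughCache(conn)
    print(cache.query("SELECT name FROM customer WHERE id = ?", (1,)))  # [('Ada',)]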
Although various components of computing device200are described separately, functionality of the various components may be combined and/or performed by a single component and/or multiple computing devices in communication without departing from the invention.

FIG.3shows an example system300for providing a cascading data impact visualization tool according to one or more aspects of the disclosure. The system300may comprise a first server305a. The first server305amay be a computing device, such as the computing device200shown inFIG.2. The first server305amay comprise a memory, such as the memory215of the computing device200shown inFIG.2. The memory of the first server305amay comprise one or more applications310, such as the applications219of the computing device200inFIG.2. The applications310may provide one or more business applications and/or services to one or more business users360. A plurality of data resources may be associated with each of the applications310. The data resources may comprise a collection of data tables315stored in a database, such as the database221of the computing device200shown inFIG.2. The data tables315may comprise one or more table columns and data elements. The data resources may also comprise one or more reports320. The reports320may be generated by the applications310based on the data tables315. The reports320may be generated using a reporting tool (e.g., Tableau®) and may comprise, for example, charts, graphs, and other analytic tools for visualizing the data tables315. The applications310may provide the reports320to the one or more business users360, a developer, such as the user345, or other data resources via network375.

The data tables315may be related to each other by specific fields (e.g., table columns, cells, and data elements). As an example, the data tables315may include a customer information table comprising a plurality of columns with customer information, such as customer ID numbers, first names of customers and last names of customers. A first column may comprise the customer ID numbers, a second column may comprise the first names of the customers associated with customer ID numbers in the first column, and a third column may comprise the last names of the customers associated with the customer ID numbers in the first column. The data tables315may also include a customer address table. A first column of the customer address table may comprise customer ID numbers and a second column may comprise the addresses of the customers associated with the customer ID numbers in the first column. Thus, the customer ID numbers column is a related field between the customer information table and the customer address table. One or more of the data tables315may be joined by combining the related data on common fields (e.g., columns). For example, the customer address table may be joined with the customer information table on the common customer ID numbers table columns. The common fields (e.g., table columns) that the tables are joined on may have the same data type. The join of the tables may break if the data type of the common fields (e.g., table columns) is changed after the tables are joined.

One or more data lineages may be determined for the data resources, such as the data tables315and the reports320. The data lineages may be based on the interrelationships and dependencies among the data resources.
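The join on a common field described above can be illustrated with a short sketch. The following Python fragment uses SQLite; the table and column names are illustrative stand-ins for the customer information and customer address tables, not names taken from the disclosure.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customer_info (cust_id INTEGER, first_name TEXT, last_name TEXT);
        CREATE TABLE customer_address (cust_id INTEGER, address TEXT);
        INSERT INTO customer_info VALUES (1, 'Ada', 'Lovelace');
        INSERT INTO customer_address VALUES (1, '12 Analytical Way');
    """)
    # Join the two tables on their common field, the customer ID column.
    rows = conn.execute("""
        SELECT i.first_name, i.last_name, a.address
        FROM customer_info AS i
        JOIN customer_address AS a ON a.cust_id = i.cust_id
    """).fetchall()
    print(rows)  # [('Ada', 'Lovelace', '12 Analytical Way')]

A downstream query such as this one is exactly the kind of consumer that can break if the type or shape of the common field is changed after the join is established.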
As discussed above, one or more of the data tables315may be related by specific fields (e.g., table columns). Additionally, the reports320may be generated based on one or more of the data tables315. The data lineage(s) of a table column may indicate where the data of the table column originates from and the downstream dependencies of that table column. Thus, the data lineage for a data resource may indicate the interrelationships between the data resource and other data resources. In the example above, the related field (e.g., table column) between the customer information table and the customer address table is the customer ID numbers column. Thus, a data lineage for the customer ID numbers column in the customer address table may indicate a relationship with the customer ID numbers column in the customer information table. A data lineage for a data resource may also track a flow of data from a source to the data resource (e.g., a target or destination). For example, the data lineage of a table column may track the flow of data from the table column (e.g., source) to a target, such as a report. An original source may be a table column that is referenced by other resources but does not reference the data of any other resource. The target may be a data resource that consumes the data in the table column, for example another table column or a report.

FIG.4shows three data lineages for a report405associated with a business service or application. The data lineages for the report405may track a flow of data (or interrelationships between data) from one or more original source data resources to the report405(e.g., target or destination). The data resources indicated by the data lineages may be associated with different levels of a database, for example levels D1, D2A, D2B, etc. As shown inFIG.4, the first data lineage of the target report405may comprise two tables at a D1 level of a database, such as a first D1 table410aand a second D1 table410b. The second data lineage may comprise two tables at a D2A level of the database, such as a first D2A table415aand a second D2A table415b. The third data lineage may comprise two tables in a D2B level of the database, such as a first D2B table420aand a second D2B table420b. Additionally, the data lineages may also comprise data in a first PL table425aand a second PL table425b.

The first data lineage may track the flow of data from an original source D1 table410ato the report405as: (D1 table410a)→(D2A table415a)→(D2B table420a)→(PL table425a)→(report405). One or more original source data462in the D1 table410amay be referenced or utilized by the D2A table415a. For example, a column in the D1 table410amay be referenced by a column in the D2A table415a. As another example, the values of a column in the D2A table415amay be generated based on data in a column of the D1 table410a. The first data lineage also indicates a dependency between the D2A table415aand the D2B table420a. For example, one or more data464in the D2A table415amay be referenced or utilized by the D2B table420a. The first data lineage also indicates the one or more data466in the D2B table420amay be referenced or utilized by the PL table425a. Finally, the data lineage indicates that the report405may be generated based on data470in the PL table425a. For example, the data in the first and second PL tables425aand425bmay be associated with one or more wrappers for Procedural Language extensions to SQL (PL/SQL) source code.
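A data lineage such as the first chain above can be represented compactly as a mapping from each target to its sources. The following Python fragment is a minimal sketch; the resource names mirror the reference numerals ofFIG.4, but the mapping structure itself is an assumption, not the disclosed implementation.

    # Map each target resource to the sources it consumes; an original source
    # (here, the D1 table) has no entry of its own.
    SOURCES = {
        "report 405": ["PL table 425a"],
        "PL table 425a": ["D2B table 420a"],
        "D2B table 420a": ["D2A table 415a"],
        "D2A table 415a": ["D1 table 410a"],
    }

    def lineage(resource, sources=SOURCES):
        # Walk upstream, depth-first, back to the original source(s).
        chain = [resource]
        for src in sources.get(resource, []):
            chain += lineage(src, sources)
        return chain

    print(" <- ".join(lineage("report 405")))
    # report 405 <- PL table 425a <- D2B table 420a <- D2A table 415a <- D1 table 410a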
The second data lineage for the report405may track the flow of data from an original source D1 table410bto the report405as: (D1 table410b)→(PL table425a)→(report405). The third data lineage for the report405may track the flow of data from an original source D2A table415bto the report405as: (D2A table415b)→(D2B table420b)→(PL table425b)→(report405). For example, one or more original source data472in the D2A table415bmay be referenced or utilized by the D2B table420b. The third data lineage also indicates that one or more data474in the D2B table420bmay be referenced or utilized by PL table425b. The third data lineage also indicates that the report405may be generated based on one or more data476in the PL table425b.

The data lineage of a data resource may track a flow of data at a table or column level. For example, a column level data lineage may indicate the data resources at the column level as: L0[S3]↔L1↔D2A Table↔D2B Table/PL Table↔Tableau Report. L0[S3] may be a data element S3 within a column L0 in a D1 Table. L0[S3] may be referenced by a column L1 in the D1 Table. The L1 column may be referenced by a D2A Table. The D2A Table may be referenced by a D2B Table and a PL Table. The D2B Table and the PL Table may be utilized to generate a Tableau Report.

Referring back toFIG.3, the system300may comprise a second server305b. The second server305bmay be a computing device, such as the computing device200shown inFIG.2. The second server305bmay comprise a memory, such as the memory215shown inFIG.2. The memory of the second server305bmay comprise a data impact visualization application325that may be configured to determine and/or generate a visualization of data or resources associated with one or more business services and/or applications based on data lineages330associated with the data tables315and stored in a database, such as the database221shown inFIG.2. The data impact visualization application325may also be configured to provide a user interface (UI) that may be displayed on a display of a client device340for a user345. The client device340may be a computing device, such as the computing device200shown inFIG.2.

FIG.5shows an example of a user interface500that may be provided by the data impact visualization application325. The user interface500may display a visualization of one or more business applications and/or services provided by the applications310inFIG.3. The visualization may indicate data resources and interrelationships between the data resources. As shown inFIG.5, the visualization may indicate the names of the one or more business applications and/or services (services505), such as “Art,” “Bus,” “Birds,” and “Best.” The visualization may indicate the data tables315and reports320that are associated with each of the services505, for example, the D1 tables510, the D2A tables515, the D2B tables520, and the reports525. The visualization may indicate the interrelationships between the D1 tables510, the D2A tables515, the D2B tables520, and the reports525for each of the services505. As an example, the data resources associated with the service “Art” include a D1 table510, ART_RM_WRK_LOC, and a D2A table515, RM_WORK_LOC_LOG. This may indicate that one or more original source data in the ART_RM_WRK_LOC table may be referenced by the D2A table515, RM_WORK_LOC_LOG. No other data resources for the service “Art” directly or indirectly reference the D1 table510, ART_RM_WRK_LOC, or the D2A table515, RM_WORK_LOC_LOG.
As another example, the data resources associated with the service “Best” include several D1 tables510such as BEST_LS_1099_STGG, BEST_LS_AC_CHRG_OFF_ACCT, BEST_LS_AC_CHRG_OFF_ACCT_HI, BEST_LS_AC_CHRG_GL_FEED, BEST_LS_AC_CHRG_OFF_LRI_HIST, BEST_LS_AC_CHRG_OFF_RCVRY, BEST_LS_ACCT_ADR, and BEST_LS_ACCT_CHRG_OFF_GL_JR (“Best” D1 tables510). One or more original source data in each of the “Best” D1 tables510may be referenced by one or more D2A tables515. For example, one or more original source data in the “Best” D1 table510, BEST_LS_1099_STGG, may be referenced by the D2A table LEGL_1099_STGG_FACT. One or more original source data in another “Best” D1 table510, BEST_LS_AC_CHRG_OFF_ACCT, may be referenced by at least three of the D2A tables, such as CHRGOF_ACCT_FACT, CHRGOF_ACCT_HIST_FACT, and CHRGOF_RCVRY_FACT. One or more of the D2A tables515may be referenced by one or more of the D2B tables520. For example, one or more data in the D2A table515, CHRGOF_ACCT_FACT, may be referenced by the D2B table520, PL_SALES_TAX_RFND. One or more reports525associated with the service “Best” may be generated based on the D2B tables520. For example, a report525, Sales Tax Refund SF, may be generated based on the D2B table520, PL_SALES_TAX_RFND. Thus, one of the data lineages or flow of data for the report525associated with the service “Best” may be indicated as: (BEST_LS_AC_CHRG_OFF_ACCT)→(CHRGOF_ACCT_FACT)→(PL_SALES_TAX_RFND)→(Sales Tax Refund SF).

The user interface500may provide one or more pull-down menus for selecting one or more of the D1 tables510, the D2A tables515, the D2B tables520, and the reports525. For example, one of the D1 tables510may be selected via a D1_TB_NM pull-down menu530, one of the D2A tables may be selected via a D2A_TB_NM pull-down menu532, one of the D2B tables may be selected via a D2B_TB_NM pull-down menu534, and one of the reports525may be selected via a TBL_RPT_NM pull-down menu540.

The user interface500may display various statistics related to the displayed visualization, such as a total number of the D1 tables510, the D2A tables515, the D2B tables520, and the reports525associated with the displayed services505. For example, the user interface500may comprise a first graphic, image, or icon550indicating that there are 423 D1 tables510associated with the services505. A second graphic, image, or icon552may indicate that there are 373 D2A tables515associated with the services505. A third graphic, image, or icon554may indicate that there are 137 D2B tables520. The total number of the D1 tables510, the D2A tables515, the D2B tables520, and the reports525associated with the displayed services505may be updated based on changes to the displayed visualization. For example, filtering based on a table, column, or report name may update the visualization and thus, the number of data resources indicated by the graphics550,552, and554.

The visualization of data displayed in the user interface500may be updated based on a user selection of one or more of the D1 tables510, the D2A tables515, the D2B tables520, and the reports525. As discussed above, the user interface500may provide pull-down menus for selecting one or more of the D1 tables510, the D2A tables515, the D2B tables520, and the reports525.FIG.6shows a result of table level filtering within a user interface600. The user interface600may provide one or more pull-down menus630,632,634, and640, for selecting respectively one or more of the D1 tables610, the D2A tables615, the D2B tables620, and the reports625.
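The table-level filtering just described reduces the visualization to the lineage rows that pass through a selected table. The following Python fragment is a minimal sketch; the row layout is an assumption, and the table names are drawn from the figures purely as sample data.

    # One row per lineage: (service, D1 table, D2A table, D2B table, report).
    ROWS = [
        ("Best", "BEST_LS_1099_STGG", "LEGL_1099_STGG_FACT", None, None),
        ("Best", "BEST_LS_BADCARPAYACCOUNTS", "ACCT_LOAN_DIM", None, None),
        ("Acme", "ACME_LOAN_MASTER", "ACCT_LOAN_DIM", "METRC_ACCT_VR_SL_TAT",
         "Vehicle Remarketing Daily Huddle SF"),
    ]

    def filter_by_d2a(rows, d2a_table):
        # Table-level filter: keep only lineage rows that pass through the
        # D2A table chosen from the pull-down menu.
        return [row for row in rows if row[2] == d2a_table]

    for row in filter_by_d2a(ROWS, "ACCT_LOAN_DIM"):
        print(row)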
The user interface600shows a result of selecting a D2A table “ACCT_LOAN_DIM,” via the D2A_TBL_NM pull-down menu632. The data in the user interface600may be filtered based on the interrelationships between the “ACCT_LOAN_DIM” table and other data resources. For example, the D1 tables610may be updated to show the names of tables that are directly or indirectly referenced by the “ACCT_LOAN_DIM” table. The D2B tables620may be updated to show the names of tables that directly or indirectly reference one or more data in the “ACCT_LOAN_DIM” table. The reports625may be updated to show the names of reports that directly or indirectly reference one or more data in the “ACCT_LOAN_DIM” table. For example, for the business application or service “Best,” the D2A table “ACCT_LOAN_DIM” references one or more data located in two D1 tables610, “BEST_LS_BADCARPAYACCOUNTS,” and “BEST_LS_LAWI_ACCTS.” Additionally, there are no D2B tables620that reference the D2A table “ACCT_LOAN_DIM.” There are no reports625that are generated based on data in the D1 tables610and the “ACCT_LOAN_DIM” table. As another example, for the service “Acme,” the D2A table “ACCT_LOAN_DIM” references one or more data in the D1 table ACME_LOAN_MASTER. One or more data in the D2A table “ACCT_LOAN_DIM” is referenced by several D2B tables620such as the D2B table “METRC_ACCT_VR_SL_TAT.” A report625named “Vehicle Remarketing Daily Huddle SF” may be generated for the service “Acme” based on one or more original source data in the D1 table “ACME_LOAN_MASTER,” one or more data in the D2A table “ACCT_LOAN_DIM,” and one or more data in the D2B table “METRC_ACCT_VR_SL_TAT.”

FIG.7shows a result of table level filtering within a user interface700. The user interface700shows a result of selecting a D2B table “PL_KS_10D_NTC_LTR,” via the D2B_TBL_NM pull-down menu734. The visualization of the services705displayed in the user interface700may be filtered based on the interrelationships between the selected D2B table “PL_KS_10D_NTC_LTR” and other data resources. For example, the services705may be updated to show the services that directly or indirectly reference one or more data in the D2B table “PL_KS_10D_NTC_LTR.” The D1 tables710may be updated to show the names of tables that are directly or indirectly referenced by the D2B table “PL_KS_10D_NTC_LTR.” The D2A tables715may be updated to show the names of D2A tables that are directly or indirectly referenced by the D2B table “PL_KS_10D_NTC_LTR.” The reports725may be updated to show the names of reports that directly or indirectly reference one or more data in the D2B table “PL_KS_10D_NTC_LTR.” For example, for the service “Best,” the updated visualization in the user interface700shows that the D2B table “PL_KS_10D_NTC_LTR” may reference one or more data located in a D2A table “CUST_SPCL_HNDLG_RQST_FACT.” Additionally, the D2A table “CUST_SPCL_HNDLG_RQST_FACT” may reference a D1 table “BEST_LS_CUST_SPCL_HNDLG_RQST.” The updated reports725show two reports “Correspondence_KS10daybrwr” and “Correspondence_KS10daycbrwr” that may each be generated based on one or more original source data in the D1 table “BEST_LS_CUST_SPCL_HNDLG_RQST,” one or more data in the D2A table “CUST_SPCL_HNDLG_RQST_FACT,” and one or more data in the D2B table “PL_KS_10D_NTC_LTR.”

FIG.8shows a result of column level filtering in a user interface800. The visualization may also display the column names within the D1 tables810, D2A tables815, D2B tables820, and reports825.
For example, the D1 columns811may indicate columns within the D1 tables810, the D2A columns816may indicate columns within the D2A tables815, and the D2B columns821may indicate columns within the D2B tables820. The user interface800may provide pull-down menus for selecting one or more of the D1 tables810, D1 columns811, D2A tables815, D2A columns816, D2B tables820, D2B columns821, and reports825. For example, one of the D1 tables810may be selected via a D1_TB_NM pull-down menu830, one of the D1 columns811may be selected via a D1_COL_NM pull-down menu835, one of the D2A tables815may be selected via a D2A_TB_NM pull-down menu832, one of the D2A columns816may be selected via a D2A_COL_NM pull-down menu831, one of the D2B tables820may be selected via a D2B_TB_NM pull-down menu834, one of the D2B columns821may be selected via a D2B_COL_NM pull-down menu833, and one of the reports825may be selected via a TBL_RPT_NM pull-down menu (not shown).

The user interface800shows a result of selecting a D1 column “APPID,” via the D1_COL_NM pull-down menu835. The visualization of the services (not shown) displayed in the user interface800may be filtered based on the interrelationships between the selected D1 column “APPID” and other data resources. For example, the services may be updated to show the services that directly or indirectly reference one or more data in the D1 column “APPID.” The D1 tables810may be updated to show the name of the table corresponding to the D1 column “APPID” such as “ACME_LOAN_MASTER.” The D2A tables815may be updated to show the names of the D2A tables, such as “ACCT_DLY_SNAP_FACT” and “ACCT_LOAN_DIM,” that directly or indirectly reference one or more data in the D1 column “APPID” of the D1 table “ACME_LOAN_MASTER.” The D2A columns816may be updated to indicate the specific columns in the D2A tables “ACCT_DLY_SNAP_FACT” and “ACCT_LOAN_DIM” that directly or indirectly reference one or more data in the D1 column “APPID” of the D1 table “ACME_LOAN_MASTER.” For example, the D2A columns816indicate that the columns “APPN_DIM_ID” and “APPN_ID” in the D2A table “ACCT_DLY_SNAP_FACT” may reference data in the D1 column “APPID” of the D1 table “ACME_LOAN_MASTER.” The D2B tables820may be updated to indicate the D2B tables that reference data in the updated D2A columns816. For example, one or more data in the D2A columns “APPN_DIM_ID” and “APPN_ID” in the D2A table “ACCT_DLY_SNAP_FACT” may be referenced by the D2B table “PL_ST_PERFTB_PERFNC_MN_.” The D2B columns821may be updated to indicate the specific columns in the updated D2B tables820. For example, the D2B columns821indicate that a column “TTL_TYPE_ID” in the D2B table “PL_ST_PERFTB_PERFNC_MN_” may reference one or more data in the D2A columns “APPN_DIM_ID” and “APPN_ID.” Although not shown in the user interface800, one or more reports generated based on data in the D1 column “APPID” may also be indicated.

Referring back toFIG.3, the data impact visualization application325may be configured to determine an impact of a data change on one or more business applications and/or services. The data change may represent a proposed modification to a data resource associated with a business application or service, such as a change to the schema, data type, and/or attribute related to a table, column, and/or data element. The data change may represent an addition of a table, column, and/or data element. As an example, a single customer name column of a customer information table may include both the first and last names of the customers.
A developer of a business application or service may wish to modify the customer information table so that the first name of a customer and the last name of the customer are located in two separate table columns, for example, a customer first name column and a customer last name column. Prior to implementing the modification, the developer may determine an impact of the modification on other data resources that may depend on the data to be modified (e.g., the customer name column). For example, a downstream data resource that references the customer name column may break when the customer name column changes, if it is not updated to handle the first name of a customer and the last name of the customer in two separate table columns. The data lineages for the customer name column may indicate other tables and/or reports that directly or indirectly reference the customer name column. The data impact visualization application325may, based on the data lineages, determine the dependencies on the customer name column. The dependencies may be determined by utilizing the name of the customer information table as a primary key. The dependencies may indicate the reports, tables and/or data resources referencing the customer name column of the customer information table. Determining the dependencies may include post processing a target data resource, such as a report. The post processing may identify one or more data lineages of the customer name column based on the name of the customer information table as a primary key. For example, one of the data lineages of the customer name column may indicate that a customer address table relies on the customer name column of the customer information table. Another data lineage may indicate that a customer transactions table relies on the customer name column of the customer information table. Thus, both the customer address table and the customer transactions table may need to be updated to handle the proposed modification to the customer name column.

As another example of a data change, a customer of a business application and/or service, such as a business owner, may wish to change the maximum number of characters for a customer name from 50 to 70. A customer information table may include a customer name column. The data lineages of the customer information table may indicate that both a customer address table and/or a customer transactions table depend on the customer name column of the customer information table. Therefore, the data lineages may indicate that the customer information table is a source for the customer address table and the customer transactions table. The data lineages of the customer information table may identify other table columns that depend on or reference the customer transactions table and the customer address table. Thus, prior to implementing the proposed modification, the maximum number of characters of a customer name column in the customer transactions table may need to be modified to support at least 70 characters. This improves the efficiency of modifying/updating databases to prevent errors, reduce downtime, and provide higher quality service for customers.

As another example, the data change may represent a corruption of one or more data, or unavailability of one or more data, such as a value in a table column that is referenced by other tables or data resources.
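The dependency determination described above amounts to a transitive walk over lineage edges. The following Python fragment is a minimal sketch; the edge map and resource names follow the customer-table example but are otherwise illustrative assumptions, not the disclosed implementation.

    # Dependency edges point from a resource to the resources that reference it.
    DEPENDENTS = {
        "customer_info.customer_name": ["customer_address", "customer_transactions"],
        "customer_transactions": ["monthly_report"],
    }

    def impacted(changed, deps=DEPENDENTS):
        # Transitive closure: everything that directly or indirectly references
        # the changed resource and may need updating before the change is made.
        seen, stack = set(), [changed]
        while stack:
            for dep in deps.get(stack.pop(), []):
                if dep not in seen:
                    seen.add(dep)
                    stack.append(dep)
        return seen

    print(sorted(impacted("customer_info.customer_name")))
    # ['customer_address', 'customer_transactions', 'monthly_report']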
As another example, the data change may represent a corruption of one or more data, or unavailability of one or more data, such as a value in a table column that is referenced by other tables or data resources. For example, a modification to a table column or data element may result in a failure of a downstream tool that depends on the modified table column. A support user may provide information related to the failure to the data impact application, for example, a report that shows an error. Based on the information related to the failure, a support user may utilize the data impact visualization application325to determine, for example, that the maximum number of characters in the customer name column may be a possible reason for the error or failure. The support user may visualize the data dependencies that may be related to the error and resolve the error based on identifying downstream resources that may need to be modified to handle the modified maximum number of characters in the customer name column. In this manner, the data impact visualization application325helps identify and correct database linking errors in a timely and more efficient manner, thereby improving the speed and efficiency of linked databases. FIG.9is a flow diagram of an example method900for determining and visualizing an impact of a data change. The steps of the method900may be performed by the data impact visualization application325shown inFIG.3. Alternatively or additionally, some or all of the steps of the method900may be performed by one or more other computing devices. Steps of the method900may be modified, omitted, and/or performed in other orders, and/or other steps added. A user, such as the user345, may interact with a user interface provided by the data impact visualization application325and displayed on the client device340of the user345. The user345may, within the user interface, select to display a visualization of a set of business applications and/or services provided by the applications310. For example, the user selected set of services may include “Art,” “Bus,” “Birds,” and “Best.” At step905, a server (e.g., the data impact visualization application325executing on server305b) may, based on receiving a user request to display a visualization of the selected services “Art,” “Bus,” “Birds,” and “Best,” retrieve one or more of the data lineages330that correspond to the data tables315and reports320associated with the selected services. The retrieved data lineages may comprise intermediate data lineages between one or more source data resources and one or more target data resources. At step910, the server (e.g., the data impact visualization application325executing on server305b) may, based on the retrieved data lineages, generate a visualization of the data resources associated with the user selected services.FIG.10Ashows a visualization of the user selected services within a user interface1000. As shown inFIG.10A, the visualization may indicate the names of the one or more business applications and services (referred to as services1005), such as “Art,” “Bus,” “Birds,” and “Best.” The visualization may indicate the data tables315and the reports320that are associated with each of the services1005, such as the D1 tables1010, the D2A tables1015, the D2B tables1020, and the reports1025. The visualization may indicate the interrelationships between one or more source and target data resources associated with each of the services1005. The target data resources may be reports, other tables and/or table columns. As an example, the data resources associated with the service “Art” include a D1 table1010ART_RM_WRK_LOC and a D2A table1015RM_WORK_LOC_LOG. 
This indicates that one or more original source data related to the service “Art” is located in the ART_RM_WRK_LOC table and one or more data in the ART_RM_WRK_LOC table is referenced by the D2A table1015RM_WORK_LOC_LOG.FIG.10Ashows that there are no D2B tables1020or reports1025associated with the service “Art.” As another example, the data resources associated with the service “Best” include several D1 tables1010such as BEST_LS_1099_STGG, BEST_LS_AC_CHRG_OFF_ACCT, BEST_LS_AC_CHRG_OFF_ACCT_HI, BEST_LS_AC_CHRG_GL_FEED, BEST_LS_AC_CHRG_OFF_LRI_HIST, BEST_LS_AC_CHRG_OFF_RCVRY, BEST_LS_ACCT_ADR, and BEST_LS_ACCT_CHRG_OFF_GL_JR. One or more original source data in each of the D1 tables1010may be referenced by one or more of the D2A tables1015associated with the service “Best.” For example, one or more original source data in the D1 table BEST_LS_1099_STGG may be referenced by a D2A table1015LEGL_1099_STGG_FACT. One or more original source data in another D1 table1010BEST_LS_AC_CHRG_OFF_ACCT may be referenced by three D2A tables such as CHRGOF_ACCT_FACT, CHRGOF_ACCT_HIST_FACT, and CHRGOF_RCVRY_FACT. One or more of the D2A tables1015may be referenced by one or more of the D2B tables1020. For example, one or more data in the D2A table1015CHRGOF_ACCT_FACT may be referenced by the D2B table1020PL_SALES_TAX_RFND. One or more reports1025may be generated based on one or more data in the D2B tables1020. For example, a report1025such as Sales Tax Refund SF may be generated based on the D2B table1020PL_SALES_TAX_RFND. Thus, for the service “Best”, the visualization indicates a data lineage of the report Sales Tax Refund SF from the original source D1 table1010BEST_LS_AC_CHRG_OFF_ACCT to the destination Sales Tax Refund SF report1025. The user interface1000may provide pull-down menus for selecting one or more of the D1 tables1010, the D2A tables1015, the D2B tables1020, and the reports1025. For example, one of the D1 tables1010may be selected via a D1_TB_NM pull-down menu1030, one of the D2A tables may be selected via a D2A_TB_NM pull-down menu1032, one of the D2B tables may be selected via a D2B_TB_NM pull-down menu1034, and one of the reports1025may be selected via a TBL_RPT_NM pull-down menu1040. The user interface1000may display various statistics related to the displayed visualization, such as a total number of the D1 tables1010, the D2A tables1015, the D2B tables1020, and the reports1025associated with the displayed services1005. For example, the user interface1000may comprise a first graphic1050indicating that there are 423 D1 tables1010associated with the services1005. A second graphic1052may indicate that there are 373 D2A tables1015associated with the services1005. A third graphic1054may indicate that there are 137 D2B tables1020. The total number of the D1 tables1010, the D2A tables1015, the D2B tables1020, and/or the reports1025associated with the displayed services1005may be updated based on changes to the displayed visualization. For example, filtering based on a table, column, or report name may update the visualization and thus the number of data resources indicated by the three graphics1050,1052, and1054. Returning toFIG.9, at step915, the server (e.g., the data impact visualization application325executing on server305b) may receive a user request to visualize, based on a user selection of a data resource, an impact to the user selected set of services. The user request may be based on a user selection of a D2B table “PL_KS_EARLY_PAYOFF_LTR” via the pull-down menu1034inFIG.10A. 
The user selection of the D2B table “PL_KS_EARLY_PAYOFF_LTR” may represent a corruption of one or more data in the D2B table “PL_KS_EARLY_PAYOFF_LTR,” unavailability of one or more data in the D2B table “PL_KS_EARLY_PAYOFF_LTR,” and/or a proposed modification to one or more data in the D2B table “PL_KS_EARLY_PAYOFF_LTR”. At step920, the server (e.g., the data impact visualization application325executing on server305b) may determine, based on the data lineages330, one or more interrelationships between the D2B table “PL_KS_EARLY_PAYOFF_LTR” and other data resources associated with the services1005. Based on the one or more interrelationships between the selected data resource and the other data resources, the server (e.g., the data impact visualization application325executing on server305b) may determine one or more services and/or data elements affected by the change to the D2B table “PL_KS_EARLY_PAYOFF_LTR”. At step925, the server (e.g., the data impact visualization application325executing on server305b) may update, based on the one or more services and data elements that may be affected by the proposed change to the D2B table “PL_KS_EARLY_PAYOFF_LTR,” the visualization displayed in the user interface1000shown inFIG.10A. Turning toFIG.10B, an example of an updated visualization displayed within a user interface1001after the selection of the D2B table “PL_KS_EARLY_PAYOFF_LTR” is shown. The updated visualization may identify the D1 tables1010, the D2A tables1015, the D2B tables1020, and/or the reports1025that depend on or reference the D2B table “PL_KS_EARLY_PAYOFF_LTR.” The updated visualization may include updates to the statistics, such as a total number of the D1 tables1010, the D2A tables1015, the D2B tables1020, and/or the reports1025associated with the displayed services1005. For example, the user interface1000may update the first graphic1050indicating that there are 11 D1 tables1010associated with the updated visualization. The second graphic1052may be updated to indicate 10 D2A tables1015. The third graphic1054may be updated to indicate 1 D2B table1020. Thus, prior to implementing a proposed modification to one or more data in the D2B table “PL_KS_EARLY_PAYOFF_LTR,” some of the data resources identified in the updated visualization shown inFIG.10Bmay need to be updated and/or modified to handle the proposed change to the D2B table “PL_KS_EARLY_PAYOFF_LTR.” Otherwise, some of the downstream data resources that consume data related to the proposed change to the D2B table “PL_KS_EARLY_PAYOFF_LTR” may be negatively affected. In some implementations, the user interface provided by the data impact visualization application325may comprise an option to generate and send a notification to one or more users, for example the business users360, associated with the data resources that may be affected by the proposed change to the D2B table “PL_KS_EARLY_PAYOFF_LTR.” 
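Steps915through925can be pictured as the reverse of the earlier downstream walk: starting from the selected resource, follow lineage edges upstream and recompute the per-tier counts shown in the summary graphics. The following is a minimal sketch under that reading; the tier prefixes, the edge list, and the impacted_resources helper are hypothetical, not the application's actual data model.

from collections import defaultdict, deque

# Hypothetical lineage edges between tiers: D1 -> D2A -> D2B.
EDGES = [
    ("D1.BEST_LS_AC_CHRG_OFF_ACCT", "D2A.CHRGOF_ACCT_FACT"),
    ("D2A.CHRGOF_ACCT_FACT", "D2B.PL_KS_EARLY_PAYOFF_LTR"),
    ("D1.BEST_LS_1099_STGG", "D2A.LEGL_1099_STGG_FACT"),
]

def impacted_resources(selected, edges):
    """Walk lineage edges upstream from the selected resource and return
    the affected resources plus per-tier counts for the summary graphics."""
    parents = defaultdict(list)
    for src, dst in edges:
        parents[dst].append(src)
    seen, queue = {selected}, deque([selected])
    while queue:
        for parent in parents[queue.popleft()]:
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    counts = defaultdict(int)
    for name in seen:
        counts[name.split(".", 1)[0]] += 1  # tier prefix: D1 / D2A / D2B
    return seen, dict(counts)

affected, tier_counts = impacted_resources("D2B.PL_KS_EARLY_PAYOFF_LTR", EDGES)
print(tier_counts)  # e.g. {'D2B': 1, 'D2A': 1, 'D1': 1}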
FIGS.11A and11Bshow another example of the data impact visualization application325determining and visualizing an impact of a data change according to the method900inFIG.9.FIG.11Ashows a column level visualization of one or more services that may be provided by the data impact visualization application325within a user interface1100. The visualization may display the names of D1 tables1110, D1 columns1111, D2A tables1115, D2A columns1116, D2B tables1120, D2B columns1121, and reports1125. The user interface1100may provide pull-down menus for selecting one or more of the D1 tables1110, D1 columns1111, D2A tables1115, D2A columns1116, D2B tables1120, D2B columns1121, and reports1125. For example, one of the D1 tables1110may be selected via a D1_TB_NM pull-down menu1130, one of the D1 columns1111may be selected via a D1_COL_NM pull-down menu1135, one of the D2A tables1115may be selected via a D2A_TB_NM pull-down menu1132, one of the D2A columns1116may be selected via a D2A_COL_NM pull-down menu1131, one of the D2B tables1120may be selected via a D2B_TB_NM pull-down menu1134, one of the D2B columns1121may be selected via a D2B_COL_NM pull-down menu1133, and one of the reports1125may be selected via a TBL_RPT_NM pull-down menu (not shown). The user interface1100may also provide multiple graphics1150,1151,1152,1153,1154, and1155indicating a total number of D1 tables1110, D1 columns1111, D2A tables1115, D2A columns1116, D2B tables1120, and D2B columns1121. A D2B table1120“PL_KS_EARLY_PAYOFF_LTR” may be selected via the pull-down menu1134.FIG.11Bshows an updated visualization provided by the data impact visualization application325within the user interface1100based on the selection of the D2B table “PL_KS_EARLY_PAYOFF_LTR.” As discussed above, the user interface provided by the data impact visualization application may display various statistics for the data resources related to the displayed visualization, such as a total number of D1 tables, D2A tables, D2B tables, and reports.FIG.12shows a table level visualization comprising various statistics and displayed in a user interface1200after a selection of a D2A table “ACCT_LOAN_DIM.” As shown inFIG.12, the visualization may indicate the names of the one or more business applications and services (referred to as services1205), such as “Art,” “Bus,” “Birds,” and “Best.” The visualization may indicate the data tables315and the reports320that are associated with each of the services1205, such as the D1 tables1210, the D2A tables1215, the D2B tables1220, and the reports1225. The visualization may indicate the interrelationships between one or more source and target data resources associated with each of the services1205. The target data resources may be reports, other tables and/or table columns. The user interface1200also includes pull-down menus1230,1232,1234, and1240that are similar or equivalent to the pull-down menus1030,1032,1034, and1040shown in the user interface1000inFIG.10A. The user interface1200may comprise a first graphic1250indicating a total number of D1 tables1210associated with the displayed visualization after the selection of the D2A table “ACCT_LOAN_DIM.” The user interface1200may comprise a second graphic1252indicating a total number of D2A tables1215associated with the displayed visualization after the selection of the D2A table1215“ACCT_LOAN_DIM.” The user interface1200may comprise a third graphic1254indicating a total number of D2B tables1220associated with the displayed visualization after the selection of the D2A table1215“ACCT_LOAN_DIM.” FIG.13shows a column level visualization comprising various statistics and displayed in a user interface1300after a selection of a D2A table “ACCT_LOAN_DIM.” The visualization may display the names of D1 tables1310, D1 columns1311, D2A tables1315, D2A columns1316, D2B tables1320, D2B columns1321, and reports1325. 
The user interface1300may provide pull-down menus1350,1351,1352,1353,1354, and1355for selecting one or more of the respective D1 tables1310, D1 columns1311, D2A tables1315, D2A columns1316, D2B tables1320, and D2B columns1321. The user interface1300may comprise a first graphic1350indicating a total number of D1 tables1310associated with the displayed visualization after the selection of the D2A table1315“ACCT_LOAN_DIM.” The user interface1300may comprise a second graphic1351indicating a total number of D1 columns1311associated with the displayed visualization after the selection of the D2A table1315“ACCT_LOAN_DIM.” The user interface1300may comprise a third graphic1352indicating a total number of D2A tables1315associated with the displayed visualization after the selection of the D2A table1315“ACCT_LOAN_DIM.” The user interface1300may comprise a fourth graphic1353indicating a total number of D2A columns1316associated with the displayed visualization after the selection of the D2A table1315“ACCT_LOAN_DIM.” The user interface1300may comprise a fifth graphic1354indicating a total number of D2B tables1320associated with the displayed visualization after the selection of the D2A table1315“ACCT_LOAN_DIM.” The user interface1300may comprise a sixth graphic1355indicating a total number of D2B columns1321associated with the displayed visualization after the selection of the D2A table1315“ACCT_LOAN_DIM.” In some implementations, the user interface provided by the data impact visualization application325may provide statistics related to queries (filtering), usage, and modifications of the data resources. For example, the data impact visualization application325may determine a number of users querying a data resource, such as a table or report. The querying may be performed by the business customers or users of a business application or service. The querying may be performed by other applications310or third party applications. The data impact visualization application325may determine a length of time for each query. As an example, the data for a table may indicate that a query related to this table may take 200 ms because of the dependencies between that table and other tables or reports. Based on the number of users, the length of time of each query, and a threshold for a maximum amount of time set for a type of query, the dependencies between data resources may be identified for optimization, thereby reducing cost and improving the overall efficiency of the database. In some implementations, a machine learning algorithm may be trained on the statistics provided by the data impact visualization application325in order to determine recommendations and to discover previously unknown insights and patterns. 
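As a rough illustration of how such statistics might drive optimization, the sketch below flags resources whose average query time exceeds a threshold, weighted by how many users the slow queries touch. The statistics, the threshold, and the weighting are invented for illustration and are not prescribed by the application.

# Hypothetical per-resource query statistics gathered by the application.
QUERY_STATS = {
    "ACCT_LOAN_DIM":    {"users": 42, "avg_query_ms": 200},
    "CHRGOF_ACCT_FACT": {"users": 7, "avg_query_ms": 35},
}

def optimization_candidates(stats, max_ms):
    """Flag resources whose average query time exceeds the threshold,
    ordered by how many users the slow queries affect."""
    flagged = [
        (name, s["users"] * s["avg_query_ms"])
        for name, s in stats.items()
        if s["avg_query_ms"] > max_ms
    ]
    return sorted(flagged, key=lambda item: item[1], reverse=True)

print(optimization_candidates(QUERY_STATS, max_ms=100.0))  # [('ACCT_LOAN_DIM', 8400)]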
As discussed above, one or more data lineages may be determined for the data resources, such as the data tables315and the reports320, associated with the business applications and services provided by the applications310. The data lineages are based on the interrelationships among the data resources. For example, one or more of the data tables315may be related by specific fields (table columns). Additionally, one or more reports320may be generated based on one or more of the data tables315. The data lineages330for the data resources associated with a service may be generated based on specifying one or more of the data resources that comprise original source data and one or more of the target data resources.FIG.14shows an example of a data lineage document1400specifying a plurality of original source data resources and a plurality of target data resources.FIG.15shows an example view1500of data lineages based on processing one or more data lineage documents, such as the data lineage document1400shown inFIG.14.FIG.16shows another example view1600of data lineages generated based on processing one or more data lineage documents, such as the data lineage document shown inFIG.14.FIG.17shows an example routine that may be utilized for generating a data lineage based on information in a data lineage document. One or more aspects discussed herein may be embodied in computer-usable or readable data and/or computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices as described herein. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other device. The modules may be written in a source code programming language that is subsequently compiled for execution, or may be written in a scripting language such as (but not limited to) HTML or XML. The computer executable instructions may be stored on a computer readable medium such as a hard disk, optical disk, removable storage media, solid-state memory, RAM, and the like. As will be appreciated by one of skill in the art, the functionality of the program modules may be combined or distributed as desired in various embodiments. In addition, the functionality may be embodied in whole or in part in firmware or hardware equivalents such as integrated circuits, field programmable gate arrays (FPGA), and the like. Particular data structures may be used to more effectively implement one or more aspects discussed herein, and such data structures are contemplated within the scope of computer executable instructions and computer-usable data described herein. Various aspects discussed herein may be embodied as a method, a computing device, a system, and/or a computer program product. Although the present invention has been described in certain specific aspects, many additional modifications and variations would be apparent to those skilled in the art. In particular, any of the various processes described above may be performed in alternative sequences and/or in parallel (on different computing devices) in order to achieve similar results in a manner that is more appropriate to the requirements of a specific application. It is therefore to be understood that the present invention may be practiced otherwise than specifically described without departing from the scope and spirit of the present invention. Thus, embodiments of the present invention should be considered in all respects as illustrative and not restrictive. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their equivalents.
11860890
DETAILED DESCRIPTION The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein. Various embodiments disclosed herein provide a system and a method for synchronizing and reconciling a data stream in real-time between an edge node and a cloud node. Referring now to the drawings, and more particularly toFIGS.1through5, where similar reference characters denote corresponding features consistently throughout the figures, preferred embodiments are shown. FIG.1is a block diagram that illustrates a system100for synchronizing and reconciling a data stream from an edge node102to a cloud node112in real-time according to an embodiment herein. The system100synchronizes the data stream from a plurality of edge nodes102A-N (distributed system) to a central location in the cloud node112. The cloud node may be a cloud server. In some embodiments, the system100maintains a state of the cloud server and monitors the data stream from the plurality of edge nodes102A-N. The system100detects an exact location of the data stream coming from the plurality of edge nodes102A-N. The data stream includes a plurality of data obtained from a plurality of edge nodes102A-N. The system100pushes the plurality of data received from different locations back to the determined central location in the cloud node112in a synchronized manner. A database access and synchronization layer identifies the plurality of data in the central location in the cloud node112and sends it back to the requesting edge node. The system100includes a synchronization table in the plurality of edge nodes102A-N to synchronize a location of data in a central location in the cloud node112. The system100includes a memory106and a processor104. The memory106stores a set of instructions. The processor104executes the set of instructions to synchronize the plurality of data from a plurality of edge servers in the plurality of edge nodes102A-N to a central location in the cloud node112. The processor104classifies the plurality of data from the plurality of edge nodes102A-N to generate a data stream in real-time. The processor104classifies the plurality of data from the plurality of edge nodes102A-N to synchronize the plurality of data at a location in the cloud node112. In some embodiments, the edge node102A acts as an end-user portal for communication with other nodes in cluster computing. In some embodiments, the edge nodes are at least one of the gateway nodes or edge communication nodes. In some embodiments, the edge node102A includes at least one of, but is not limited to, a computer, a mobile phone, or a tablet. The processor104queues the classified data associated with the plurality of edge nodes102A-N. The processor104obtains the exact location of the cloud node112or central server and tags the classified data with (i) a time stamp, (ii) a location, (iii) an edge identifier, and (iv) an internet protocol address. 
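A minimal sketch of the queue-and-tag step just described, assuming the four tag fields map one-to-one onto a small record type. The field names, values, and the tag helper are illustrative assumptions, not the system's actual data model.

import time
from dataclasses import dataclass

@dataclass
class TaggedRecord:
    """A classified data item tagged before queuing, as described above."""
    payload: bytes
    timestamp: float   # (i) time stamp
    location: str      # (ii) location of the originating edge node
    edge_id: str       # (iii) edge identifier
    ip_address: str    # (iv) internet protocol address

def tag(payload, location, edge_id, ip_address):
    return TaggedRecord(payload, time.time(), location, edge_id, ip_address)

queue = []
queue.append(tag(b"sensor-reading", "plant-7", "edge-102A", "10.0.7.21"))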
In some embodiments, the classified data is tagged to identify the location of the classified data in the cloud node112. In some embodiments, the processor104queues the data from different locations of the plurality of edge nodes102A-N. The information about the central location is stored as metadata with every application. The metadata comprises application information and the central repository, and is updated at the frequency required for synchronization to maintain records. The processor104synchronizes the data based on the frequency. The database in a local location may be bulky when the plurality of data is stored in the local location. In some embodiments, the local location is a plurality of edge server locations. The operation may be compute-heavy if the system continuously synchronizes the data to the database in the central location. The processor104creates a frequency count to automatically synchronize the plurality of data from the plurality of edge nodes102A-N to the cloud node112regularly. The processor104determines a frequency based on the quantity of data from the plurality of edge nodes102A-N. In some embodiments, detection of the database is saved as metadata. The frequency of updating the information may be auto-decided by the processor104. The processor104detects a first pre-defined location in the central location in the cloud node112and a type of database in the plurality of cloud nodes to synchronize the data that is obtained from the plurality of edge nodes102A-N. In some embodiments, once the database in the cloud node112is detected, the Input/Output reads on the file system are monitored and the data input is sent to the central location server in the cloud node112and the data is synchronized. The processor104receives a request signal from the plurality of edge nodes102A-N for synchronizing the classified data from the plurality of cloud nodes to the plurality of edge nodes when any location fails to transfer the classified data. The processor104maintains the data received from the plurality of edge nodes102A-N in a plurality of database locations in the cloud node112by replicating the classified data in the plurality of cloud nodes112A-N. In some embodiments, the data is replicated to ensure fault tolerance. In some embodiments, volume-based replication of the database makes it easy to clone the same data in the plurality of edge nodes102A-N in the plurality of locations. The same data is maintained in the plurality of edge nodes102A-N in the plurality of locations. In some embodiments, in case one database in the cloud node112is shut down or fails during any transaction, the other database in the cloud node112may store data. The processor104receives a response signal from the plurality of cloud nodes112A-N in the central location when the data is stored in the cloud node112in the central location. The processor104updates the location of the classified data in the cloud node112to the synchronization table in the plurality of edge nodes102A-N. The response signal is sent back to the calling edge node location to update the synchronization table in the edge node once the data is synchronized in the cloud node112. In some embodiments, the plurality of synchronization tables is present in the plurality of edge nodes102A-N individually. The synchronization table is updated to denote that the synchronization is completed once the data is synchronized in the central location in the cloud node112. 
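A minimal sketch of that synchronization-table bookkeeping, assuming the table maps record identifiers to a synced flag and a cloud location; the SyncTable class and its method names are hypothetical, not the system's actual interface.

class SyncTable:
    """Hypothetical per-edge synchronization table: record id -> status."""

    def __init__(self):
        self._entries = {}

    def mark_pending(self, record_id):
        self._entries[record_id] = {"synced": False, "cloud_location": None}

    def on_response_signal(self, record_id, cloud_location):
        """Called when the cloud node confirms storage; records where the
        data now lives so later reads need not duplicate the transfer."""
        self._entries[record_id] = {"synced": True, "cloud_location": cloud_location}

    def pending(self):
        """Records not yet confirmed; these are retried in the next cycle."""
        return [rid for rid, entry in self._entries.items() if not entry["synced"]]

table = SyncTable()
table.mark_pending("rec-001")
table.on_response_signal("rec-001", "cloud-db-2/shard-5")
print(table.pending())  # [] -- nothing left for the next cycle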
If the synchronization table is not updated, then the data gets synchronized in the next cycle. The replication layer determines the next cycle if the synchronization table is not updated. The individual synchronization tables are present on the plurality of edge node locations. In some embodiments, the detection and synchronization are done either in real-time or on a schedule. The processor104synchronizes the data from the plurality of second locations in the cloud node112to the plurality of edge nodes102A-N without duplicating the data based on the synchronization table that is updated. The processor104synchronizes the data from the plurality of databases in the cloud node112to the plurality of edge nodes102A-N when the request signal is received from the plurality of edge nodes102A-N. The data synchronization may be done from the plurality of edge nodes102A-N to the cloud node without additional code from an end-user perspective. A legacy application can be deployed at the edge without any extra effort. FIG.2is a block diagram that illustrates a process of synchronizing and reconciling a data stream from the edge node to the cloud node in real-time ofFIG.1according to some embodiments herein. The plurality of data from the plurality of edge nodes102A-N is synchronized with the cloud node112using a network108. In some embodiments, the plurality of edge nodes102A-N includes at least one of, but is not limited to, a computer, a mobile phone, or a tablet. In some embodiments, the plurality of edge nodes102A-N is a computer that acts as an end-user portal for communication with other nodes in cluster computing. The database classifying layer204classifies the data of the data stream from the plurality of edge nodes102A-N in real-time to synchronize it with an exact location in a plurality of databases in the cloud node112. In some embodiments, the database classifying layer204detects the database in the cloud node112and the data type. The database queuing agent206queues the classified data associated with the plurality of edge nodes102A-N and tags the classified data with (i) a time stamp, (ii) a location, (iii) an edge identifier, and (iv) an internet protocol address. The database synchronization layer208synchronizes the classified data from the plurality of edge nodes102A-N to the cloud node112based on the frequency count. The database synchronization layer208synchronizes the plurality of data to a pre-defined location and a type of database in the cloud node112. In some embodiments, the frequency count is automatically determined based on (i) an amount of data, (ii) a location of the plurality of edge nodes, and (iii) a type of application running in the plurality of edge nodes102A-N. The database replication layer210replicates the data and ensures fault tolerance. In some embodiments, volume-based replication of the database makes it easy to clone the same data in the plurality of edge nodes102A-N. In some embodiments, the node in cloud computing is a connection point, either a redistribution point or an endpoint for data transmissions in general. The cloud node112stores the plurality of data from the plurality of edge nodes102A-N. The cloud node is connected with the database synchronization layer208to synchronize the location of the data in the plurality of databases in the cloud node112to the plurality of synchronization tables202A-N in the plurality of edge nodes102A-N. 
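The frequency count mentioned above could be derived from its three listed inputs in many ways; the sketch below is one invented heuristic that returns a synchronization interval in seconds, with all thresholds, names, and values assumed purely for illustration.

def sync_interval(bytes_pending, edge_location, app_type):
    """Hypothetical heuristic for the frequency count, based on (i) the
    amount of data, (ii) the edge node location, and (iii) the type of
    application running on the edge node."""
    interval = 300.0                       # default: synchronize every 5 minutes
    if bytes_pending > 50 * 1024 * 1024:   # large backlog -> synchronize sooner
        interval = 60.0
    if app_type == "realtime":             # latency-sensitive applications
        interval = min(interval, 10.0)
    if edge_location == "remote":          # constrained links -> batch more
        interval = max(interval, 120.0)
    return interval

print(sync_interval(80 * 1024 * 1024, "plant-7", "realtime"))  # 10.0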
The plurality of synchronization tables202A-N in the plurality of edge nodes102A-N include the exact location of the plurality of data in the cloud node112. The processor104synchronizes the data from the plurality of databases in the cloud node112to the plurality of edge nodes102A-N without duplicating the data based on the synchronization table that is updated. In some embodiments, the response signal from the cloud node112comprises a location of the data in the plurality of databases in the cloud node112. In some embodiments, when the synchronization table is not updated, the data from the plurality of edge nodes102A-N is synchronized in the next cycle. A cycle is a time period computed to synchronize the data and may be pre-determined by the user. In some embodiments, the synchronization table stored in each edge node comprises metadata of the edge node. In some embodiments, the metadata comprises the location of each cloud node and the frequency count. The size of the metadata is in a range of 1 kilobyte (Kb) to 10 Kb. FIG.3is an interaction diagram that illustrates a method for synchronizing and reconciling a data stream from an edge node to a cloud node in real-time according to an embodiment herein. At step302, providing a plurality of data to the processor104from the plurality of edge nodes102A-N. The method300includes classifying data for the data stream that is obtained from the plurality of edge nodes102A-N using the processor104. At step304, queuing a classified data associated with the plurality of edge nodes102A-N using the processor104. At step306, tagging the classified data with (i) a time stamp, (ii) a location, (iii) an edge identifier, and (iv) an internet protocol address using the processor104. At step308, determining a frequency count to synchronize the classified data associated with a real-time execution at the plurality of edge nodes102A-N with the plurality of cloud nodes112A-N using the processor104. At step310, synchronizing the classified data associated with the plurality of edge nodes102A-N in the plurality of cloud nodes112A-N by detecting a pre-defined location using the processor104and a type of database in the plurality of cloud nodes112A-N. At step312, synchronizing the data obtained from the plurality of edge nodes102A-N using the processor104. At step314, receiving a request signal from the plurality of edge nodes102A-N by the processor104when any location in the plurality of cloud nodes112A-N fails to transfer the data. At step316, receiving a response signal from the plurality of cloud nodes112A-N using the processor104. At step318, updating the location of the data in the plurality of cloud nodes112A-N to a synchronization table in the edge node by the processor104. At step320, synchronizing the data from the plurality of cloud nodes112A-N to the plurality of edge nodes102A-N without duplicating the data based on the synchronization table by the processor104. FIGS.4A-4Bare flow diagrams that illustrate a method for synchronizing and reconciling a data stream from the edge node to the cloud node in real-time according to an embodiment herein. At step402, data of the data stream that is obtained from a plurality of edge nodes is classified in real-time to synchronize in a location at a plurality of cloud nodes. At step404, the classified data associated with the plurality of edge nodes are queued and the classified data is tagged with (i) a time stamp, (ii) a location, (iii) an edge identifier, and (iv) an internet protocol address. 
At step406, a frequency count is automatically determined to synchronize the classified data associated with a real-time execution at the plurality of edge nodes with the plurality of cloud nodes regularly. At step408, a pre-defined location and a type of database in the plurality of cloud nodes are detected, and the classified data is stored in the first location in the plurality of cloud nodes. At step410, a request signal from the plurality of edge nodes is received and the classified data in the plurality of cloud nodes is replicated from the plurality of edge nodes in a plurality of locations in the plurality of cloud nodes. At step412, a response signal from the plurality of cloud nodes is received and the location of the data in the plurality of cloud nodes is updated to a synchronization table in the plurality of edge nodes. At step414, the data from the plurality of cloud nodes are synchronized to the plurality of edge nodes without duplicating the data based on the synchronization table. A representative hardware environment for practicing the embodiments herein is depicted inFIG.5, with reference toFIGS.1through4A and4B. This schematic drawing illustrates a hardware configuration of a server/computer system/user device in the plurality of edge nodes102A-N in accordance with the embodiments herein. The user device includes at least one processing device10and a cryptographic processor11. The processing device10and the cryptographic processor (CP)11may be interconnected via system bus14to various devices such as a random-access memory (RAM)15, read-only memory (ROM)16, and an input/output (I/O) adapter17. The I/O adapter17can connect to peripheral devices, such as disk units12and tape drives13, or other program storage devices that are readable by the system. The user device can read the inventive instructions on the program storage devices and follow these instructions to execute the methodology of the embodiments herein. The user device further includes a user interface adapter20that connects a keyboard18, mouse19, speaker25, microphone23, and/or other user interface devices such as a touch screen device (not shown) to the bus14to gather user input. Additionally, a communication adapter21connects the bus14to a data processing network26, and a display adapter22connects the bus14to a display device24, which provides a graphical user interface (GUI)30of the output data in accordance with the embodiments herein, or which may be embodied as an output device such as a monitor, printer, or transmitter, for example. Further, a transceiver27, a signal comparator28, and a signal converter29may be connected with the bus14for processing, transmission, receipt, comparison, and conversion of electric or electronic signals. The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications without departing from the generic concept, and, therefore, such adaptations and modifications should be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the appended claims.
11860891
The figures depict various embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein. DETAILED DESCRIPTION System Overview The Figures (FIGS.) and the following description relate to preferred embodiments by way of illustration only. It should be noted that from the following discussion, alternative embodiments of the structures and methods disclosed herein will be readily recognized as viable alternatives that may be employed without departing from the principles of what is disclosed. Reference will now be made in detail to several embodiments, examples of which are illustrated in the accompanying figures. It is noted that wherever practicable similar or like reference numbers may be used in the figures and may indicate similar or like functionality. The figures depict embodiments of the disclosed system (or method) for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein. FIG.1is a high level block diagram of a system environment for a centralized database management system130, in accordance with an embodiment. The system environment100shown byFIG.1includes one or more clients105, such as client105A and client105B, which may be collectively referred to as clients105, a network110, and a centralized database management system130. In alternative configurations, different and/or additional components may be included in the system environment100. The network110represents the communication pathways between the client105and centralized database management system130. In one embodiment, the network110is the Internet. The network110can also utilize dedicated or private communications links that are not necessarily part of the Internet. In one embodiment, the network110uses standard communications technologies and/or protocols. Thus, the network110can include links using technologies such as Ethernet, Wi-Fi (802.11), integrated services digital network (ISDN), digital subscriber line (DSL), asynchronous transfer mode (ATM), etc. Similarly, the networking protocols used on the network110can include multiprotocol label switching (MPLS), the transmission control protocol/Internet protocol (TCP/IP), the hypertext transport protocol (HTTP), the simple mail transfer protocol (SMTP), the file transfer protocol (FTP), etc. In one embodiment, at least some of the links use mobile networking technologies, including general packet radio service (GPRS), enhanced data GSM environment (EDGE), long term evolution (LTE), code division multiple access2000(CDMA2000), and/or wide-band CDMA (WCDMA). The data exchanged over the network110can be represented using technologies and/or formats including the hypertext markup language (HTML), the extensible markup language (XML), the wireless access protocol (WAP), the short message service (SMS) etc. In addition, all or some of the links can be encrypted using conventional encryption technologies such as the secure sockets layer (SSL), Secure HTTP and/or virtual private networks (VPNs). 
In another embodiment, the entities can use custom and/or dedicated data communications technologies instead of, or in addition to, the ones described above. In one embodiment, client105may be a database system that stores and/or manages data tables. While two clients105A and105B are illustrated inFIG.1, in practice any number of clients105may communicate with the centralized database management system130in the environment100. Each database may be a relational database that provides searchable access to a plurality of data tables. Each of the plurality of tables comprises a collection of records stored in the database, and each record includes a unique primary key that provides searchable access to each specific record stored on the database. In some embodiments, the data table may not include unique primary keys. Each table may further include a plurality of data fields for storing different types of data, such as integers, floats, Booleans, chars, arrays, strings and more. In one embodiment, each database may implement a database management system (DBMS) that allows each database to execute database related instructions independently. For example, the DBMS for a database may provide for the independent creation of an invertible bloom filter for the plurality of data tables stored on the primary database. The DBMS for a database may also transform a row of a table into a row representation based on instructions received from the centralized database management system130. Moreover, the DBMS for databases may provide functions for independent insertion or deletion of records within each of the data tables for data synchronization with other databases. Each data table may be associated with a set of metadata. The metadata may include information on the type of the database, the maximum value of the primary key of the records within the data table, and the number of records currently stored within the first table. Metadata may further include information associated with database schema, which may include information related to how data is constructed, such as how data is divided into database tables in the case of relational databases. Database schema information may contain information on each column (i.e., each data field) defined within the table, such as type for each field, size for each field, relationships, views, indexes, types, links, directories, etc. The centralized database management system130may manage and perform data synchronization between one or more data tables stored across multiple clients such as105A and105B. The centralized database management system130may be any processor-based computing system capable of generating structured query language (SQL) type instructions or any other relational database management system instructions. The centralized database management system130may transmit and receive responses to these instructions from clients105over the data network110. The centralized database management system130may perform functionalities for managing data synchronization between clients105, such as determining size for invertible bloom filters, estimating the number of different records, generating and sending instructions to clients105for generating row representations and generating invertible bloom filters, performing operations such as subtraction on invertible bloom filters, decoding invertible bloom filters, and generating instructions to clients105for performing operations that synchronize the databases. 
The centralized database management system130may determine and send instructions to clients105for updating the respective database so that a destination database is in synchronization with a source database. Further details with regard to the functionalities performed by the centralized database management system130are discussed below in conjunction withFIG.4. Encoding Data Using Invertible Bloom Filters FIG.2illustrates an exemplary embodiment for encoding data210using an invertible bloom filter230. InFIG.2, data210may be an array of elements211,212, and213. While only three elements are illustrated inFIG.2, data210may include any number of elements. Each element may be stored as a type of data, such as a tuple that includes a key-value pair. The invertible bloom filter230may be initialized with 8 cells such as cells231-238. The illustrated invertible bloom filter230may use one or more hash functions220, such as the three different hash functions221,222, and223, to generate hash keys for each element211-213, where each hash function may generate a hash key for each element. For example, to encode element211into the invertible bloom filter230, element S1is hashed into three hash keys Hk1224, Hk2225, and Hk3226, using the three hash functions221,222, and223. Each hash function may generate a different hash key. For example, passing the value of S1211into hash function221may result in a hash key Hk1224, which maps S1into cell234of an invertible bloom filter table240. The invertible bloom filter table240is part of the invertible bloom filter230and is maintained by the invertible bloom filter230for storing information associated with each element mapped to a respective index. Similarly, S1211is further hashed using hash functions222and223, mapping element S1into cells232and237respectively. An exemplary embodiment of the invertible bloom filter table240is discussed in greater detail inFIG.3. FIG.3illustrates one embodiment of an exemplary invertible bloom filter table240. The invertible bloom filter table240may be initialized as a table with a fixed size (e.g., a fixed number of columns). The invertible bloom filter table240may include one or more of the following fields: count, idSum and hashSum. The count keeps track of the number of elements mapped to the respective index and is incremented by 1 each time an element is mapped to the index. The field idSum keeps track of the sum (addition or exclusive-or operation) of the inserted elements. Each time an element is mapped to a respective index, idSum is updated by adding (or XOR) the element. The field hashSum keeps track of the sum (addition or exclusive-or operation) of the hash keys for the inserted elements. Each time an element is mapped to a cell, hashSum is updated by adding (or XOR) the hash key of the element. In some embodiments, the invertible bloom filter table240may include additional fields such as a valueSum field that keeps track of the sum of values of the inserted elements, if each element corresponds to a key-value pair. As illustrated inFIG.3, the invertible bloom filter table240is of size eight since the invertible bloom filter table240has 8 cells (e.g., cells231-238) and the table may be initialized with null values. To encode element S1into the invertible bloom filter230, the element S1is mapped to indices232,234and237, based on the hash functions. 
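A minimal sketch of this encoding, assuming eight cells and three seed-differentiated hash functions standing in for hash functions221-223; the SHA-256-based hashing and the class layout are assumptions, and collision handling across the k cells is omitted. The per-cell bookkeeping mirrors the count/idSum/hashSum fields described forFIG.3.

import hashlib
from dataclasses import dataclass

def _hash(seed, element, modulus):
    # One of the k hash functions (cf. 221-223), differentiated by seed.
    digest = hashlib.sha256(bytes([seed]) + element).digest()
    return int.from_bytes(digest[:8], "big") % modulus

@dataclass
class Cell:
    count: int = 0     # number of elements mapped to this cell
    id_sum: int = 0    # XOR of the inserted elements' identifiers
    hash_sum: int = 0  # XOR of the inserted elements' hash keys

class InvertibleBloomFilter:
    def __init__(self, num_cells=8, k=3):
        self.k = k
        self.cells = [Cell() for _ in range(num_cells)]

    def insert(self, element):
        elem_id = int.from_bytes(hashlib.sha256(element).digest()[:8], "big")
        hash_key = _hash(255, element, 2 ** 32)  # checksum-style hash key
        for seed in range(self.k):
            cell = self.cells[_hash(seed, element, len(self.cells))]
            cell.count += 1
            cell.id_sum ^= elem_id
            cell.hash_sum ^= hash_key

ibf = InvertibleBloomFilter()
ibf.insert(b"S1")  # maps S1 into k = 3 cells, updating each cell's fields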
Each field of the invertible bloom filter table240, including count, idSum, and hashSum, is updated as illustrated in the invertible bloom filter table242, where count for each mapped cell increments by 1, idSum is updated by XOR-ing the mapped element, and hashSum is updated by XOR-ing the hash key of the mapped element. FIG.4illustrates an exemplary embodiment of the centralized database management system130. The centralized database management system130may include a data store410that stores retrieved metadata and other data such as previous versions of invertible bloom filters, a size estimating module420that determines a size for invertible bloom filters, an IBF encoding module430that generates invertible bloom filters, an IBF subtracting module440that performs subtractions on invertible bloom filters, an IBF decoding module450that decodes an invertible bloom filter, and a database synchronization module460that generates instructions for synchronizing databases. Data store410may store retrieved metadata information associated with databases. In some embodiments, data store410may also store other data such as invertible bloom filters that were generated previously and may be retrieved in subsequent steps of the synchronization process. Data store410may also include historical data associated with previously performed synchronizations, such as historical number of different elements, or historical number of updates within a period of time. The historical data stored in the data store410may be used to estimate the number of differences by the size estimating module420, which is discussed in greater detail below. The size estimating module420determines a size for invertible bloom filters based on an estimated number of different records. The size estimating module420may estimate the number of different records using various methods, such as using a constant size, using historical data, through an updating process or through a strata estimator. The different methods may be used independently from each other or may be used in conjunction with other methods. In one embodiment, the size estimating module420may determine a size based on metadata (e.g., the size is determined to be a percentage or correlated with the number of rows in the table). The different methods for determining size are discussed in detail in accordance withFIG.5, which shows that the size estimating module420includes a constant size module510, a historical size module520, a size updating module530, and a strata estimator540. The constant size module510may assign a constant size to an invertible bloom filter. The constant size may be a number that does not depend on other factors such as size of a data table. In one embodiment, the constant size may be pre-determined (e.g., by a human). The constant size may be a number that is much greater (e.g., by convention or common sense) than an estimated number of different records between databases to ensure that invertible bloom filters function properly with a higher success rate during an invertible bloom filter decoding process. The constant size may be an arbitrarily big number that is highly unlikely to result in an issue when generating the invertible bloom filters. However, using a large invertible bloom filter may result in wasted space and create inefficiencies. To refine the size, the determined constant size may also be adjusted by the size updating module530responsive to observations of number of differences. 
The decoding process for an invertible bloom filter is discussed in accordance with IBF decoding module450. The historical size module520determines size based on historical data including historical numbers of changes in records. The historical size module520may train and use a machine learning model for predicting the estimated number of differences based on historical data stored in the data store410. In one embodiment, the historical size module520may train a machine learning model to predict the number of different records between a source database and a destination database. The training data may further include time intervals associated with the estimated number of different records. In one embodiment, the historical size module520may also train a machine learning model to predict the number of updates that occurred to a source database within a time interval (or within various time intervals). The historical size module520may determine a size for invertible bloom filters based on the estimated number of updates. In one embodiment, the machine learning model may be a supervised or unsupervised machine learning model that is trained based on features extracted from historically observed differences and other information such as time interval, time of the day, time of the year, size of data tables, etc. The size updating module530may update a determined size based on observed data associated with synchronizations performed afterwards. In one embodiment, the size updating module530may receive data associated with a synchronization process and, responsive to observing that the number of differences is significantly smaller than the determined size, the module530may determine to reduce the initially determined size. As an example, the size estimating module420may initially determine the size to be a constant that is large enough to ensure proper functioning of the invertible bloom filter, such as a size of 500,000. After performing one synchronization, 10 differences may be observed. The size updating module530may reduce the size to 50,000. Responsive to one more observation of 10 differences from another synchronization, the size updating module530may further reduce the size to 5,000. The iterative process may be repeated until a predetermined criterion (such as a minimum size threshold) is achieved. In one embodiment, the size updating module530may also determine a size for a backup invertible bloom filter, which is activated responsive to the original invertible bloom filter approaching its capacity limit. 
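The 500,000-to-5,000 progression above can be captured by a simple rule: shrink by a fixed factor whenever the observed differences sit far below capacity, and stop at a floor. The following is a sketch under those assumed parameters; the shrink factor, headroom multiplier, and floor are illustrative, not prescribed values.

MIN_SIZE = 5_000  # hypothetical minimum-size threshold

def updated_size(current_size, observed_differences, shrink_factor=10, headroom=100):
    """After a synchronization, shrink the filter size if the observed
    number of differences is far below capacity; never go under the floor."""
    if observed_differences * headroom < current_size:
        return max(current_size // shrink_factor, MIN_SIZE)
    return current_size

size = 500_000
for diffs in (10, 10):        # two consecutive syncs each observe 10 differences
    size = updated_size(size, diffs)
print(size)                   # 5000, matching the example above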
In one embodiment, the size updating module530may implement a resizable invertible bloom filter. The size updating module530may generate a resizable invertible bloom filter at a first snapshot of a source database. In one embodiment, the size updating module530may determine a maximum size for the first snapshot. The size updating module530may also determine a set of smaller sizes that the resizable invertible bloom filter may be shrunken to (e.g., a set of possible sizes that are predetermined). The size updating module530may determine a size for a second snapshot of the source database. The size updating module530may try to encode the snapshot into a size that is smaller than the maximum size. The size updating module530may request a second invertible bloom filter of the smaller size from the source database. Responsive to the smaller size invertible bloom filter failing to be decoded by the IBF decoding module450, the size updating module530may retry the operation of encoding the second snapshot using a bigger size available from the set of possible sizes. The process is repeated iteratively until the maximum possible size is reached. In one embodiment, the size estimating module420may use the strata estimator540for estimating the number of differences. The strata estimator540may first divide all elements in the source data table and the destination data table into different levels of partitions, each partition containing different numbers of elements. The strata estimator540may encode each partition into an invertible bloom filter for each data table. The strata estimator540may then attempt to decode the pair of invertible bloom filters at each level for the two databases. If the invertible bloom filters for a level of partitions are successfully decoded, then the strata estimator540may add a count to the estimate, where the count is proportional to the number of elements recovered from the decoding process. Further details with regard to a decoding process are discussed below in accordance with the IBF decoding module450. Continuing with the discussion ofFIG.4, the IBF encoding module430encodes a data table into an invertible bloom filter. The IBF encoding module430may also generate and send instructions to databases for encoding a data table into an invertible bloom filter. Although the IBF encoding module430is illustrated to be included in the centralized database management system130, clients105may also perform the functionalities described here in accordance with the IBF encoding module430. In one embodiment, the IBF encoding module430may use a SQL query for generating an IBF for a data table in a database environment. The SQL query takes a data table as input, and outputs an encoded IBF. The IBF encoding module430may also use other database languages (such as XQuery, XML, etc.) that are capable of managing transactions associated with data records within a database environment for encoding a data table into invertible bloom filters.FIG.6illustrates an exemplary embodiment of the IBF encoding module430, which includes a row representation transforming module610that transforms rows in a data table into row representations, a hash function generating module620that determines hash functions for the invertible bloom filters, and an IBF generating module630that uses the determined hash functions to generate invertible bloom filters and invertible bloom filter tables. Functionalities for each module are discussed in detail below. Row representation transforming module610transforms each row of a data table into a row representation that is used for encoding invertible bloom filters. Each row of a table may be referred to as a data record or an element. Each data record may include multiple fields with different types of data. In one embodiment, the row representation transforming module610may transform a row into a checksum or a tuple. The tuple may be a key-value pair, with the key being the primary key of the row, and the checksum encoded based on data in the rest of the fields of the data record. In one embodiment, row representation transforming module610may convert a row into a tuple with multiple elements, where some elements of the tuple are directly encoded from raw data. Examples of transformed row representations are illustrated inFIG.7. 
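Before turning toFIG.7, a minimal sketch of the two-element transformation, assuming the primary key is kept as-is and the remaining fields are serialized in a fixed order and checksummed with CRC-32; the serialization format and checksum choice are assumptions, not the module's actual encoding.

import zlib

def row_to_tuple(row, primary_key="id"):
    """Transform one data record into a (primary key, checksum) pair;
    non-key fields are serialized in a fixed order and checksummed."""
    rest = "|".join("%s=%s" % (k, row[k]) for k in sorted(row) if k != primary_key)
    return row[primary_key], zlib.crc32(rest.encode("utf-8"))

row = {"id": 1, "email": "a@example.com", "age": 31, "paid": True}
print(row_to_tuple(row))  # (1, <CRC-32 of the remaining fields>)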
Continuing with the discussion ofFIG.4, the IBF encoding module430encodes a data table into an invertible bloom filter. The IBF encoding module430may also generate and send instructions to databases for encoding a data table into an invertible bloom filter. Although the IBF encoding module430is illustrated as being included in the centralized database management system130, clients105may also perform the functionalities described here in accordance with the IBF encoding module430. In one embodiment, the IBF encoding module430may use a SQL query for generating an IBF for a data table in a database environment. The SQL query takes a data table as input, and outputs an encoded IBF. The IBF encoding module430may also use other database languages (such as XQuery, XML, etc.) that are capable of managing transactions associated with data records within a database environment for encoding a data table into invertible bloom filters.FIG.6illustrates an exemplary embodiment of the IBF encoding module430, which includes a row representation transforming module610that transforms rows in a data table into row representations, a hash function generating module620that determines hash functions for the invertible bloom filters, and an IBF generating module630that uses the determined hash functions to generate invertible bloom filters and invertible bloom filter tables. Functionalities for each module are discussed in detail below. Row representation transforming module610transforms each row of a data table into a row representation that is used for encoding invertible bloom filters. Each row of a table may be referred to as a data record or an element. Each data record may include multiple fields with different types of data. In one embodiment, the row representation transforming module610may transform a row into a checksum or a tuple. The tuple may be a key-value pair, with the key being the primary key of the row and the value being a checksum encoded based on data in the rest of the fields of the data record. In one embodiment, row representation transforming module610may convert a row into a tuple with multiple elements, where some elements of the tuple are directly encoded from raw data. Examples of transformed row representations are illustrated inFIG.7. FIG.7depicts an exemplary raw data table710and exemplary transformed row representations for rows in the data table710. In some embodiments, the data table may also include system columns. The data table710may include three records with IDs (or primary keys) being 1, 2 and 3. Each record is associated with fields such as email, age, whether the respective employee is paid (field: Paid?), and a time when the record is created (field: Time Created). Each field may be further associated with a data type that the data is stored as. For example, email may be stored as a string, age may be stored as an integer, whether the employee is paid may be stored as a Boolean, and Time Created may be stored as an integer. In a first embodiment as illustrated in 720, each row of the table710may be converted into a checksum, which is then encoded into an invertible bloom filter. In the embodiment illustrated in table730, the row representation transforming module610may transform each row of table710into a two-element tuple, with a primary key and a checksum, where the checksum is encoded based on the data fields for each record. Encoding each row into a two-element tuple representation with a primary key may be efficient when an element is identified as a different record: with a primary key associated with the checksum, the different record may be located in the data table more efficiently by using the primary key. In some embodiments, the primary key field is not required, and each row is transformed into a one-element representation. In the embodiment illustrated in table740, the row representation transforming module610may transform each row of table710into a multi-element tuple, with a primary key and raw data from the data table710. In one embodiment, raw data that may be encoded as part of a row representation is data that can be stored at a fixed length, such as a fixed-size integer, Boolean, or time. For example, the row with ID 1 includes information associated with the fields email, age, paid? and time created, among which age, paid?, and time created may be encoded as raw data into the row representation as illustrated in table740, because these fields may be formatted as fixed-length data across all records. In one embodiment, a row representation may also include timestamps such as a modification timestamp and/or a creation timestamp. On the other hand, an email may be encoded in the row representation after it is translated to a checksum that is of fixed length across all data records. The examples used here are for illustration purposes only. The row representation transforming module610may encode any type of raw data into the row representations if the data field meets certain criteria (e.g., is capable of being formatted into a certain size).
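A two-element row representation in the style of table730may be sketched in a few lines. The use of CRC-32 as the checksum and the field serialization shown here are assumptions; the figure does not specify a particular checksum function.

```python
import zlib

def to_key_checksum(row: dict, primary_key: str = "id") -> tuple:
    """Transform a row into a (primary key, checksum) tuple, in the
    style of table 730; the checksum covers all non-key fields."""
    payload = "|".join(
        f"{name}={row[name]}" for name in sorted(row) if name != primary_key
    )
    return (row[primary_key], zlib.crc32(payload.encode("utf-8")))

row = {"id": 1, "email": "a@example.com", "age": 30,
       "paid": True, "time_created": 1577836800}
key, checksum = to_key_checksum(row)   # (1, <CRC-32 of the other fields>)
```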
Continuing with the discussion ofFIG.6, the hash function generating module620determines one or more hash functions for mapping row representations to invertible bloom filters. If the one or more data elements determined to be used to compare the first and second tables is the primary key alone, then the invertible bloom filter database may include at least an idSum field, a hashSum field, and a count field. In one embodiment, such as for a table without primary keys, the one or more elements determined to be used to compare the first and the second tables may be any one of the data elements. Moreover, the invertible bloom filter hash function is an integer hash function. Alternatively, if the one or more data elements determined to be used to compare the first and second tables is a combination of the primary key and a timestamp, then the invertible bloom filter database schema may include at least a first id sum field, a second id sum field, a hash sum field, and a count field. Moreover, the invertible bloom filter hash function is a two-word vector hash function where the first word is the integer hash function of the primary key and the second word is the integer epoch timestamp value of the modification timestamp. Alternatively, if the one or more data elements determined to be used to compare the first and second tables is a combination of the primary key and one or more data elements, then the invertible bloom filter database schema may include at least a first id sum field, a second id sum field, a hash sum field, and a count field. Moreover, the invertible bloom filter hash function is a two-word vector hash function where the first word is the integer hash function of the primary key and the second word is a checksum value of the one or more data elements. In any scenario, the determined hash function is a function constructed solely of basic mathematical operations and bitwise operations. This constraint ensures successful implementation of the selected hash function on the databases, the database management systems, and the centralized database management system130. The IBF generating module630generates invertible bloom filters based on information generated by the modules mentioned above, including a determined size for the invertible bloom filters, determined hash functions, and transformed row representations. The IBF generating module630may use a SQL query to generate the invertible bloom filters. In one embodiment, the IBF generating module630may send instructions (e.g., a SQL query including information for generating invertible bloom filters) to each database involved in the synchronization, and each database may run the SQL query that encodes a data table into an invertible bloom filter, where the invertible bloom filter is of the determined size. For a data synchronization process performed on a source data table and a destination data table, the size of the invertible bloom filter for the source data table is the same as the size of the invertible bloom filter for the destination data table. After the IBF encoding module430generates and sends instructions to the clients105for generating invertible bloom filters, each client105may encode a data table into an invertible bloom filter and send the encoded invertible bloom filter back to the centralized database management system130, where the IBF subtracting module440may perform a subtraction operation on the received invertible bloom filters to identify differences, which is discussed in greater detail below.
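A minimal encoder for the idSum/hashSum/count cell layout described above might look like the following sketch. The number of hash functions (K = 3), the mixing constants, and the check-value seed are illustrative choices; the hash uses only multiplications, shifts, and XORs, consistent with the basic-math/bitwise constraint, and a production encoder would additionally guarantee that the K cell indices for an element are distinct.

```python
K = 3  # illustrative number of hash functions per element

def int_hash(x: int, seed: int) -> int:
    """Integer hash built only from basic mathematical and bitwise
    operations (multiply, shift, XOR), per the constraint above."""
    x = ((x ^ seed) * 0x9E3779B97F4A7C15) & 0xFFFFFFFFFFFFFFFF
    x ^= x >> 29
    x = (x * 0xBF58476D1CE4E5B9) & 0xFFFFFFFFFFFFFFFF
    return x ^ (x >> 32)

def new_ibf(size: int) -> list:
    return [{"idSum": 0, "hashSum": 0, "count": 0} for _ in range(size)]

def ibf_insert(ibf: list, element_id: int) -> None:
    check = int_hash(element_id, seed=0xC0FFEE)      # hashSum check value
    for seed in range(K):                            # K cells per element
        cell = ibf[int_hash(element_id, seed) % len(ibf)]
        cell["idSum"] ^= element_id
        cell["hashSum"] ^= check
        cell["count"] += 1
```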
Referring back toFIG.4, the IBF subtracting module440generates a third invertible bloom filter by performing a subtraction operation on two invertible bloom filters generated by each of the source and the destination databases. The resulting third invertible bloom filter contains information regarding different elements between the first and the second bloom filters.FIG.8is a high-level illustration for subtracting two invertible bloom filters. InFIG.8, set A830and set B840may each comprise a plurality of row representations generated by row representation transforming module610for two data tables. The row representations for each set may also be referred to as set members. Sets A830and B840may have some common members A∩B860, and some different members such as set members in A but not in B, illustrated as A\B850, and set members in B but not in A, illustrated as B\A870. The different members may be collectively referred to as AΔB. To identify the different set members, i.e., AΔB, the centralized database management system130may identify A\B and B\A by subtracting IBF B820encoded based on set B840from IBF A810encoded based on set A830. In one embodiment, the subtraction operation may be performed via an XOR (exclusive-OR) operation between the set A830and the set B840. An XOR operation may cancel out any common elements between set A830and set B840, leaving only the elements that are different, i.e., AΔB. Further details illustrated with a concrete example are discussed inFIG.9. FIG.9illustrates an exemplary embodiment for subtracting a second invertible bloom filter920from a first invertible bloom filter910, which results in a third invertible bloom filter930. InFIG.9, invertible bloom filter910is generated based on a first set including set members v1 and v2, where v1 is mapped to indices231and232, and v2 is mapped to indices232and234. Invertible bloom filter920is generated based on a second set including set members v1 and v3, where v1 is mapped to indices231and232, and v3 is mapped to indices232and233. The common element between the two sets is v1 and the different elements are v2 and v3. The IBF subtracting module440may subtract invertible bloom filter920from invertible bloom filter910by performing an arithmetic subtraction or an XOR operation for each cell of the two invertible bloom filters. For the count field, an arithmetic subtraction may be applied, resulting in a count of −1 for index233in the third invertible bloom filter930, which indicates that the respective element is in the invertible bloom filter920and not in the invertible bloom filter910. The count field for index234is 1, which may indicate that a respective element is in the invertible bloom filter910and not in the invertible bloom filter920. For the fields idSum and hashSum, an XOR operation may be applied to compute a sum taking into account each mapped element. For example, idSum for index231is v1 for both the invertible bloom filters910and920. The IBF subtracting module440performs an XOR operation on the two cells, that is, v1 XOR v1=0. Similarly, for index232, performing an XOR operation on v1⊕v2 (idSum from invertible bloom filter910) and v1⊕v3 (idSum from invertible bloom filter920) cancels v1 and preserves v2 and v3, resulting in v2⊕v3 (idSum for invertible bloom filter930with index232). The third invertible bloom filter resulting from the subtraction operation performed by the IBF subtracting module440is decoded by the IBF decoding module450discussed below.
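The cell-wise subtraction of FIG.9 may be sketched directly, reusing the cell layout from the encoding sketch above:

```python
def ibf_subtract(ibf_a: list, ibf_b: list) -> list:
    """Cell-wise subtraction as in FIG. 9: XOR cancels common elements
    in idSum/hashSum; count uses arithmetic subtraction, so +1 means
    'in A only' and -1 means 'in B only'."""
    assert len(ibf_a) == len(ibf_b)   # both filters share the agreed size
    return [
        {"idSum": a["idSum"] ^ b["idSum"],
         "hashSum": a["hashSum"] ^ b["hashSum"],
         "count": a["count"] - b["count"]}
        for a, b in zip(ibf_a, ibf_b)
    ]
```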
Referring back toFIG.4, the IBF decoding module450may decode the invertible bloom filter resulting from the subtraction operation performed by the IBF subtracting module440. The resulting invertible bloom filter may also be referred to herein as the third invertible bloom filter. The IBF decoding module450may scan the third invertible bloom filter for pure cells, where pure cells are cells within the third invertible bloom filter table whose count field is equal to 1 or −1 and whose hashSum field is equal to a value that is valid for the corresponding idSum field. A hashSum field's validity may be determined by calculating a hash value using the idSum field values and comparing this calculated value to the value stored in the hashSum field. For each pure cell within the third invertible bloom filter table, if the corresponding count field is equal to 1, then the IBF decoding module450may add the cell to a first listing that includes those cells included in the first table and not in the second table. Alternatively, if the corresponding count field is equal to −1, then the cell is added to a second listing that includes those cells included in the second table and not in the first table. In an alternative embodiment, for invertible bloom filters that include a checksum, the IBF encoding module430may leave out the hashSum field without computing hash values using the idSum field. The IBF decoding module450may check purity by checking that the count field is 1 or −1 and then computing the invertible bloom filter hash functions on the idSum fields to find the indices of the cells that the element would be inserted into. Then the IBF decoding module450may check if the current cell's index matches one of the computed cell indices. Once all the pure cells within the third invertible bloom filter table have been added to either the first listing or the second listing, the first and second listings are compared to identify those entries with the same primary key. The identified entries represent records that are in both the first and second tables but have updates in one or more fields. The elements in the first listing and the second listing represent differences between the first table and the second table, and based on the identified differences, the database synchronization module460may further generate instructions for the databases to perform for the synchronization process. The database synchronization module460may generate instructions for the databases and complete the synchronization process by sending the instructions to the database management systems for updating the data tables. In one embodiment, the database synchronization module460may generate instructions based on the identified different element, where the instructions may include adding the element, removing the element, or updating the element. The instructions may be generated and sent to the source data table and/or the destination data table based on different goals. In the embodiment where each row representation is a two-element tuple with a key and a checksum, if a record is identified to have been updated in the source data table, the database synchronization module460may need to retrieve the respective record with raw data for all fields from the source data table, and send the data to the destination data table, where one or more different fields are updated based on the source data table. In the embodiment where each row representation is encoded with some elements being the raw data taken from each row, if a record is identified to have been updated in the source data table, the database synchronization module460may compare the row representation from the source data table with the row representation from the destination data table and identify one or more elements in the tuple that need to be updated, instead of retrieving the entire record of raw data from a database.
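The pure-cell peeling described above may be sketched as follows, reusing int_hash and K from the encoding sketch. The first and second listings correspond to counts of +1 and −1, respectively:

```python
def ibf_decode(ibf: list):
    """Repeatedly peel pure cells (count of +/-1 with a hashSum that
    validates the idSum) until none remain. Returns a success flag, the
    first listing (in A, not B), and the second listing (in B, not A)."""
    first_listing, second_listing = [], []
    progress = True
    while progress:
        progress = False
        for cell in ibf:
            if abs(cell["count"]) != 1:
                continue
            element_id = cell["idSum"]
            if cell["hashSum"] != int_hash(element_id, seed=0xC0FFEE):
                continue                       # hashSum invalid: not pure
            sign = cell["count"]
            (first_listing if sign == 1 else second_listing).append(element_id)
            check = cell["hashSum"]
            for seed in range(K):              # remove from all K cells
                c = ibf[int_hash(element_id, seed) % len(ibf)]
                c["idSum"] ^= element_id
                c["hashSum"] ^= check
                c["count"] -= sign
            progress = True
    decoded_ok = all(c["count"] == 0 and c["idSum"] == 0 for c in ibf)
    return decoded_ok, first_listing, second_listing
```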
Synchronization Between a Source Database and a Destination Database FIG.10illustrates one exemplary embodiment for the centralized database management system130to synchronize a source database1010and a destination database1020. The centralized database management system130may first retrieve metadata information from the source database1010and the destination database1020for determining a size for the invertible bloom filters and determining a formatting for encoding the invertible bloom filters. The centralized database management system130may send instructions to each of the source database1010and the destination database1020for encoding Invertible Bloom Filter A1030and Invertible Bloom Filter B1040. Each of the source database1010and destination database1020runs a SQL query that transforms each row of a table into a row representation and generates Invertible Bloom Filter A1030and Invertible Bloom Filter B1040, respectively. The centralized database management system130may retrieve the Invertible Bloom Filter A1030and the Invertible Bloom Filter B1040and perform a subtraction operation that generates an Invertible Bloom Filter C1050. The centralized database management system130may decode the Invertible Bloom Filter C1050and identify any elements that are not in synchronization between the source database1010and the destination database1020. The centralized database management system130may send the identified elements to the source database1010and/or the destination database1020for data reconciliation, which results in an updated source database1070and an updated destination database1080. Synchronization Based on Snapshots of a Source Database FIG.11illustrates an exemplary process for updating a destination database1120based on snapshots of a source database1110. The term "snapshot" as used herein may refer to information, including data and metadata, associated with the database at a point in time; it may refer to the original database at a point in time or to a copy of the data and metadata of the database at that point in time. In the embodiment illustrated inFIG.11, destination database1120may be in synchronization with the source database1110at timestamp A. However, the source database1110may have updates during the time interval between timestamp A and timestamp B, and the destination database1120may need to also perform the updates such that the destination database1120and the source database1110are in synchronization. The size estimating module420of the centralized database management system130may first determine a size for the invertible bloom filters based on an estimated number of different records between timestamp A and timestamp B for the source database1110. In one embodiment, the size estimating module420may not be able to use a strata estimator540to determine the size, because the source database1110is already updated. The size estimating module420may initialize the size as a constant size510that is significantly larger than the number of potential updates. After observing several results from data synchronization processes, the size updating module530may update the size to improve efficiency. The centralized database management system130may send instructions including the determined size for the invertible bloom filters to the source database1110. The source database1110, based on the instructions from the centralized database management system130, may generate a first Invertible Bloom Filter A1130based on the source database1110snapshotted at timestamp A.
In one embodiment, the first Invertible Bloom Filter A1130may be stored to the data store410of the centralized database management system130. At timestamp B, the centralized database management system130or the destination database1120may determine that the destination database1120may include outdated data, where the determination may be based on the length of the time interval. The centralized database management system130may send instructions to the source database1110to generate a second Invertible Bloom Filter B1140based on the source database1110snapshotted at timestamp B. The source database1110may encode the second Invertible Bloom Filter B1140based on the instructions and send the second Invertible Bloom Filter B1140back to the centralized database management system130. The IBF subtracting module440of the centralized database management system130may perform a subtraction operation on the first Invertible Bloom Filter A1130and the second Invertible Bloom Filter B1140, which generates an Invertible Bloom Filter C1150. The IBF decoding module450may decode the Invertible Bloom Filter C1150and generate a decoded Invertible Bloom Filter C1160. The centralized database management system130may identify updated elements between the source database1110snapshotted at timestamp A and timestamp B and send the identified updates to the destination database1120. The destination database1120may update (e.g., delete, add, update) the respective records and become an updated destination database1170. In one embodiment, the source database1110and/or the destination database1120may include confidential or sensitive data that are not accessible to external servers or database management systems, which makes data synchronization across different databases challenging. The embodiment illustrated inFIG.11provides a solution for the challenge. Because the source database1110encodes the first Invertible Bloom Filter A1130and the second Invertible Bloom Filter B1140locally based on instructions received from the centralized database management system130, the centralized database management system130does not need to access the raw data stored in the source database1110to identify different or updated elements. The centralized database management system130may receive invertible bloom filters that contain information encoded as checksums and perform a subtraction operation on the invertible bloom filters, which results in a third invertible bloom filter containing information for the updates. In one embodiment, the source database1110may be associated with multiple destination databases1120that need to synchronize with the source database1110. The embodiment as illustrated inFIG.11may generate a set of instructions that is applicable to multiple destination databases1120that need to be updated. The centralized database management system130may rely only on information associated with the source database1110for generating instructions that identify updates during a time interval, and the generated instructions may be sent to multiple destination databases1120for data synchronization. In alternative embodiments, the centralized database management system130may also create snapshots for situations such as multiple sources synchronizing to one destination, one source synchronizing to multiple destinations, or multiple sources synchronizing to multiple destinations.
FIG.12illustrates an exemplary process by which the centralized database management system130manages a synchronization process between a source database and a destination database. The process starts with the centralized database management system130receiving1210a first set of metadata for a source data table that comprises a first plurality of rows and receiving1220a second set of metadata for a destination data table that comprises a second plurality of rows. The size estimating module420may determine1230a size for both a first and a second invertible bloom filter based on an estimated number of elements that are different between the source data table and the destination data table. The centralized database management system130may send instructions, including the determined size and instructions for generating row representations, to the source database and the destination database. The centralized database management system130may retrieve1240a first invertible bloom filter for the source data table, the first invertible bloom filter being of the determined size. The centralized database management system130may retrieve1250a second invertible bloom filter for the destination data table, the second invertible bloom filter being of the determined size. The IBF subtracting module440may generate1260a third invertible bloom filter by subtracting the second invertible bloom filter from the first invertible bloom filter, the third invertible bloom filter comprising information associated with an element that is different between the source and the destination data table. The IBF decoding module450may identify1270the different element by decoding the third invertible bloom filter. The database synchronization module460may generate and send instructions, including instructions to perform an operation that synchronizes the first data table with the second data table based on the identified different element. FIG.13illustrates an exemplary process by which the centralized database management system130manages a synchronization process between a source database and a destination database by identifying differences between two snapshots of the source database. The process starts with the centralized database management system130obtaining1310a first invertible bloom filter for a source data table based on a first snapshot of the source data table, where the first snapshot includes information of the source data table captured at a first point in time. The centralized database management system130may store the first invertible bloom filter in the data store410. The centralized database management system130may obtain1330a second invertible bloom filter for the source data table based on a second snapshot of the source data table, the second snapshot including information of the data table captured at a second point in time later than the first point in time. The centralized database management system130may determine1340whether a destination database has outdated information relative to that of the first point in time by performing the following steps: retrieving1350the first invertible bloom filter from the data store410and generating1360a third invertible bloom filter by subtracting the second invertible bloom filter from the first invertible bloom filter, the third invertible bloom filter comprising information associated with a change between the first snapshot and the second snapshot. The IBF decoding module450may identify the change during the time interval between the first point in time and the second point in time by decoding the third invertible bloom filter. The database synchronization module460may send instructions to the destination database, where the instructions comprise information to perform an operation that synchronizes the destination data table with the source data table based on the identified change.
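Tying the sketches together, the FIG.12 flow may be approximated end to end as below. Packing the (key, checksum) tuple into a single integer is an illustrative encoding choice, and a failed decode would be handled by retrying with a larger size, as discussed earlier:

```python
def synchronize(source_rows: list, dest_rows: list, size: int):
    """End-to-end sketch of the FIG. 12 flow using the helpers above:
    encode both tables at the agreed size, subtract, decode, and return
    the differences to reconcile."""
    ibf_src, ibf_dst = new_ibf(size), new_ibf(size)
    for ibf, rows in ((ibf_src, source_rows), (ibf_dst, dest_rows)):
        for row in rows:
            key, checksum = to_key_checksum(row)
            ibf_insert(ibf, (key << 32) | checksum)  # pack tuple as an int
    ok, src_only, dst_only = ibf_decode(ibf_subtract(ibf_src, ibf_dst))
    if not ok:
        raise RuntimeError("IBF too small; retry with a larger size")
    # Recover the primary keys to locate the differing records.
    return [e >> 32 for e in src_only], [e >> 32 for e in dst_only]
```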
Additional Configuration Considerations Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Certain embodiments are described herein as including logic or a number of components, modules, or mechanisms. Modules may constitute either software modules (e.g., code embodied on a machine-readable medium or in a transmission signal) or hardware modules. A hardware module is a tangible unit capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein. In various embodiments, a hardware module may be implemented mechanically or electronically. For example, a hardware module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations. Accordingly, the term "hardware module" should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. As used herein, "hardware-implemented module" refers to a hardware module. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time.
For example, where the hardware modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware modules at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time. Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple of such hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules. Similarly, the methods described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented hardware modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment or as a server farm), while in other embodiments the processors may be distributed across a number of locations. The one or more processors may also operate to support performance of the relevant operations in a "cloud computing" environment or as a "software as a service" (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., application program interfaces (APIs)).
In some example embodiments, the one or more processors or processor-implemented modules may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or processor-implemented modules may be distributed across a number of geographic locations. Some portions of this specification are presented in terms of algorithms or symbolic representations of operations on data stored as bits or binary digital signals within a machine memory (e.g., a computer memory). These algorithms or symbolic representations are examples of techniques used by those of ordinary skill in the data processing arts to convey the substance of their work to others skilled in the art. As used herein, an "algorithm" is a self-consistent sequence of operations or similar processing leading to a desired result. In this context, algorithms and operations involve physical manipulation of physical quantities. Typically, but not necessarily, such quantities may take the form of electrical, magnetic, or optical signals capable of being stored, accessed, transferred, combined, compared, or otherwise manipulated by a machine. It is convenient at times, principally for reasons of common usage, to refer to such signals using words such as "data," "content," "bits," "values," "elements," "symbols," "characters," "terms," "numbers," "numerals," or the like. These words, however, are merely convenient labels and are to be associated with appropriate physical quantities. Unless specifically stated otherwise, discussions herein using words such as "processing," "computing," "calculating," "determining," "presenting," "displaying," or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information. As used herein, any reference to "one embodiment" or "an embodiment" means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment. Some embodiments may be described using the expressions "coupled" and "connected" along with their derivatives. It should be understood that these terms are not intended as synonyms for each other. For example, some embodiments may be described using the term "connected" to indicate that two or more elements are in direct physical or electrical contact with each other. In another example, some embodiments may be described using the term "coupled" to indicate that two or more elements are in direct physical or electrical contact. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. The embodiments are not limited in this context. As used herein, the terms "comprises," "comprising," "includes," "including," "has," "having" or any other variation thereof, are intended to cover a non-exclusive inclusion.
For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, "or" refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present). In addition, use of "a" or "an" is employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the invention. This description should be read to include one or at least one, and the singular also includes the plural unless it is obvious that it is meant otherwise. Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for a system and a process for improving training data of a machine learning model through the disclosed principles herein. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope defined herein.
56,807
11860892
While embodiments are described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the embodiments are not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit embodiments to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words "include", "including", and "includes" mean including, but not limited to. DETAILED DESCRIPTION The techniques described herein may implement offline index builds for database tables. Data sets may be distributed across one or more locations in a storage system, in some embodiments. In this way, clients can access and independently update different portions of the data set at the one or more locations in the storage system, in some embodiments. The arrangement of the data set may be optimal for some access requests (e.g., queries based on indexed fields or values in a table). However, to optimally process other access requests (e.g., queries based on non-indexed fields or values in a table), portions of the data set (or the entire data set) may be replicated in one or more other locations (e.g., different storage nodes, systems, or hosts) in a different arrangement, subset, or format that is more performant for performing the other type of access request, in some embodiments. Instead of relying upon the resources of a source storage location for a data set to create a new replica of the data set, such as a projected data subset like a secondary index as discussed below, offline techniques that index or otherwise determine which portions of a source data set to replicate to the new replica using other resources, such as a separate system component or node, may be implemented. In this way, a majority of data that has to be replicated to the new replica can be replicated away from the source storage location, reducing the burden that transferring data places on source storage location resources (e.g., storage nodes as discussed below) and freeing them to perform other operations, such as client application requests to read or write to the data set. Additionally, offline techniques may reduce the state and/or other tracking information that is maintained by source storage location resources. Offline techniques may also reduce the complexity of failures at source storage location resources and provide support for optimizations that increase the performance of creating the replica of the data by allowing for parallel creation techniques, among others. FIG.1is a logical block diagram illustrating offline index builds for database tables, according to some embodiments. Source data set112may be a database table (or tables), or other set, collection, or grouping of data item(s)114that may be also stored in a second location, such as data store130, as projected data subset132. For example, as discussed in detail below with regard toFIGS.2-5, source data set112may be one or more database tables and projected data subset132may be a secondary index.
Updates102may be accepted and performed at data store110that are directed to source data set112, which may be various types of actions, modifications, or changes to source data set112(e.g., insert new item(s) (or attributes of items), modify item(s), delete items (or attributes of items)). These updates may be performed in some ordering at data store110. For example, updates102may be performed in a FIFO ordering where each update is performed as it is received. To create a new replica of a source data set, an "offline" copy of source data set112may be used. For example, data store120may be another data storage system (or set of resources) which may store source data set copy122, including item(s)124. In at least some embodiments, source data set copy122may be a snapshot or other version of source data set112associated with a particular point in time (e.g., the time at which the copy122is created). Thus, item(s)124may or may not be consistent with item(s)114(e.g., including additional or fewer items). Source data set copy122may be used to create projected data subset132by evaluating item(s)124according to schema150to replicate those items106that are specified by or otherwise satisfy the schema. For example, items with certain attribute values that are specified by a schema may be replicated (e.g., a location attribute, as in a secondary index that orders item(s)114by location instead of by customer identifier) whereas other attribute values (or items) may not be replicated (e.g., items with a particularly specified location attribute, such as a postal code, may be replicated whereas items with different postal codes may not be replicated). The replicated items116may be sent, written, or otherwise stored to data store130for inclusion in projected data subset132. In this way, in scenarios where a large projected data subset132is created, a large majority of data can be replicated from data store120(which may not be "online" and accepting/performing access requests to source data set copy122, unlike data store110which may be accepting access requests to source data set112, such as updates102). In order to keep projected data subset132consistent with source data set112, some of updates102may be replicated to data store130to update projected data subset132according to schema150for the projected data subset132. For example, as noted above, items with certain attribute values that are specified by a schema may be replicated (e.g., a location attribute) whereas other attribute values (or items) may not be replicated. Thus, only some updates102may be replicated in some scenarios (though all or none of the received updates may be replicated according to whether the schema150for the projected data subset132includes the items affected by the updates).
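Schema-driven filtering of updates, as in the postal-code example above, may be sketched as follows. Representing schema150as a predicate plus a projection, and all attribute names and values here, are assumptions for illustration:

```python
# Hypothetical representation of schema 150 as a predicate plus a
# projection; the postal-code value and attribute names are illustrative.
SCHEMA = {
    "predicate": lambda item: item.get("postal_code") == "11973",
    "projected_attrs": ("postal_code", "customer_id", "order_total"),
}

def maybe_replicate(item: dict, schema: dict = SCHEMA):
    """Return the projected item to send to the projected data subset,
    or None when the schema excludes it (no replication at all)."""
    if not schema["predicate"](item):
        return None
    return {k: item[k] for k in schema["projected_attrs"] if k in item}
```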
Version comparison for projected item selection140may handle conflicts between replicated updates104and the replicated items106from the "offline" source data set copy122with items124. For example, a timestamp, sequence number or other value may be assigned to replicated updates104and replicated items106when received (e.g., at data store110), when created (e.g., at data store120, such as when source data set copy122was created), when determined to be propagated, or using some other assignment technique. Such values may be a version for the update, which may be used in a condition supplied with the conditional operation to data store130. If the condition is satisfied, then the operation may be performed. For instance, a replicated item106could conflict with a replicated update104to that same item. If the replicated item106were received after the replicated update104for the item, the older version of replicated item106could potentially overwrite a newer version of the item described in replicated update104if not for the version comparison140performed at data store130. Atomicity of conditional operations may, in some embodiments, prevent a different request or operation from modifying a condition evaluated to be satisfied (or not) (e.g., by modifying an item version136) between when the condition is evaluated and when the update is applied as part of the conditional operation. Thus, if an update has a version condition that, to be satisfied, requires the update's version to be later than the version associated with the item to which the update is applied, that condition check can prevent out-of-order updates with respect to replicated items106(or other updates) from overwriting or otherwise becoming visible to client applications that access projected data subset132. For example, item version(s)136may be stored as system attributes or values, in some embodiments, which are not visible to client applications of data store130. Instead, conditional operations received as part of propagation may utilize the item version(s)136as the value to which update versions are compared.
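Version comparison for projected item selection140may be illustrated with a toy conditional write. The dictionary store, the hidden _version attribute, and the function name are hypothetical, and in the service the check-and-apply would be atomic:

```python
store = {}   # key -> {"item": ..., "_version": ...}; _version is hidden

def conditional_put(key, item, version) -> bool:
    """Apply the write only when its version is newer than the stored
    one; in the service this check-and-apply is atomic."""
    current = store.get(key)
    if current is not None and current["_version"] >= version:
        return False          # out-of-order replica or stale offline item
    store[key] = {"item": item, "_version": version}
    return True

conditional_put("k1", {"city": "Upton"}, version=5)    # replicated update
conditional_put("k1", {"city": "stale"}, version=3)    # older item: rejected
assert store["k1"]["item"] == {"city": "Upton"}
```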
Please note that previous descriptions of a data store, data set, and conditional propagation are not intended to be limiting, but are merely provided as logical examples. This specification begins with a general description of a provider network that may implement a database service that may implement offline index builds. Then various examples of a database service are discussed, including different components/modules, or arrangements of components/modules, that may be employed as part of implementing the database service, in some embodiments. A number of different methods and techniques to implement offline index builds for databases are then discussed, some of which are illustrated in accompanying flowcharts. Finally, a description of an example computing system upon which the various components, modules, systems, devices, and/or nodes may be implemented is provided. Various examples are provided throughout the specification. FIG.2is a logical block diagram illustrating a provider network offering a database service that may implement offline index builds for database tables, according to some embodiments. Provider network200may be a private or closed system, in some embodiments, or may be set up by an entity such as a company or a public sector organization to provide one or more services (such as various types of cloud-based storage) accessible via the Internet and/or other networks to clients270, in another embodiment. In some embodiments, provider network200may be implemented in a single location or may include numerous data centers hosting various resource pools, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment and the like (e.g., computing system1000described below with regard toFIG.8), needed to implement and distribute the infrastructure and storage services offered by the provider network200. In some embodiments, provider network200may implement various computing resources or services, such as database service210(e.g., a non-relational (NoSQL) database, relational database service or other database service that may utilize collections of items (e.g., tables that include items)), and other services (not illustrated), such as data flow processing services and/or other large scale data processing techniques, data storage services (e.g., an object storage service, block-based storage service, or data storage service that may store different types of data for centralized access), virtual compute services, and/or any other type of network-based services (which may include various other types of storage, processing, analysis, communication, event handling, visualization, and security services). In various embodiments, the components illustrated inFIG.2may be implemented directly within computer hardware, as instructions directly or indirectly executable by computer hardware (e.g., a microprocessor or computer system), or using a combination of these techniques. For example, the components ofFIG.2may be implemented by a system that includes a number of computing nodes (or simply, nodes), in some embodiments, each of which may be similar to the computer system embodiment illustrated inFIG.8and described below. In some embodiments, the functionality of a given system or service component (e.g., a component of database service210) may be implemented by a particular node or may be distributed across several nodes. In some embodiments, a given node may implement the functionality of more than one service system component (e.g., more than one data store component). Database service210may implement various types of distributed database services, in some embodiments, for storing, accessing, and updating data in tables hosted in a key-value database. Such services may be enterprise-class database systems that are highly scalable and extensible. In some embodiments, access requests (e.g., requests to get/obtain items, put/insert items, delete items, update or modify items, scan multiple items) may be directed to a table in database service210that is distributed across multiple physical resources, and the database system may be scaled up or down on an as-needed basis. In some embodiments, clients/subscribers may submit requests in a number of ways, e.g., interactively via a graphical user interface (e.g., a console) or a programmatic interface to the database system. In some embodiments, database service210may provide a RESTful programmatic interface in order to submit access requests (e.g., to get, insert, delete, or scan data). In some embodiments, a query language (e.g., Structured Query Language (SQL)) may be used to specify access requests. In some embodiments, clients270may encompass any type of client configurable to submit network-based requests to provider network200via network260, including requests for database service210(e.g., to access item(s) in a table or secondary index in database service210). For example, in some embodiments a given client270may include a suitable version of a web browser, or may include a plug-in module or other type of code module that executes as an extension to or within an execution environment provided by a web browser.
Alternatively, in a different embodiment, a client270may encompass an application such as a database client/application (or user interface thereof), a media application, an office application or any other application that may make use of a database in database service210to store and/or access the data to implement various applications. In some embodiments, such an application may include sufficient protocol support (e.g., for a suitable version of Hypertext Transfer Protocol (HTTP)) for generating and processing network-based services requests without necessarily implementing full browser support for all types of network-based data. That is, client270may be an application that interacts directly with provider network200, in some embodiments. In some embodiments, client270may generate network-based services requests according to a Representational State Transfer (REST)-style network-based services architecture, a document- or message-based network-based services architecture, or another suitable network-based services architecture. Note that in some embodiments, clients of database service210may be implemented within provider network200(e.g., applications hosted on a virtual compute service). In some embodiments, clients of database service210may be implemented on resources within provider network200(not illustrated). For example, a client application may be hosted on a virtual machine or other computing resources implemented as part of another provider network service that may send access requests to database service210via an internal network (not illustrated). In some embodiments, a client270may provide access to provider network200to other applications in a manner that is transparent to those applications. For example, client270may integrate with a database on database service210. In such an embodiment, applications may not need to be modified to make use of a service model that utilizes database service210. Instead, the details of interfacing to the database service210may be coordinated by client270. Client(s)270may convey network-based services requests to and receive responses from provider network200via network260, in some embodiments. In some embodiments, network260may encompass any suitable combination of networking hardware and protocols necessary to establish network-based communications between clients270and provider network200. For example, network260may encompass the various telecommunications networks and service providers that collectively implement the Internet. In some embodiments, network260may also include private networks such as local area networks (LANs) or wide area networks (WANs) as well as public or private wireless networks. For example, both a given client270and provider network200may be respectively provisioned within enterprises having their own internal networks. In such an embodiment, network260may include the hardware (e.g., modems, routers, switches, load balancers, proxy servers, etc.) and software (e.g., protocol stacks, accounting software, firewall/security software, etc.) necessary to establish a networking link between given client(s)270and the Internet as well as between the Internet and provider network200. It is noted that in some embodiments, client(s)270may communicate with provider network200using a private network rather than the public Internet. Database service210may implement request routing nodes250, in some embodiments.
Request routing nodes250may receive and parse client access requests, in various embodiments, in order to determine various features of the request and to authenticate, throttle, and/or dispatch access requests, among other things, in some embodiments. Database service210may implement propagation nodes290, discussed in detail below with regard toFIGS.3-5, which may handle propagation sessions with storage nodes, manage hot partitions, retry logic, checkpointing, and various other operations to implement propagation of updates to a secondary index. In some embodiments, database service210may implement control plane220to implement one or more administrative components, such as automated admin instances or nodes (which may provide a variety of visibility and/or control functions). In various embodiments, control plane220may direct the performance of different types of control plane operations among the nodes, systems, or devices implementing database service210, in some embodiments. Control plane220may provide visibility and control to system administrators via administrator console226, in some embodiments. Administrator console226may allow system administrators to interact directly with database service210(and/or the underlying system). In some embodiments, the administrator console226may be the primary point of visibility and control for database service210(e.g., for configuration or reconfiguration by system administrators). For example, the administrator console may be implemented as a relatively thin client that provides display and control functionality to system administrators and/or other privileged users, and through which system status indicators, metadata, and/or operating parameters may be observed and/or updated. Control plane220may provide an interface or access to information stored about one or more detected control plane events, such as data backup or other management operations for a table, at database service210, in some embodiments. Storage node management224may provide resource allocation, in some embodiments, for storing additional data in tables submitted to database service210. For instance, control plane220may communicate with processing nodes to initiate the performance of various control plane operations, such as moves of table partitions, splits of table partitions, updating tables, deleting tables, creating secondary indexes, etc. In some embodiments, control plane220may include a node recovery feature or component that handles failure events for storage nodes230, propagation nodes290and request routing nodes250(e.g., adding new nodes, removing failing or underperforming nodes, deactivating or decommissioning underutilized nodes, etc.). Various durability, resiliency, control, or other operations may be directed by control plane220. For example, storage node management224may detect split, copy, or move events for partitions at storage nodes in order to ensure that the storage nodes satisfy a minimum performance level for performing access requests. For instance, in various embodiments, there may be situations in which a partition (or a replica thereof) may need to be copied, e.g., from one storage node to another. For example, if there are three replicas of a particular partition, each hosted on a different physical or logical machine, and one of the machines fails, the replica hosted on that machine may need to be replaced by a new copy of the partition on another machine.
In another example, if a particular machine that hosts multiple partitions of one or more tables experiences heavy traffic, one of the heavily accessed partitions may be moved (using a copy operation) to a machine that is experiencing less traffic in an attempt to more evenly distribute the system workload and improve performance. In some embodiments, storage node management224may perform partition moves using a physical copying mechanism (e.g., a physical file system mechanism, such as a file copy mechanism) that copies an entire partition from one machine to another, rather than copying a snapshot of the partition data row by row. While the partition is being copied, write operations targeting the partition may be logged. During the copy operation, any logged write operations may be applied to the partition by a catch-up process at periodic intervals (e.g., at a series of checkpoints). Once the entire partition has been copied to the destination machine, any remaining logged write operations (i.e., any write operations performed since the last checkpoint) may be performed on the destination partition by a final catch-up process. Therefore, the data in the destination partition may be consistent following the completion of the partition move, in some embodiments. In this way, storage node management224can move partitions amongst storage nodes230while the partitions being moved are still "live" and able to accept access requests.
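The copy-then-catch-up mechanism described above may be modeled with a toy sketch. The in-memory dictionary stands in for the physical file copy, and the checkpoint threshold is an assumed value:

```python
CATCHUP_THRESHOLD = 10   # assumed backlog size that triggers final catch-up

class PartitionMove:
    """Toy model of a live partition move: bulk copy first, then apply
    writes that were logged while the copy was in flight."""
    def __init__(self, source_items: dict):
        self.dest = dict(source_items)   # stands in for the physical copy
        self.log = []                    # writes logged during the move

    def log_write(self, key, value):
        self.log.append((key, value))    # the partition stays "live"

    def catch_up(self):
        # Periodic checkpoints drain the backlog while writes continue.
        while len(self.log) > CATCHUP_THRESHOLD:
            batch, self.log = self.log, []
            for key, value in batch:
                self.dest[key] = value
        # Final catch-up applies whatever remains since the last checkpoint.
        for key, value in self.log:
            self.dest[key] = value
        self.log.clear()
```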
For example, partition size and heat, where heat may be tracked by internally measured metrics (such as IOPS), externally measured metrics (such as latency), and/or other factors, may be evaluated with respect to various performance thresholds. System anomalies may also trigger split or move events (e.g., network partitions that disrupt communications between replicas of a partition in a replica group), in some embodiments. Storage node management224may detect storage node failures, or provide other anomaly control, in some embodiments. If the partition replica hosted on the storage node on which a fault or failure was detected was the master for its replica group, a new master may be elected for the replica group (e.g., from amongst remaining storage nodes in the replica group). Storage node management224may initiate creation of a replacement partition replica while the source partition replica is live (i.e. while one or more of the replicas of the partition continue to accept and service requests directed to the partition), in some embodiments. In various embodiments, the partition replica on the faulty storage node may be used as the source partition replica, or another replica for the same partition (on a working machine) may be used as the source partition replica, e.g., depending on the type and/or severity of the detected fault. Control plane220may implement table/index creation and management222to manage the creation (or deletion) of database tables and/or secondary indexes hosted in database service210, in some embodiments. For example, a request to create a secondary index may be submitted via administrator console226(or other database service210interface) which may initiate performance of a workflow to generate appropriate system metadata (e.g., a table identifier that is unique with respect to all other tables in database service210, secondary index performance or configuration parameters) and/or to perform various other operations for creating a secondary index as discussed below. Backup management228may handle or manage the creation of backup requests to make copies as of a version or point-in-time of a database, as backup partitions242in storage service240, which as discussed above with regard toFIG.1and below with regard toFIGS.3-7may be used to perform an offline build of a replicated data set like a secondary index. In some embodiments, database service210may also implement a plurality of storage nodes230, each of which may manage one or more partitions of a database table or secondary index on behalf of clients/users or on behalf of database service210, which may be stored in database storage234(on storage devices attached to storage nodes230or in network storage accessible to storage nodes230). Storage nodes230may implement item request processing232, in some embodiments. Item request processing232may perform various operations (e.g., read/get, write/update/modify/change, insert/add, or delete/remove) to access individual items stored in tables in database service210, in some embodiments. In some embodiments, item request processing232may support operations performed as part of a transaction, including techniques such as locking items in a transaction and/or ordering requests to operate on an item as part of a transaction along with other requests according to timestamps (e.g., timestamp ordering) so that storage nodes230can accept or reject the transaction-related requests.
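The threshold evaluation for split and move events described above might look like the following sketch; the metric names and limits are illustrative assumptions, not values from the embodiment:

```c
#include <stdbool.h>

/* Illustrative per-partition metrics; a real system would track more. */
typedef struct {
    long long size_bytes;   /* partition size           */
    long iops;              /* internally measured heat */
    double latency_ms;      /* externally measured heat */
} partition_metrics_t;

/* Assumed limits; actual thresholds would be operator-configured. */
#define MAX_SIZE_BYTES (10LL * 1024 * 1024 * 1024)  /* 10 GB */
#define MAX_IOPS       5000L
#define MAX_LATENCY_MS 50.0

/* A partition that has grown too large is a split candidate. */
bool should_split(const partition_metrics_t *m) {
    return m->size_bytes > MAX_SIZE_BYTES;
}

/* A "hot" partition (by internal or external heat) is a move candidate. */
bool should_move(const partition_metrics_t *m) {
    return m->iops > MAX_IOPS || m->latency_ms > MAX_LATENCY_MS;
}
```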
In some embodiments, item request processing232may maintain database partitions234according to a database model (e.g., a non-relational, NoSQL, or other key-value database model). In some embodiments, item request processing232may perform operations to update, store, and/or send an update replication log to propagation node(s)290, as discussed below with regard toFIG.3. In some embodiments, database service210may provide functionality for creating, accessing, and/or managing tables or secondary indexes at nodes within a multi-tenant environment. For example, database partitions234may store table item(s) from multiple tables, indexes, or other data stored on behalf of different clients, applications, users, accounts or non-related entities, in some embodiments. In addition to dividing or otherwise distributing data (e.g., database tables) across storage nodes230in separate partitions, storage nodes230may also be used in multiple different arrangements for providing resiliency and/or durability of data as part of larger collections or groups of resources. A replica group, for example, may be composed of a number of storage nodes maintaining a replica of a particular portion of data (e.g., a partition) for the database service210, as discussed below with regard toFIG.3. Moreover, different replica groups may utilize overlapping nodes, where a storage node230may be a member of multiple replica groups, maintaining a replica for each of those groups even though the other storage node230members differ from group to group. Different models, schemas or formats for storing data for database tables in database service210may be implemented, in some embodiments. For example, in some embodiments, non-relational, NoSQL, semi-structured, or other key-value data formats may be implemented. In at least some embodiments, the data model may include tables containing items that have one or more attributes. In such embodiments, each table maintained on behalf of a client/user may include one or more items, and each item may include a collection of one or more attributes. The attributes of an item may be a collection of one or more name-value pairs, in any order, in some embodiments. In some embodiments, each attribute in an item may have a name, a type, and a value. In some embodiments, the items may be managed by assigning each item a primary key value (which may include one or more attribute values), and this primary key value may also be used to uniquely identify the item. In some embodiments, a large number of attributes may be defined across the items in a table, but each item may contain a sparse set of these attributes (with the particular attributes specified for one item being unrelated to the attributes of another item in the same table), and all of the attributes may be optional except for the primary key attribute(s) and version attributes. In some embodiments, the tables maintained by the database service210(and the underlying storage system) may have no pre-defined schema other than their reliance on the primary key. Metadata or other system data for tables may also be stored as part of database partitions using similar partitioning schemes and using similar indexes, in some embodiments. Database service210may provide an application programming interface (API) for requesting various operations targeting tables, indexes, items, and/or attributes maintained on behalf of storage service clients.
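One way to picture the schemaless item model described above is the following sketch; the field layout and the fixed attribute array are simplifying assumptions:

```c
/* An attribute is a name-value pair; values are kept as text here. */
typedef enum { ATTR_STRING, ATTR_NUMBER } attr_type_t;

typedef struct {
    const char *name;
    attr_type_t type;
    const char *value;
} attribute_t;

/* An item is a sparse collection of attributes; only the primary key
 * (and a version attribute used for ordering writes) is mandatory.   */
typedef struct {
    const char *primary_key;
    long version;             /* e.g., a timestamp                  */
    attribute_t attrs[16];    /* sparse: only attr_count slots used */
    int attr_count;
} item_t;
```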
In some embodiments, the service (and/or the underlying system) may provide both control plane APIs and data plane APIs. The control plane APIs provided by database service210(and/or the underlying system) may be used to manipulate table-level entities, such as tables and indexes, and/or to re-configure various tables. These APIs may be called relatively infrequently (when compared to data plane APIs). In some embodiments, the control plane APIs provided by the service may be used to create tables or secondary indexes for tables at separate storage nodes, import tables, export tables, delete tables or secondary indexes, explore tables or secondary indexes (e.g., to generate various performance reports or skew reports), modify table configurations or operating parameters for tables or secondary indexes, and/or describe tables or secondary indexes, and create and/or associate functions with tables. In some embodiments, control plane APIs that perform updates to table-level entries may invoke asynchronous workflows to perform a requested operation. Methods that request "description" information (e.g., via a describeTables API) may simply return the current known state of the tables or secondary indexes maintained by the service on behalf of a client/user. The data plane APIs provided by database service210(and/or the underlying system) may be used to perform item-level operations, such as requests for individual items or for multiple items in one or more tables, such as queries, batch operations, and/or scans. The APIs provided by the service described herein may support request and response parameters encoded in one or more industry-standard or proprietary data exchange formats, in different embodiments. For example, in various embodiments, requests and responses may adhere to a human-readable (e.g., text-based) data interchange standard (e.g., JavaScript Object Notation, or JSON), or may be represented using a binary encoding (which, in some cases, may be more compact than a text-based representation). In various embodiments, the system may supply default values (e.g., system-wide, user-specific, or account-specific default values) for one or more of the input parameters of the APIs described herein. Database service210may include support for some or all of the following operations on data maintained in a table (or index) by the service on behalf of a storage service client: perform a transaction (inclusive of one or more operations on one or more items in one or more tables), put (or store) an item, get (or retrieve) one or more items having a specified primary key, delete an item, update the attributes in a single item, query for items using an index, and scan (e.g., list items) over the whole table, optionally filtering the items returned, or conditional variations on the operations described above that are atomically performed (e.g., conditional put, conditional get, conditional delete, conditional update, etc.). For example, the database service210(and/or underlying system) described herein may provide various data plane APIs for performing item-level operations, such as a TransactItems API, PutItem API, a GetItem (or GetItems) API, a DeleteItem API, and/or an UpdateItem API, as well as one or more index-based seek/traversal operations across multiple items in a table, such as a Query API and/or a Scan API. Storage service240may be a file, object-based, or other type of storage service that may be used to store partition snapshots242as backups.
Storage service240may implement striping, sharding, or other data distribution techniques so that different portions of a partition backup242are stored across multiple locations (e.g., at separate nodes). In various embodiments, storage nodes230may implement partition backup processing233to store partition snapshots242(e.g., by storing a copy of a partition234as of a point-in-time as a snapshot object242in storage service240). In at least some embodiments, update logs244(e.g., created by updates for database partitions234by item request processing232) may be stored as objects in storage service240. FIG.3is a logical block diagram illustrating interactions to perform offline index builds for database tables in a database service, according to some embodiments. Table/index creation management222may receive a request to create a secondary index (or multiple ones), as indicated at341. Index management222may send a request to create an index snapshot at secondary index storage nodes343to backup management228. Backup management228may send an operation to create an index snapshot according to creation timestamp345to storage service240. For example, a creation timestamp, as discussed below with regard toFIG.4, may occur after timestamp ordering is enabled for a source database table. In some embodiments, creation of the snapshot may include taking an already created snapshot and applying a log of updates also stored in storage service240(as discussed above) to bring the snapshot up to a state consistent with the creation timestamp. In some embodiments, backup management228may create the index snapshot by applying the schema when creating the snapshot (e.g., arranging, excluding, or other operations as specified by the schema) so that the created index snapshot is a version of the secondary index consistent with the creation timestamp. In other embodiments, as noted below, backup management228may evaluate a created index snapshot to then determine what items to replicate. Backup management228may then replicate items obtained from the snapshot that satisfy a schema for the secondary index, using conditional operations347, to storage nodes for secondary index320. For example, backup management228may scan the created snapshot and evaluate each item with respect to the schema by issuing reads, scans, queries, or various other access requests with respect to the items of the snapshot in storage service240. Storage node(s) for secondary index320may be assigned to the secondary index by table/index creation/management222(not illustrated), in some embodiments. Index creation management may start replication from log replication timestamp349to propagators310. As discussed below with regard toFIG.4, the log replication timestamp may occur (in time) before the creation timestamp to create an overlap between the updates replicated from the update log and the version of the source database table in the created index snapshot. Propagators310may initiate a log stream from the log replication timestamp351from storage nodes330for the source table (which may send updates as a stream of log records). For example, storage nodes for source table330may determine what updates in an update log occur on or after the log replication timestamp and send them to propagation node(s)310. Propagators310may replicate updates that satisfy the schema for the secondary index with conditional operations353to storage nodes320.
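A sketch of the snapshot-side replication at347, assuming a hypothetical schema predicate and a stubbed conditional-put call; the condition (only write if no newer version is present) is what allows the snapshot scan to run concurrently with log propagation:

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct { const char *key; long version; const char *data; } snap_item_t;

/* Assumed hook: does this item belong in the secondary index schema? */
typedef bool (*schema_pred_t)(const snap_item_t *);

/* Stub standing in for the storage-node call: write the item unless a
 * newer version is already stored there (version check elided).       */
static bool conditional_put(const snap_item_t *item) {
    (void)item;
    return true;
}

/* Scan the created snapshot and replicate only qualifying items. */
void replicate_from_snapshot(const snap_item_t *snapshot, size_t n,
                             schema_pred_t in_schema) {
    for (size_t i = 0; i < n; i++) {
        if (!in_schema(&snapshot[i]))
            continue;   /* item is not projected by the schema */
        /* A rejected put means the log stream already delivered a newer
         * version of this item, so the snapshot copy is simply dropped. */
        (void)conditional_put(&snapshot[i]);
    }
}
```

The same schema filter applies on the log-stream side, which is why some updates can be ignored or dropped outright, as described next.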
Some updates in the log stream, for instance, may not be specified for inclusion in the secondary index according to the schema and thus may be ignored or dropped. Backup management228may provide an indication of completion from the snapshot355to table/index creation/management222, in various embodiments. For example, backup management228may determine that no more items are to be replicated from the snapshot and in response send an indication of completion of the creation of the secondary index from the snapshot. Table/index creation/management222may provide an indication that the secondary index is created357to a client in response to the creation from snapshot355, in some embodiments. FIG.4is a logical block diagram illustrating a timeline of timestamps for performing offline index builds for database tables, according to some embodiments. A source database table may be updated over time, as indicated410. In order to ensure that all updates are replicated to a secondary index using a snapshot, different timestamps may be used to create an overlap that may prevent an in-flight update from being left out of a secondary index. For example, timestamp ordering may be enabled at time412. This may occur when the source database table is created, when the secondary index is created, or in response to some other request to enable timestamp ordering. Prior to timestamp ordering enabled412, updates may be received without an assigned timestamp such that version comparisons could not be performed, in some embodiments. Snapshot creation timestamp may be at416. Snapshot creation timestamp416may be associated with a time that a secondary index creation request is received, in various embodiments. Log replication timestamp may be at414. In various embodiments, log replication timestamp414may be determined (e.g., by table/index creation/management222) to provide a minimum amount of overlap (e.g., 10 seconds, 1 hour, etc.) in time, so that replication of updates from the log stream, which may occur from420onwards, overlaps with updates that should also be included in the snapshot created according to snapshot creation timestamp416. FIG.5is a logical block diagram illustrating example interactions to create multiple secondary indexes offline and in parallel, according to some embodiments. Backup management228may create a snapshot at snapshot creation timestamp510in storage service240, as discussed above. In some embodiments, this snapshot may not be specific to storing the items of any one secondary index, but instead may be a snapshot of an entire source database table (or partition of a database table). Backup management228may access the snapshot to replicate540items to their different storage nodes to create different secondary indexes522according to different schemas in parallel. For example, backup management228may replicate items540a(e.g., by location attribute) according to schema530a(e.g., which specifies the location attribute) to storage nodes520ato store as part of partition(s)522afor the secondary index. Similarly, backup management228may replicate items540b(e.g., by category attribute) according to schema530b(e.g., which specifies the category attribute) to storage nodes520bto store as part of partition(s)522bfor the secondary index, and backup management228may replicate items540c(e.g., ordered by date attribute instead of user identifier) according to schema530c(e.g., which specifies the ordering by date attribute) to storage nodes520cto store as part of partition(s)522cfor the secondary index.
In this way, a single snapshot510can be used to create multiple secondary indexes. Propagation nodes, such as propagation nodes550a,550b, and550c, may respectively propagate log stream updates560a,560b, and560cto update (conditionally as described above with regard toFIG.3) secondary index partition(s)522a,522b, and522c. In some embodiments, a single propagation node may be assigned to a source table (or source partition of a table) and may perform the propagation of log stream updates560to each of the different secondary index partition(s)522a,522b, and522c(not illustrated). In at least some embodiments, a metric or other indicator of the progress of a secondary index offline build may be provided to a user via a metrics collection/monitoring service of a provider network200or via an interface for database service210(e.g., by table/index creation/management222). For example, the number of partitions to fill in the secondary index, represented by K, may be used to determine this metric by calculating the amount of data as K multiplied by the partition size (e.g., 10 GB) and the speed at which partitions can be filled as K multiplied by the write bandwidth (e.g., 1,000 KB/s), and then determining the total time as the data divided by the speed. In some embodiments, the progress metric may be determined from an amount of time elapsed relative to the estimated total time. The examples of a database that implements offline index builds for database tables as discussed inFIGS.2-5above have been given in regard to a database service (e.g., relational database, document database, non-relational database, etc.). However, various other types of database systems or storage systems can advantageously implement offline builds for projected data subsets, in other embodiments.FIG.6is a high-level flowchart illustrating various methods and techniques to implement offline builds for projected data subsets, according to some embodiments. These techniques, as well as the techniques discussed with regard toFIG.7, may be implemented using components or systems as described above with regard toFIGS.2-5, as well as other types of databases or storage systems, and thus the following discussion is not intended to be limiting as to the other types of systems that may implement the described techniques. As indicated at610, a request may be received to create a second data set from a first data set stored in a first data store, the second data set being created according to a schema that projects a subset of data from the first data set to the second data set, in some embodiments. For example, as discussed above with regard toFIG.1, a schema for a projected data subset may provide a different arrangement or other ordering of items stored in a source data set, such as a secondary index discussed above. In some embodiments, the same number of items may be replicated, but only a subset of attributes may be replicated according to the schema (e.g., a source table with 5 columns may be replicated to a projected data subset that only includes 2 columns). The request may be formatted according to an interface for the first data store (e.g., an API, command line interface, GUI, etc.) and may specify or identify the schema as well as the source data set, the first data set. In some embodiments, the destination for the replicated data subset may be specified as part of the request, and/or the third data store (e.g., the data storage service or other data store separate from the first data store) that stores the copy of the first data set may be specified.
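Plugging the example figures from the progress metric above into a quick calculation (a sketch; the value of K and the elapsed time are made-up inputs):

```c
#include <stdio.h>

int main(void) {
    long k = 4;              /* partitions to fill (assumed)      */
    double part_gb = 10.0;   /* partition size from the example   */
    double bw_kb_s = 1000.0; /* per-partition write bandwidth     */

    double total_kb = k * part_gb * 1024 * 1024; /* data to move             */
    double speed_kb = k * bw_kb_s;               /* partitions fill in parallel */
    double total_s  = total_kb / speed_kb;       /* estimated total time     */

    double elapsed_s = 3600.0;                   /* assumed elapsed time     */
    printf("estimated total: %.0f s, progress: %.1f%%\n",
           total_s, 100.0 * elapsed_s / total_s);
    return 0;
}
```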
As indicated at620, the second data set may be created from a copy of the first data set stored in a third data store, in some embodiments. For example, the copy may be created (e.g., as a snapshot) in response to the request to create the second data set, in some embodiments. In some embodiments, the copy may exist before the request to create the second data set. The first data set may be available for updates while the second data set is being created, in various embodiments. Therefore, conflicts between updates applicable to the second data set that are received while the second data set is being created from the copy, and items replicated from the copy of the first data set itself, may need to be handled. For example, as indicated at630, updates performed to the first data set may be replicated to the second data set according to the schema, in some embodiments. As indicated at640, items from the copy of the first data set in the third data store may be replicated to the second data set according to the schema, in various embodiments. If no conflict between a replicated item and replicated updates is detected, then replication may continue, as indicated at650. For example, as discussed above, replication may involve performing conditional updates using version identifiers. If a version identifier in a conditional request does not satisfy the condition, then the request may fail. For example, as indicated at660, a version identifier (e.g., a timestamp or other indication of the order in which versions of the item should be made visible at the second data set) for the replicated update may be compared with a version identifier for the replicated item. If the replicated update occurred after creation of the item, as indicated by the version identifier comparison, then the replicated update may be selected to store in the second data set, as indicated at672(e.g., as either the value to retain or to overwrite an existing value). Similarly, if the replicated update occurred before creation of the item, as indicated by the version identifier comparison, then the replicated item may be selected to store in the second data set, as indicated at674(e.g., as either the value to retain or to overwrite an existing value). Replication may continue until replication of items from the copy is complete. Then, as indicated by the positive exit from680, replication of updates alone may continue, as indicated at690. For example, the update log stream may still be replicated by a propagation node even if no more items are replicated from a snapshot in a data storage service. FIG.7is a high-level flowchart illustrating various methods and techniques to initialize propagation of updates at a new propagation node, according to some embodiments. As indicated at710, a request to create a secondary index for a database table may be received, in various embodiments. As indicated at720, a determination may be made as to whether timestamp ordering is enabled. For example, system or table configuration data may indicate whether or not timestamp ordering is enabled. If not, then timestamp ordering may be enabled for the database table, as indicated at722, in various embodiments. For example, timestamps may begin to be assigned to updates that are received for the database table, in some embodiments. As indicated at730, storage node(s) for the secondary index may be allocated, in some embodiments.
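A sketch of the version comparison at660-674above, assuming timestamps serve as the version identifiers; whichever value carries the later version wins regardless of arrival order:

```c
/* A replicated value tagged with its version identifier. */
typedef struct { const char *key; long version; const char *value; } record_t;

/* Returns the record to store in the second data set: the replicated
 * update if it post-dates the replicated item (672), otherwise the
 * replicated item from the copy (674).                                */
const record_t *resolve(const record_t *replicated_item,
                        const record_t *replicated_update) {
    if (replicated_update->version > replicated_item->version)
        return replicated_update;  /* update occurred after item creation */
    return replicated_item;        /* item already reflects newer state   */
}
```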
For example, a control plane or other component of a database system may identify storage nodes with capacity to store a partition of the secondary index to be created (e.g., either with other partitions for other tables or secondary indexes in multi-tenant fashion, or as a dedicated storage node for the secondary index that does not store other partitions). As indicated at740, a snapshot creation timestamp and log replication timestamp may be determined for the secondary index, in various embodiments. For example, the creation timestamp may be a timestamp associated with the creation request of the secondary index, and the replication timestamp may be a timestamp that occurs some amount of time prior to the creation timestamp (e.g., according to a specified or fixed overlap period to ensure that no in-flight updates are left out of the secondary index). As indicated at750, a snapshot of the database table may be created according to the snapshot creation timestamp, in some embodiments. For example, as discussed above, a snapshot earlier than the snapshot creation timestamp may be updated from log records of updates that occur up to the snapshot creation timestamp in order to create the snapshot. In some embodiments, the snapshot of the database table may be an entire copy of the database table or a copy formatted according to a schema for the secondary index. As indicated at760, item(s) from the snapshot may be replicated according to a schema for the secondary index, in some embodiments. For instance, items may be evaluated with respect to the schema and/or formatted or ordered according to the schema when replicated (e.g., sent via conditional operation requests) to the allocated storage nodes. As indicated at770, replication from the replication timestamp may be started in an update log for the database table, in some embodiments. For example, a propagation node may send a request to storage nodes that store the database table to begin replication of the update log in streaming fashion to the propagation node starting from the replication timestamp. The methods described herein may in various embodiments be implemented by any combination of hardware and software. For example, in some embodiments, the methods may be implemented by a computer system (e.g., a computer system as inFIG.8) that includes one or more processors executing program instructions stored on a computer-readable storage medium coupled to the processors. The program instructions may implement the functionality described herein (e.g., the functionality of various servers and other components that implement the distributed systems described herein). The various methods as illustrated in the figures and described herein represent example embodiments of methods. The order of any method may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Embodiments to implement offline builds for projected data subsets as described herein may be executed on one or more computer systems, which may interact with various other devices. One such computer system is illustrated byFIG.8.
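The two timestamps determined at740above can be derived from one another; a sketch with an assumed fixed overlap window:

```c
#include <stdio.h>
#include <time.h>

#define OVERLAP_SECONDS 10   /* assumed fixed overlap period */

int main(void) {
    /* Snapshot creation timestamp: when the create-index request arrived. */
    time_t snapshot_ts = time(NULL);

    /* Log replication starts earlier so the log stream and the snapshot
     * overlap; duplicate updates are resolved by the conditional writes. */
    time_t log_ts = snapshot_ts - OVERLAP_SECONDS;

    printf("snapshot at %ld, replay log from %ld\n",
           (long)snapshot_ts, (long)log_ts);
    return 0;
}
```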
In different embodiments, computer system1000may be any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop, notebook, or netbook computer, mainframe computer system, handheld computer, workstation, network computer, a camera, a set top box, a mobile device, a consumer device, video game console, handheld video game device, application server, storage device, a peripheral device such as a switch, modem, router, or in general any type of computing or compute node, computing device or electronic device. In the illustrated embodiment, computer system1000includes one or more processors1010coupled to a system memory1020via an input/output (I/O) interface1030. Computer system1000further includes a network interface1040coupled to I/O interface1030, and one or more input/output devices1050, such as a cursor control device, keyboard, and display(s). Display(s) may include standard computer monitor(s) and/or other display systems, technologies or devices, in some embodiments. In some embodiments, it is contemplated that embodiments may be implemented using a single instance of computer system1000, while in other embodiments multiple such systems, or multiple nodes making up computer system1000, may host different portions or instances of embodiments. For example, in some embodiments some elements may be implemented via one or more nodes of computer system1000that are distinct from those nodes implementing other elements. In various embodiments, computer system1000may be a uniprocessor system including one processor1010, or a multiprocessor system including several processors1010(e.g., two, four, eight, or another suitable number). Processors1010may be any suitable processor capable of executing instructions, in some embodiments. For example, in various embodiments, processors1010may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors1010may commonly, but not necessarily, implement the same ISA. In some embodiments, at least one processor1010may be a graphics processing unit. A graphics processing unit or GPU may be considered a dedicated graphics-rendering device for a personal computer, workstation, game console or other computing or electronic device, in some embodiments. Modern GPUs may be very efficient at manipulating and displaying computer graphics, and their highly parallel structure may make them more effective than typical CPUs for a range of complex graphical algorithms. For example, a graphics processor may implement a number of graphics primitive operations in a way that makes executing them much faster than drawing directly to the screen with a host central processing unit (CPU). In various embodiments, graphics rendering may, at least in part, be implemented by program instructions for execution on one of, or parallel execution on two or more of, such GPUs. The GPU(s) may implement one or more application programming interfaces (APIs) that permit programmers to invoke the functionality of the GPU(s), in some embodiments. System memory1020may store program instructions1025and/or data accessible by processor1010to implement offline builds for projected data subsets, in some embodiments.
In various embodiments, system memory1020may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory. In the illustrated embodiment, program instructions and data implementing desired functions, such as those described above to perform offline builds for projected data subsets, are shown stored within system memory1020as program instructions1025and data storage1035, respectively. In other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media or on similar media separate from system memory1020or computer system1000. A computer-accessible medium may include non-transitory storage media or memory media such as magnetic or optical media, e.g., disk or CD/DVD-ROM coupled to computer system1000via I/O interface1030. Program instructions and data stored via a computer-accessible medium may be transmitted by transmission media or signals such as electrical, electromagnetic, or digital signals, which may be conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface1040, in some embodiments. In some embodiments, I/O interface1030may coordinate I/O traffic between processor1010, system memory1020, and any peripheral devices in the device, including network interface1040or other peripheral interfaces, such as input/output devices1050. In some embodiments, I/O interface1030may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory1020) into a format suitable for use by another component (e.g., processor1010). In some embodiments, I/O interface1030may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface1030may be split into two or more separate components, such as a north bridge and a south bridge, for example. In addition, in some embodiments some or all of the functionality of I/O interface1030, such as an interface to system memory1020, may be incorporated directly into processor1010. Network interface1040may allow data to be exchanged between computer system1000and other devices attached to a network, such as other computer systems, or between nodes of computer system1000, in some embodiments. In various embodiments, network interface1040may support communication via wired or wireless general data networks, such as any suitable type of Ethernet network, for example; via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks; via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol. Input/output devices1050may, in some embodiments, include one or more display terminals, keyboards, keypads, touchpads, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or retrieving data by one or more computer systems1000. Multiple input/output devices1050may be present in computer system1000or may be distributed on various nodes of computer system1000, in some embodiments.
In some embodiments, similar input/output devices may be separate from computer system1000and may interact with one or more nodes of computer system1000through a wired or wireless connection, such as over network interface1040. As shown inFIG.8, memory1020may include program instructions1025that implement the various embodiments of the systems as described herein, and data store1035, comprising various data accessible by program instructions1025, in some embodiments. In some embodiments, program instructions1025may include software elements of embodiments as described herein and as illustrated in the Figures. Data storage1035may include data that may be used in embodiments. In other embodiments, other or different software elements and data may be included. Those skilled in the art will appreciate that computer system1000is merely illustrative and is not intended to limit the scope of the embodiments as described herein. In particular, the computer system and devices may include any combination of hardware or software that can perform the indicated functions, including a computer, personal computer system, desktop computer, laptop, notebook, or netbook computer, mainframe computer system, handheld computer, workstation, network computer, a camera, a set top box, a mobile device, network device, internet appliance, PDA, wireless phones, pagers, a consumer device, video game console, handheld video game device, application server, storage device, a peripheral device such as a switch, modem, router, or in general any type of computing or electronic device. Computer system1000may also be connected to other devices that are not illustrated, or instead may operate as a stand-alone system. In addition, the functionality provided by the illustrated components may in some embodiments be combined in fewer components or distributed in additional components. Similarly, in some embodiments, the functionality of some of the illustrated components may not be provided and/or other additional functionality may be available. Those skilled in the art will also appreciate that, while various items are illustrated as being stored in memory or on storage while being used, these items or portions of them may be transferred between memory and other storage devices for purposes of memory management and data integrity. Alternatively, in other embodiments some or all of the software components may execute in memory on another device and communicate with the illustrated computer system via inter-computer communication. Some or all of the system components or data structures may also be stored (e.g., as instructions or structured data) on a computer-accessible medium or a portable article to be read by an appropriate drive, various examples of which are described above. In some embodiments, instructions stored on a computer-readable medium separate from computer system1000may be transmitted to computer system1000via transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link. This computer-readable storage medium may be non-transitory. Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Accordingly, the present invention may be practiced with other computer system configurations.
Generally speaking, a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM, volatile or non-volatile media such as RAM (e.g. SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link. The various methods as illustrated in the Figures and described herein represent example embodiments of methods. The methods may be implemented in software, hardware, or a combination thereof. The order of the methods may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure. It is intended that the invention embrace all such modifications and changes and, accordingly, the above description is to be regarded in an illustrative rather than a restrictive sense.
62,902
11860893
DETAILED DESCRIPTION In order to make the objectives, technical solutions and advantages of the present disclosure clearer, the present disclosure will be further described in detail below through the accompanying drawings and embodiments. However, it should be understood that the specific embodiments described herein are only used to explain the present disclosure, and not to limit the scope of the present disclosure. In addition, in the following description, descriptions of well-known structures and well-known art are omitted to avoid unnecessarily obscuring the concepts of the present disclosure. The present disclosure provides an input/output proxy method for a mimic Redis database, including a mimic Redis database, a mimic input/output proxy container and an arbiter, wherein the mimic Redis database includes at least three database containers running Redis server processes; the mimic input/output proxy container includes a pseudo server module, an arbiter interaction module and no less than three database interaction modules; the arbiter interaction module is connected with the arbiter; the database interaction module is connected with the database container; referring toFIG.1, the method specifically includes the following steps: S1, initialization phase: the pseudo server module, the arbiter interaction module and the database interaction modules run as independent processes, and data transmission channels for inter-process communication are established between the pseudo server module and the arbiter interaction module and the database interaction modules; Specifically, referring toFIG.2, in this embodiment, the input/output proxy process or container interfaces with three Redis database containers having heterogeneous base images (e.g., database container1is built based on the Debian base image, database container2is built based on the Ubuntu base image, database container3is built based on the Redhat base image), and the three containers each run one Redis server process, thus forming a mimic Redis database with dynamic heterogeneous redundancy features. The mimic input/output proxy also runs in a container that runs five processes, and the processes are managed by Supervisord; the business programs of the pseudo server module, the arbiter interaction module and the database interaction modules are respectively executed in the processes. The image of the container in which the input/output proxy runs in this embodiment is built based on a Dockerfile, and the modules of the input/output proxy are coded in pure C and can be deployed on any server on which Linux software such as Docker and GCC is installed. Each of the heterogeneous Redis databases shown inFIG.2establishes an interaction connection with only one client, that is, the database interaction module in the input/output proxy container, and in this embodiment, the database interaction module acts as a separate process implementing its associated business logic as a Redis client via the hiRedis interface. Thus, the number of the database interaction modules in the input/output proxy process or container should be consistent with the number of dynamically heterogeneous Redis servers, and this consistency can be controlled by compile-time macros. At the same time, to establish a connection between the Redis client in the database interaction module and the heterogeneous Redis server shown inFIG.2, the Redis client needs to obtain information, such as the port number and password, of the relevant server.
In this embodiment, the server information is also configured into the input parameters of the hiRedis interface in the compilation phase by compile-time macros. During the initialization phase of the processes of the mimic input/output proxy container, an inter-process communication (IPC) channel between the processes should be established for subsequent data transmission. In this embodiment, Unix domain sockets are adopted to implement the data transmission channels of the inter-process communication. Specifically, referring toFIG.3, one socket descriptor array fd[4] is maintained in the pseudo server process, and all four sockets are added to the pollfd structure array pfd[4] by IO multiplexing technology for listening for messages of the four sockets in a non-blocking manner. S2, the pseudo server module receives a connection establishment request initiated by a user, creates a socket bound to the user and establishes a connection while maintaining the state information of the user connection; wherein the state information includes, but is not limited to, input buffer length, input buffer address, output buffer length, output buffer address, and the database ID currently operated by the user. In this embodiment, the pseudo server module also establishes connections and exchanges data with users via network inter-process communication technology, each connection corresponds to one socket descriptor, and the pseudo server module also adds these socket descriptors to the polling list of the IO multiplexing technology for listening for requests initiated by respective users in a non-blocking manner. In this embodiment, the pseudo server module manages the state information of each user connection through a dictionary data structure. Specifically, referring toFIG.4, the key of the dictionary is the user connection socket descriptor, and the value of the dictionary is the state information structure pointer. In the above implementation, when the IO multiplexing technology finds that a certain socket descriptor is readable, the corresponding state information can be quickly found from the dictionary data structure through the socket descriptor. S3, the pseudo server module receives a command request initiated by the user, updates the state information of the user connection, obtains the type of the command request according to a definition mode of the type of the command request, and starts a response mode corresponding to the type of the command request; the pseudo server performs a corresponding operation according to the response mode; in a feasible embodiment, the definition mode of the type of the command request includes pre-compilation definition and runtime definition; the pre-compilation definition specifically includes manually setting command types of various Redis interaction commands in the code development stage; the runtime definition includes adding a command option when a user sends a command request, and the pseudo server module determines the type of the command request according to the command option; in the pseudo server module, all Redis database interaction commands are also stored in one dictionary data structure, the key of the dictionary is the command name, and the value of the dictionary is the response mode. When the pseudo server receives the Redis command sent by the user and obtains the command name after parsing the Redis command, the corresponding response mode can be retrieved directly from the dictionary data structure.
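A condensed sketch of the non-blocking listener and the per-connection lookup described above, using poll(2); for brevity the fd-to-state "dictionary" is a flat array indexed by descriptor rather than a real hash table:

```c
#include <poll.h>
#include <stddef.h>

#define MAX_CONNS 1024

/* Per-connection state from the embodiment: buffers plus current db. */
typedef struct {
    char  *in_buf,  *out_buf;
    size_t in_len,   out_len;
    int    db_id;   /* database ID currently operated by the user */
} conn_state_t;

/* fd -> state lookup; a flat array stands in for the dictionary here. */
static conn_state_t *conn_table[MAX_CONNS];

void event_loop(struct pollfd *pfd, int nfds) {
    for (;;) {
        if (poll(pfd, nfds, -1) < 0)
            continue;   /* interrupted; retry */
        for (int i = 0; i < nfds; i++) {
            if (!(pfd[i].revents & POLLIN))
                continue;
            /* Readable descriptor: find its state in O(1), then parse
             * the request, update the state and dispatch (elided).    */
            conn_state_t *st = conn_table[pfd[i].fd];
            (void)st;
        }
    }
}
```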
The dictionary storing the Redis database interaction commands is initialized by traversing a structure array that is itself initialized directly at code-definition time. Specifically, referring toFIG.5, the code developer can directly set the value of the cmdtype item when defining the RedisCommand array, so as to manually set the command type of various Redis interaction commands in the code development stage. The code developer can set the type of the Redis database interaction command according to the actual application scenario; for example, the purpose of a simple "PING" command is only to get the "PONG" reply of the database server, so the "PING" command can be set to be executed in a normal mode, whereas the "SET" command is closely related to the data in the database, so the "SET" command is set to be executed in a mimic mode. The type of the Redis command may also be set at runtime. In this embodiment, specifically, when the Redis command sent by the user is parsed, it is detected whether there is a "-imitationcmd" term; if so, it is indicated that the user has customized the command type of the command, and then response in the mimic mode or response in the normal mode is confirmed according to whether the value of the term is y or n; when the "-imitationcmd" term is absent, response is performed according to the type defined before compiling; in a feasible embodiment, the response mode includes a normal mode and a mimic mode; S31, in the normal mode, the pseudo server selects one of the database interaction modules based on the credit score mechanism, and enables the user to be indirectly connected with a database container running a Redis server process through the database interaction module; there are a variety of mature solutions for the credit score mechanism, and in this embodiment, a simple manner is adopted, that is, the database interaction module in the database interaction process3shown inFIG.2is manually set to have the highest credit score (because the database interaction process3interfaces with the Redis database of the latest version). In the normal mode, the pseudo server module forwards the received user command request to the Redis database in the database container3directly through the database interaction process3for response, and returns the response result directly to the user; the normal mode is mainly used in cases where fast response is required or the amount of data is large, such as the "SYNC" and "PSYNC" commands. S32, in the mimic mode, the pseudo server distributes the command request initiated by the user to no less than three database interaction modules at the same time; after obtaining the response results of the database interaction modules, the pseudo server module sends the response results to the arbiter for arbitration via the arbiter interaction module, and after obtaining an arbitration result, the pseudo server module responds to the user. S4, the pseudo server module performs dynamic data synchronization, specifically including the steps: S41, the pseudo server module maintains a public key space, wherein the public key space stores a series of key-value pairs, keys in the key-value pairs are in one-to-one correspondence to keys in the actual Redis database, and values in the key-value pairs store the synchronization state; in this embodiment, specifically, referring toFIG.6, the pseudo server process in the mimic input/output proxy container maintains a key-value space, wherein the key space is consistent with the database containers1,2and3.
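Returning to the RedisCommand array of FIG.5described above, its pre-compilation definition might be declared as in this sketch (the structure layout beyond the cmdtype item is an assumption):

```c
/* Response modes from the embodiment. */
typedef enum { CMD_NORMAL, CMD_MIMIC } cmdtype_t;

typedef struct {
    const char *name;    /* dictionary key: the command name */
    cmdtype_t cmdtype;   /* dictionary value: response mode  */
} redis_command_t;

/* Set in the code development stage; a "-imitationcmd y|n" option
 * on an incoming command overrides these defaults at runtime.      */
static const redis_command_t RedisCommand[] = {
    { "PING", CMD_NORMAL },  /* only needs a PONG reply; fast path  */
    { "SYNC", CMD_NORMAL },  /* large data volume; no arbitration   */
    { "SET",  CMD_MIMIC  },  /* touches stored data; arbitrated     */
    { "GET",  CMD_MIMIC  },
};
```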
Since the key values in the database containers1,2and3are all issued by the pseudo server process upon receiving the user's command, the pseudo server process can maintain a key space that is the same as that of all database containers. S42, the pseudo server module maintains an iterator of the public key space, wherein the iterator is configured to traverse key-value pairs in the public key-value space; the iterator may be implemented in a variety of ways, and in this embodiment, the functional requirements can be met by adopting an iterator of the dictionary data structure in the Redis native code. S43, the pseudo server module periodically calls the iterator to obtain key-value pairs in the public key space and determines whether to perform a synchronization action according to the synchronization state stored in the values in the key-value pairs; in S43, the judgment of whether to perform synchronization according to the synchronization state employs a random credit attenuation mechanism, and the random credit attenuation mechanism specifically includes the following steps: S431, when a key-value pair is newly added to the public key space maintained by the pseudo server module, a random number generator is called to generate a random number, and the random number is stored in the synchronization state corresponding to the keys; S432, the pseudo server module obtains the key-value pair to be processed in the public key space through the iterator and attenuates the random number stored in the synchronization state according to a specific step size; S433, the keys to be processed are synchronized if the value of the attenuated random number is less than a set threshold, a random number is regenerated after synchronization is completed, and the random number is stored in the synchronization state; S434, the pseudo server module calls the iterator to obtain the next key-value pair to be processed if the value of the attenuated random number is not less than the set threshold. In this embodiment, the random number is an integer in the interval [0, 9], the set threshold is 2, and the value of each attenuation is 1. Assuming that the random values of KEYA, KEYB and KEYC in the initial state are 1, 3 and 5, respectively, and that the iterator of the dictionary performs cyclic iteration in the order of KEYA→KEYB→KEYC→KEYA . . . , then only KEYA satisfies the condition of being less than the threshold after the first cycle, synchronization of KEYA is started, and the random value is reacquired; KEYB and KEYC do not satisfy the condition of being less than the threshold, thus are decreased by one respectively and then enter the next cycle, and so on. Since synchronization is a job that consumes resources and affects performance, the functional requirements and resource overhead of synchronization can be effectively balanced by the described random attenuation mechanism. S44, the pseudo server module obtains the data values, corresponding to the keys needing to be synchronized, in the databases through a plurality of database interaction modules respectively; the pseudo server module sends the corresponding data values in the databases to the arbiter for arbitration through the arbiter interaction module and obtains an arbitration result, and the pseudo server module updates the corresponding data values in the databases according to the arbitration result to complete the data synchronization.
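The random credit attenuation of S431-S434 can be sketched directly with the embodiment's numbers (credits in [0, 9], threshold 2, step 1); the synchronize() stub stands in for the arbitration round of S44:

```c
#include <stdio.h>
#include <stdlib.h>

#define THRESHOLD 2
#define STEP      1

typedef struct { const char *key; int credit; } sync_entry_t;

/* Stub for the S44 arbitration round performed for a key. */
static void synchronize(const char *key) { printf("sync %s\n", key); }

/* One pass of the iterator over the public key space (S432-S434). */
static void attenuation_pass(sync_entry_t *space, int n) {
    for (int i = 0; i < n; i++) {
        space[i].credit -= STEP;            /* attenuate (S432)          */
        if (space[i].credit < THRESHOLD) {
            synchronize(space[i].key);      /* below threshold (S433)    */
            space[i].credit = rand() % 10;  /* fresh credit in [0, 9]    */
        }                                   /* otherwise move on (S434)  */
    }
}

int main(void) {
    /* Initial credits from the worked example: KEYA=1, KEYB=3, KEYC=5. */
    sync_entry_t space[] = { {"KEYA", 1}, {"KEYB", 3}, {"KEYC", 5} };
    attenuation_pass(space, 3);   /* only KEYA drops below the threshold */
    return 0;
}
```

On this pass KEYA attenuates to 0 and is synchronized, while KEYB and KEYC attenuate to 2 and 4 and simply carry into the next cycle, matching the walkthrough above.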
After the keys needing to be synchronized are obtained through the above process, the pseudo server module obtains the values, corresponding to the keys, in the databases through the database interaction modules and then sends the values corresponding to the keys to the arbiter for arbitration through the arbiter interaction module; if the arbiter returns a result that the data is valid and does not need to be updated, then iteration of the next key is performed. If the result returned by the arbiter is that the data needs to be updated, specifically, referring toFIG.6, the value corresponding to KEYB in database container3is different from the value corresponding to KEYB in the other databases, thus an accurate result is obtained after arbitration; the pseudo server module then updates the value corresponding to the KEYB key through the database interaction interface to complete the synchronization. According to the above-mentioned input/output proxy method for a mimic Redis database, through the pseudo server module, it is ensured that the interface of the mimic Redis database is consistent with the external interface of native Redis, so that the mimic Redis database can conveniently be dropped into arbitrary Redis application scenarios; the internal modules are isolated from one another by running as independent processes, thus facilitating independent development, maintenance and expansion; the synchronization function is integrated into the input/output proxy to achieve resource reuse; and for the synchronization function, the random credit attenuation mechanism is utilized to provide synchronization while conserving resources. In this way, the input/output proxy method for a mimic Redis database is implemented. Corresponding to the previously described embodiments of the input/output proxy method for a mimic Redis database, the present disclosure also provides embodiments of the input/output proxy apparatus for a mimic Redis database. Referring toFIG.7, an embodiment of the present disclosure provides an input/output proxy apparatus for a mimic Redis database, including a memory and one or more processors, wherein the memory stores executable codes, and the one or more processors, when executing the executable codes, implement the input/output proxy method for a mimic Redis database in the above embodiment. Embodiments of the input/output proxy apparatus for a mimic Redis database of the present disclosure may be deployed on any device with data processing capability, which may be a device or apparatus such as a computer. Apparatus embodiments may be implemented through software, or through hardware, or through a combination of hardware and software. Taking implementation through software as an example, an apparatus in a logical sense is formed by the processor of the device with data processing capability where the apparatus is located reading the corresponding computer program instructions from the non-volatile memory into memory.
At the hardware level, FIG.7also serves as a structure diagram of the hardware of a device with data processing capability where the input/output proxy apparatus for a mimic Redis database according to the present disclosure is located. In addition to the processor, the memory, the network interface, and the non-volatile memory shown inFIG.7, the device with data processing capability where the apparatus in the embodiments is located may also include other hardware, typically depending on the actual functionality of the device, which will not be repeated here. For the implementation process of the functions and roles of the various units in the above-described apparatus, please refer to the implementation process of the corresponding steps in the above-described method, which will not be repeated here. For the apparatus embodiments, since they basically correspond to the method embodiments, reference may be made to the partial descriptions of the method embodiments for related parts. The apparatus embodiments described above are only illustrative, wherein the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place, or may be distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present disclosure. Those of ordinary skill in the art can understand and implement the embodiments without creative effort. An embodiment of the present disclosure further provides a computer-readable storage medium storing a program, wherein the program, when executed by a processor, implements the input/output proxy method for a mimic Redis database in the above embodiment. The computer-readable storage medium may be an internal storage unit of any device with data processing capability described in any of the foregoing embodiments, such as a hard disk or a memory. The computer-readable storage medium may also be an external storage device of any device with data processing capability, such as a plug-in hard disk, a smart memory card (SMC), an SD card or a flash memory card equipped on the device. Further, the computer-readable storage medium may also include both an internal storage unit of any device with data processing capability and an external storage device. The computer-readable storage medium is used to store the computer program and other programs and data required by the device with data processing capability, and may also be used to temporarily store data that has been output or will be output. The above description merely presents preferred embodiments of the present disclosure and is not intended to limit the present disclosure; any modifications, equivalent replacements or improvements made within the spirit and principles of the present disclosure shall fall within the protection scope of the present disclosure.
While the invention is amenable to various modifications and alternative forms, specifics thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that the intention is not to limit the invention to the particular embodiments described. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.

DETAILED DESCRIPTION

Aspects of the present disclosure relate to providing data replication, while more particular aspects of the present disclosure relate to detecting and then replicating changes to application data tables using a default level of logging for system tables that does not include continuous capturing of before and after images of system tables. While the present disclosure is not necessarily limited to such applications, various aspects of the disclosure may be appreciated through a discussion of various examples using this context.

There are many types of database management systems (DBMS) that a computing environment may use to store and organize data, such as data of an application. For example, one common type of DBMS is a relational DBMS (RDBMS), which stores data in tables or the like with rows and columns. In an RDBMS, the structure of the data within the tables (i.e., the location of each data point in relation to the rest of the data) provides meaning (i.e., metadata) for the data within the tables (e.g., such that data of a single row is related according to a variable, and data of a single column is related according to a different variable). As would be understood by one of ordinary skill in the art, within an RDBMS there may be data as generated or modified by a user for an application as stored in application data tables, and the application data tables may be supported by system tables that may relate to one or more application data tables. As discussed herein, changes to the application data tables may include some or all data manipulation language (DML) changes, and changes to the system tables may include some or all data definition language (DDL) changes.

Modern computing environments may provide mechanisms for providing data replication, such that data of the database (e.g., data of the system tables and application data tables) is stored at more than one location (e.g., at more than one node, disk, site, etc.). For example, a conventional environment may utilize a roll-forward recovery log to provide data replication. Though one of ordinary skill in the art would understand aspects of this disclosure to relate to any database environment that utilizes a roll-forward recovery log for data replication, environments that utilize an RDBMS are primarily discussed herein for purposes of clarity. To provide the initializing points for each operation of a roll-forward recovery log, a mechanism of the conventional environment may allow elements of the conventional environment (e.g., where an element of the conventional environment includes an application) to request “full logging” of the system tables rather than a “default” amount of logging (e.g., where the default amount of logging requires less logging than full logging, and wherein the default logging is the amount of logging that an RDBMS executes unless specified otherwise).
A default amount of logging for tables may not include capturing and/or storing the before image of an update operation or any image for a delete operation, and may include capturing and/or storing only a partial after image for an update operation. With this default amount of logging, it may be difficult or impossible for a conventional system to provide reliable and robust data replication using a roll-forward recovery log. Conversely, full logging as described herein and executed by conventional database environments to provide data replication includes capturing and storing system table row states before and/or after a respective change, these states referred to as a “before image” and an “after image” of the respective row. Using these before images and after images as gained from full logging, a conventional replication system is configured to determine which column values changed. Upon identifying the exact row and column, a conventional replication system uses the command from the roll-forward recovery log to recreate the respective values at the second (or third, or fourth, etc.) replication site. In this way, changes to system table data may be replicated immediately upon detecting an update to the original. In many conventional systems, this process of full logging is functionally continuous, such that before images and after images must be automatically and consistently captured as part of the replication process after every single operation. However, storing each of these images with full logging may decrease operational performance and/or increase disk utilization of a respective conventional RDBMS. Such decreased performance and/or increased disk utilization may be especially impactful for time-critical system table changes. Further, problems that result from this constant full logging may be expensive and time-consuming to fix. Additionally, if a replication of an application data table is requested without full logging of a system table, accounting for any such DDL change without logging requires extensive and error-prone manual intervention and correction by human database administrators.

Aspects of this disclosure may solve or otherwise address some or all of these problems. For example, aspects of the disclosure may eliminate the need for full logging of all changes to the system tables of the RDBMS (e.g., where changes to the system tables include any DDL changes), instead enabling a computing environment to replicate all changes to application data tables for one or more applications from default logging of system tables and application data tables of an RDBMS environment. One or more computing components (these computing components including or otherwise making use of a processing unit executing instructions stored on a memory) may provide this functionality, these one or more computing components hereinafter referred to collectively as a controller. This controller may be integrated into one or more components of a computing environment that utilizes an RDBMS and a roll-forward recovery log to provide this functionality. Further, as will be understood by one of ordinary skill in the art, aspects of this disclosure may be able to replicate changes to default values and/or constraints of the RDBMS using substantially only the default logging of system tables as described herein.
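To make the conventional mechanism concrete, the following minimal Python illustration derives the changed columns from a full before image and a full after image of one row; the row contents are invented for the example and do not come from the disclosure.

```python
def changed_columns(before: dict, after: dict) -> dict:
    """Given a full before image and after image of one system-table row,
    return only the columns whose values changed."""
    return {col: val for col, val in after.items() if before.get(col) != val}

before = {"id": 7, "col_name": "qty", "col_type": "INT", "col_len": 4}
after = {"id": 7, "col_name": "qty", "col_type": "BIGINT", "col_len": 8}
print(changed_columns(before, after))  # {'col_type': 'BIGINT', 'col_len': 8}
```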
Though replicating changes to the application tables is predominantly discussed throughout for purposes of illustration, it is to be understood that the techniques described herein are also largely applicable to replicating these changes to the default values and/or constraints, such that these changes to default values and/or constraints may be replicated without full logging of the system tables.

For example,FIG.1depicts environment100in which controller110works with RDBMS120to replicate system data from one or more system tables140in order to subsequently replicate application data from one or more application data tables150. RDBMS120may be used to manage (e.g., create, retrieve, update, or delete) data as requested by user applications130A,130B,130C (collectively “user applications130”). Commands to thus generate/modify/delete/etc. data as sent from user applications130may be reflected in roll-forward recovery log160. There may be any number of system tables140, application data tables150, and/or roll-forward recovery logs160for any number of different user applications130(e.g., whether different instances of a single user application130and/or different user applications entirely) as are supported by environment100.

Controller110, RDBMS120, user applications130, system tables140, application data tables150, roll-forward recovery log160, and before image repository170may communicate over network180. Network180may include a computing network over which computing messages may be sent and/or received. For example, network180may include the Internet, a local area network (LAN), a wide area network (WAN), a wireless network such as a wireless LAN (WLAN), or the like. Network180may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device (e.g., controller110, RDBMS120, etc.) may receive messages and/or instructions from and/or through network180and forward the messages and/or instructions for storage or execution or the like to a respective memory or processor of the respective computing/processing device. Though network180is depicted as a single entity inFIG.1for purposes of illustration, in other examples network180may include a plurality of private and/or public networks over which controller110may detect changes to application data tables using only default logging as described herein. For example, user applications130and RDBMS120may communicate together over a first part of network180, and RDBMS120, controller110, system tables140, application tables150, and roll-forward recovery logs160communicate together over a second part of network180, while controller110, roll-forward recovery logs160, and before image repository170communicate together over a third part of network180, etc.

Though each of controller110, system table140, application data tables150, roll-forward recovery log160, and RDBMS120are depicted as discrete entities for purposes of illustration, one of ordinary skill in the art would understand that in other examples components of environment100may be organized differently and be consistent with aspects of the disclosure. For example, one or all of system table140, roll-forward recovery log160, and application data tables150may be stored on a single computing device, or the like.
Where controller110is a standalone computing device, controller110may be similar to computing system200ofFIG.2that includes a processor communicatively coupled to a memory that includes instructions that, when executed by the processor, cause controller110to execute one or more operations described below. Similarly, in other examples the computing device of RDBMS120(which may be a computing device similar to computing system200) may include a first version of system table140and/or application tables150for user applications130(e.g., while recovery log160is still stored on a separate standalone computing device). Controller110may determine a before image of the respective system table140entries that relate to the respective application data tables150for one or more respective user application130. For example, controller110may detect a request from user application130A for data replication, in response to which controller110may determine the before image of rows and columns of system table140that correspond to the respective application data table150for user application130A. Controller110may determine this before image without direct access (e.g., where not having direct access includes controller110being physically remote from the database that stores application data table150and controller110does not have write access to application data table150) to system table140, RDBMS120, and/or application data table150data files. For example, controller110may query system table140for entries related to respective application data tables150on which table structure change detection is desired. Such a query may essentially request data of all of the field values for each entry of system table140for each application data table150being replicated. Controller110may query system table140using structured query language (SQL) or the like. Controller110may use code generation as understood by one of ordinary skill in the art to encode the values from system table140returned from the query into one or more before images172that reflect each respective entry (e.g., data points of indicated rows and columns of system table140), and may store the generated before image172in before image repository170. For example, controller110may generate (using code generation) before image172such that what is generated is substantially similar to a before image that would have been captured as part of full logging of system table140for a conventional RDBMS replication system. Controller110may generate and store each before image172such that each before image172indicates a corresponding point within roll-forward recovery log160at which before image172was generated (e.g., and therein a point at which before image172was accurate at that point in roll-forward recovery log160). In some examples, controller110may generate a respective before image172for each row of system table140that relates to the respective application data table150. Controller110may query system table140and use code generation to generate one or more before images172at a rate that is far less than the continuous rate of logging that is utilized by conventional RDBMS replication systems. For example, controller110may only query system table140once at an initial startup. If at any point controller110detects a potential lapse in the data replication process, controller110may fully reinitialize the replication system in much the same way as an initial startup (e.g., such that controller110may query the relevant tables using SQL). 
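A minimal sketch of this initialization step follows, using SQLite purely as a stand-in RDBMS; the system-table schema, the query text, and the image layout are assumptions for the example, since the disclosure does not tie the controller to any particular catalog format.

```python
import sqlite3  # stand-in database; the described system is RDBMS-agnostic

def generate_before_images(conn, app_table, log_position):
    """Query the system-table rows describing one application data table and
    encode each row as a before image tagged with the roll-forward-log
    position at which it is accurate."""
    rows = conn.execute(
        "SELECT * FROM system_table WHERE app_table = ?", (app_table,)
    ).fetchall()
    return [{"row": dict(row), "log_position": log_position} for row in rows]

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE system_table (app_table TEXT, col_name TEXT, col_type TEXT)")
conn.execute("INSERT INTO system_table VALUES ('orders', 'id', 'INTEGER')")
before_images = generate_before_images(conn, "orders", log_position=0)
print(before_images)
```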
Controller110may store all values gathered and generated from this query at a location external to RDBMS120. Once this before image is generated, controller110may identify any updates and/or deletes to system table140. For example, controller110may crawl through roll-forward recovery log160to identify updates and/or deletes to system table140. Once identified, controller110may update the stored before image172to reflect this update and/or delete to system table140. For example, because deletes to system table140as stored in roll-forward recovery log160may only include an identifier and no row image, controller110may use this identifier to determine which code-generated row image the delete applies to. Once controller110determines the respective row, controller110may gather before image172that corresponds to this row from before image repository170. Once the respective before image172is gathered, controller110may use code generation to apply the delete operation to before image172to generate a new before image172, saving this new before image172into before image repository170. Controller110may further identify the point within roll-forward recovery log160that this newly generated before image172applies to, and store the new before image172to correlate to this point.

Similarly, for a detected update operation to system table140, controller110may find the identifier from the partial image stored within the update in the roll-forward recovery log160. This partial image may be logged by RDBMS120as part of the default logging as described above. Once controller110has this identifier, controller110may use it to identify the respective before image172that corresponds to this respective row of system table140, and overlay the logged partial after image on the respective previously generated before image172. Controller110may use code generation to turn this overlaid image into a new before image172for system table140, again saving the respective point in the roll-forward recovery log160that it corresponds to. In this way, controller110may maintain a complete and accurate history of system table140entries without requiring the full logging of system table140.

As described above, controller110may include or be part of a computing system that includes a processor configured to execute instructions stored on a memory to execute the techniques described herein.FIG.2is a conceptual box diagram of such computing system200. While computing system200is depicted as a single entity (e.g., within a single housing) for the purposes of illustration, in other examples, controller110may comprise two or more discrete physical systems (e.g., within two or more discrete housings). Computing system200may include interface210, processor220, and memory230. Computing system200may include any number or amount of interface(s)210, processor(s)220, and/or memory(s)230.

Computing system200may include components that enable controller110to communicate with (e.g., send data to and receive and utilize data transmitted by) devices that are external to controller110. For example, computing system200may include interface210that is configured to enable controller110and components within computing system200(e.g., such as processor220) to communicate with entities external to computing system200. Specifically, interface210may be configured to enable controller110to communicate with RDBMS120, system table140, application data table150, roll-forward recovery log160, before image repository170, or the like.
Interface210may include one or more network interface cards, such as Ethernet cards and/or any other types of interface devices that can send and receive information. Any suitable number of interfaces may be used to perform the described functions according to particular needs. As discussed herein, controller110may be configured to cause computing device110to generate before images172of system table140without full logging. Controller110may utilize processor220to thus generate these before images. Processor220may include, for example, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and/or equivalent discrete or integrated logic circuits. Two or more processors220may be configured to work together to manage functionality.

Processor220may generate images of system table140according to instructions232stored on memory230of controller110. Memory230may include a computer-readable storage medium or computer-readable storage device. In some examples, memory230may include one or more of a short-term memory or a long-term memory. Memory230may include, for example, random access memories (RAM), dynamic random-access memories (DRAM), static random-access memories (SRAM), magnetic hard discs, optical discs, floppy discs, flash memories, forms of electrically programmable memories (EPROM), electrically erasable and programmable memories (EEPROM), or the like. In some examples, processor220may enable application data table150replication by generating before images of system table140without full logging as described herein according to instructions232of one or more applications (e.g., software applications) stored in memory230of computing system200.

In addition to instructions232, in some examples gathered or predetermined data or techniques or the like as used by processor220to generate before images of system table140may be stored within memory230. For example, memory230may include information as described above from system table140, application data table150, roll-forward recovery log160, and/or before image repository170. For example, controller110may be the primary location of before image repository170, and/or controller110may store local copies of data from system table140, application data table150, or roll-forward recovery log160. For example, controller110may store replication data234with which application data table150may be replicated. Replication data234may include image data236, which may include before images172that were generated by controller110as described herein. Image data236may also include respective points in roll-forward recovery log160, which may be stored in roll-forward recovery data242. Replication data234may also include query data238, which may be the results that were returned by querying system table140to generate an initial before image172. Replication data234may also include change data240that indicates updates and/or deletes or the like to system table140with which controller110generated new before images. In some examples, controller110may discard query data238and change data240upon generating the respective before image172, though in other examples controller110may keep query data238, change data240, and/or all before images172so that a record of application data table150information over time may be tracked. Using these components, controller110may enable replicating application data table150information via generating before images of system tables140without full logging.
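Before walking through flowchart300, one plausible in-memory layout for replication data234and its constituent image data236, query data238, and change data240is sketched below; the field names and types are illustrative guesses, not the patented structures.

```python
from dataclasses import dataclass, field

@dataclass
class BeforeImage:
    row_id: str        # identifier of the system-table row the image describes
    columns: dict      # full field values for that row
    log_position: int  # point in the roll-forward recovery log at which
                       # this image is current and accurate

@dataclass
class ReplicationData:
    image_data: list = field(default_factory=list)   # generated BeforeImage objects
    query_data: list = field(default_factory=list)   # raw rows from the initial query
    change_data: list = field(default_factory=list)  # updates/deletes found in the log
```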
For example, controller110may provide this functionality according to flowchart300depicted inFIG.3. Flowchart300ofFIG.3is discussed with relation toFIG.1for purposes of illustration, though it is to be understood that other systems may be used to execute flowchart300ofFIG.3in other examples. Further, in some examples controller110may execute a different method than flowchart300ofFIG.3, or controller110may execute a similar method with more or fewer steps in a different order, or the like.

Controller110may detect a request to replicate data of application data tables150(302). Controller110may receive this request from one or more user applications130. In response to this request, controller110may send a query to system table140(304). This query may be an SQL query. Controller110may send this query to only those portions of system table140that relate to the respective application data tables150. Controller110may structure the query to return values of system table140that relate to application data tables150. Further, as will be understood by one of ordinary skill in the art, in some examples controller110may additionally or alternatively structure the query to gather default values and/or constraints of RDBMS120.

Controller110may use the result from the query to generate before image172of system table140(306). Controller110may use code generation to generate before image172. Controller110may store before image172in a location that is external to RDBMS120, such as in before image repository170. Controller110may generate one or more before images172to include application data table150values. Alternatively, or additionally, controller110may generate one or more before images172to include default values and/or constraints of RDBMS120.

Controller110may monitor for changes to system table140(308). For example, controller110may crawl through or scrape roll-forward recovery log160for data that indicates changes to system table140. Controller110may monitor for whether or not delete operations that relate to system table140entity values are detected (310). If controller110detects a delete operation, controller110may generate a new before image172for system table140that reflects this delete operation. For example, controller110may find an identifier of the delete operation. Using this identifier, controller110may identify a row of system table140that the delete operation corresponds to. Once this row is identified, controller110may use code generation to generate a new before image that reflects this delete operation. Controller110may further correlate this new before image with a point in roll-forward recovery log160at which point this before image is current and accurate.

If controller110determines that the changes to system table140are not a delete operation, controller110may determine whether or not the changes to system table140are an update operation (314). If no update operation is detected, controller110continues monitoring (308). If controller110does detect an update operation, controller110may gather the partial after image from the default logging on the update operation. Using this partial after image, controller110identifies an identifier of the update, from which controller110further identifies the row of system table140to which the update relates. Once controller110identifies this row using the partial after image, controller110uses code generation to generate a new before image172(316).
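The monitoring loop308-316of flowchart300could be sketched as follows (the overlay step is detailed further in the paragraph after this sketch); the log-record shape and all field names are assumptions made for the example.

```python
def apply_system_table_changes(images, log_records):
    """One pass of the monitoring loop: apply system-table deletes and
    updates from roll-forward log records to the stored before images,
    keyed by row identifier."""
    for record in log_records:
        if record["table"] != "system_table":
            continue  # not a system-table change; keep monitoring
        row_id = record["row_id"]
        if record["op"] == "delete":
            # A delete carries only an identifier and no row image, so the
            # identifier selects which code-generated image it applies to.
            images.pop(row_id, None)
        elif record["op"] == "update":
            # Overlay the logged partial after image on the previously
            # generated before image to produce the new before image.
            images[row_id] = {**images[row_id], **record["partial_after_image"]}

images = {"r1": {"col_type": "INT", "log_position": 0}}
apply_system_table_changes(
    images,
    [{"table": "system_table", "op": "update", "row_id": "r1",
      "partial_after_image": {"col_type": "BIGINT", "log_position": 1}}],
)
print(images)  # {'r1': {'col_type': 'BIGINT', 'log_position': 1}}
```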
Controller110may generate this before image by overlaying the partial after image over the before image that controller110previously generated for the identified row. As with the delete operation, controller110may further correlate this new before image with a point in roll-forward recovery log160at which point this before image is current and accurate. As depicted, controller110may continue with this pseudo-continuous cycle of monitoring for system table140changes in order to replicate application data table150changes (and/or default values and constraints) in a loop308-316. This loop may enable controller110to provide this data replication without full logging of system tables140. The descriptions of the various embodiments of the present disclosure have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. 
The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
While embodiments are described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the embodiments are not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit embodiments to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). The words “include,” “including,” and “includes” indicate open-ended relationships and therefore mean including, but not limited to. Similarly, the words “have,” “having,” and “has” also indicate open-ended relationships, and thus mean having, but not limited to. The terms “first,” “second,” “third,” and so forth as used herein are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.) unless such an ordering is otherwise explicitly indicated.

Various components may be described as “configured to” perform a task or tasks. In such contexts, “configured to” is a broad recitation generally meaning “having structure that” performs the task or tasks during operation. As such, the component can be configured to perform the task even when the component is not currently performing that task (e.g., a computer system may be configured to perform operations even when the operations are not currently being performed). In some contexts, “configured to” may be a broad recitation of structure generally meaning “having circuitry that” performs the task or tasks during operation. As such, the component can be configured to perform the task even when the component is not currently on. In general, the circuitry that forms the structure corresponding to “configured to” may include hardware circuits. Various components may be described as performing a task or tasks, for convenience in the description. Such descriptions should be interpreted as including the phrase “configured to.” Reciting a component that is configured to perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112(f) interpretation for that component.

“Based On.” As used herein, this term is used to describe one or more factors that affect a determination. This term does not foreclose additional factors that may affect a determination. That is, a determination may be solely based on those factors or based, at least in part, on those factors. Consider the phrase “determine A based on B.” While B may be a factor that affects the determination of A, such a phrase does not foreclose the determination of A from also being based on C. In other instances, A may be determined based solely on B.

The scope of the present disclosure includes any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. Accordingly, new claims may be formulated during prosecution of this application (or an application claiming priority thereto) to any such combination of features.
In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims.

DETAILED DESCRIPTION

Various embodiments of selectively replicating changes to hierarchical data structures are described herein.FIG.1is a logical block diagram illustrating selectively replicating changes to hierarchical data structures, according to some embodiments. A hierarchical data structure, such as hierarchical data structure130, may be stored as part of a distributed data store, in some embodiments. A hierarchical data structure may include one or multiple objects organized according to one or multiple links that provide relationships, paths or other form of hierarchy between objects, in some embodiments. In this way, the relationship of objects, and data values associated with or stored as part of objects, can be modeled and maintained in the data structure. For example, an organization chart indicating the reporting structure between company managers and employees can be modeled in a hierarchical data structure that indicates the relationship between employees, and may include data specific to each employee (e.g., name, assignment, years of service, etc.), in one embodiment.

Respective copies or replicas of hierarchical data structures can be maintained on multiple different storage hosts in a single distributed data store, as well as at other distributed data stores. For example, hierarchical data structure130ahas replicas130band130cin distributed data stores120band120crespectively. In at least some embodiments, each distributed data store may be removed from other distributed data stores. In some embodiments, each distributed data store may be implemented in a separate private network. Connections between networks110may be made via public network connections (e.g., over the Internet), in some embodiments.

Different parts of the same hierarchical data structure may be owned or mastered in different distributed data stores. For example, mastered objects140ain distributed data store120adiffer from mastered objects140band140cin distributed data stores120band120crespectively. Each distributed data store may receive requests to update objects150in the hierarchical data structure130, which may be performed for those objects that are mastered in the distributed data store120. Selective replication of updates to objects made at different replicas of the hierarchical data structure130may be performed, in various embodiments. The replication settings, policy, or other attributes of individual objects within a hierarchical data structure may be defined, in some embodiments (e.g., by a client or default system value). In this way, the objects and/or updates to objects that need to be shared with another replica of the hierarchical data structure130in order to support the operation of systems that access the other replica can be managed.

Consider a scenario where an application that manages employee data maintained in a hierarchical data structure is implemented to access different copies of the hierarchical data structure for different geographic regions. Each geographic region may have specific privacy laws or regulations for employee data which limit the type of information that may be shared outside of the geographic region.
In order to satisfy these regulations, some employee data may not be replicated to other copies of the employee data maintained in different data stores so that the privacy laws or regulations are satisfied. Defined replication attributes for objects within the employee data may allow for replication of allowed data to be performed without replicating data that is only available in limited locations.

As illustrated inFIG.1, different updates for different objects may be replicated to different data stores. For example, some objects (or updates to objects), such as objects140band140c, may be replicated to distributed data store120aby providing160band160cthose updates to distributed data store120afrom distributed data stores120band120c, according to replication settings or permissions for objects140band140c. However, some objects or updates may not be replicated. For example, while distributed data store120bcan provide a client of distributed data store120bwith the ability to view and read objects mastered in distributed data store120c(objects140c), distributed data store120ccannot provide clients of distributed data store120cwith the ability to view or read objects mastered in distributed data store120b(objects140b). Different replication settings, permissions, or attributes for different objects in a hierarchical data structure may allow the management and visibility of data in the same hierarchical data structure to be different in different replicas at different distributed data stores. Please note,FIG.1is provided as a logical illustration of hierarchical data structures, distributed data stores, private networks and providing updates, and is not intended to be limiting as to the physical arrangement, size, or number of components, modules, or devices, implementing such features.

The specification first describes an example directory storage service that performs selective updates to hierarchical data structures at the directory storage service, according to various embodiments. The example directory storage service may store hierarchical data structures for many different clients, in various embodiments. Included in the description of the example directory storage service are various aspects of the example directory storage service along with the various interactions between the directory storage service and clients. The specification then describes a flowchart of various embodiments of methods for performing selective updates to hierarchical data structures. Next, the specification describes an example system that may implement the disclosed techniques. Various examples are provided throughout the specification.

FIG.2is a block diagram illustrating a provider network that implements a directory storage service including a hierarchical data store that selectively replicates changes to hierarchical data structures, according to some embodiments. A provider network may be set up by an entity such as a company or a public sector organization to provide one or more services (such as various types of cloud-based computing or storage) accessible via the Internet and/or other networks to clients210.
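Stepping back to the selective replication illustrated inFIG.1, a minimal sketch of the per-object replication-setting idea follows: each object carries a setting naming the replicas its updates may be shared with, and only permitted updates are forwarded. The setting format and all names are assumptions made for the example, echoing the employee-data scenario above.

```python
# Each mastered object carries a replication setting naming the replica
# regions its updates may be shared with (the setting format is assumed).
replication_settings = {
    "employee-name":   {"us", "eu"},   # may be replicated to both regions
    "employee-salary": set(),          # region-private; never replicated
}

def updates_for_replica(updates, replica_region):
    """Filter a batch of object updates down to those whose replication
    settings permit sharing with the given replica's region."""
    return {
        obj: value
        for obj, value in updates.items()
        if replica_region in replication_settings.get(obj, set())
    }

pending = {"employee-name": "A. Jones", "employee-salary": 110_000}
print(updates_for_replica(pending, "eu"))  # only the name update is forwarded
```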
Provider network may include numerous data centers hosting various resource pools needed to implement and distribute the infrastructure and services offered by the provider network, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment and the like (e.g., computing system2000described below with regard toFIG.12), and may be organized into regions, like provider network regions200and202, which may implement the same services in different regions (e.g., directory storage service220). In some embodiments, regions may be private networks that isolate faults from other regions. Connections between regions may be over network260(which may be a public network like the Internet). In some embodiments, a provider network may implement a directory storage service220to store hierarchical data structures for access, an archive storage service270, and/or any other type of network-based services280(which may include other computing resources or services, such as a virtual compute service and storage services, such as object storage services, block-based storage services, data warehouse storage service, or any other types of storage, processing, analysis, communication, event handling, visualization, and security services). Clients210may access these various services offered by provider network200via network260. Likewise, network-based services may themselves communicate and/or make use of one another to provide different services. For example, various ones of other service(s)280may store, access, and/or rely upon hierarchical data structures stored in directory storage service220.

In various embodiments, the components illustrated inFIG.2may be implemented directly within computer hardware, as instructions directly or indirectly executable by computer hardware (e.g., a microprocessor or computer system), or using a combination of these techniques. For example, the components ofFIG.2may be implemented by a system that includes a number of computing nodes (or simply, nodes), each of which may be similar to the computer system embodiment illustrated inFIG.12and described below. In various embodiments, the functionality of a given service system component (e.g., a component of the database service or a component of the storage service) may be implemented by a particular node or may be distributed across several nodes. In some embodiments, a given node may implement the functionality of more than one service system component (e.g., more than one database service system component).

Directory storage service220may store, manage, and maintain hierarchical data structures, such as a directory structure discussed below with regard toFIG.4, stored at various ones of hierarchy storage host(s)240(in single tenant or multi-tenant fashion). Clients of directory storage service220may operate on any subset or portion of the hierarchical data structure with transactional semantics and/or may perform path-based traversals of hierarchical data structures. Such features allow clients to access hierarchical data structures in many ways. For instance, clients may utilize transactional access requests to perform multiple operations concurrently, affecting different portions (e.g., nodes) of the hierarchical directory structure (e.g., reading parts of the hierarchical directory structure, adding a node, and indexing some of the node's attributes, while imposing the requirement that the resulting updates of the operations within the transaction are isolated, consistent, atomic and durably stored).
In various embodiments, directory storage service220may implement routing layer232to direct access requests from internal or external clients to the appropriate hierarchical storage host(s)240. For example, routing layer232may implement a fleet of routing nodes that maintain mapping information which identifies the locations of hierarchical data structures on hierarchy storage host(s)240. When an access request is received, routing layer nodes may then determine to which one of the hierarchy storage host(s) hosting the hierarchical data structure identified in the access request to send the access request. Consider a scenario where hierarchical data structures may be replicated across multiple different hierarchy storage hosts240as part of a replica group, such as illustrated inFIG.5discussed below. Routing232may implement various load balancing schemes to direct requests from different clients to different hierarchy storage hosts within the replica group, so that no single hierarchy storage host becomes overburdened. Moreover, as hierarchy storage hosts240may utilize tokens to maintain state across different access requests sent by clients so that different hierarchy storage host(s)240may handle each request from the client, routing232need not track which hierarchy storage host is communicating with which client.

Control plane234may implement various control functions to manage the hierarchy storage host(s)240and other components of directory storage service220that provide storage of hierarchical data structures, such as directing creation and placement of new hierarchical data structures on hierarchy storage host(s)240, storage scaling, heat management, node repair and/or replacement. For example, various placement schemes may utilize techniques such as consistent hashing (e.g., based on hashing an identifier for individual hierarchical data structures) to identify hierarchy storage host(s) to store versions of the hierarchical data structure, or randomly mapping hierarchical data structures to a number of hierarchy storage host(s)240that form a replica set. To provide heat management, for example, control plane234may collect hierarchy storage host(s)240metrics published by each host. Each host may have various thresholds for performance characteristics, such as memory utilization, CPU utilization, disk utilization, and request-rate capacity. When a hierarchy storage host reports metrics that exceed a threshold (or multiple thresholds), control plane234may determine and perform a scaling event for hierarchy storage hosts. For example, control plane234may direct the migration of one or more hierarchical data structures to different hierarchy storage hosts. Similarly, control plane234may detect when certain hierarchy storage hosts are unable to keep up with access requests directed to a particular replica group for a hierarchical data structure and may provision additional hierarchy storage host(s) to horizontally scale the replica group to better meet the access request demand.

Hierarchy storage host(s)240may maintain and handle access to hierarchical data structures in directory storage service220.FIG.3is a block diagram illustrating a hierarchy storage host, according to some embodiments. Hierarchy storage host300may implement request handler310to process access requests and pass along appropriate instructions or requests to other components, such as storage engine340, transaction log interface350or archive interface360.
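Before turning to the request handler in detail, here is a toy illustration of the consistent-hashing placement mentioned above, mapping directory identifiers onto storage hosts; the hash function, virtual-node count, and all names are arbitrary choices for the sketch, not the service's actual scheme.

```python
import bisect
import hashlib

def _hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Toy consistent-hash ring for placing directory structures on
    storage hosts; purely illustrative of the placement idea."""
    def __init__(self, hosts, vnodes=16):
        # Several virtual nodes per host smooth out the distribution.
        self.ring = sorted(
            (_hash(f"{host}-{i}"), host)
            for host in hosts
            for i in range(vnodes)
        )
        self.keys = [k for k, _ in self.ring]

    def place(self, directory_id: str) -> str:
        # Walk clockwise to the first virtual node at or after the hash.
        idx = bisect.bisect(self.keys, _hash(directory_id)) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["host-a", "host-b", "host-c"])
print(ring.place("directory-410a"))
```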
For example, access request handler310may interpret various requests formatted according to a programmatic interface. Access requests may include various ones of the requests described in the aforementioned figures as well as other types of requests, such as various access requests to create, update, attach, detach, delete and query nodes in a hierarchical data structure, and access requests to define, populate, discover, and query a local index (which may be strongly consistent and maintained as part of or separately from the hierarchical data structure) on hierarchical data structure node attributes.

In various embodiments, storage engine340may be a storage engine configured to interact with the structure or format of data as it is stored in current hierarchical data structure store320and historical hierarchical data structure store330(e.g., a key-value storage engine for data maintained in key-value storage format, a relational data storage engine for data maintained in a relational storage format, etc.), which may be maintained according to the models discussed below with regard toFIG.4. In some embodiments, current hierarchical data structure store320may be partially or completely implemented in memory or other quick access storage devices, such as random access memory devices (RAM), as well as utilizing persistent block-based storage devices, including magnetic disk or solid state drives, to store historical hierarchical data structure store330. In some embodiments, caching techniques may be implemented so that frequently accessed portions of data, such as frequently accessed portions of current hierarchical data structures, are maintained in memory components whereas other portions are maintained in block-based persistent storage components. Hierarchy storage host300may operate multi-tenant storage for hierarchical data structures so that different hierarchical data structures maintained on behalf of different clients, accounts, customers, and the like may be maintained in current hierarchical data structure store320and historical hierarchical data structure store330. For example, hierarchy storage host300may participate in different replica groups with different hierarchy storage hosts for the different hierarchical data structures stored at hierarchy storage host300.

Transaction log interface350may provide capabilities to interact with (e.g., validate transactions against) the logs corresponding to hierarchical data structures stored in transaction log storage250. Similarly, archive interface360may be implemented to retrieve archived transactions or snapshots to service an access request for historical changes to the hierarchical data structure, a historical query, or other access requests that require a version of the hierarchical data structure that is older than that maintained in historical hierarchical data structure store330.

Turning back toFIG.2, transaction log storage250may provide a fault tolerant, high performance, durable, log publishing service. Transaction log storage250may be used as a commit log underlying strongly consistent distributed applications such as databases, key-value stores, and lock managers, and, as illustrated inFIG.2, directory storage service220providing hierarchical data storage. Transaction log storage250may provide strong consistency guarantees and support constraints between committed records, to enable features like deduplication, sequencing, and read-write conflict detection.
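The read-write conflict detection described for transaction log storage250can be pictured with a small optimistic-concurrency sketch: a proposed transaction commits only if no transaction committed after its snapshot wrote a key it read. The record shapes and names below are assumptions for the example, not the service's actual protocol.

```python
class TransactionLog:
    """Toy commit log with read-write conflict detection: a proposed
    transaction commits only if nothing it read has since changed."""
    def __init__(self):
        self.records = []  # committed transactions, in commit order
        self.sequence = 0  # monotonically increasing commit number

    def commit(self, read_set, write_set, read_sequence):
        # Conflict if any transaction committed after the reader's
        # snapshot wrote a key that this transaction read.
        for seq, writes in self.records:
            if seq > read_sequence and writes & read_set:
                return None  # reject: read-write conflict
        self.sequence += 1
        self.records.append((self.sequence, set(write_set)))
        return self.sequence

log = TransactionLog()
snapshot = log.sequence
assert log.commit(read_set={"node-401"}, write_set={"node-403"},
                  read_sequence=snapshot) == 1
```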
For example, for different requests, transaction log storage250may determine whether or not to commit changes to hierarchical data structures (e.g., write requests and other modifications) by examining a proposed transaction for conflicts with other committed transactions. Such a feature may provide a fine-grained locking model over the hierarchical data structure (e.g., only those portions of the hierarchical data structure affected by a conflict between transactions may be locked). Transaction log storage may maintain a separate log or chain of log records for each hierarchical data structure, serving as an authoritative definition of the changes to the state of the hierarchical data structure over time. Transactions may be ordered according to transaction sequence numbers, which may be monotonically increasing to reference the state of a hierarchical data structure at individual points in time. Note that in some embodiments, transaction log storage250may be a separate network-based storage service implemented as part of provider network200external to directory storage service220.

Archival worker(s)236may utilize transactions stored for different hierarchical data structures stored in respective transaction logs in transaction log storage250to generate and store snapshots of the hierarchical data structure at different points in time in archive storage service270. For example, archival management may determine when snapshots of a hierarchical data structure should be captured, provision appropriate storage locations in archive storage service270, and direct archive worker nodes (not illustrated) to perform the read, write, and other operations to generate and place the snapshots in archive storage service270. Similarly, archival worker(s)236may direct the copying and storage of individual log records/transactions and/or groups of log records and transactions to be stored as part of an archived transaction log for hierarchical data structures in archive storage service270. Cross region replication290may selectively replicate updates to objects in a directory structure to directories stored in directory service220in other regions202, as discussed below inFIGS.5-11.

Generally speaking, clients210may encompass any type of client configurable to submit network-based services requests to provider network200via network260, including requests for directory services (e.g., a request to create or modify a hierarchical data structure to be stored in directory storage service220, etc.). For example, a given client210may include a suitable version of a web browser, or may include a plug-in module or other type of code module configured to execute as an extension to or within an execution environment provided by a web browser. Alternatively, a client210may encompass an application such as a database application (or user interface thereof), a media application, an office application or any other application that may make use of persistent storage resources to store and/or access one or more hierarchical data structures to perform techniques like organization management, identity management, or rights/authorization management. In some embodiments, such an application may include sufficient protocol support (e.g., for a suitable version of Hypertext Transfer Protocol (HTTP)) for generating and processing network-based services requests without necessarily implementing full browser support for all types of network-based data.
That is, client210may be an application configured to interact directly with network-based services platform200. In some embodiments, client210may be configured to generate network-based services requests according to a Representational State Transfer (REST)-style network-based services architecture, a document- or message-based network-based services architecture, or another suitable network-based services architecture. In some embodiments, a client210may be configured to provide access to network-based services to other applications in a manner that is transparent to those applications. For example, client210may be configured to integrate with an operating system or file system to provide storage in accordance with a suitable variant of the storage models described herein. However, the operating system or file system may present a different storage interface to applications, such as a conventional file system hierarchy of files, directories and/or folders. In such an embodiment, applications may not need to be modified to make use of the storage system service model. Instead, the details of interfacing to provider network200may be coordinated by client210and the operating system or file system on behalf of applications executing within the operating system environment. Clients210may convey network-based services requests (e.g., access requests directed to hierarchical data structures in directory storage service220) to and receive responses from network-based services platform200via network260. In various embodiments, network260may encompass any suitable combination of networking hardware and protocols necessary to establish network-based communications between clients210and platform200. For example, network260may generally encompass the various telecommunications networks and service providers that collectively implement the Internet. Network260may also include private networks such as local area networks (LANs) or wide area networks (WANs) as well as public or private wireless networks. For example, both a given client210and network-based services platform200may be respectively provisioned within enterprises having their own internal networks. In such an embodiment, network260may include the hardware (e.g., modems, routers, switches, load balancers, proxy servers, etc.) and software (e.g., protocol stacks, accounting software, firewall/security software, etc.) necessary to establish a networking link between given client210and the Internet as well as between the Internet and network-based services platform200. It is noted that in some embodiments, clients210may communicate with network-based services platform200using a private network rather than the public Internet. Different types of hierarchical data structures may be stored, managed, and/or represented in different ways.FIG.4is a block diagram illustrating one example of a data model for a hierarchical data store that provides hierarchical data structures, according to some embodiments. A directory, for example, may be a hierarchical data structure, such as directory structures410aor410n, and may be represented with circles or squares in the graph depicted inFIG.4(e.g., objects400,401,402,403,404,405,406,407, and421). An object may have a globally unique identifier (GUID), zero or more attributes (key, value pairs), and zero or more links to other objects. In some embodiments, a directory may be one type of object which has zero or more child links to other objects, either directories or resources.
Directory objects may have zero or one parent directory object, implying that directory objects and links define a tree structure, in some embodiments. InFIG.4, object401may be an example of a directory object. Resource objects may be leaf objects in a directory structure410. A resource object may have a unique external Id (e.g., client specified) and client-defined attributes. Resource objects can have more than one parent object (which would allow for some hierarchical data structures to be configured as a Directed Acyclic Graph (DAG)). Object405inFIG.4may be an example of a resource object, and it has two parents (objects402and403). Some objects may be remotely mastered, as illustrated inFIG.4, while other objects may be locally mastered. Locally mastered objects may be updated or changed at the storage host that stores the directory, while remotely mastered objects may not be updated or changed at the storage host where the object is remotely mastered. Links can be established, as discussed below with regard toFIG.8, to attach or detach objects mastered in different regions. In some embodiments, multiple types of resource objects may be implemented. For example, in some embodiments, policy objects may be a type of resource object with two user-defined attributes: a policy type and policy document (e.g., describing a policy applied to applicable objects). For example, object406inFIG.4may be an example of a policy resource object. Another type of resource object may be an index resource object. For example, an index resource object may be an index on various attribute values of the child objects and other descendant objects of the directory object to which the index object is attached. For example, if resource object407is an index object, then index object407may provide an index object for the attributes of child objects402and403as well as descendant objects404,405, and406. In some embodiments, a link may be a directed edge between two objects defining a relationship between the two objects. There may be many types of links, such as client-visible link types and link types used internally by the implementation. In some embodiments, a child link type may create a parent-child relationship between the objects it connects. For example, child link 'bb' connects object401and object403. Child links may define the hierarchies of directory structures410. Child links may be named in order to define the path of the object that the link points to. Another type of client-visible link may be an attachment link. An attachment link may apply a resource object, such as a policy resource object or index resource object, to another resource object or directory object. Attachment links may not define the hierarchical structures of directory structures410. For example, attachment link 'xx' applies the policy attribute stored in policy resource object406to directory object402. Objects can have multiple attachments. In some embodiments, some attachment restrictions may be enforced, such as a restriction that not more than one policy resource object of any given policy type can be attached to a same object. A non-client-visible or implied link type, a reverse link, may also be implemented in some embodiments. Reverse links may be used for optimizing traversal of directory structures410for common operations like resource object look-ups (e.g., policy lookups).
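The object-and-link model described above might be condensed into the following sketch. Class and function names here are illustrative assumptions, not the service's schema:

from dataclasses import dataclass, field

@dataclass
class Node:
    guid: str
    attributes: dict = field(default_factory=dict)    # key, value pairs
    child_links: dict = field(default_factory=dict)   # link name -> child node; defines the hierarchy
    attachments: list = field(default_factory=list)   # applied resource objects (e.g., policies, indexes)
    reverse_links: list = field(default_factory=list) # back-pointers for upward traversal

def attach_child(parent, name, child):
    """Create a named child link; resource objects may gain multiple parents (a DAG)."""
    parent.child_links[name] = child
    child.reverse_links.append(parent)

def attach(target, resource):
    """Apply a resource object (e.g., a policy) via an attachment link; does not define hierarchy."""
    target.attachments.append(resource)
    resource.reverse_links.append(target)

# Mirroring part of FIG.4: object 405 is reachable through two parents.
n401, n402, n403, n405 = (Node(g) for g in ("GUID_401", "GUID_402", "GUID_403", "GUID_405"))
attach_child(n401, "aa", n402)
attach_child(n401, "bb", n403)
attach_child(n402, "dd", n405)
attach_child(n403, "ee", n405)  # a second parent, making the structure a DAG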
Directory storage service220may maintain reverse links in the opposite direction of child and attachment links. In various embodiments, objects in directory structures410can be identified and found by the pathnames that describe how to reach the object starting from a logical root object, starting with the link labeled "/" and following the child links separated by the path separator "/" until reaching the desired object. For example, object405can be identified using the path: "/directory410a/aa/dd". As some objects may be children of multiple directory objects, multiple paths may identify an object. For example, the following path can also be used to identify object405: "/directoryA/bb/ee". A directory structure410may be a collection of objects whose boundary is defined by the hierarchy of those objects in the collection (e.g., the resulting hierarchical data structure, such as the tree or DAG created by the links between objects). In this way, directory structures410may represent separate, independent, or partially independent organizations. To store the illustrated directory structures in a hierarchical data structure store, the described objects, links, attributes, and the like may be modeled after Resource Description Framework (RDF) data, in some embodiments. To maintain multiple versions of the hierarchical data structures, versioning information may also be included to express how the data has changed over time. RDF data may be structured as (Subject, Predicate, Object) tuples. When including additional versioning information this structure may become: (Subject, Predicate, Object, Version, PreviousVersion). To represent the hierarchical data structures based on RDF, there may be multiple types of RDF predicates. In some embodiments, one type of RDF predicate may represent links of the hierarchical data structure and another type of RDF predicate may represent attributes of the hierarchical data structure. Different types of predicates may represent the hierarchical data structure differently. Link predicates may be between two objects, whereas attribute predicates may be between an object and a value. Since a single object might participate in several predicates of the same type, but with different values, predicates may begin with a common prefix and end in some additional type or naming information to aid in lookups. For example, the version entry in a tuple of a predicate may be the logical timestamp (e.g., transaction sequence number) at which the link or attribute was created, as all changes to a hierarchical data structure may utilize the transaction resolution process provided by transaction log storage250and may be assigned an ordered logical timestamp by transaction log storage250. As noted above inFIG.3, storage hosts may maintain a current version of a hierarchical data structure and past versions of a hierarchical data structure. In at least some embodiments, different respective tables may be maintained for each hierarchical data structure, one table that stores the data for the current version and another table that stores immutable records for the previous versions. In various embodiments, a current version table or previous versions table may be accessed to perform various operations for a hierarchical data structure. For example, an access request may specify a query: "Find all children for the object whose ID is GUID_401: select GUID_401.child.* from CurrentVersion" or a query: "Find all policies for a resource object whose ID is GUID_405 along all paths to the root."
To service such queries, a depth-first traversal may be executed along the parent links. At each object along the path to the root, the following internal queries may be executed: internal query 1: "Find if the object has policies: select GUID_405.link.HasPolicy.* from CurrentVersion;" internal query 2: "If the object has policies returned in internal query 1, use the value from the link to get the policy document value from the policy object: select GUID_406.link.PolicyDoc from CurrentVersion;" internal query 3: "Find all parents for the current object and perform internal queries 1-3 for each parent object until reaching the root of the directory structure." Please note that the previous examples are not intended to be limiting as to the format, structure, syntax, or other ways in which queries may be expressed or processed with respect to hierarchical data structures. FIG.5is a block diagram illustrating the use of a separate transaction log store to provide consistent storage for versioned hierarchical data structures, according to some embodiments. Multiple clients, such as clients510a,510b, and510cmay perform various access requests to a hierarchical data structure concurrently, such as various write requests512a,512b,512c. In at least some embodiments, replica group520may include multiple storage hosts, such as hierarchy storage hosts522a,522b, and522cthat maintain versions of the hierarchical data structure that are available for servicing various access requests from clients510. For example, clients510may submit different write requests512to hierarchy storage hosts522according to a routing schema which may direct access requests from each client to a different storage host in replica group520according to a load balancing scheme. Upon receiving the request, each hierarchy storage host522may perform various operations upon a current version of the hierarchical data structure at the storage host, then offer the writes524to transaction log storage250for commitment to directory structure log530, including various information such as the data affected or accessed by performing the write request, the write request itself, and a transaction sequence number or other indication identifying the point-in-time of the current version of the hierarchical data structure at the storage host522. Indications of commitment526or conflict may be provided to the respective storage hosts522. For those writes that are committed, the directory structure log may be read and committed writes applied532to the respective versions of the hierarchical data structure maintained at storage hosts522. In some embodiments, archival worker(s)550may also read the directory structure log530to retrieve writes534for transmission as archived transactions or snapshots. Archival worker(s)550may then periodically or aperiodically update542an archived log540in archive storage service270and generate and send new snapshots552to be maintained as part of archived snapshots550. In this way, the hierarchical data structure can be recreated at any point-in-time, for example by loading a snapshot onto a storage host and applying transactions from archived log540to reach a certain transaction sequence number, so that the version of the hierarchical data structure at the storage host is consistent with a specified point-in-time. Archival worker(s)550may generate filtered snapshots for performing scaling events to move some directories from one storage host to another.
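The point-in-time reconstruction just described amounts to a snapshot load followed by a log replay. A minimal sketch, assuming (seq, state) snapshot pairs, log records ordered by sequence number with an apply method, and that a snapshot at or before the target exists (all of these shapes are assumptions for illustration):

def restore_to(snapshots, archived_log, target_seq):
    """snapshots: (seq, state) pairs; archived_log: records ordered by .seq,
    each with an apply(state) method that returns the updated state."""
    # Start from the latest archived snapshot at or before the target point in time.
    base_seq, state = max(
        ((s, st) for s, st in snapshots if s <= target_seq),
        key=lambda pair: pair[0],
    )
    # Replay each committed write between the snapshot and the target sequence number.
    for record in archived_log:
        if base_seq < record.seq <= target_seq:
            state = record.apply(state)
    return state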
Cross region replication290may retrieve writes562from transaction log storage250in order to generate update events for replication to eligible directories, as discussed below with regard toFIGS.6and7. Cross region replication290may also retrieve metadata564from replica group520in order to generate the update events (e.g., eligible regions). Cross region replication290may offer remote writes566to transaction log storage250for received updates from other regions, in some embodiments. In other embodiments, cross region replication290may offer updates to replica group520directly (e.g., to the replica group that stores the hierarchical data structure to which the update applies). FIG.6is a block diagram illustrating a pull-based cross region replication service for directories, according to some embodiments. A replication operation may involve a source region610and a destination region620separated by a region boundary601(e.g., network boundaries, physical or geographic boundaries, logical boundaries, or other fault tolerance boundaries such that the failure of one region does not cause a failure of another region). Cross region replication290may implement components in the separate provider network regions to perform the illustrated functionalities so that the directory storage service in each provider network region can send and accept updates from other regions to a directory structure that is replicated at multiple regions. Cross region replication290may implement update event generation worker(s) to accept committed write(s)612and obtain/generate object metadata614in order to generate update events. Update events may describe one or more updates for objects in the directory, in various embodiments. For example, update event generation worker(s)630may store, as part of event data, the event type (e.g., create container, create directory, create resource, attach node, detach node, regional attach node, regional detach node, delete node, put node attributes, delete node attributes, create policy, attach policy, detach policy, or delete policy), source update or transaction in the log (e.g., a log record or bundle of log records identifier), source region, source directory, identity of destination regions to receive the event, and/or any other information, including signatures or values used to check data integrity upon receipt of the data in different locations. Update event generation worker(s)630may then store632the update events to event store640. In some embodiments, the events may be bundled or grouped together in storage as part of event data644. Event cursor642may indicate the latest event ids to be applied for a directory. Inbound event polling worker(s)660may implement polling behavior to get event cursors from retrieval host(s)650in source region610. Retrieval host(s)650may respond to received requests, such as requests to get event cursors646, and provide them to inbound event polling worker(s)660. Inbound event polling worker(s)660may then decide, based on the event id in the cursors, whether any new updates for a directory are available. If so, inbound event polling worker(s)660may request the events652from retrieval host(s)650, which may get the events648from event store640and return the events654to inbound event polling worker(s)660. Inbound event polling worker(s)660may then offer updates described by the received events as updates to the transaction log622for objects in the directory structure.
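A minimal sketch of the inbound polling loop just described, assuming a hypothetical retrieval-host client with get_event_cursor and get_events calls and a transaction log with an offer method (none of these are the service's actual interfaces):

import time

def poll_directory_updates(retrieval, transaction_log, directory_id, applied_event_id, interval=1.0):
    """Destination-region worker: poll the source region for new update events."""
    while True:
        cursor = retrieval.get_event_cursor(directory_id)  # latest event id in the source region
        if cursor.event_id > applied_event_id:
            for event in retrieval.get_events(directory_id, after=applied_event_id):
                transaction_log.offer(event.as_update())   # propose to the destination's log
                applied_event_id = event.event_id
        time.sleep(interval)  # simple fixed-interval polling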
In some embodiments, inbound event polling worker(s) may offer the updates622to storage hosts in destination region620for processing (e.g., offering to the transaction log for the directory structure in destination region620). Inbound event polling worker(s) may perform various techniques, such as those discussed below with regard toFIG.10, to prevent the application of duplicate updates, or the application of updates out of order (e.g., by blocking or stalling updates that should not, or should not yet, be applied). In contrast with the pull-based replication techniques discussed above, push-based replication techniques may be implemented to selectively update hierarchical data structures, in some embodiments.FIG.7is a block diagram illustrating a push-based cross region replication service for directories, according to some embodiments. As noted above, a replication operation may involve a source region710and a destination region720. Cross region replication290may implement components in the separate provider network regions to perform the illustrated functionalities so that the directory storage service in the provider network region can send and accept updates from other regions to a directory structure. As discussed above, cross region replication290may implement update event generation worker(s) to accept committed write(s)712and object metadata714in order to generate update events. Update event generation worker(s)730may then store732the update events into event store740, which may maintain event cursor742and event data744(as discussed above with regard toFIG.6). Cross region replication290may implement event push worker(s)750to get event cursors/data746, receive event cursors/data748, and push the events and cursor752to data stream storage760. Data stream storage760may be a data stream service, component, or other store that allows clients to input data according to a client-specified ordering so that the data can be retrieved from the stream by other clients according to the same ordering, in some embodiments. Data stream storage760may be implemented as part of a stream management service (e.g., another network-based service280in the provider network) that may provide programmatic interfaces (e.g., application programming interfaces (APIs), web pages or web sites, graphical user interfaces, or command-line tools) to enable the creation, configuration and deletion of data streams, as well as the submission, storage and retrieval of stream data records, in some embodiments. Inbound event stream worker(s)770may retrieve event(s)762from the stream760and offer them as updates to the transaction log722for the object in the directory structure. Clients, users, or other stakeholders may leverage selective replication to present and manage different data in the hierarchical data structure differently depending on the distributed data store. For example, data subject to certain regulatory controls can be managed in accordance with the regulatory controls applicable to one geographic location, while data in a distributed data store in another region may be managed according to that region's different regulations.FIG.8illustrates interactions between a client and a hierarchy storage node to manage selective replication of objects in a hierarchical data structure according to region, according to some embodiments. Client810may be a client like client210discussed above with regard toFIG.2.
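The push-based flow might be sketched with a minimal ordered stream standing in for data stream storage760(the Stream class and the event/log methods are assumptions for illustration, not a particular stream service's API):

from collections import deque

class Stream:
    """Minimal ordered stream: records are read in the order they were put."""
    def __init__(self):
        self._records = deque()
    def put(self, record):
        self._records.append(record)
    def get(self):
        return self._records.popleft() if self._records else None

def push_events(event_store, stream, directory_id, after_event_id):
    """Source-region worker: push update events into the stream in order."""
    for event in event_store.get_events(directory_id, after=after_event_id):
        stream.put(event)

def consume_events(stream, transaction_log):
    """Destination-region worker: offer each streamed event to the local log."""
    while (event := stream.get()) is not None:
        transaction_log.offer(event.as_update())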
Client810may interact with directory storage service220according to interface800, which may be a programmatic interface (e.g., Application Programming Interface (API)), command line interface, or graphical user interface, in some embodiments. Hierarchy storage host820may handle requests received via interface800, in some embodiments. Client810may manage the replication of data objects to different regions of the directory storage service offered by provider network200. For example, a request840may be a request to get a list of regions specified for replication of a particular object. In some embodiments, the request may specify a default region list (e.g., all, or regions with a particular characteristic (e.g., US, Europe, Asia, etc.)). The requested regions may be returned, as indicated at840. Other management requests, such as requests to add or remove regions for an object, may also be performed, along with receiving the appropriate acknowledgement. Hierarchy storage host820may access or update the attributes or metadata for the object to process the management request840. Client810may send to hierarchy storage host820a request850to attach or detach an object that is locally mastered to or from a node that is mastered in another region. Hierarchy storage host820may send a request to the remote region852to accomplish the attachment, which may be replicated to a remote hierarchy storage host via cross region replication290. Remote hierarchy storage host830, in the region that masters the object that is to be attached to, may receive the request854to attach or detach the node. Remote hierarchy storage host830may accept (or reject) the request and perform the update to the object data or metadata at remote hierarchy storage host830(e.g., by proposing the update as a change to the transaction log in remote region852for the hierarchical data structure). The acknowledgement of the request may be treated as an update event and provided back to the client via cross region replication290. For example, an acknowledgement of the request as an update event854may be made to cross region replication290, which may (by the various techniques discussed above) provide the update event that acknowledges the request856back to hierarchy storage host820. As a cross region attachment/detachment request may be performed asynchronously, an acknowledgement858may (or may not) be sent to client810. Alternative notifications (e.g., messaging systems, electronic mail, etc.) may be performed. In some embodiments, cross region attachments/detachments may be acknowledged, displayed, or otherwise indicated or treated as pending by hierarchy storage host820. The directory storage service, access requests, and other techniques discussed inFIGS.2through8provide examples of a distributed data store storing a hierarchical data structure for a client and performing selective replication to other hierarchical data structures in different regions or networks. However, various other types of distributed storage systems may implement selective replication of changes to hierarchical data structures, in other embodiments, which may utilize other numbers or types of components to provide distributed data storage.FIG.9is a high-level flowchart illustrating methods and techniques to selectively replicate changes to hierarchical data structures, according to some embodiments. Various different distributed data stores, including the embodiments described above, may implement the techniques described below.
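From a client's perspective, the management interactions above might resemble the following (every method name here is a hypothetical stand-in for calls through interface800, not an actual API):

class HierarchyStorageClient:
    """Hypothetical wrapper over interface 800 for region management."""
    def __init__(self, host):
        self.host = host

    def list_replication_regions(self, object_guid):
        # e.g., returns ["us", "europe"] or a default list such as "all"
        return self.host.get_object_metadata(object_guid)["regions"]

    def add_replication_region(self, object_guid, region):
        meta = self.host.get_object_metadata(object_guid)
        if region not in meta["regions"]:
            meta["regions"].append(region)
            self.host.put_object_metadata(object_guid, meta)

    def attach_cross_region(self, child_guid, remote_parent_guid, remote_region):
        # Asynchronous: forwarded to the mastering region; the attachment is
        # treated as pending until an update event acknowledges it.
        request_id = self.host.forward_attach_request(child_guid, remote_parent_guid, remote_region)
        self.host.mark_pending(child_guid, request_id)
        return request_id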
As indicated at910, an update to an object of a hierarchical data structure stored in a distributed data store may be performed that is committed to the object according to a transaction log for the hierarchical data structure, in some embodiments. For example, updates may include requests to create the object, delete the object, modify the object (e.g., attributes or links), or perform any other operation that changes the object. A determination may be made, as indicated at920, as to whether other replicas of the hierarchical data structure stored in other remote distributed data stores are eligible to receive the update to the object, in some embodiments. For example, metadata or other information (e.g., replication permissions or settings) maintained for the object may identify a set of one or more distributed data stores which may be eligible to receive the update. In some embodiments, eligible replicas may be dynamically identified. For example, characteristics of the different distributed data stores (e.g., labels, tags, geographic locations, or relationships) may be evaluated to select the set of distributed data stores, and thus regions, which are eligible to apply the update. Note that not all distributed data stores are eligible for updates for the same hierarchical data structure. Eligible distributed data stores may vary from object to object, so that in at least some embodiments a distributed data store that stores a replica of the hierarchical data structure is not eligible and thus does not apply the update to the object. If no distributed data stores are identified, then as indicated by the negative exit from920, replication of the update may not be performed. If some distributed data stores are identified, then as indicated by the positive exit from920, the update to the object may be provided to the identified remote distributed data stores, as indicated at930, in some embodiments. For example, pull-based techniques, such as those discussed above with regard toFIG.6, may be implemented in some embodiments. Push-based techniques, such as those discussed above with regard toFIG.7, may be implemented in some embodiments. Updates to the remote distributed data stores may be made directly to storage systems or hosts that maintain the replica of the hierarchical data structure or may be submitted via a replication mechanism, like cross region replication290discussed above, that ensures that updates are received and applied once, according to a same ordering as at the source replica of the hierarchical data structure. Once provided to the remote distributed data stores, the update may be committed to the respective transaction log at each remote distributed data store for the replicas of the hierarchical data structure to apply the update to the corresponding objects in the eligible replicas (e.g., according to the techniques discussed above with regard toFIG.5). For example, the remote distributed data stores may propose the update for commitment to a transaction log, which may then determine whether the proposed update conflicts with other committed updates in the transaction log. If the update does not conflict, then the update may be committed. If the update does conflict, then the update may fail as if the update were submitted locally. FIG.10is a high-level flowchart illustrating methods and techniques to accept updates from a remote distributed data store for a hierarchical data structure, according to some embodiments.
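Before detailing the FIG.10 flow, the eligibility check and fan-out of FIG.9 just described might be sketched as follows (a simplified illustration; the object metadata layout and the offer_update hook are assumptions introduced here):

def replicate_committed_update(update, object_metadata, remote_stores, local_region):
    """After a local commit, provide the update to eligible remote data stores."""
    eligible = [
        store for store in remote_stores
        if store.region != local_region
        and store.region in object_metadata.get("regions", [])
    ]
    for store in eligible:
        store.offer_update(update)  # committed (or rejected) by each store's own transaction log
    return [store.region for store in eligible]  # empty list: no replication performed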
As indicated at1010, an update to an object in a hierarchical data structure may be obtained from a remote distributed data store that stores a replica of the hierarchical data structure, in various embodiments. For example, a push-based technique, as discussed above with regard toFIG.7, or a pull-based technique, as discussed above with regard toFIG.6, may be implemented, in different embodiments. In some embodiments, previously applied updates may be determined prior to obtaining the update. For example, a cursor value for the hierarchical data structure indicating a sequence number or other logical ordering that identifies a point in the logical order up to which all updates have been applied may be examined. However obtained, a determination may be made, as indicated at1020, as to whether the update has been applied to the object, in some embodiments. For example, multiple workers, processes, or other components may apply received updates, and thus a same update could be attempted multiple times by different components. By checking whether the update has been applied, the update may be ignored, as indicated at1022, if already applied. In this way, updates to objects may be replicated as idempotent operations. As indicated at1030, in some embodiments a determination may be made as to whether any prior updates have been received from the remote distributed data store for the hierarchical data structure that have not been applied. For example, updates may be ordered according to a logical ordering (e.g., sequence numbers, logical timestamps, etc.) in order to ensure that replicated changes are seen in the same order at every hierarchical data structure that applies the replicated changes. A determination may be made as to whether any outstanding updates (e.g., in an update status table or other set of metadata maintained for the hierarchical data structure) earlier in the logical ordering remain to be applied, in some embodiments. As indicated at1032, if any prior updates have not been applied, then application of the update may be delayed until the prior updates received from the remote distributed data store have been applied to the hierarchical data structure. For example, the worker or component processing the update may mark or update the update status table or other metadata to indicate that the update has not yet been applied. However, as indicated by the negative exit from1030, if no prior updates remain to be applied, then the update may be offered to the transaction log for application to the hierarchical data structure, as indicated at1040. Different objects in a same hierarchical data structure may be mastered in different regions as a result of selective replication for hierarchical data structures, as noted above.FIG.11is a high-level flowchart illustrating methods and techniques to process access requests at a hierarchical data structure that masters different objects of a hierarchical data structure at different distributed data stores, according to some embodiments. As indicated at1110, an access request for an object in a hierarchical data structure may be received at a distributed data store, in various embodiments. Different types of access requests may be received, as discussed above.
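Returning briefly to FIG.10, its duplicate-detection and ordering safeguards reduce to a short routine (a sketch assuming densely numbered, per-directory sequence numbers and a simple in-memory stand-in for the update status table):

def handle_remote_update(update, applied_seqs, pending, transaction_log):
    """applied_seqs: set of sequence numbers already applied;
    pending: dict of seq -> update, buffering out-of-order arrivals."""
    if update.seq in applied_seqs:
        return "ignored"              # duplicate delivery; application is idempotent
    if update.seq != max(applied_seqs, default=0) + 1:
        pending[update.seq] = update  # delay: an earlier update has not been applied yet
        return "delayed"
    transaction_log.offer(update)     # next update in the logical ordering
    applied_seqs.add(update.seq)
    # Drain any buffered updates that are now in order.
    while (nxt := pending.pop(max(applied_seqs) + 1, None)) is not None:
        transaction_log.offer(nxt)
        applied_seqs.add(nxt.seq)
    return "applied"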
If, for instance, the access request is a read request, then as indicated by the positive exit from1120, the access request may be allowed to perform, and thus read the data specified in the read request, as indicated at1140, because hierarchical data structure data that is present in a replica may be read by default (as it would not be replicated to the hierarchical data structure if that hierarchical data structure were not allowed to provide access to it). For other access requests, such as access requests to update, change, add, remove or delete data (objects, attributes, links, etc.), a determination may be made as to whether the distributed data store has mastery of the object, as indicated at1130. Mastery of objects may include, for example, the write or update permissions for an object (e.g., the master of an object has exclusive write or update permissions for the object). A determination may be made as to whether the data store has mastery by examining metadata for the object (e.g., metadata that identifies the master explicitly, such as a master data store or region id, or the id of the region or distributed data store that created the object). If the distributed data store does not have mastery of the object, then as indicated at1132, performance of the access request may be blocked at the distributed data store. In some embodiments, information may be provided which identifies the distributed data store that does have mastery for the object in response to a blocked access request. If the distributed data store does have mastery of the object, then as indicated at1140, performance of the access request may be allowed. The methods described herein may in various embodiments be implemented by any combination of hardware and software. For example, in one embodiment, the methods may be implemented by a computer system (e.g., a computer system as inFIG.12) that includes one or more processors executing program instructions stored on a computer-readable storage medium coupled to the processors. The program instructions may be configured to implement the functionality described herein (e.g., the functionality of various servers and other components that implement the directory storage service and/or storage services/systems described herein). The various methods as illustrated in the figures and described herein represent example embodiments of methods. The order of any method may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. FIG.12is a block diagram illustrating a computer system configured to implement selective updates to hierarchical data structures, according to various embodiments, as well as various other systems, components, services or devices described above. For example, computer system2000may be configured to implement hierarchy storage nodes that maintain versions of hierarchical data structures or components of a transaction log store that maintain transaction logs for hierarchical data structures, in different embodiments. Computer system2000may be any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop or notebook computer, mainframe computer system, handheld computer, workstation, network computer, a consumer device, application server, storage device, telephone, mobile telephone, or in general any type of computing device.
Computer system2000includes one or more processors2010(any of which may include multiple cores, which may be single or multi-threaded) coupled to a system memory2020via an input/output (I/O) interface2030. Computer system2000further includes a network interface2040coupled to I/O interface2030. In various embodiments, computer system2000may be a uniprocessor system including one processor2010, or a multiprocessor system including several processors2010(e.g., two, four, eight, or another suitable number). Processors2010may be any suitable processors capable of executing instructions. For example, in various embodiments, processors2010may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors2010may commonly, but not necessarily, implement the same ISA. The computer system2000also includes one or more network communication devices (e.g., network interface2040) for communicating with other systems and/or components over a communications network (e.g., Internet, LAN, etc.). For example, a client application executing on system2000may use network interface2040to communicate with a server application executing on a single server or on a cluster of servers that implement one or more of the components of the directory storage systems described herein. In another example, an instance of a server application executing on computer system2000may use network interface2040to communicate with other instances of the server application (or another server application) that may be implemented on other computer systems (e.g., computer systems2090). In the illustrated embodiment, computer system2000also includes one or more persistent storage devices2060and/or one or more I/O devices2080. In various embodiments, persistent storage devices2060may correspond to disk drives, tape drives, solid state memory, other mass storage devices, or any other persistent storage device. Computer system2000(or a distributed application or operating system operating thereon) may store instructions and/or data in persistent storage devices2060, as desired, and may retrieve the stored instructions and/or data as needed. For example, in some embodiments, computer system2000may host a storage system server node, and persistent storage2060may include the SSDs attached to that server node. Computer system2000includes one or more system memories2020that are configured to store instructions and data accessible by processor(s)2010. In various embodiments, system memories2020may be implemented using any suitable memory technology (e.g., one or more of cache, static random access memory (SRAM), DRAM, RDRAM, EDO RAM, DDR RAM, synchronous dynamic RAM (SDRAM), Rambus RAM, EEPROM, non-volatile/Flash-type memory, or any other type of memory). System memory2020may contain program instructions2025that are executable by processor(s)2010to implement the methods and techniques described herein. In various embodiments, program instructions2025may be encoded in platform native binary, any interpreted language such as Java™ byte-code, or in any other language such as C/C++, Java™, etc., or in any combination thereof.
For example, in the illustrated embodiment, program instructions2025include program instructions executable to implement the functionality of hierarchy storage nodes that maintain versions of hierarchical data structures or components of a transaction log store that maintain transaction logs for hierarchical data structures, in different embodiments. In some embodiments, program instructions2025may implement multiple separate clients, server nodes, and/or other components. In some embodiments, program instructions2025may include instructions executable to implement an operating system (not shown), which may be any of various operating systems, such as UNIX, LINUX, Solaris™, MacOS™, Windows™, etc. Any or all of program instructions2025may be provided as a computer program product, or software, that may include a non-transitory computer-readable storage medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to various embodiments. A non-transitory computer-readable storage medium may include any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). Generally speaking, a non-transitory computer-accessible medium may include computer-readable storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM coupled to computer system2000via I/O interface2030. A non-transitory computer-readable storage medium may also include any volatile or non-volatile media such as RAM (e.g., SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computer system2000as system memory2020or another type of memory. In other embodiments, program instructions may be communicated using optical, acoustical or other form of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.) conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface2040. In some embodiments, system memory2020may include data store2045, which may be configured as described herein. For example, the information described herein as being stored by the hierarchy storage nodes or transaction log store described herein may be stored in data store2045or in another portion of system memory2020on one or more nodes, in persistent storage2060, and/or on one or more remote storage devices2070, at different times and in various embodiments. In general, system memory2020(e.g., data store2045within system memory2020), persistent storage2060, and/or remote storage2070may store data blocks, replicas of data blocks, metadata associated with data blocks and/or their state, database configuration information, and/or any other information usable in implementing the methods and techniques described herein. In one embodiment, I/O interface2030may be configured to coordinate I/O traffic between processor2010, system memory2020and any peripheral devices in the system, including through network interface2040or other peripheral interfaces. In some embodiments, I/O interface2030may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory2020) into a format suitable for use by another component (e.g., processor2010).
In some embodiments, I/O interface2030may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface2030may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments, some or all of the functionality of I/O interface2030, such as an interface to system memory2020, may be incorporated directly into processor2010. Network interface2040may be configured to allow data to be exchanged between computer system2000and other devices attached to a network, such as other computer systems2090(which may implement embodiments described herein), for example. In addition, network interface2040may be configured to allow communication between computer system2000and various I/O devices2050and/or remote storage2070. Input/output devices2050may, in some embodiments, include one or more display terminals, keyboards, keypads, touchpads, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or retrieving data by one or more computer systems2000. Multiple input/output devices2050may be present in computer system2000or may be distributed on various nodes of a distributed system that includes computer system2000. In some embodiments, similar input/output devices may be separate from computer system2000and may interact with one or more nodes of a distributed system that includes computer system2000through a wired or wireless connection, such as over network interface2040. Network interface2040may commonly support one or more wireless networking protocols (e.g., Wi-Fi/IEEE 802.11, or another wireless networking standard). However, in various embodiments, network interface2040may support communication via any suitable wired or wireless general data networks, such as other types of Ethernet networks, for example. Additionally, network interface2040may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol. In various embodiments, computer system2000may include more, fewer, or different components than those illustrated inFIG.12(e.g., displays, video cards, audio cards, peripheral devices, other network interfaces such as an ATM interface, an Ethernet interface, a Frame Relay interface, etc.). It is noted that any of the distributed system embodiments described herein, or any of their components, may be implemented as one or more network-based services. For example, a database engine head node within the database tier of a database system may present database services and/or other types of data storage services that employ the distributed storage systems described herein to clients as network-based services. In some embodiments, a network-based service may be implemented by a software and/or hardware system designed to support interoperable machine-to-machine interaction over a network. A network-based service may have an interface described in a machine-processable format, such as the Web Services Description Language (WSDL). Other systems may interact with the network-based service in a manner prescribed by the description of the network-based service's interface.
For example, the network-based service may define various operations that other systems may invoke, and may define a particular application programming interface (API) to which other systems may be expected to conform when requesting the various operations. In various embodiments, a network-based service may be requested or invoked through the use of a message that includes parameters and/or data associated with the network-based services request. Such a message may be formatted according to a particular markup language such as Extensible Markup Language (XML), and/or may be encapsulated using a protocol such as Simple Object Access Protocol (SOAP). To perform a network-based services request, a network-based services client may assemble a message including the request and convey the message to an addressable endpoint (e.g., a Uniform Resource Locator (URL)) corresponding to the network-based service, using an Internet-based application layer transfer protocol such as Hypertext Transfer Protocol (HTTP). In some embodiments, network-based services may be implemented using Representational State Transfer (“RESTful”) techniques rather than message-based techniques. For example, a network-based service implemented according to a RESTful technique may be invoked through parameters included within an HTTP method such as PUT, GET, or DELETE, rather than encapsulated within a SOAP message. The various methods as illustrated in the figures and described herein represent example embodiments of methods. The methods may be implemented manually, in software, in hardware, or in a combination thereof. The order of any method may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Although the embodiments above have been described in considerable detail, numerous variations and modifications may be made as would become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such modifications and changes and, accordingly, the above description to be regarded in an illustrative rather than a restrictive sense.
DETAILED DESCRIPTION Reference will now be made in detail to specific example embodiments for carrying out the inventive subject matter. Examples of these specific embodiments are illustrated in the accompanying drawings, and specific details are outlined in the following description to provide a thorough understanding of the subject matter. It will be understood that these examples are not intended to limit the scope of the claims to the illustrated embodiments. On the contrary, they are intended to cover such alternatives, modifications, and equivalents as may be included within the scope of the disclosure. In the present disclosure, physical units of data that are stored in a data platform—and that make up the content of, e.g., database tables in customer accounts—are referred to as micro-partitions. In different implementations, a data platform may store metadata in micro-partitions as well. The term “micro-partitions” is distinguished in this disclosure from the term “files,” which, as used herein, refers to data units such as image files (e.g., Joint Photographic Experts Group (JPEG) files, Portable Network Graphics (PNG) files, etc.), video files (e.g., Moving Picture Experts Group (MPEG) files, MPEG-4 (MP4) files, Advanced Video Coding High Definition (AVCHD) files, etc.), Portable Document Format (PDF) files, documents that are formatted to be compatible with one or more word-processing applications, documents that are formatted to be compatible with one or more spreadsheet applications, and/or the like. If stored internal to the data platform, a given file is referred to herein as an “internal file” and may be stored in (or at, or on, etc.) what is referred to herein as an “internal storage location.” If stored external to the data platform, a given file is referred to herein as an “external file” and is referred to as being stored in (or at, or on, etc.) what is referred to herein as an “external storage location.” These terms are further discussed below. Computer-readable files come in several varieties, including unstructured files, semi-structured files, and structured files. These terms may mean different things to different people. As used herein, examples of unstructured files include image files, video files, PDFs, audio files, and the like; examples of semi-structured files include JavaScript Object Notation (JSON) files, eXtensible Markup Language (XML) files, and the like; and examples of structured files include Variant Call Format (VCF) files, Keithley Data File (KDF) files, Hierarchical Data Format version 5 (HDF5) files, and the like. As known to those of skill in the relevant arts, VCF files are often used in the bioinformatics field for storing, e.g., gene-sequence variations, KDF files are often used in the semiconductor industry for storing, e.g., semiconductor-testing data, and HDF5 files are often used in industries such as the aeronautics industry, in that case for storing data such as aircraft-emissions data. Numerous other example unstructured-file types, semi-structured-file types, and structured-file types, as well as example uses thereof, could certainly be listed here as well and will be familiar to those of skill in the relevant arts. Different people of skill in the relevant arts may classify types of files differently among these categories and may use one or more different categories instead of or in addition to one or more of these. Data platforms are widely used for data storage and data access in computing and communication contexts. 
Concerning architecture, a data platform could be an on-premises data platform, a network-based data platform (e.g., a cloud-based data platform), a combination of the two, and/or include another type of architecture. Concerning the type of data processing, a data platform could implement online analytical processing (OLAP), online transactional processing (OLTP), a combination of the two, and/or another type of data processing. Moreover, a data platform could be or include a relational database management system (RDBMS) and/or one or more other types of database management systems. In a typical implementation, a data platform may include one or more databases that are respectively maintained in association with any number of customer accounts (e.g., accounts of one or more data providers), as well as one or more databases associated with a system account (e.g., an administrative account) of the data platform, one or more other databases used for administrative purposes, and/or one or more other databases that are maintained in association with one or more other organizations and/or for any other purposes. A data platform may also store metadata (e.g., account object metadata) in association with the data platform in general and in association with, for example, particular databases and/or particular customer accounts as well. Users and/or executing processes that are associated with a given customer account may, via one or more types of clients, be able to cause data to be ingested into the database, and may also be able to manipulate the data, add additional data, remove data, run queries against the data, generate views of the data, and so forth. As used herein, the terms "account object metadata" and "account object" are used interchangeably. In an implementation of a data platform, a given database (e.g., a database maintained for a customer account) may reside as an object within, e.g., a customer account, which may also include one or more other objects (e.g., users, roles, grants, shares, warehouses, resource monitors, integrations, network policies, and/or the like). Furthermore, a given object such as a database may itself contain one or more objects such as schemas, tables, materialized views, and/or the like. A given table may be organized as a collection of records (e.g., rows) so that each record includes a plurality of attributes (e.g., columns). In some implementations, database data is physically stored across multiple storage units, which may be referred to as files, blocks, partitions, micro-partitions, and/or by one or more other names. In many cases, a database on a data platform serves as a backend for one or more applications that are executing on one or more application servers. Existing account object metadata synchronization techniques between different customer accounts include manual maintenance of different database-related processes to ensure the different account objects are synchronized in all accounts. Such synchronization techniques may be costly and time-consuming, as the data provider has to execute multiple show or information schema queries, and then execute commands on the secondary (or target) account to ensure the account object metadata in the secondary account is synchronized with the account object metadata in the primary (or source) account.
In this regard, account object metadata synchronization techniques based on replicating only a single database object (e.g., schemas, tables, columns, sequences, and functions underneath a database object) and manually synchronizing the object processes between accounts are associated with inefficiencies. Additionally, if an object in a first database that is being replicated refers to an object in a second database, then a refresh of the first database would fail. If databases are replicated separately, such databases may not be transactionally consistent with each other, as each database will be replicated at a different time, resulting in a certain time difference between databases. In this regard, a replicated account object can be associated with other dependencies (e.g., one or more other account objects), which would also need to be synchronized if the account object is replicated. Aspects of the present disclosure provide techniques for configuration and use of account object metadata replication. More specifically, a replication request (e.g., from a data provider) indicates at least a first account object (e.g., a user account object) as well as a source account (e.g., an account of a data provider) and a target account (e.g., an account of the data provider or of a customer of the data provider, such as a data consumer). An object dependency of the at least first account object to at least a second account object is determined. A replication of the at least first and second account objects is performed from the source account to the target account. In this regard, multiple account objects can be replicated based on a single replication request, which allows multiple objects (including databases) to be replicated transactionally with point-in-time consistency. Additional benefits of using account object metadata replication include simplicity in data management, the ability to have related objects across different databases (e.g., across different remote deployment accounts of a data provider), the ability to replicate account metadata along with data, transactional consistency during replication across multiple databases, and simplified management of replication refreshes. Even though the replication request is described as including at least a first account object, a source account, and a target account, the disclosure is not limited in this regard and the replication request can be configured to include other configurations. Additionally, the configurations that are described herein as included in the replication request may be stored in a metadata database and retrieved prior to replication (e.g., based on a user-specific identifier in the replication request for a user requesting the replication, or based on other account/device/user identifying information included in the replication request). The various embodiments that are described herein are described with reference where appropriate to one or more of the various figures. An example computing environment with an application connector (e.g., as installed at a client device) configured to perform object replication configuration functions, as well as a compute service manager with an object replication manager (e.g., configured to perform the disclosed account object replication functionalities), are discussed in connection withFIGS.1-3. An example object replication manager is discussed in connection withFIG.4. Additional account object replication configurations are discussed in connection withFIG.5-FIG.10.
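As a sketch of the dependency-aware replication just described (the request and object shapes, the get_dependencies hook, and the replicate_batch call are all assumptions introduced for illustration):

def dependency_closure(root, get_dependencies):
    """Return the requested account object plus everything it transitively depends on."""
    seen, stack = {}, [root]
    while stack:
        obj = stack.pop()
        if obj.object_id in seen:
            continue
        seen[obj.object_id] = obj
        stack.extend(get_dependencies(obj))  # e.g., roles referenced by a user account object
    return list(seen.values())

def handle_replication_request(request, get_dependencies, replicate_batch):
    objects = dependency_closure(request.account_object, get_dependencies)
    # A single batch, so all objects replicate together with point-in-time consistency.
    replicate_batch(source=request.source_account,
                    target=request.target_account,
                    objects=objects)
    return [obj.object_id for obj in objects]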
A more detailed discussion of example computing devices that may be used with the disclosed techniques is provided in connection withFIG.11. FIG.1illustrates an example computing environment100that includes a database system in the example form of a network-based database system102, in accordance with some embodiments of the present disclosure. To avoid obscuring the inventive subject matter with unnecessary detail, various functional components that are not germane to conveying an understanding of the inventive subject matter have been omitted fromFIG.1. However, a skilled artisan will readily recognize that various additional functional components may be included as part of the computing environment100to facilitate additional functionality that is not specifically described herein. In other embodiments, the computing environment may comprise another type of network-based database system or a cloud data platform. For example, in some aspects, the computing environment100may include a cloud computing platform101with the network-based database system102, and a storage platform104(also referred to as a cloud storage platform). The cloud computing platform101provides computing resources and storage resources that may be acquired (purchased) or leased and configured to execute applications and store data. The cloud computing platform101may host a cloud computing service103that facilitates storage of data on the cloud computing platform101(e.g., data management and access) and analysis functions (e.g., SQL queries, analysis), as well as other processing capabilities (e.g., configuring replication group objects as described herein). The cloud computing platform101may include a three-tier architecture: data storage (e.g., storage platforms104and122), an execution platform110(e.g., providing query processing), and a compute service manager108providing cloud services. It is often the case that organizations that are customers of a given data platform also maintain data storage (e.g., a data lake) that is external to the data platform (i.e., one or more external storage locations). For example, a company could be a customer of a particular data platform and also separately maintain storage of any number of files—be they unstructured files, semi-structured files, structured files, and/or files of one or more other types—on, as examples, one or more of their servers and/or on one or more cloud-storage platforms such as AMAZON WEB SERVICES™ (AWS™), MICROSOFT® AZURE®, GOOGLE CLOUD PLATFORM™, and/or the like. The customer's servers and cloud-storage platforms are both examples of what a given customer could use as what is referred to herein as an external storage location. The cloud computing platform101could also use a cloud-storage platform as what is referred to herein as an internal storage location concerning the data platform. From the perspective of the network-based database system102of the cloud computing platform101, one or more files that are stored at one or more storage locations are referred to herein as being organized into one or more of what is referred to herein as either “internal stages” or “external stages.” Internal stages are stages that correspond to data storage at one or more internal storage locations, while external stages are stages that correspond to data storage at one or more external storage locations. 
In this regard, external files can be stored in external stages at one or more external storage locations, and internal files can be stored in internal stages at one or more internal storage locations, which can include servers managed and controlled by the same organization (e.g., company) that manages and controls the data platform, and which can instead or in addition include data-storage resources operated by a storage provider (e.g., a cloud-storage platform) that is used by the data platform for its “internal” storage. The internal storage of a data platform is also referred to herein as the “storage platform” of the data platform. It is further noted that a given external file that a given customer stores at a given external storage location may or may not be stored in an external stage in the external storage location—i.e., in some data-platform implementations, it is a customer's choice whether to create one or more external stages (e.g., one or more external-stage objects) in the customer's data-platform account as an organizational and functional construct for conveniently interacting via the data platform with one or more external files. As shown, the network-based database system102of the cloud computing platform101is in communication with the cloud storage platforms104and122(e.g., AWS®, Microsoft Azure Blob Storage®, or Google Cloud Storage). The network-based database system102is a network-based system used for reporting and analysis of integrated data from one or more disparate sources including one or more storage locations within the cloud storage platform104. The cloud storage platform104comprises a plurality of computing machines and provides on-demand computer system resources such as data storage and computing power to the network-based database system102. The network-based database system102comprises a compute service manager108, an execution platform110, and one or more metadata databases112. The network-based database system102hosts and provides data reporting and analysis services to multiple client accounts. The compute service manager108coordinates and manages operations of the network-based database system102. The compute service manager108also performs query optimization and compilation as well as managing clusters of computing services that provide compute resources (also referred to as “virtual warehouses”). The compute service manager108can support any number of client accounts such as end-users providing data storage and retrieval requests, system administrators managing the systems and methods described herein, and other components/devices that interact with compute service manager108. The compute service manager108is also in communication with a client device114. The client device114corresponds to a user of one of the multiple client accounts supported by the network-based database system102. A user may utilize the client device114to submit data storage, retrieval, and analysis requests to the compute service manager108. Client device114(also referred to as user device114) may include one or more of a laptop computer, a desktop computer, a mobile phone (e.g., a smartphone), a tablet computer, a cloud-hosted computer, cloud-hosted serverless processes, or other computing processes or devices that may be used to access services provided by the cloud computing platform101(e.g., cloud computing service103) by way of a network106, such as the Internet or a private network. In the description below, actions are ascribed to users, particularly consumers and providers. 
Such actions shall be understood to be performed concerning client device (or devices)114operated by such users. For example, notification to a user may be understood to be a notification transmitted to client device114, input or instruction from a user may be understood to be received by way of the client device114, and interaction with an interface by a user shall be understood to be interaction with the interface on the client device114. In addition, database operations (joining, aggregating, analysis, etc.) ascribed to a user (consumer or provider) shall be understood to include performing such actions by the cloud computing service103in response to an instruction from that user. In some embodiments, the client device114is configured with an application connector128, which may be configured to perform object replication configuration functions130. For example, client device114can be associated with a data provider using the cloud computing service103of the network-based database system102. In some embodiments, object replication configuration functions130include generating a replication request138for communication to the network-based database system102via the network106. For example, replication request138can be communicated to the object replication manager132of the compute service manager108. The object replication manager132is configured to perform replication of one or more account objects and generate replicated objects134based on the replication request138. In some embodiments, the replication request indicates at least a first account object for replication. The indicated at least first account object can be used to determine a dependency to at least a second account object. In some aspects, the at least first and second account objects can be associated with a corresponding account object type of a plurality of account object types. In some aspects, the plurality of account object types comprises at least one of the following: a user account object type, a roles account object type, a grant account object type, a warehouse object type, a resource monitor object type, a database account object type, a share account object type, an integration account object type, and a network policy account object type. In some aspects, the replication request can be configured to further indicate a source account and a target account for performing the account object replication (e.g., from the source account into the target account). A more detailed description of the account object types is provided in connection withFIG.4. The compute service manager108is also coupled to one or more metadata databases112that store metadata about various functions and aspects associated with the network-based database system102and its users. For example, a metadata database112may include a summary of data stored in remote data storage systems as well as data available from a local cache. Additionally, a metadata database112may include information regarding how data is organized in remote data storage systems (e.g., the cloud storage platform104) and the local caches. Information stored by a metadata database112allows systems and services to determine whether a piece of data needs to be accessed without loading or accessing the actual data from a storage device. In some embodiments, metadata database112is configured to store account object metadata (e.g., account objects used in connection with a replication group object). Additionally, the metadata database112can also store the replicated account objects134. 
In some embodiments, the replicated account objects134can be stored in storage platform104or cloud-storage platforms122. The compute service manager108is further coupled to the execution platform110, which provides multiple computing resources that execute various data storage and data retrieval tasks. As illustrated inFIG.3, the execution platform110comprises a plurality of compute nodes. The execution platform110is coupled to storage platform104and cloud storage platforms122. The storage platform104comprises multiple data storage devices120-1to120-N. In some embodiments, the data storage devices120-1to120-N are cloud-based storage devices located in one or more geographic locations. For example, the data storage devices120-1to120-N may be part of a public cloud infrastructure or a private cloud infrastructure. The data storage devices120-1to120-N may be hard disk drives (HDDs), solid-state drives (SSDs), storage clusters, Amazon S3™ storage systems, or any other data-storage technology. Additionally, the cloud storage platform104may include distributed file systems (such as Hadoop Distributed File Systems (HDFS)), object storage systems, and the like. In some embodiments, at least one internal stage126may reside on one or more of the data storage devices120-1-120-N, and at least one external stage124may reside on one or more of the cloud storage platforms122. In some embodiments, the compute service manager108includes the object replication manager132. The object replication manager132comprises suitable circuitry, interfaces, logic, and/or code and is configured to perform the disclosed functionalities associated with configuration and use of account object metadata replication. For example, the object replication manager132performs account object replication to generate replicated objects134based on the replication request138. More specifically, the object replication manager132is also configured to perform a replication of the plurality of account objects from a source account of the data provider into at least one target account based on configuration information in the replication request138. Additional functionalities associated with the configuration of account object replication are discussed in connection withFIG.4-FIG.11. In some embodiments, communication links between elements of the computing environment100are implemented via one or more data communication networks. These data communication networks may utilize any communication protocol and any type of communication medium. In some embodiments, the data communication networks are a combination of two or more data communication networks (or sub-networks) coupled to one another. In alternate embodiments, these communication links are implemented using any type of communication medium and any communication protocol. The compute service manager108, metadata database(s)112, execution platform110, and storage platform104are shown inFIG.1as individual discrete components. However, each of the compute service manager108, metadata database(s)112, execution platform110, and storage platform104may be implemented as a distributed system (e.g., distributed across multiple systems/platforms at multiple geographic locations). Additionally, each of the compute service manager108, metadata database(s)112, execution platform110, and storage platform104can be scaled up or down (independently of one another) depending on changes to the requests received and the changing needs of the network-based database system102. 
Thus, in the described embodiments, the network-based database system102is dynamic and supports regular changes to meet the current data processing needs. During a typical operation, the network-based database system102processes multiple jobs determined by the compute service manager108. These jobs are scheduled and managed by the compute service manager108to determine when and how to execute the job. For example, the compute service manager108may divide the job into multiple discrete tasks and may determine what data is needed to execute each of the multiple discrete tasks. The compute service manager108may assign each of the multiple discrete tasks to one or more nodes of the execution platform110to process the task. The compute service manager108may determine what data is needed to process a task and further determine which nodes within the execution platform110are best suited to process the task. Some nodes may have already cached the data needed to process the task and, therefore, be good candidates for processing the task. Metadata stored in a metadata database112assists the compute service manager108in determining which nodes in the execution platform110have already cached at least a portion of the data needed to process the task. One or more nodes in the execution platform110process the task using data cached by the nodes and, if necessary, data retrieved from the cloud storage platform104. It is desirable to retrieve as much data as possible from caches within the execution platform110because the retrieval speed is typically much faster than retrieving data from the cloud storage platform104. As shown inFIG.1, the cloud computing platform101of the computing environment100separates the execution platform110from the storage platform104. In this arrangement, the processing resources and cache resources in the execution platform110operate independently of the data storage devices120-1to120-N in the cloud storage platform104. Thus, the computing resources and cache resources are not restricted to specific data storage devices120-1to120-N. Instead, all computing resources and all cache resources may retrieve data from, and store data to, any of the data storage resources in the cloud storage platform104. FIG.2is a block diagram illustrating components of the compute service manager108, in accordance with some embodiments of the present disclosure. As shown inFIG.2, the compute service manager108includes an access manager202and a credential management system (or key manager)204coupled to an access metadata database206, which is an example of the metadata database(s)112. Access manager202handles authentication and authorization tasks for the systems described herein. The credential management system204facilitates the use of remotely stored credentials to access external resources such as data resources in a remote storage device. As used herein, the remote storage devices may also be referred to as “persistent storage devices” or “shared storage devices.” For example, the credential management system204may create and maintain remote credential store definitions and credential objects (e.g., in the access metadata database206). A remote credential store definition identifies a remote credential store and includes access information to access security credentials from the remote credential store. 
A credential object identifies one or more security credentials using non-sensitive information (e.g., text strings) that are to be retrieved from a remote credential store for use in accessing an external resource. When a request invoking an external resource is received at run time, the credential management system204and access manager202use information stored in the access metadata database206(e.g., a credential object and a credential store definition) to retrieve security credentials used to access the external resource from a remote credential store. A request processing service208manages received data storage requests and data retrieval requests (e.g., jobs to be performed on database data). For example, the request processing service208may determine the data to process a received query (e.g., a data storage request or data retrieval request). The data may be stored in a cache within the execution platform110or in a data storage device in storage platform104. A management console service210supports access to various systems and processes by administrators and other system managers. Additionally, the management console service210may receive a request to execute a job and monitor the workload on the system. The compute service manager108also includes a job compiler212, a job optimizer214, and a job executor216. The job compiler212parses a job into multiple discrete tasks and generates the execution code for each of the multiple discrete tasks. The job optimizer214determines the best method to execute the multiple discrete tasks based on the data that needs to be processed. Job optimizer214also handles various data pruning operations and other data optimization techniques to improve the speed and efficiency of executing the job. The job executor216executes the execution code for jobs received from a queue or determined by the compute service manager108. A job scheduler and coordinator218sends received jobs to the appropriate services or systems for compilation, optimization, and dispatch to the execution platform110. For example, jobs may be prioritized and then processed in that prioritized order. In an embodiment, the job scheduler and coordinator218determines a priority for internal jobs that are scheduled by the compute service manager108with other “outside” jobs such as user queries that may be scheduled by other systems in the database but may utilize the same processing resources in the execution platform110. In some embodiments, the job scheduler and coordinator218identifies or assigns particular nodes in the execution platform110to process particular tasks. A virtual warehouse manager220manages the operation of multiple virtual warehouses implemented in the execution platform110. For example, the virtual warehouse manager220may generate query plans for executing received queries. Additionally, the compute service manager108includes a configuration and metadata manager222, which manages the information related to the data stored in the remote data storage devices and the local buffers (e.g., the buffers in execution platform110). The configuration and metadata manager222uses metadata to determine which data files need to be accessed to retrieve data for processing a particular task or job. A monitor and workload analyzer224oversees processes performed by the compute service manager108and manages the distribution of tasks (e.g., workload) across the virtual warehouses and execution nodes in the execution platform110. 
The monitor and workload analyzer224also redistributes tasks, as needed, based on changing workloads throughout the network-based database system102and may further redistribute tasks based on a user (e.g., “external”) query workload that may also be processed by the execution platform110. The configuration and metadata manager222and the monitor and workload analyzer224are coupled to a data storage device226. The data storage device226inFIG.2represents any data storage device within the network-based database system102. For example, data storage device226may represent buffers in execution platform110, storage devices in storage platform104, or any other storage device. As described in embodiments herein, the compute service manager108validates all communication from an execution platform (e.g., the execution platform110) to validate that the content and context of that communication are consistent with the task(s) known to be assigned to the execution platform. For example, an instance of the execution platform executing a query A should not be allowed to request access to data-source D (e.g., data storage device226) that is not relevant to query A. Similarly, a given execution node (e.g., execution node302-1) may need to communicate with another execution node (e.g., execution node302-2) and should be disallowed from communicating with a third execution node (e.g., execution node312-1) and any such illicit communication can be recorded (e.g., in a log or other location). Also, the information stored on a given execution node is restricted to data relevant to the current query and any other data is unusable, rendered so by destruction or encryption where the key is unavailable. As previously mentioned, the compute service manager108includes the object replication manager132configured to perform the disclosed functionalities associated with configuration and use of account object replication. For example, the object replication manager132generates replicated objects134based on the replication request138. FIG.3is a block diagram illustrating components of the execution platform110, in accordance with some embodiments of the present disclosure. As shown inFIG.3, the execution platform110includes multiple virtual warehouses, including virtual warehouse1(or301-1), virtual warehouse2(or301-2), and virtual warehouse N (or301-N). Each virtual warehouse includes multiple execution nodes that each include a data cache and a processor. The virtual warehouses can execute multiple tasks in parallel by using multiple execution nodes. As discussed herein, the execution platform110can add new virtual warehouses and drop existing virtual warehouses in real-time based on the current processing needs of the systems and users. This flexibility allows the execution platform110to quickly deploy large amounts of computing resources when needed without being forced to continue paying for those computing resources when they are no longer needed. All virtual warehouses can access data from any data storage device (e.g., any storage device in the cloud storage platform104). Although each virtual warehouse shown inFIG.3includes three execution nodes, a particular virtual warehouse may include any number of execution nodes. Further, the number of execution nodes in a virtual warehouse is dynamic, such that new execution nodes are created when additional demand is present, and existing execution nodes are deleted when they are no longer necessary. 
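By way of illustration only, the following Python sketch models this elasticity; the tasks-per-node threshold and the node naming scheme are assumptions made for the example rather than any actual provisioning policy.

    from dataclasses import dataclass, field

    @dataclass
    class VirtualWarehouse:
        name: str
        nodes: list = field(default_factory=list)
        created: int = 0

        def scale(self, queued_tasks: int, tasks_per_node: int = 4):
            """Create nodes under load; delete idle nodes when demand falls."""
            wanted = max(1, -(-queued_tasks // tasks_per_node))  # ceiling division
            while len(self.nodes) < wanted:                      # scale out
                self.nodes.append(f"{self.name}-node-{self.created}")
                self.created += 1
            while len(self.nodes) > wanted:                      # scale in
                self.nodes.pop()

    vw = VirtualWarehouse("vw1")
    vw.scale(queued_tasks=10)    # demand rises: grows to 3 execution nodes
    vw.scale(queued_tasks=2)     # demand falls: shrinks back to 1 node
    print(vw.nodes)              # ['vw1-node-0']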
Each virtual warehouse is capable of accessing any of the data storage devices120-1to120-N shown inFIG.1. Thus, the virtual warehouses are not necessarily assigned to a specific data storage device120-1to120-N and, instead, can access data from any of the data storage devices120-1to120-N within the cloud storage platform104. Similarly, each of the execution nodes shown inFIG.3can access data from any of the data storage devices120-1to120-N. In some embodiments, a particular virtual warehouse or a particular execution node may be temporarily assigned to a specific data storage device, but the virtual warehouse or execution node may later access data from any other data storage device. In the example ofFIG.3, virtual warehouse1includes three execution nodes302-1,302-2, and302-N. Execution node302-1includes a cache304-1and a processor306-1. Execution node302-2includes a cache304-2and a processor306-2. Execution node302-N includes a cache304-N and a processor306-N. Each execution node302-1,302-2, and302-N is associated with processing one or more data storage and/or data retrieval tasks. For example, a virtual warehouse may handle data storage and data retrieval tasks associated with an internal service, such as a clustering service, a materialized view refresh service, a file compaction service, a storage procedure service, or a file upgrade service. In other implementations, a particular virtual warehouse may handle data storage and data retrieval tasks associated with a particular data storage system or a particular category of data. Similar to virtual warehouse1discussed above, virtual warehouse2includes three execution nodes312-1,312-2, and312-N. Execution node312-1includes a cache314-1and a processor316-1. Execution node312-2includes a cache314-2and a processor316-2. Execution node312-N includes a cache314-N and a processor316-N. Additionally, virtual warehouse N includes three execution nodes322-1,322-2, and322-N. Execution node322-1includes a cache324-1and a processor326-1. Execution node322-2includes a cache324-2and a processor326-2. Execution node322-N includes a cache324-N and a processor326-N. In some embodiments, the execution nodes shown inFIG.3are stateless with respect to the data being cached by the execution nodes. For example, these execution nodes do not store or otherwise maintain state information about the execution node or the data being cached by a particular execution node. Thus, in the event of an execution node failure, the failed node can be transparently replaced by another node. Since there is no state information associated with the failed execution node, the new (replacement) execution node can easily replace the failed node without concern for recreating a particular state. Although the execution nodes shown inFIG.3each include one data cache and one processor, alternative embodiments may include execution nodes containing any number of processors and any number of caches. Additionally, the caches may vary in size among the different execution nodes. The caches shown inFIG.3store, in the local execution node, data that was retrieved from one or more data storage devices in the cloud storage platform104. Thus, the caches reduce or eliminate the bottleneck problems occurring in platforms that consistently retrieve data from remote storage systems. 
Instead of repeatedly accessing data from the remote storage devices, the systems and methods described herein access data from the caches in the execution nodes, which is significantly faster and avoids the bottleneck problem discussed above. In some embodiments, the caches are implemented using high-speed memory devices that provide fast access to the cached data. Each cache can store data from any of the storage devices in the cloud storage platform104. Further, the cache resources and computing resources may vary between different execution nodes. For example, one execution node may contain significant computing resources and minimal cache resources, making the execution node useful for tasks that require significant computing resources. Another execution node may contain significant cache resources and minimal computing resources, making this execution node useful for tasks that require caching of large amounts of data. Yet another execution node may contain cache resources providing faster input-output operations, useful for tasks that require fast scanning of large amounts of data. In some embodiments, the cache resources and computing resources associated with a particular execution node are determined when the execution node is created, based on the expected tasks to be performed by the execution node. Additionally, the cache resources and computing resources associated with a particular execution node may change over time based on changing tasks performed by the execution node. For example, an execution node may be assigned more processing resources if the tasks performed by the execution node become more processor-intensive. Similarly, an execution node may be assigned more cache resources if the tasks performed by the execution node require a larger cache capacity. Although virtual warehouses1,2, and N are associated with the same execution platform110, virtual warehouses1, . . . , N may be implemented using multiple computing systems at multiple geographic locations. For example, virtual warehouse1can be implemented by a computing system at a first geographic location, while virtual warehouses2and N are implemented by another computing system at a second geographic location. In some embodiments, these different computing systems are cloud-based computing systems maintained by one or more different entities. Additionally, each virtual warehouse is shown inFIG.3as having multiple execution nodes. The multiple execution nodes associated with each virtual warehouse may be implemented using multiple computing systems at multiple geographic locations. For example, an instance of virtual warehouse1implements execution nodes302-1and302-2on one computing platform at a geographic location, and execution node302-N at a different computing platform at another geographic location. Selecting particular computing systems to implement an execution node may depend on various factors, such as the level of resources needed for a particular execution node (e.g., processing resource requirements and cache requirements), the resources available at particular computing systems, communication capabilities of networks within a geographic location or between geographic locations, and which computing systems are already implementing other execution nodes in the virtual warehouse. Execution platform110is also fault-tolerant. For example, if one virtual warehouse fails, that virtual warehouse is quickly replaced with a different virtual warehouse at a different geographic location. 
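A non-limiting Python sketch of this replacement behavior follows; the location names and the warehouse record layout are assumptions introduced for the example. Because execution nodes are stateless with respect to cached data, the replacement requires no state recreation.

    def replace_failed_warehouse(warehouses, failed_name, locations):
        """Replace a failed virtual warehouse with one at another location."""
        failed = warehouses.pop(failed_name)
        fallback = next(loc for loc in locations if loc != failed["location"])
        # Nodes hold no state that must be recovered, so the replacement
        # warehouse starts empty and its caches simply refill on demand.
        warehouses[failed_name] = {"location": fallback,
                                   "node_count": failed["node_count"]}

    warehouses = {"vw1": {"location": "us-east", "node_count": 3}}
    replace_failed_warehouse(warehouses, "vw1", ["us-east", "eu-west", "ap-south"])
    print(warehouses["vw1"])   # {'location': 'eu-west', 'node_count': 3}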
A particular execution platform110may include any number of virtual warehouses. Additionally, the number of virtual warehouses in a particular execution platform is dynamic, such that new virtual warehouses are created when additional processing and/or caching resources are needed. Similarly, existing virtual warehouses may be deleted when the resources associated with the virtual warehouse are no longer necessary. In some embodiments, the virtual warehouses may operate on the same data in the cloud storage platform104, but each virtual warehouse has its own execution nodes with independent processing and caching resources. This configuration allows requests on different virtual warehouses to be processed independently and with no interference between the requests. This independent processing, combined with the ability to dynamically add and remove virtual warehouses, supports the addition of new processing capacity for new users without impacting the performance observed by the existing users. In some embodiments, at least one of the execution nodes of execution platform110(e.g., execution node302-1) can be configured with the object replication manager132. Some example embodiments involve provisioning a remote account of a data provider—a type of account that is referred to herein at times as a “remote-deployment account,” a “remote-deployment account of a data provider,” a “data-provider remote account,” and the like—with one or more replication group objects for purposes of performing replication from a source account into a target account. It is also noted here that the terms “replication” and “refresh” (and similar forms such as “replicating,” “refreshing,” etc.) are used throughout the present disclosure. Generally speaking, “refresh” and its various forms are used to refer to a command or instruction that causes a database to start receiving one-way syncing (e.g., “pushed” updates). The term “replicate” and its various forms are used in a few different ways. In some cases, the “replicate” terms are used as a precursor to the “refresh” terms, where the “replicate” terms refer to the preparatory provisioning (populating, storing, etc.) of account objects, in some cases along with one or more task objects as described herein. When used in that manner, the “replicate” terms can be analogized to putting up scaffolding for a building, and the “refresh” terms can be analogized to putting up the building. The “replicate” terms are also used in another way herein—in those cases, the terms are used as a general label for what a data consumer may request (e.g., via their data provider) when the data consumer wishes to have made available to them a local instance of a given database at a given remote-deployment account of their data provider. That is, the data consumer may request “replication” of a given database to a given remote deployment, and a data platform may responsively perform operations such as the more technical “replicate” operations (putting up the scaffolding) using one or more replication group objects and “refresh” operations (building, populating, filling in, etc.) that are also described herein. FIG.4is a block diagram of an object replication manager132used in the network-based database system ofFIG.1, in accordance with some embodiments of the present disclosure. 
Referring toFIG.4, the object replication manager132includes an account object type (OT) selection manager402, a metadata database404, an account object replication configuration422, a storage platform selection manager424, a deployment selection manager426, and a logging and notification manager428. The account object replication configuration422includes configurations for performing account object metadata replication. For example, the account object replication configuration422includes one or more configurations communicated from a client device via the replication request138. Configurations communicated via the replication request138(which are also illustrated inFIG.6) include a user account object for replication, a source account, a target account, as well as one or more additional objects or other configurations. The storage platform selection manager424comprises suitable circuitry, interfaces, and/or code and is configured to select a storage platform for storing the replicated account object metadata. The selection of the storage platform can be based on a selection of a target account communicated via the replication request. In some embodiments, the selection of the storage platform is performed based on a deployment selection performed by the deployment selection manager426. Example cloud storage deployments are illustrated inFIG.5. The deployment selection manager426comprises suitable circuitry, interfaces, and/or code and is configured to select at least one deployment (e.g., at least one cloud storage deployment) which can be used for storing the replicated account object metadata. In some embodiments, a deployment selection can be provided with the replication request. In other embodiments, a deployment selection can be performed based on an account OT (e.g., one of the account OTs stored by the metadata database404) and a specific deployment used for storing account object metadata associated with the account OT. The account OT selection manager402comprises suitable circuitry, interfaces, and/or code and is configured to determine an object dependency associated with at least one account object communicated with the replication request. In some embodiments, the object dependency can be determined based on configuration information associated with each account OT stored by the metadata database404. After the object dependency is determined, the account OT selection manager402selects one or more additional account objects (e.g., account objects that are dependent on the account object received with the replication request). The account OT selection manager402can then perform replication of the account object received with the replication request as well as the one or more additional account objects determined based on the object dependency. The replication can be performed from a source account (e.g., a source account indicated by the replication request and associated with a first deployment) to at least one target account (e.g., a target account indicated by the replication request or determined based on the object dependency), which target accounts can be associated with at least a second deployment. The logging and notification manager428is configured to log replications of account object metadata as well as to provide notifications of completed replications or error notifications associated with an unfinished replication. 
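By way of illustration only and not limitation, the following Python sketch condenses the flow just described: resolve the dependency closure of the requested object, copy the closure from the source account to the target account, and log the outcome. The object catalog layout and the depends_on field are hypothetical simplifications introduced for this example.

    def replicate(request, source, target, log):
        """Replicate the requested account object plus its dependency closure."""
        pending, closure = [request["object"]], set()
        while pending:                     # determine the object dependencies
            name = pending.pop()
            if name not in closure:
                closure.add(name)
                pending.extend(source[name].get("depends_on", []))
        for name in closure:               # one request, many objects replicated
            target[name] = dict(source[name])
        log.append(f"replicated {sorted(closure)} into the target account")

    source = {
        "user:alice":   {"type": "user", "depends_on": ["role:analyst"]},
        "role:analyst": {"type": "role", "depends_on": ["grant:g1"]},
        "grant:g1":     {"type": "grant"},
    }
    target, log = {}, []
    replicate({"object": "user:alice"}, source, target, log)
    print(sorted(target))   # ['grant:g1', 'role:analyst', 'user:alice']
    print(log[0])

In this sketch, a successful run both copies the dependency closure and appends a log entry, mirroring the respective roles of the account OT selection manager402and the logging and notification manager428.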
The metadata database404includes information (e.g., object formatting, dependencies, and other configurations) on different account object types which can be used by the account OT selection manager402for determining dependencies in connection with account object metadata replication. The metadata database404includes information for the following account object types: user account OT406, roles account OT414, warehouse OT408, resource monitor OT416, database account OT410, share account OT418, integration account OT412, network policy account OT420, and grant account OT413. In some aspects, the metadata database404is the same as the metadata database112inFIG.1. In some embodiments, a user account object of the user account OT406lists users authorized to access the at least one target account (e.g., an account of a data provider or data consumer115) into which replication is performed. In this regard, a user account object is an object that is backing an identity. In some embodiments, each user with access to the network-based database system102can be represented by a user object. A user object stores all of the information about the user, including their login name, password, and defaults (role, virtual warehouse, and namespace). In some embodiments, a roles account object of the roles account OT414configures privileges for the user to access the at least one target account. For example, a certain role is given access to a certain number of objects or operations (e.g., a role has a certain number of privileges), and a user can be assigned a role. In some aspects, a role can be considered as a user group. Additionally, access to an object can be granted to a role (and not directly to a user), and roles can be granted to users. In some aspects, roles can also be assigned to other roles, creating a role hierarchy. In some embodiments, roles are assigned to users to allow them to perform actions required for business functions in their organization. A user can be assigned multiple roles, which allows users to switch roles (i.e., choose which role is active in a current data processing session) to perform different actions using separate sets of privileges. In some aspects, the compute service manager108uses roles to control access to objects in the network-based database system102. In this regard, roles are granted access privileges for objects in the system (e.g., databases, tables, etc.). In some aspects, roles can be granted to users to enable them to create, modify, and use the objects for which the roles have privileges. Roles can be granted to other roles to support defining hierarchical access privileges. In some aspects, a warehouse object of the warehouse OT408indicates compute resources (e.g., at least one virtual warehouse of the execution platform110) for executing a workload associated with one or more databases of the data provider. In some embodiments, a resource monitor object of the resource monitor OT416configures monitoring the usage of compute resources used for executing a workload. For example, a resource monitor object can be used to monitor the usage of a virtual warehouse, and generate a notification if such usage is above a threshold. In some aspects, a database account object of the database account OT410indicates one or more databases of the data provider. In some embodiments, a share account object of the share account OT418is an object that encapsulates information used for sharing a database. 
A share may include: (a) privileges that grant access to the database and the schema containing the objects to share; (b) the privileges that grant access to the specific objects in the database; and (c) the consumer accounts with which the database and its objects are shared. Once a database is created (e.g., in a consumer account) from a share, all the shared objects are accessible to users in the consumer account. In some embodiments, an integration account object (also referred to as an application programming interface (API) integration) of the integration account OT412is used to provide an interface between the network-based database system102and third-party services. In some aspects, the integration account object stores information about a proxy service (e.g., Hypertext Transfer Protocol Secure, or HTTPS, proxy service), including the following information: (a) the cloud platform provider (e.g., Amazon AWS); (b) the type of proxy service (in aspects when the cloud platform provider offers more than one type of proxy service); (c) the identifier and access credentials for a cloud platform role that has sufficient privileges to use the proxy service (for example, on AWS, the role's ARN (Amazon resource name) serves as the identifier and access credentials; when this cloud user is granted appropriate privileges, this user can be used to access resources on the proxy service (an instance of the cloud platform's native HTTPS proxy service, for example, an instance of an Amazon API Gateway)); (d) an API integration object also specifies allowed (and optionally blocked) endpoints and resources on those proxy services. In some embodiments, the integration account object can also be used for creating a notification integration, a security integration, or a storage integration. Creating a notification integration generates a new notification integration in the account or replaces an existing integration. A notification integration is an object that provides an interface between the network-based database system102and third-party cloud message queuing services. In some aspects, a security integration is an object that provides an interface between the network-based database system102and third-party services. A security integration enables clients that support OAuth to redirect users to an authorization page and generate access tokens (and optionally, refresh tokens) for access to the network-based database system102. In some aspects, a storage integration is an object that stores a generated identity and access management (IAM) entity for external cloud storage, along with an optional set of allowed or blocked storage locations. This option allows users to avoid supplying credentials when creating stages or when loading or unloading data. In some aspects, a network policy object of the network policy account OT420provides options for managing network configurations in a network-based database system. A network policy object can be used to restrict access to an account based on the user's IP address. Effectively, a network policy enables creating an IP allowed list, as well as an IP blocked list if desired. In this regard, account-level network policy management can be performed through a web interface or SQL. In some embodiments, a grant account object of the grant account OT413is an object that is used to represent a grant of a privilege on an object to another object. 
The object on which the privilege is granted can be referred to as a “securable,” the object that obtains this privilege on the object can be referred to as a “grantee,” and the role performing this operation is called the grantor. In some aspects, the securable can be of any object type (e.g., database, schema, table, role, warehouse, resource monitor, account, etc.). In some aspects, a grantee can be a role or a user, and a grantor can be a role. FIG.5illustrates an example regional-deployment map500for the example database system ofFIG.1, in accordance with some embodiments of the present disclosure. The regional-deployment map500is presented purely by way of example and not limitation, as different numbers and/or boundaries of regions could be demarcated in different implementations. As can be seen inFIG.5, the regional-deployment map500includes three example geographic regions: North American region502, European region504, and Asia Pacific region506. Moreover, various instances of deployments of the network-based database system102are depicted on the regional-deployment map500. A legend508shows symbols used for three different deployments of the network-based database system102, including deployments that are hosted by the cloud-storage platform122A, deployments hosted by the cloud-storage platform122B, and deployments that are hosted by the cloud-storage platform122C. Cloud-storage platforms122A,122B, and122C can be collectively referred to as cloud-storage platforms122, which are also illustrated inFIG.1. In some embodiments, replication of account object metadata configured based on the disclosed techniques can be used in disaster recovery (DR) and global data sharing use cases associated with source accounts (e.g., accounts of a data provider) and target accounts (e.g., accounts of a data provider or a data consumer) located in different deployments. FIG.6illustrates an example multi-deployment arrangement600using account object replication from a source account into a target account, in accordance with some embodiments of the present disclosure. The example multi-deployment arrangement600includes a primary deployment602of the network-based database system102and a remote deployment620of the network-based database system102. In an example scenario, a data provider (e.g., the data provider associated with client device114) has a primary-deployment (source) account604at the primary deployment602, and a remote-deployment (target) account622at the remote deployment620. The remote deployment620also includes a remote-data-consumer account626that is associated with the data consumer115. In some embodiments, the primary deployment602and the remote deployment620may be located in the same or different geographic regions. In some embodiments, the primary deployment account604of the primary deployment602can receive a replication request138for processing. The replication request138can include configuration information for configuring the account object replication, including at least a first user account object610, source account information612, target account information614, and additional configuration information616. In some embodiments, the at least first user account object610identified for replication in the replication request138can be a user account object of the user account OT406. The source account information612and target account information614can identify source account604and target account622, respectively. 
The additional configuration information616can identify additional account objects for replication or additional configurations associated with replication of the at least first user account object610identified by the replication request138. Example additional configuration information616can include one or more allowed databases associated with the at least first user account object610for replication, scheduling information to schedule execution of the replication or to configure periodic replication, or other configuration information. After receiving the replication request138, the object replication manager132at the primary deployment602can determine object dependencies608associated with the at least first user account object610. Based on the object dependencies608, the object replication manager132determines account objects606for replication, which includes the at least first user account object610as well as additional account objects that have dependencies with the at least first user account object610. The object replication manager132performs replication618of account objects606into target account622at the remote deployment620, generating replicated account objects624associated with same object dependencies608. FIG.7illustrates diagram700of example replication sequences of different account objects, in accordance with some embodiments of the present disclosure. As mentioned above, an account object that can be replicated based on a replication request can include account-entity domains such as users, roles, warehouses, databases, etc., and optionally include/exclude certain account domains, and also specific databases, schemas, and tables. This enables a near-zero knob experience for simple use cases for data providers or data consumers who want to replicate their entire account, and also enables advanced use cases such as filtering out certain databases, schemas, and tables for cost control, or independent replication/failover for databases that belong to different business units of a data provider or a data consumer. Referring toFIG.7, the object replication manager132can configure multiple replications of account objects, such as the account objects illustrated as replication sequences702and704inFIG.7. In this regard, when the account object dependencies are determined for a replication sequence, the account objects (including associated databases) of the replication sequence can be replicated together as dependent/related account objects. More specifically, the object replication manager132can configure replication of the following account object metadata within replication sequence702: user account objects U1and U2associated with corresponding roles account objects R1and R2; roles account objects R1and R2with additional roles account objects R3, R4, and R5; roles R4and R5are associated with databases DB1and DB2as well as virtual warehouse VW1via different grants. Since DB1and DB2have cross-database references (or database dependencies), both databases can be included in the same account object replication. Roles R1-R5, databases DB1, DB2, and virtual warehouse VW1are associated with grants G1, G2, G3, G4, G7, G8, G9, and G10(as illustrated inFIG.7). In some embodiments, database dependencies can be verified upon a refresh command and a notification can be provided to the client device communicating the replication request. 
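By way of illustration only, the following Python sketch traces a simplified reading of replication sequence702above: starting from the user account objects, the dependency walk reaches the roles, a representative subset of the grants, both cross-referencing databases, and the virtual warehouse, so all of them fall into one replication set. The exact edge set below is an assumption made for the example, not an exhaustive transcription of FIG. 7.

    EDGES = {   # simplified dependency edges for replication sequence 702
        "U1": ["R1"], "U2": ["R2"],
        "R1": ["R3", "R4"], "R2": ["R4", "R5"], "R3": [],
        "R4": ["G1", "G3"], "R5": ["G2", "G4"],
        "G1": ["DB1"], "G2": ["DB2"], "G3": ["VW1"], "G4": ["VW1"],
        "DB1": ["DB2"], "DB2": ["DB1"],   # cross-database references
        "VW1": [],
    }

    def closure(roots):
        """Return every account object reachable from the requested roots."""
        seen, stack = set(), list(roots)
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(EDGES[node])
        return seen

    print(sorted(closure(["U1", "U2"])))
    # ['DB1', 'DB2', 'G1', 'G2', 'G3', 'G4', 'R1', 'R2',
    #  'R3', 'R4', 'R5', 'U1', 'U2', 'VW1']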
Similarly, replication sequence704includes database DB3which is associated with roles R3and R4via grants G5and G6, which account objects can also be replicated together. FIG.8andFIG.9illustrate example usage scenarios for account object metadata replication in connection with disaster recovery (DR) and data sharing, in accordance with some embodiments of the present disclosure. Referring toFIG.8, use case800illustrates replication of account objects (e.g., using disclosed account object metadata replication techniques) from a source (or primary) account802of a data provider into other data provider accounts804,806, and808, with all accounts being deployed at different geographic locations. For example, one or more account objects from the source account802can be configured and replicated to target account806for use as a failover account object during DR. In the event of a detected network failure event, DR can be initiated by promoting the target account806to a primary account (an example DR configuration is illustrated inFIG.9). As illustrated inFIG.8, account object replication from the source account802to target accounts804and808is used for global data sharing and generating read replicas of account objects using account object replication. FIG.9illustrates a DR event900where a network outage is detected in the North location902where source account802is deployed. Since source account802was previously replicated into target account806, target account806can be promoted to a primary/source account which can initiate account object replication into accounts804and808for purposes of global data sharing. In some embodiments, database replication can be used in DR scenarios or for data sharing. For DR, a main (or primary) deployment region can fail over to a new deployment region that runs all the workloads of the main region (where the workloads of the main region can be replicated into the new deployment region using account object replication). The new deployment region can be promoted to a primary region, and workloads can be executed from the primary region. For DR, the target account used for object replication from the source account can be allowed for promotion from a secondary to a primary account designation so that it can be used during failover in a DR scenario. For purposes of data sharing, the specified target account is allowed only for a secondary account designation and cannot be used for failover. FIG.10is a flow diagram illustrating operations of a database system in performing a method1000for replicating account object metadata, in accordance with some embodiments of the present disclosure. Method1000may be embodied in computer-readable instructions for execution by one or more hardware components (e.g., one or more processors) such that the operations of the method1000may be performed by components of the network-based database system102, such as a network node (e.g., object replication manager132executing on a network node of the compute service manager108) or computing device (e.g., client device114), which may be implemented as machine1100ofFIG.11and may be configured with an application connector performing the disclosed functions. Accordingly, method1000is described below, by way of example with reference thereto. However, it shall be appreciated that method1000may be deployed on various other hardware configurations and is not intended to be limited to deployment within the network-based database system102. 
At operation1002, a replication request received from a client device of a data provider is decoded. The object replication manager132decodes replication request138, received from client device114via network106. The replication request138indicates at least a first account object (e.g., account object610), a source account (e.g., source account information612), and a target account (e.g., target account information614) of the data provider. At operation1004, an object dependency of the at least first account object to at least a second account object of the data provider is determined. For example, and in connection withFIG.6, the object replication manager132determines account object dependencies608associated with the at least first account object indicated by the replication request138. At operation1006, a replication of the at least first account object and the at least second account object is performed from the source account (e.g., source account604) into the target account (e.g., target account622) of the data provider. In some embodiments, the at least first account object and the at least second account object are associated with a corresponding account object type of a plurality of account object types (e.g., one of account object types406-420). The plurality of account object types comprises at least one of: (a) a user account object type406(e.g., a user account object of the user account object type406lists a user authorized to access the target account); (b) a roles account object type414(e.g., a roles account object of the roles account object type414configures privileges for accessing an account object, the privileges associated with a role of the user); (c) a grant account object type413(e.g., a grant account object of the grant account object type413configures a grant of at least one of the privileges for accessing the account object to at least another object); (d) a warehouse object type408(e.g., a warehouse object of the warehouse object type408indicates compute resources for executing a workload associated with one or more databases); (e) a resource monitor object type416(e.g., a resource monitor object of the resource monitor object type416configures monitoring usage of the compute resources); (f) a database account object type410(e.g., a database account object of the database account object type410indicates the one or more databases); (g) a share account object type418(e.g., a share account object of the share account object type418configures sharing of the one or more databases from the source account to at least another account of the user); (h) an integration account object type412(e.g., an integration account object of the integration account object type412configures application programming interfaces (APIs) and allowed network access points for accessing the source account or the target account); and (i) a network policy account object type420(e.g., a network policy object of the network policy account object type420provides network configurations for accessing the source account and the target account of the user). In some embodiments, the at least first account object is the user account object. The roles account object is determined as the at least second account object based on the dependency. In some aspects, the at least second account object further includes the grant account object, and the grant account object further configures a grant of privileges associated with the role. 
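By way of illustration only and not limitation, the following Python sketch maps operations1002,1004, and1006onto three functions; the JSON request encoding and the catalog structure are assumptions introduced for the example, not a normative wire format.

    import json

    def decode_request(raw):                       # operation 1002
        """Decode a request naming an account object, source, and target."""
        req = json.loads(raw)
        return req["object"], req["source"], req["target"]

    def find_dependencies(obj_name, catalog):      # operation 1004
        """Determine the account objects the requested object depends on."""
        return catalog[obj_name].get("depends_on", [])

    def replicate_objects(names, catalog, target): # operation 1006
        """Copy the requested object and its dependencies into the target."""
        for name in names:
            target[name] = dict(catalog[name])

    catalog = {   # source account objects: a user, its role, and a grant
        "user:alice":   {"depends_on": ["role:analyst", "grant:g1"]},
        "role:analyst": {}, "grant:g1": {},
    }
    raw = '{"object": "user:alice", "source": "primary", "target": "remote"}'
    obj, src, tgt = decode_request(raw)
    target_account = {}
    replicate_objects([obj] + find_dependencies(obj, catalog), catalog, target_account)
    print(src, "->", tgt, sorted(target_account))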
In some embodiments, the replication is performed based on at least one secure credential (e.g., a user password) associated with the user account object. In some aspects, the replication request further includes the database account object and a list of at least one allowed database, the at least one allowed database being a subset of the one or more databases. The at least second account object is configured to include the list of at least one allowed database. A replication of the at least one allowed database is performed from the source account into the target account of the data provider. In some aspects, the replication request further includes scheduling information, and the replication can be performed according to a replication schedule configured based on the scheduling information. FIG.11illustrates a diagrammatic representation of a machine1100in the form of a computer system within which a set of instructions may be executed for causing the machine1100to perform any one or more of the methodologies discussed herein, according to an example embodiment. Specifically,FIG.11shows a diagrammatic representation of the machine1100in the example form of a computer system, within which instructions1116(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine1100to perform any one or more of the methodologies discussed herein may be executed. For example, instructions1116may cause machine1100to execute any one or more operations of method1000(or any other technique discussed herein, for example in connection withFIG.4-FIG.10). As another example, instructions1116may cause machine1100to implement one or more portions of the functionalities discussed herein. In this way, instructions1116may transform a general, non-programmed machine into a particular machine1100(e.g., the client device114, the compute service manager108, or a node in the execution platform110) that is specially configured to carry out any one of the described and illustrated functions in the manner described herein. In yet another embodiment, instructions1116may configure the client device114, the compute service manager108, and/or a node in the execution platform110to carry out any one of the described and illustrated functions in the manner described herein. In alternative embodiments, the machine1100operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine1100may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine1100may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a smartphone, a mobile device, a network router, a network switch, a network bridge, or any machine capable of executing the instructions1116, sequentially or otherwise, that specify actions to be taken by the machine1100. Further, while only a single machine1100is illustrated, the term “machine” shall also be taken to include a collection of machines1100that individually or jointly execute the instructions1116to perform any one or more of the methodologies discussed herein. Machine1100includes processors1110, memory1130, and input/output (I/O) components1150configured to communicate with each other such as via a bus1102.
In some example embodiments, the processors1110(e.g., a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a radio-frequency integrated circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor1112and a processor1114that may execute the instructions1116. The term “processor” is intended to include multi-core processors1110that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions1116contemporaneously. AlthoughFIG.11shows multiple processors1110, the machine1100may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The memory1130may include a main memory1132, a static memory1134, and a storage unit1136, all accessible to the processors1110such as via the bus1102. The main memory1132, the static memory1134, and the storage unit1136store the instructions1116embodying any one or more of the methodologies or functions described herein. The instructions1116may also reside, completely or partially, within the main memory1132, within the static memory1134, within machine storage medium1138of the storage unit1136, within at least one of the processors1110(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine1100. The I/O components1150include components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components1150that are included in a particular machine1100will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components1150may include many other components that are not shown inFIG.11. The I/O components1150are grouped according to functionality merely for simplifying the following discussion and the grouping is in no way limiting. In various example embodiments, the I/O components1150may include output components1152and input components1154. The output components1152may include visual components (e.g., a display such as a plasma display panel (PDP), a light-emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), other signal generators, and so forth. The input components1154may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures or other tactile input components), audio input components (e.g., a microphone), and the like. Communication may be implemented using a wide variety of technologies. 
The I/O components1150may include communication components1164operable to couple the machine1100to a network1180or devices1170via a coupling1182and a coupling1172, respectively. For example, the communication components1164may include a network interface component or another suitable device to interface with the network1180. In further examples, the communication components1164may include wired communication components, wireless communication components, cellular communication components, and other communication components to provide communication via other modalities. The device1170may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a universal serial bus (USB)). For example, as noted above, machine1100may correspond to any one of the client device114, the compute service manager108, or the execution platform110, and the devices1170may include the client device114or any other computing device described herein as being in communication with the network-based database system102or the cloud storage platform104. The various memories (e.g.,1130,1132,1134, and/or memory of the processor(s)1110and/or the storage unit1136) may store one or more sets of instructions1116and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. These instructions1116, when executed by the processor(s)1110, cause various operations to implement the disclosed embodiments. As used herein, the terms “machine-storage medium,” “device-storage medium,” and “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms refer to single or multiple storage devices and/or media (e.g., a centralized or distributed database, and/or associated caches and servers) that store executable instructions and/or data. The terms shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media, and/or device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), field-programmable gate arrays (FPGAs), and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium” discussed below. In various example embodiments, one or more portions of the network1180may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local-area network (LAN), a wireless LAN (WLAN), a wide-area network (WAN), a wireless WAN (WWAN), a metropolitan-area network (MAN), the Internet, a portion of the Internet, a portion of the public switched telephone network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. 
For example, the network1180or a portion of the network1180may include a wireless or cellular network, and the coupling1182may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling1182may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth-generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High-Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology. The instructions1116may be transmitted or received over the network1180using a transmission medium via a network interface device (e.g., a network interface component included in the communication components1164) and utilizing any one of several well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, instructions1116may be transmitted or received using a transmission medium via coupling1172(e.g., a peer-to-peer coupling or another type of wired or wireless network coupling) to the device1170. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure. The terms “transmission medium” and “signal medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions1116for execution by the machine1100, and include digital or analog communications signals or other intangible media to facilitate communication of such software. Hence, the terms “transmission medium” and “signal medium” shall be taken to include any form of a modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The terms “machine-readable medium,” “computer-readable medium,” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. The terms are defined to include both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Similarly, the methods described herein may be at least partially processor-implemented. For example, at least some of the operations of method1000may be performed by one or more processors. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine but also deployed across several machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment, or a server farm), while in other embodiments the processors may be distributed across several locations.
Described implementations of the subject matter can include one or more features, alone or in combination as illustrated below by way of examples. Example 1 is a system comprising: at least one hardware processor; and at least one memory storing instructions that cause the at least one hardware processor to perform operations comprising: decoding a replication request received from a client device of a data provider, the replication request indicating at least a first account object, a source account, and a target account of the data provider; determining an object dependency of the at least first account object to at least a second account object of the data provider; and performing replication of the at least first account object and the at least second account object from the source account into the target account of the data provider. In Example 2, the subject matter of Example 1 includes subject matter where the at least first account object and the at least second account object are associated with a corresponding account object type of a plurality of account object types. In Example 3, the subject matter of Example 2 includes subject matter where the plurality of account object types comprises at least one of: a user account object type, wherein a user account object of the user account object type lists a user authorized to access the target account; a roles account object type, wherein a roles account object of the roles account object type configures privileges for accessing an account object, the privileges associated with a role of the user; a grant account object type, wherein a grant account object of the grant account object type configures a grant of at least one of the privileges for accessing the account object to at least another object; a warehouse object type, wherein a warehouse object of the warehouse object type indicates compute resources for executing a workload associated with one or more databases; and a resource monitor object type, wherein a resource monitor object of the resource monitor object type configures monitoring usage of the compute resources. In Example 4, the subject matter of Example 3 includes subject matter where the at least first account object is the user account object, and wherein the instructions further cause the at least one hardware processor to perform operations comprising: determining the roles account object as the at least second account object based on the dependency. In Example 5, the subject matter of Example 4 includes subject matter where the at least second account object further includes the grant account object, the grant account object further configures a grant of privileges associated with the role. In Example 6, the subject matter of Examples 4-5 includes subject matter where the instructions further cause the at least one hardware processor to perform operations comprising: performing the replication based on at least one secure credential associated with the user account object. 
In Example 7, the subject matter of Examples 3-6 includes subject matter where the plurality of account object types further comprises: a database account object type, and wherein a database account object of the database account object type indicates the one or more databases; a share account object type, wherein a share account object of the share account object type configures sharing of the one or more databases from the source account to at least another account of the user; an integration account object type, wherein an integration account object of the integration account object type configures application programming interfaces (APIs) and allowed network access points for accessing the source account or the target account; and a network policy account object type, wherein a network policy object of the network policy account object type provides network configurations for accessing the source account and the target account of the user. In Example 8, the subject matter of Example 7 includes subject matter where the replication request further includes the database account object and a list of at least one allowed database, the at least one allowed database being a subset of the one or more databases. In Example 9, the subject matter of Example 8 includes subject matter where the instructions further cause the at least one hardware processor to perform operations comprising: configuring the at least second account object to include the list of at least one allowed database; and performing a replication of the at least one allowed database from the source account into the target account of the data provider. In Example 10, the subject matter of Examples 1-9 includes subject matter where the replication request further includes scheduling information, and wherein the instructions further cause the at least one hardware processor to perform operations comprising: performing the replication according to a replication schedule, the replication schedule configured based on the scheduling information. Example 11 is a method comprising: decoding, by at least one hardware processor, a replication request received from a client device of a data provider, the replication request indicating at least a first account object, a source account, and a target account of the data provider; determining, by the at least one hardware processor, an object dependency of the at least first account object to at least a second account object of the data provider; and performing, by the at least one hardware processor, a replication of the at least first account object and the at least second account object from the source account into the target account of the data provider. In Example 12, the subject matter of Example 11 includes subject matter where the at least first account object and the at least second account object are associated with a corresponding account object type of a plurality of account object types. 
In Example 13, the subject matter of Example 12 includes subject matter where the plurality of account object types comprises at least one of: a user account object type, wherein a user account object of the user account object type lists a user authorized to access the target account; a roles account object type, wherein a roles account object of the roles account object type configures privileges for accessing an account object, the privileges associated with a role of the user; a grant account object type, wherein a grant account object of the grant account object type configures a grant of at least one of the privileges for accessing the account object to at least another object; a warehouse object type, wherein a warehouse object of the warehouse object type indicates compute resources for executing a workload associated with one or more databases; and a resource monitor object type, wherein a resource monitor object of the resource monitor object type configures monitoring usage of the compute resources. In Example 14, the subject matter of Example 13 includes subject matter where the at least first account object is the user account object, and the method further comprising: determining the roles account object as the at least second account object based on the dependency. In Example 15, the subject matter of Example 14 includes subject matter where the at least second account object further includes the grant account object, the grant account object further configures a grant of privileges associated with the role. In Example 16, the subject matter of Examples 14-15 includes, the method further comprising: performing the replication based on at least one secure credential associated with the user account object. In Example 17, the subject matter of Examples 13-16 includes subject matter where the plurality of account object types further comprises: a database account object type, and wherein a database account object of the database account object type indicates the one or more databases; a share account object type, wherein a share account object of the share account object type configures sharing of the one or more databases from the source account to at least another account of the user; an integration account object type, wherein an integration account object of the integration account object type configures application programming interfaces (APIs) and allowed network access points for accessing the source account or the target account; and a network policy account object type, wherein a network policy object of the network policy account object type provides network configurations for accessing the source account and the target account of the user. In Example 18, the subject matter of Example 17 includes subject matter where the replication request further includes the database account object and a list of at least one allowed database, the at least one allowed database being a subset of the one or more databases. In Example 19, the subject matter of Example 18 includes, the method further comprising: configuring the at least second account object to include the list of at least one allowed database; and performing a replication of the at least one allowed database from the source account into the target account of the data provider. 
In Example 20, the subject matter of Examples 11-19 includes subject matter where the replication request further includes scheduling information, and the method further comprising: performing the replication according to a replication schedule, the replication schedule configured based on the scheduling information. Example 21 is a computer-readable medium comprising instructions that, when executed by one or more processors of a machine, configure the machine to perform operations comprising: decoding, by at least one hardware processor, a replication request received from a client device of a data provider, the replication request indicating at least a first account object, a source account, and a target account of the data provider; determining, by the at least one hardware processor, an object dependency of the at least first account object to at least a second account object of the data provider; and performing, by the at least one hardware processor, a replication of the at least first account object and the at least second account object from the source account into the target account of the data provider. In Example 22, the subject matter of Example 21 includes subject matter where the at least first account object and the at least second account object are associated with a corresponding account object type of a plurality of account object types. In Example 23, the subject matter of Example 22 includes subject matter where the plurality of account object types comprises at least one of: a user account object type, wherein a user account object of the user account object type lists a user authorized to access the target account; a roles account object type, wherein a roles account object of the roles account object type configures privileges for accessing an account object, the privileges associated with a role of the user; a grant account object type, wherein a grant account object of the grant account object type configures a grant of at least one of the privileges for accessing the account object to at least another object; a warehouse object type, wherein a warehouse object of the warehouse object type indicates compute resources for executing a workload associated with one or more databases; and a resource monitor object type, wherein a resource monitor object of the resource monitor object type configures monitoring usage of the compute resources. In Example 24, the subject matter of Example 23 includes subject matter where the at least first account object is the user account object, and the operations further comprising: determining the roles account object as the at least second account object based on the dependency. In Example 25, the subject matter of Example 24 includes subject matter where the at least second account object further includes the grant account object, the grant account object further configures a grant of privileges associated with the role. In Example 26, the subject matter of Examples 24-25 includes, the operations further comprising: performing the replication based on at least one secure credential associated with the user account object. 
In Example 27, the subject matter of Examples 23-26 includes subject matter where the plurality of account object types further comprises: a database account object type, and wherein a database account object of the database account object type indicates the one or more databases; a share account object type, wherein a share account object of the share account object type configures sharing of the one or more databases from the source account to at least another account of the user; an integration account object type, wherein an integration account object of the integration account object type configures application programming interfaces (APIs) and allowed network access points for accessing the source account or the target account; and a network policy account object type, wherein a network policy object of the network policy account object type provides network configurations for accessing the source account and the target account of the user. In Example 28, the subject matter of Example 27 includes subject matter where the replication request further includes the database account object and a list of at least one allowed database, the at least one allowed database being a subset of the one or more databases. In Example 29, the subject matter of Example 28 includes, the operations further comprising: configuring the at least second account object to include the list of at least one allowed database; and performing a replication of the at least one allowed database from the source account into the target account of the data provider. In Example 30, the subject matter of Examples 21-29 includes subject matter where the replication request further includes scheduling information, and the operations further comprising: performing the replication according to a replication schedule, the replication schedule configured based on the scheduling information. Example 31 is at least one machine-readable medium including instructions that, when executed by processing circuitry, cause the processing circuitry to perform operations to implement any of Examples 1-30. Example 32 is an apparatus comprising means to implement any of Examples 1-30. Example 33 is a system to implement any of Examples 1-30. Example 34 is a method to implement any of Examples 1-30. Although the embodiments of the present disclosure have been described concerning specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader scope of the inventive subject matter. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof show, by way of illustration, and not of limitation, specific embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. 
Such embodiments of the inventive subject matter may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is disclosed. Thus, although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent, to those of skill in the art, upon reviewing the above description. In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim.
11860897
DETAILED DESCRIPTION Features of the inventive concept and methods of accomplishing the same may be understood more readily by reference to the detailed description of embodiments and the accompanying drawings. Hereinafter, embodiments will be described in more detail with reference to the accompanying drawings. The described embodiments, however, may be embodied in various different forms, and should not be construed as being limited to only the illustrated embodiments herein. Rather, these embodiments are provided as examples so that this disclosure will be thorough and complete, and will fully convey the aspects and features of the present inventive concept to those skilled in the art. Accordingly, processes, elements, and techniques that are not necessary to those having ordinary skill in the art for a complete understanding of the aspects and features of the present inventive concept may not be described. Unless otherwise noted, like reference numerals denote like elements throughout the attached drawings and the written description, and thus, descriptions thereof will not be repeated. Further, parts not related to the description of the embodiments might not be shown to make the description clear. In the drawings, the relative sizes of elements, layers, and regions may be exaggerated for clarity. Various embodiments are described herein with reference to sectional illustrations that are schematic illustrations of embodiments and/or intermediate structures. As such, variations from the shapes of the illustrations as a result, for example, of manufacturing techniques and/or tolerances, are to be expected. Further, specific structural or functional descriptions disclosed herein are merely illustrative for the purpose of describing embodiments according to the concept of the present disclosure. Thus, embodiments disclosed herein should not be construed as limited to the particular illustrated shapes of regions, but are to include deviations in shapes that result from, for instance, manufacturing. For example, an implanted region illustrated as a rectangle will, typically, have rounded or curved features and/or a gradient of implant concentration at its edges rather than a binary change from implanted to non-implanted region. Likewise, a buried region formed by implantation may result in some implantation in the region between the buried region and the surface through which the implantation takes place. Thus, the regions illustrated in the drawings are schematic in nature and their shapes are not intended to illustrate the actual shape of a region of a device and are not intended to be limiting. Additionally, as those skilled in the art would realize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present disclosure. A distributed object store includes a cluster of separate data stores (e.g., disks). Each of the data stores is able to store data, or objects, thereon. Generally, the distributed object store will seek to keep the various data stores evenly balanced. By keeping the data stores relatively balanced, the distributed object store is able to prevent one of the data stores from running out of space while others have available space, thereby avoiding access hotspots. Some of the data stores of the distributed object store will store replica data. That is, one data store may store objects that are duplicated on one or more other data stores. 
Accordingly, the distributed object store will allow for data replication to tolerate the failure of one of the data stores. During a planned maintenance operation (e.g., when taking a data store offline to upgrade the data store), the remaining data stores of the distributed object store will remain operative. However, commands for performing a data modification operation that involves the offline data store may continue to arrive while the data store is offline. For example, the distributed object store may receive a new write command for performing a write on the offline data store during the maintenance operation. If a resiliency requirement corresponding to the object that is intended to be written on the offline data store requires that the object be written to two different data stores, the distributed object store will ensure that the object is written to two other online data stores. This may cause a large amount of work for the distributed object store, as a relatively large amount of data may be moved between the various data stores of the distributed object store during balancing. In a conventional distributed object store, the amount of data movement depends upon a data distribution algorithm used by the object store for controlling data distribution in the cluster of data stores. The amount of data movement will be, in a best-case scenario, at least equal to the amount of data that is impacted by the outage of the offline data store. However, the amount of data movement will more commonly be greater than only the amount of data that is impacted by the outage because the data distribution algorithm will also generally seek to achieve balanced data distribution in the cluster of data stores. That is, it is possible that additional data, beyond the data impacted by the outage of a data store, will also be moved across the cluster during activity directed by the data distribution algorithm. Accordingly, the conventional distributed object store redistributes data in the cluster to restore resiliency and balanced distribution. An amount of time for completing the data redistribution will be proportional to the amount of data that is being redistributed. As the distributed object store scales, planned outages of individual components (e.g., maintenance of data stores) will occur while the object store remains operational. However, triggering a data redistribution operation to synchronize the individual data stores across the cluster of the distributed object store after a planned outage generally inhibits performance of the object store immediately after any planned maintenance operation. Embodiments of the present disclosure provide high performance for a distributed object store by avoiding data redistribution as the primary means to synchronize data across the cluster, while also enabling the restoration of resiliency and balanced data distribution after a planned outage. Embodiments of the present disclosure also provide a method of efficiently tracking changes that are directed to impacted data stores during planned outages while using a reduced or minimal amount of information, and efficiently synchronizing the impacted stores when they come back online after a planned outage.
Accordingly, embodiments of the present disclosure improve memory technology by providing a highly performant distributed object store that enables a large degree of concurrency and parallelism in data replication operations by providing a solution that is available during maintenance operations in distributed systems. FIG.1depicts a conceptual view of a catch-up log, according to one or more embodiments of the present disclosure. Referring toFIG.1, the present embodiment provides a mechanism that is referred to herein as a “catch-up log.” The catch-up log100is a log of entries that respectively represent data modification operations that are to be performed on an underlying data store (e.g., an affected data store, or data store “A,” which is described below with respect toFIGS.2and3). Commands for the data modification operations are received when the underlying data store is offline (e.g., during a planned maintenance operation). The catch-up log100enables synchronization of the underlying data store with other data stores in the cluster to restore data resiliency and consistency. To enable use of the catch-up log100, the present embodiment provides a mechanism of logging data modification operations that occur during a planned outage in parallel with zero contention. The present embodiment also provides a mechanism that processes the entries in the catch-up log in parallel, and that scales linearly with the size of the cluster to efficiently bring data stores back online once respective planned outages are complete. As shown inFIG.1, an object store of the present embodiment organizes data using a key-value pair101, wherein a key110of the key-value pair101is the object identifier, and a value120of the key-value pair101is opaque data associated with the object identifier. As will be described further below, the catch-up log does not log the entirety of the data, but instead logs the type of the operation on the key. That is, one aspect of the disclosed embodiments is that the catch-up log does not log data, but instead simply logs the key and the operation type. The replay of the catch-up log depends upon reading the data from the good, online stores (i.e., relevant stores that were online during the planned outage), and then updating the store that comes back online (i.e., after being offline during the planned outage). This allows for a small amount of memory that is able to keep track of a large number of changes, which generally increases the amount of downtime that can be supported. During a data store outage, a catch-up log100is generated for tracking data modification operations that occur during the outage and that involve the offline data store. For example, if a data entry on an offline data store is to be deleted or modified, when the data store comes back online, the data store can simply read, or replay, the entries of the catch-up log100to synchronize with the remaining data stores of the cluster. The catch-up log100uses only two pieces of information—the key110of the data being modified, and an operation type (e.g., write, delete, etc.) of the data modification operation. Accordingly, the size of an entry in the catch-up log100is able to be relatively small, thereby allowing a large number of entries to be logged without bloating memory or storage requirements. Small catch-up log entries, therefore, also allow for increased time for a planned outage of a data store.
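A minimal Python sketch of such a log may help fix the idea; the class and names below are assumptions rather than the disclosed implementation, and the per-thread partitioning shown here is elaborated in the next paragraph. The essential point is that an entry is only a key and an operation tag, never the data itself.

    # Illustrative sketch: a catch-up log records only (key, operation type),
    # never the value, so each entry stays small. Names are assumptions.
    from enum import Enum

    class OpType(Enum):
        WRITE = 1
        DELETE = 2

    class CatchUpLog:
        """One partition per hardware thread, so writers never contend."""
        def __init__(self, num_threads: int) -> None:
            self.partitions = [[] for _ in range(num_threads)]

        def log(self, thread_id: int, key: str, op: OpType) -> None:
            # Each thread appends only to its own partition: zero contention.
            self.partitions[thread_id].append((key, op))

    log = CatchUpLog(num_threads=4)
    log.log(thread_id=0, key="object-123", op=OpType.WRITE)
    log.log(thread_id=3, key="object-456", op=OpType.DELETE)
    # An entry is just a key plus an operation tag; millions of entries fit in
    # a modest amount of memory, extending how long a planned outage can last.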
In the present embodiment, the catch-up log100is organized to allow multiple hardware threads to log entries (e.g., data modification operations) into the catch-up log100in parallel. The cluster of the distributed object store may include a plurality of nodes (e.g., nodes210,220described with respect toFIGS.2and3below). Each node in the cluster can have an independent catch-up log100, and each independent catch-up log100may be organized to have independent partitions for each respective hardware thread. Accordingly, multiple writers in the cluster are able to log entries in the catch-up log100in parallel without contention, and the multiple writers thus have zero impact on the performance of the IO path during the planned outage. When a data store comes back online (e.g., following completion of the planned maintenance operation), the object store is synchronized by processing the catch-up log100to synchronize the affected data store(s) with the rest of the cluster. To synchronize the affected data store(s), a reference data store(s) is selected. Reference data stores are stores that remained online during the planned outage of the affected data store, and that have stored thereon the data that is used to synchronize the affected data store. Once the reference data stores are selected, the entries from the catch-up log100may be read, or replayed. When an entry of the catch-up log100is read, the affected data store may access data, which corresponds to the key110included in the entry, from the reference data store(s). Then, the object store may perform a data modification operation, which is indicated by the entry in the catch-up log100, on the affected data store. To enable parallelism during both a logging phase for generating the catch-up log100, and a replay phase for performing data modification operations corresponding to the entries of the catch-up log100, the present embodiment provides a mechanism for handling ordering constraints. FIG.2depicts logging entries in a catch-up log100that correspond to different dependent data modification operations that are respectively handled by different nodes in an underlying cluster of a distributed object store, according to one or more embodiments of the present disclosure. Referring toFIG.2, in the present example, a catch-up log100is created to log data modification operations directed to an affected data store (data store “A”)230that is currently offline due to planned maintenance. In the present example, a data modification operation concerning the affected data store230is received in “NODE N”210, and another data modification operation concerning the affected data store230is received in “NODE1”220. Conventionally, if the order of the playback of the entries in the catch-up log100by an affected data store is incorrect, the recovery operation may unnecessarily delete a valid object in the affected data store230. For example, during the planned maintenance operation, there may first be an entry added to the catch-up log100bcorresponding to a delete operation of an object stored on the affected data store230, and there may thereafter be another entry added to the catch-up log100acorresponding to a write operation to write the object back into the affected data store230. That is, at a first time T1, an operation to delete a key K1arrives at a given node (NODE N)210, and is logged in a catch-up log100bstored on the node210.
Then, at a later second time T2, an operation to add the key K1arrives at a different node (NODE1)220, and is logged in a catch-up log100astored on the other node220. Accordingly, if the entry corresponding to the write operation at the later time T2is read first from the catch-up log upon reactivation of the affected data store230, the key will be populated in the affected data store230. However, this key will be deleted when the entry corresponding to the delete operation occurring at an earlier time T1is read later from the catch-up log, causing the valid entry in the affected data store to be deleted. This can cause data loss due to the ordering constraint between the operations performed at times T1and T2. FIG.3depicts parallel synchronization of the distributed object store, after bringing a data store online, by reading the entries of the catch-up log from the different nodes in the underlying cluster of the distributed object store, according to one or more embodiments of the present disclosure. Referring toFIG.3, embodiments of the present disclosure avoid unnecessary data movement (e.g., from the reference data store240to the affected data store230) by avoiding ordering constraints that would otherwise require the catch-up log entries to be replayed in a sequential, time-ordered fashion, and that would therefore inhibit parallel recovery of affected data stores230. That is, embodiments of the present disclosure avoid ordered playback, which would otherwise require the ability to sort the entries in the catch-up log100based on a reference time in the cluster200, and to sequentially play back entries in the sorted list. Such ordered playback would prevent synchronization of the affected data store230from being performed in parallel in the cluster200. Instead, the present embodiment solves issues arising from ordering constraints by effectively making the catch-up log100an unordered set of entries representing data modification operations. By having an unordered catch-up log100, the present embodiment enables parallel logging of operations during planned maintenance, and parallel recovery during synchronization of the data store, thereby unlocking scalability and performance in the distributed object store for these operations. In the present embodiment, ordering constraints between different entries in the catch-up log100are eliminated as follows. After determining a data modification operation to perform in accordance with an entry in the catch-up log100, the object store may synchronize the data modification operation on a specified key110in the affected data store230by using a lock that is specific to that key110. Accordingly, only one thread is able to modify the data corresponding to the key110. The affected data store230may then check to determine whether the object has been erased from a reference data store240. That is, the affected data store may walk the reference store to determine whether the object corresponding to the data modification operation was deleted from the reference store while the affected data store230was offline, and whether the object remains deleted. If the entry corresponds to a delete operation, and if the affected data store230determines that the object exists in the reference data store240, then the catch-up log entry corresponding to that object is not processed, but is instead discarded.
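The delete-side rule just stated, together with the symmetric write-side rule elaborated in the next paragraph, can be sketched as follows. The store interfaces, names, and locking scheme shown are illustrative assumptions drawn from the description above, not the actual implementation.

    # Sketch of unordered catch-up log replay. The reference store's current
    # state decides whether an entry still applies; stale entries are simply
    # discarded, so entries may be replayed in any order by any thread.
    import threading
    from collections import defaultdict

    key_locks = defaultdict(threading.Lock)  # one lock per key, so only one
                                             # thread modifies a given key at a time

    def replay_entry(key, op, affected_store: dict, reference_store: dict) -> None:
        with key_locks[key]:
            exists_in_reference = key in reference_store
            if op == "delete":
                if exists_in_reference:
                    return        # object was recreated later; discard this entry
                affected_store.pop(key, None)  # safe to erase from the affected store
            elif op == "write":
                if not exists_in_reference:
                    return        # object was deleted later; discard this entry
                affected_store[key] = reference_store[key]  # copy the current data

    # k1 was deleted and then recreated during the outage; replay order is irrelevant:
    reference = {"k1": "v1"}
    affected = {}
    replay_entry("k1", "delete", affected, reference)  # discarded: k1 exists in reference
    replay_entry("k1", "write", affected, reference)   # applied: copies current value
    assert affected == {"k1": "v1"}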
In the delete case, by determining that the object exists in the reference data store240, the affected data store230may assume that the object was recreated after the delete operation, and that another entry in the catch-up log100, which the affected data store230has not yet read during the replay of the catch-up log100, corresponds to a write operation recreating the object. If the affected data store230determines that the object does not exist in the reference data store240, then the affected data store230may proceed with the delete/erase operation in the affected data store230. Similarly, if the entry corresponds to a write operation, and if the affected data store230determines that the object does not exist in the reference data store240, then the catch-up log entry corresponding to that object is not processed, but is instead discarded. That is, by determining that the object does not exist in the reference data store240, the affected data store230may assume that the object was deleted after the write operation, and that another entry in the catch-up log100, which the affected data store230has not yet read during the replay of the catch-up log100, corresponds to the data modification operation deleting the object. If the affected data store230determines that the object does exist in the reference data store240, then the affected data store230may proceed with the write operation in the affected data store230. Accordingly, embodiments of the present disclosure provide an improved or optimal amount of data logged for each key-value operation, provide scalable organization of the catch-up log to allow concurrent parallel writers to the catch-up log, enable the removal of ordering constraints between related operations to allow playback in any order, and enable concurrent playback from the catch-up log to allow parallel recovery across multiple nodes in the cluster of the distributed object store. The embodiments of the present disclosure are able to achieve logging of the entries in the catch-up log with zero contention, to achieve relatively small entry size to enable the logging of several million operations in the catch-up log to allow for an increased amount of available time for a planned outage, thereby using only a small amount of memory or storage that is dedicated to the catch-up log, and to achieve linear scalability while recovering in accordance with the catch-up log due to parallelism in processing of the catch-up log. In the description, for the purposes of explanation, numerous specific details are set forth to provide a thorough understanding of various embodiments. It is apparent, however, that various embodiments may be practiced without these specific details or with one or more equivalent arrangements. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring various embodiments. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present disclosure. As used herein, the singular forms “a” and “an” are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be further understood that the terms “comprises,” “comprising,” “have,” “having,” “includes,” and “including,” when used in this specification, specify the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. When a certain embodiment is implemented differently, a specific process order may be performed differently from the described order. For example, two consecutively described processes may be performed substantially at the same time or performed in an order opposite to the described order. The electronic or electric devices and/or any other relevant devices or components according to embodiments of the present disclosure described herein may be implemented utilizing any suitable hardware, firmware (e.g., an application-specific integrated circuit), software, or a combination of software, firmware, and hardware. For example, the various components of these devices may be formed on one integrated circuit (IC) chip or on separate IC chips. Further, the various components of these devices may be implemented on a flexible printed circuit film, a tape carrier package (TCP), a printed circuit board (PCB), or formed on one substrate. Further, the various components of these devices may be a process or thread, running on one or more processors, in one or more computing devices, executing computer program instructions and interacting with other system components for performing the various functionalities described herein. The computer program instructions are stored in a memory which may be implemented in a computing device using a standard memory device, such as, for example, a random access memory (RAM). The computer program instructions may also be stored in other non-transitory computer readable media such as, for example, a CD-ROM, flash drive, or the like. Also, a person of skill in the art should recognize that the functionality of various computing devices may be combined or integrated into a single computing device, or the functionality of a particular computing device may be distributed across one or more other computing devices without departing from the spirit and scope of the embodiments of the present disclosure. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the present inventive concept belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and/or the present specification, and should not be interpreted in an idealized or overly formal sense, unless expressly so defined herein. Embodiments have been disclosed herein, and although specific terms are employed, they are used and are to be interpreted in a generic and descriptive sense only and not for purpose of limitation.
In some instances, as would be apparent to one of ordinary skill in the art as of the filing of the present application, features, characteristics, and/or elements described in connection with a particular embodiment may be used singly or in combination with features, characteristics, and/or elements described in connection with other embodiments, unless otherwise indicated. Accordingly, it will be understood by those of skill in the art that various changes in form and details may be made without departing from the spirit and scope of the present disclosure as set forth in the following claims, with functional equivalents thereof to be included therein.
11860898
DETAILED DESCRIPTION Some examples of the claimed subject matter are now described with reference to the drawings, where like reference numerals are generally used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide an understanding of the claimed subject matter. It may be evident, however, that the claimed subject matter may be practiced without these specific details. Nothing in this detailed description is admitted as prior art. One or more techniques and/or computing devices for non-disruptively establishing a synchronous replication relationship between a primary volume and a secondary volume and/or for resynchronizing the primary volume and the secondary volume are provided herein. For example, a synchronous replication relationship may be initially established between the primary volume (e.g., used to actively store client data for access) and the secondary volume (e.g., used as a backup to store replicated client data from the primary volume) in a non-disruptive manner with little to no client data access disruption to the primary volume. If the primary volume and the secondary volume become out of sync over time (e.g., due to a network issue, a storage controller failure, etc.), then the primary volume and the secondary volume may be resynchronized in a non-disruptive manner. To provide context for non-disruptively establishing and/or resynchronizing a synchronous replication relationship,FIG.1illustrates an embodiment of a clustered network environment100or a network storage environment. It may be appreciated, however, that the techniques, etc. described herein may be implemented within the clustered network environment100, a non-cluster network environment, and/or a variety of other computing environments, such as a desktop computing environment. That is, the instant disclosure, including the scope of the appended claims, is not meant to be limited to the examples provided herein. It will be appreciated that where the same or similar components, elements, features, items, modules, etc. are illustrated in later figures but were previously discussed with regard to prior figures, a similar (e.g., redundant) discussion of the same may be omitted when describing the subsequent figures (e.g., for purposes of simplicity and ease of understanding). FIG.1is a block diagram illustrating the clustered network environment100that may implement at least some embodiments of the techniques and/or systems described herein. The clustered network environment100comprises data storage systems102and104that are coupled over a cluster fabric106, such as a computing network embodied as a private Infiniband, Fibre Channel (FC), or Ethernet network facilitating communication between the data storage systems102and104(and one or more modules, components, etc. therein, such as nodes116and118, for example). It will be appreciated that while two data storage systems102and104and two nodes116and118are illustrated inFIG.1, any suitable number of such components is contemplated. In an example, nodes116,118comprise storage controllers (e.g., node116may comprise a primary or local storage controller and node118may comprise a secondary or remote storage controller) that provide client devices, such as host devices108,110, with access to data stored within data storage devices128,130.
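As a brief aside before continuing withFIG.1, a minimal sketch of the synchronous write path implied by the primary volume and secondary volume relationship described above is given below. All names here are illustrative assumptions rather than the disclosed implementation; the essential property is simply that a write is acknowledged to the host only after both volumes have applied it.

    # Illustrative sketch of a synchronous replication write path: a write is
    # acknowledged to the host only once both the primary and the secondary
    # volume have applied it. Names and structures are assumptions.
    class Volume:
        def __init__(self, name: str) -> None:
            self.name = name
            self.blocks = {}

        def write(self, block_id: int, data: bytes) -> None:
            self.blocks[block_id] = data

    def synchronous_write(primary: Volume, secondary: Volume,
                          block_id: int, data: bytes) -> str:
        primary.write(block_id, data)      # apply locally on the primary volume
        secondary.write(block_id, data)    # replicate to the secondary before acking
        return "ack"                       # host sees success only after both writes

    primary = Volume("primary")
    secondary = Volume("secondary")
    assert synchronous_write(primary, secondary, 7, b"payload") == "ack"
    assert primary.blocks == secondary.blocks  # volumes stay in sync write-by-write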
Similarly, unless specifically provided otherwise herein, the same is true for other modules, elements, features, items, etc. referenced herein and/or illustrated in the accompanying drawings. That is, a particular number of components, modules, elements, features, items, etc. disclosed herein is not meant to be interpreted in a limiting manner. It will be further appreciated that clustered networks are not limited to any particular geographic areas and can be clustered locally and/or remotely. Thus, in one embodiment a clustered network can be distributed over a plurality of storage systems and/or nodes located in a plurality of geographic locations; while in another embodiment a clustered network can include data storage systems (e.g.,102,104) residing in a same geographic location (e.g., in a single onsite rack of data storage devices). In the illustrated example, one or more host devices108,110, which may comprise, for example, client devices, personal computers (PCs), computing devices used for storage (e.g., storage servers), and other computers or peripheral devices (e.g., printers), are coupled to the respective data storage systems102,104by storage network connections112,114. A storage network connection112,114may comprise a local area network (LAN) or wide area network (WAN), for example, that utilizes Network Attached Storage (NAS) protocols, such as a Common Internet File System (CIFS) protocol or a Network File System (NFS) protocol, to exchange data packets. Illustratively, the host devices108,110may be general-purpose computers running applications, and may interact with the data storage systems102,104using a client/server model for exchange of information. That is, the host device may request data from the data storage system (e.g., data on a storage device managed by a network storage controller configured to process I/O commands issued by the host device for the storage device), and the data storage system may return results of the request to the host device via one or more storage network connections112,114. The nodes116,118on clustered data storage systems102,104can comprise network or host nodes that are interconnected as a cluster to provide data storage and management services, such as to an enterprise having remote locations, cloud storage (e.g., a storage endpoint may be stored within a data cloud), etc., for example. Such a node in the clustered network environment100can be a device attached to the network as a connection point, redistribution point or communication endpoint, for example. A node may be capable of sending, receiving, and/or forwarding information over a network communications channel, and could comprise any device that meets any or all of these criteria. One example of a node may be a data storage and management server attached to a network, where the server can comprise a general purpose computer or a computing device particularly configured to operate as a server in a data storage and management system. In an example, a first cluster of nodes such as the nodes116,118(e.g., a first set of storage controllers configured to provide access to a first storage aggregate comprising a first logical grouping of one or more storage devices) may be located on a first storage site. A second cluster of nodes, not illustrated, may be located at a second storage site (e.g., a second set of storage controllers configured to provide access to a second storage aggregate comprising a second logical grouping of one or more storage devices).
The first cluster of nodes and the second cluster of nodes may be configured according to a disaster recovery configuration where a surviving cluster of nodes provides switchover access to storage devices of a disaster cluster of nodes in the event a disaster occurs at a disaster storage site comprising the disaster cluster of nodes (e.g., the first cluster of nodes provides client devices with switchover data access to storage devices of the second storage aggregate in the event a disaster occurs at the second storage site). As illustrated in the clustered network environment100, nodes116,118can comprise various functional components that coordinate to provide distributed storage architecture for the cluster. For example, the nodes can comprise network modules120,122and data modules124,126. Network modules120,122can be configured to allow the nodes116,118(e.g., network storage controllers) to connect with host devices108,110over the storage network connections112,114, for example, allowing the host devices108,110to access data stored in the distributed storage system. Further, the network modules120,122can provide connections with one or more other components through the cluster fabric106. For example, inFIG.1, the network module120of node116can access a second data storage device130by sending a request through the data module126of a second node118. Data modules124,126can be configured to connect one or more data storage devices128,130, such as disks or arrays of disks, flash memory, or some other form of data storage, to the nodes116,118. The nodes116,118can be interconnected by the cluster fabric106, for example, allowing respective nodes in the cluster to access data on data storage devices128,130connected to different nodes in the cluster. Often, data modules124,126communicate with the data storage devices128,130according to a storage area network (SAN) protocol, such as Small Computer System Interface (SCSI) or Fibre Channel Protocol (FCP), for example. Thus, as seen from an operating system on nodes116,118, the data storage devices128,130can appear as locally attached to the operating system. In this manner, different nodes116,118, etc. may access data blocks through the operating system, rather than expressly requesting abstract files. It should be appreciated that, while the clustered network environment100illustrates an equal number of network and data modules, other embodiments may comprise a differing number of these modules. For example, there may be a plurality of network and data modules interconnected in a cluster that does not have a one-to-one correspondence between the network and data modules. That is, different nodes can have a different number of network and data modules, and the same node can have a different number of network modules than data modules. Further, a host device108,110can be networked with the nodes116,118in the cluster over the storage network connections112,114. As an example, respective host devices108,110that are networked to a cluster may request services (e.g., exchanging of information in the form of data packets) of nodes116,118in the cluster, and the nodes116,118can return results of the requested services to the host devices108,110. In one embodiment, the host devices108,110can exchange information with the network modules120,122residing in the nodes116,118(e.g., network hosts) in the data storage systems102,104.
In one embodiment, the data storage devices128,130comprise volumes132, which are an implementation of storage of information onto disk drives or disk arrays or other storage (e.g., flash) as a file-system for data, for example. Volumes can span a portion of a disk, a collection of disks, or portions of disks, for example, and typically define an overall logical arrangement of file storage on disk space in the storage system. In one embodiment a volume can comprise stored data as one or more files that reside in a hierarchical directory structure within the volume. Volumes are typically configured in formats that may be associated with particular storage systems, and respective volume formats typically comprise features that provide functionality to the volumes, such as providing an ability for volumes to form clusters. For example, where a first storage system may utilize a first format for its volumes, a second storage system may utilize a second format for its volumes. In the clustered network environment100, the host devices108,110can utilize the data storage systems102,104to store and retrieve data from the volumes132. In this embodiment, for example, the host device108can send data packets to the network module120in the node116within data storage system102. The node116can forward the data to the data storage device128using the data module124, where the data storage device128comprises volume132A. In this way, in this example, the host device can access the volume132A, to store and/or retrieve data, using the data storage system102connected by the network connection112. Further, in this embodiment, the host device110can exchange data with the network module122in the node118within the data storage system104(e.g., which may be remote from the data storage system102). The node118can forward the data to the data storage device130using the data module126, thereby accessing volume132B associated with the data storage device130. It may be appreciated that non-disruptively establishing and/or resynchronizing a synchronous replication relationship may be implemented within the clustered network environment100. In an example, a synchronous replication relationship may be established between the volume132A of node116(e.g., a first storage controller) and the volume132B of the node118(e.g., a second storage controller) in a non-disruptive manner with respect to client data access to the volume132A and/or the volume132B. If the volume132A and the volume132B become out of sync, then the volume132A and the volume132B may be resynchronized in a non-disruptive manner. It may be appreciated that non-disruptively establishing and/or resynchronizing a synchronous replication relationship may be implemented for and/or between any type of computing environment, and may be transferrable between physical devices (e.g., node116, node118, etc.) and/or a cloud computing environment (e.g., remote to the clustered network environment100). FIG.2is an illustrative example of a data storage system200(e.g.,102,104inFIG.1), providing further detail of an embodiment of components that may implement one or more of the techniques and/or systems described herein. The data storage system200comprises a node202(e.g., host nodes116,118inFIG.1), and a data storage device234(e.g., data storage devices128,130inFIG.1). The node202may be a general purpose computer, for example, or some other computing device particularly configured to operate as a storage server.
A host device205(e.g.,108,110inFIG.1) can be connected to the node202over a network216, for example, to provide access to files and/or other data stored on the data storage device234. In an example, the node202comprises a storage controller that provides client devices, such as the host device205, with access to data stored within data storage device234. The data storage device234can comprise mass storage devices, such as disks224,226,228of a disk array218,220,222. It will be appreciated that the techniques and systems described herein are not limited by the example embodiment. For example, disks224,226,228may comprise any type of mass storage devices, including but not limited to magnetic disk drives, flash memory, and any other similar media adapted to store information, including, for example, data (D) and/or parity (P) information. The node202comprises one or more processors204, a memory206, a network adapter210, a cluster access adapter212, and a storage adapter214interconnected by a system bus242. The data storage system200also includes an operating system208installed in the memory206of the node202that can, for example, implement a Redundant Array of Independent (or Inexpensive) Disks (RAID) optimization technique to optimize a reconstruction process of data of a failed disk in an array. The operating system208can also manage communications for the data storage system, and communications between other data storage systems that may be in a clustered network, such as attached to a cluster fabric215(e.g.,106inFIG.1). Thus, the node202, such as a network storage controller, can respond to host device requests to manage data on the data storage device234(e.g., or additional clustered devices) in accordance with these host device requests. The operating system208can often establish one or more file systems on the data storage system200, where a file system can include software code and data structures that implement a persistent hierarchical namespace of files and directories, for example. As an example, when a new data storage device (not shown) is added to a clustered network system, the operating system208is informed where, in an existing directory tree, new files associated with the new data storage device are to be stored. This is often referred to as “mounting” a file system. In the example data storage system200, memory206can include storage locations that are addressable by the processors204and adapters210,212,214for storing related software application code and data structures. The processors204and adapters210,212,214may, for example, include processing elements and/or logic circuitry configured to execute the software code and manipulate the data structures. The operating system208, portions of which are typically resident in the memory206and executed by the processing elements, functionally organizes the storage system by, among other things, invoking storage operations in support of a file service implemented by the storage system. It will be apparent to those skilled in the art that other processing and memory mechanisms, including various computer readable media, may be used for storing and/or executing application instructions pertaining to the techniques described herein. For example, the operating system can also utilize one or more control files (not shown) to aid in the provisioning of virtual machines.
The network adapter210includes the mechanical, electrical and signaling circuitry needed to connect the data storage system200to a host device205over a network216, which may comprise, among other things, a point-to-point connection or a shared medium, such as a local area network. The host device205(e.g.,108,110ofFIG.1) may be a general-purpose computer configured to execute applications. As described above, the host device205may interact with the data storage system200in accordance with a client/host model of information delivery. The storage adapter214cooperates with the operating system208executing on the node202to access information requested by the host device205(e.g., access data on a storage device managed by a network storage controller). The information may be stored on any type of attached array of writeable media such as magnetic disk drives, flash memory, and/or any other similar media adapted to store information. In the example data storage system200, the information can be stored in data blocks on the disks224,226,228. The storage adapter214can include input/output (I/O) interface circuitry that couples to the disks over an I/O interconnect arrangement, such as a storage area network (SAN) protocol (e.g., Small Computer System Interface (SCSI), iSCSI, hyperSCSI, Fibre Channel Protocol (FCP)). The information is retrieved by the storage adapter214and, if necessary, processed by the one or more processors204(or the storage adapter214itself) prior to being forwarded over the system bus242to the network adapter210(and/or the cluster access adapter212if sending to another node in the cluster) where the information is formatted into a data packet and returned to the host device205over the network216(and/or returned to another node attached to the cluster over the cluster fabric215). In one embodiment, storage of information on disk arrays218,220,222can be implemented as one or more storage volumes230,232that are comprised of a cluster of disks224,226,228defining an overall logical arrangement of disk space. The disks224,226,228that comprise one or more volumes are typically organized as one or more groups of RAIDs. As an example, volume230comprises an aggregate of disk arrays218and220, which comprise the cluster of disks224and226. In one embodiment, to facilitate access to disks224,226,228, the operating system208may implement a file system (e.g., write anywhere file system) that logically organizes the information as a hierarchical structure of directories and files on the disks. In this embodiment, respective files may be implemented as a set of disk blocks configured to store information, whereas directories may be implemented as specially formatted files in which information about other files and directories is stored. Whatever the underlying physical configuration within this data storage system200, data can be stored as files within physical and/or virtual volumes, which can be associated with respective volume identifiers, such as file system identifiers (FSIDs), which can be 32-bits in length in one example. A physical volume corresponds to at least a portion of physical storage devices whose address, addressable space, location, etc. doesn't change, such as at least some of one or more data storage devices234(e.g., a Redundant Array of Independent (or Inexpensive) Disks (RAID) system). Typically, the location of the physical volume doesn't change in that the (range of) address(es) used to access it generally remains constant.
A virtual volume, in contrast, is stored over an aggregate of disparate portions of different physical storage devices. The virtual volume may be a collection of different available portions of different physical storage device locations, such as some available space from each of the disks224,226, and/or228. It will be appreciated that since a virtual volume is not “tied” to any one particular storage device, a virtual volume can be said to include a layer of abstraction or virtualization, which allows it to be resized and/or flexible in some regards. Further, a virtual volume can include one or more logical unit numbers (LUNs)238, directories236, Qtrees235, and files240. Among other things, these features, but more particularly LUNs, allow the disparate memory locations within which data is stored to be identified, for example, and grouped as a data storage unit. As such, the LUNs238may be characterized as constituting a virtual disk or drive upon which data within the virtual volume is stored within the aggregate. For example, LUNs are often referred to as virtual drives, such that they emulate a hard drive from a general purpose computer, while they actually comprise data blocks stored in various parts of a volume. In one embodiment, one or more data storage devices234can have one or more physical ports, wherein each physical port can be assigned a target address (e.g., SCSI target address). To represent respective volumes stored on a data storage device, a target address on the data storage device can be used to identify one or more LUNs238. Thus, for example, when the node202connects to a volume230,232through the storage adapter214, a connection between the node202and the one or more LUNs238underlying the volume is created. In one embodiment, respective target addresses can identify multiple LUNs, such that a target address can represent multiple volumes. The I/O interface, which can be implemented as circuitry and/or software in the storage adapter214or as executable code residing in memory206and executed by the processors204, for example, can connect to volume230by using one or more addresses that identify the one or more LUNs238. It may be appreciated that non-disruptively establishing and/or resynchronizing a synchronous replication relationship may be implemented for the data storage system200. In an example, a synchronous replication relationship may be established between the volume230of the node202(e.g., a first storage controller) and a second volume of a second node (e.g., a second storage controller) in a non-disruptive manner with respect to client data access to the volume230. If the volume230and the second volume become out of sync, then the volume230and the second volume may be resynchronized in a non-disruptive manner. It may be appreciated that non-disruptively establishing and/or resynchronizing a synchronous replication relationship may be implemented for and/or between any type of computing environment, and may be transferrable between physical devices (e.g., node202, host device205, etc.) and/or a cloud computing environment (e.g., remote to the node202and/or the host device205). One embodiment of non-disruptively establishing and/or resynchronizing a synchronous replication relationship is illustrated by an exemplary method300ofFIG.3. In an example, a synchronous replication relationship may be established between a first storage controller, hosting a primary volume, and a second storage controller.
It may be appreciated that the synchronous replication relationship may be established for a file within the primary volume, a LUN within the primary volume, a consistency group of one or more files or LUNs, a consistency group spanning any number of primary volumes, a subdirectory within the primary volume, and/or any other grouping of data, and that the techniques described herein are not limited to merely a single primary volume and secondary volume, but can apply to any number of files, LUNs, volumes, and/or consistency groups. The synchronous replication relationship may be established in a non-disruptive manner such that client access to the primary volume may be facilitated during the establishment of the synchronous replication relationship. Accordingly, a base snapshot, of the primary volume, may be created. The base snapshot may comprise a point in time representation of data within the primary volume, such as data within a consistency group of files and/or storage objects. At302, a baseline transfer of data from the primary volume to the second storage controller may be performed using the base snapshot to create a secondary volume accessible to the second storage controller. At304, one or more incremental transfers may be performed between the primary volume and the secondary volume until a synchronization criteria is met (e.g., and/or other primary volumes and/or secondary volumes where the synchronous replication relationship exists for a consistency group spanning multiple primary volumes). For example, the synchronization criteria may correspond to a threshold number of incremental transfers or where a last incremental transfer transfers an amount of data below a threshold (e.g., about 10 MB or any other value indicative of the primary volume and the secondary volume having a relatively small amount of divergence). In an example, an incremental snapshot of the primary volume may be created. The incremental snapshot may correspond to a point in time representation of data within the primary volume at a time subsequent to when the base snapshot was created. A difference between the incremental snapshot and a prior snapshot (e.g., a snapshot used to perform the baseline transfer or a last incremental transfer) of the primary volume may be used to perform an incremental transfer of data (e.g., differences in data within the primary volume from when the prior snapshot was created and when the incremental snapshot was created). For example, a block level incremental transfer, of data blocks that are different between the prior snapshot and the incremental snapshot, may be performed. Responsive to completion of the incremental transfer, a common snapshot may be created from the secondary volume. For example, the common snapshot may be used to roll the secondary volume back to a state when the secondary volume mirrored the primary volume, such as for performing a resynchronization between the primary volume and the secondary volume. At306, dirty region logs may be initialized (e.g., in memory) to track modifications of files or LUNs within the primary volume (e.g., and/or other primary volumes where the synchronous replication relationship exists for a consistency group spanning multiple primary volumes) (e.g., track dirty data that has been written to the primary volume but not yet replicated to the secondary volume).
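By way of non-limiting illustration, the baseline transfer at302and the incremental transfers at304may be sketched as follows. The sketch is conceptual: snapshots are modeled as maps from block numbers to block data, deletions are omitted for brevity, and the names used (take_snapshot, SYNC_DELTA_THRESHOLD, etc.) are hypothetical rather than part of any particular storage operating system.

MAX_INCREMENTAL_TRANSFERS = 5    # hypothetical bound on the number of transfers
SYNC_DELTA_THRESHOLD = 16        # hypothetical "small divergence" block count

def take_snapshot(volume):
    # Point in time representation of the volume's blocks.
    return dict(volume)

def block_diff(prior_snapshot, current_snapshot):
    # Blocks that changed (or appeared) since the prior snapshot.
    return {blk: data for blk, data in current_snapshot.items()
            if prior_snapshot.get(blk) != data}

def establish_baseline_and_converge(primary, secondary):
    base = take_snapshot(primary)
    secondary.update(base)                        # at 302: baseline transfer
    prior = base
    for _ in range(MAX_INCREMENTAL_TRANSFERS):    # at 304: incremental transfers
        incremental = take_snapshot(primary)
        delta = block_diff(prior, incremental)    # block level incremental transfer
        secondary.update(delta)
        prior = incremental
        if len(delta) < SYNC_DELTA_THRESHOLD:     # synchronization criteria met
            break
    return prior    # candidate common snapshot for a later resynchronization

In this sketch, the loop realizes the synchronization criteria described above by bounding the number of incremental transfers and by stopping early once a transfer moves less than a threshold amount of data. Turning back to the dirty region logs initialized at306: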
For example, a dirty region log for a file or LUN may comprise bits that can be set to indicate whether regions of the file or LUN have been modified by client write requests that have not yet been replicated to the secondary volume. For example, a client write request, targeting a region of the file or LUN, may be received. The client write request may be implemented upon the region (e.g., to write client data into the region). Responsive to successful implementation of the client write request, a bit within the dirty region log may be set to indicate that the region is a dirty region comprising dirty data. The bit may be reset once the client write request and/or the dirty data has been successfully replicated to the secondary volume. At308, splitter objects for endpoints, such as files or LUNs, may be configured to subsequently split client write requests (e.g., at312) to the primary volume (e.g., and/or other primary volumes and/or secondary volumes where the synchronous replication relationship exists for a consistency group spanning multiple primary volumes) and to the secondary volume (e.g., a splitter object is created and associated with a dirty region log for a file or LUN). For example, a splitter object may be subsequently used to intercept and split a client write request (e.g., before the client write request is received by a file system) into a replication client write request so that the client write request can be locally implemented by the first storage controller upon the primary volume and the replication client write request can be remotely implemented by the second storage controller upon the secondary volume. At this point, the splitter object starts tracking dirty regions using the dirty region logs. At310, responsive to the dirty region logs tracking modifications to the primary volume (e.g., marking regions, modified by client write requests, as dirty), an asynchronous transfer from the primary volume to the secondary volume may be performed (e.g., a final incremental transfer). At312, a synchronous transfer engine session may be initiated between the primary volume and the secondary volume (e.g., and/or other primary volumes and/or secondary volumes where the synchronous replication relationship exists for a consistency group spanning multiple primary volumes), such that a transfer engine is replicating incoming client write requests to the secondary volume based upon data within the dirty region log. For example, responsive to an incoming client write request targeting a dirty region of the file or LUN within the primary volume (e.g., a bit within the dirty region log may indicate that the dirty region has been modified and that the modification has not yet been replicated to the secondary volume), the incoming client write request may be committed to the primary volume and not split for replication to the secondary volume because the dirty region will be subsequently replicated to the secondary volume by a cutover scanner. Responsive to the incoming client write request corresponding to a non-dirty region, the incoming client write request may be locally committed to the non-dirty region of the primary volume and a replication client write request, split from the incoming client write request, may be remotely committed to the secondary volume.
Responsive to the incoming client write request corresponding to a partially dirty region associated with an overlap between a dirty block and a non-dirty block, the incoming client write request may be locally committed to the partially dirty region of the primary volume (e.g., committed to the dirty and non-dirty blocks) and the entire replication client write request may be remotely committed to the secondary volume. At314, the cutover scanner may be initiated to scan the dirty region log for transferring dirty data of dirty regions from the primary volume to the secondary volume (e.g., and/or other primary volumes and/or secondary volumes where the synchronous replication relationship exists for a consistency group spanning multiple primary volumes). For example, the cutover scanner may identify a current dirty region of the primary volume using the dirty region log. A lock may be set for the current dirty region to block incoming client write requests to the current dirty region. In an example, while the lock is set, a new incoming client write request, targeting the current dirty region, may be queued. Dirty data of the current dirty region may be transferred to the second storage controller for storage into the secondary volume. The bit, within the dirty region log, may be reset to indicate that the current dirty region is now a clean region with clean data replicated to the secondary volume. Responsive to successful storage of the dirty data into the secondary volume and/or the bit being reset, the current dirty region may be unlocked. Responsive to the current dirty region (e.g., the clean region) being unlocked, the new incoming client write request may be processed (e.g., the clean region may be locked while the new incoming client write request is being implemented upon the clean region, and then the clean region may be unlocked). Responsive to the cutover scanner completing, the primary volume and the secondary volume may be designated as being in the synchronous replication relationship. While in the synchronous replication relationship, a current client write request to the primary volume may be received. The current client write request may be split into a current replication client write request. The current client write request may be locally committed to the primary volume. The current replication client write request may be sent to the second storage controller for remote commitment to the secondary volume. Responsive to the current client write request being locally committed and the current replication client write request being remotely committed, a completion notification may be sent to a client that submitted the current client write request. In an example, the primary volume and the secondary volume may become out of sync for various reasons, such as network issues, a storage controller failure, etc. Accordingly, a common snapshot between the primary volume and the secondary volume may be used to roll the secondary volume back to a state where the secondary volume mirrored the primary volume. The synchronous replication relationship may be reestablished in a non-disruptive manner (e.g., the primary volume may remain accessible to clients during the resynchronization).
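By way of non-limiting illustration, the cutover scan at314may be sketched as follows, under the assumption that each file or LUN is divided into fixed regions that map one-to-one to bits in an in-memory dirty region log; the structures and names are hypothetical.

import threading
from collections import deque

class Region:
    def __init__(self):
        self.lock = threading.Lock()
        self.queued_writes = deque()   # client writes held while the region is locked

def apply_and_replicate(write, primary, secondary):
    # In the synchronous replication relationship, a write is committed both
    # locally and remotely before completion is acknowledged to the client.
    region_id, data = write
    primary[region_id] = data
    secondary[region_id] = data

def cutover_scan(dirty_log, regions, primary, secondary):
    for region_id, dirty in enumerate(dirty_log):
        if not dirty:
            continue
        region = regions[region_id]
        with region.lock:                              # block incoming client writes
            secondary[region_id] = primary[region_id]  # transfer the dirty data
            dirty_log[region_id] = 0                   # reset bit: region is clean
        while region.queued_writes:                    # process writes queued while locked
            apply_and_replicate(region.queued_writes.popleft(), primary, secondary)

The design point illustrated here is that only the region currently being transferred is locked, and only briefly, so client access to the primary volume continues throughout the cutover. As noted above, the same mechanisms may also be reused when the volumes later fall out of sync.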
For example, the dirty region logs, the splitter objects, the synchronous transfer engine session, and/or the cutover scanner may be used to reestablish the synchronous replication relationship (e.g., at least some of the actions302,304,306,308,310,312, and/or314may be performed to reestablish the synchronous replication relationship in a non-disruptive manner). In an example, the dirty region logs, the splitter objects, the synchronous transfer engine session, and/or the cutover scanner may be used to perform a volume migration operation of the primary volume. For example, the primary volume may be migrated in a non-disruptive manner where a relatively smaller disruption interval is achieved. In this way, client access may be facilitated to the primary volume during the volume migration operation. In an example, a flip resync may be performed in response to a switchover operation from the first storage controller to the second storage controller (e.g., the first storage controller may fail, and thus the second storage controller may take ownership of storage devices previously owned by the first storage controller, such as a storage device hosting the secondary volume, so that the second storage controller may provide clients with failover access to replicated data from the storage devices such as to the secondary volume). Accordingly, the techniques described in relation to method300may be implemented to perform the flip resync to synchronize data from the secondary volume (e.g., now actively used as a primary by the second storage controller to provide clients with failover access to data) to the primary volume (e.g., now used as a secondary backup volume during the switchover operation). FIGS.4A-4Hillustrate examples of a system for non-disruptively establishing and/or resynchronizing a synchronous replication relationship.FIG.4Aillustrates a first storage controller402and a second storage controller404having connectivity over a network406(e.g., the storage controllers may reside in the same or different clusters). The first storage controller402may comprise a primary volume408for which a synchronous replication relationship is to be established with the second storage controller404. Accordingly, a base snapshot410of the primary volume408or portion thereof (e.g., a snapshot of a consistency group, such as a grouping of files or storage objects) may be created. A baseline transfer412, using the base snapshot410, may be performed to transfer data from the primary volume408to the second storage controller404for creating a secondary volume414, such that the secondary volume414is populated with data mirroring the primary volume408as represented by the base snapshot410. FIG.4Billustrates one or more incremental transfers being performed from the first storage controller402to the second storage controller404. For example, an incremental snapshot420of the primary volume408may be created. The incremental snapshot420may comprise a point in time representation of the primary volume408or portion thereof at a subsequent time from when the base snapshot410was created.
The incremental snapshot420and the base snapshot410(e.g., or a last incremental snapshot used to perform a most recent incremental transfer) may be compared to identify differences of the primary volume408from when the incremental snapshot420was created and when the last snapshot, such as the base snapshot410, was created and transferred/replicated to the secondary volume414(e.g., files, directories, and/or hard links may be created, deleted, and/or moved within the primary volume408, thus causing a divergence between the primary volume408and the secondary volume414). In an example, data differences may be transferred using the incremental transfer422. In another example, storage operations, corresponding to the differences (e.g., a create new file operation, a delete file operation, a move file operation, a create new directory operation, a delete directory operation, a move directory operation, and/or other operations that may be used by the second storage controller404to modify the secondary volume414to mirror the primary volume408as represented by the incremental snapshot420), may be transferred to the second storage controller404using the incremental transfer422for implementation upon the secondary volume414. In this way, files, directories, hard links, and/or data within the secondary volume414may mirror the primary volume408as represented by the incremental snapshot420. Incremental transfers, using incremental snapshots, may be performed until a synchronization criteria is met (e.g., a threshold number of incremental transfers, a last transfer transferring an amount of data below a threshold, etc.). FIG.4Cillustrates dirty region logs430being initialized for tracking modifications of files or LUNs within the primary volume408. For example, a dirty region log may comprise bits that may be set to indicate whether regions of a file or LUN have been modified by client write requests that have not been replicated to the secondary volume414and thus are dirty regions, or whether regions are synchronized with the same data between the primary volume408and the secondary volume414and thus are clean regions. Splitter objects431, for endpoints such as the second storage controller404or other storage controllers, may be configured to split client write requests to the primary volume408and to the secondary volume414. Responsive to the dirty region logs430tracking modifications, an asynchronous transfer433of data from the primary volume408to the secondary volume414may be performed (e.g., a final incremental transfer). FIG.4Dillustrates the dirty region logs430being used to track modifications by client write requests to the primary volume408. For example, the first storage controller402may receive a client write request434targeting a second region within a file ABC. The client write request434may be locally implemented436upon the primary volume408. Accordingly, a bit, corresponding to the second region of the file ABC, may be set to indicate that the second region is a dirty region because the modification of the client write request434has not yet been replicated to the secondary volume414. FIG.4Eillustrates the splitter objects431performing client write request splitting. In an example, a synchronous transfer engine session may be initiated to use the dirty region logs430and/or the splitter objects431to process incoming client write requests. For example, an incoming client write request442may be received by the first storage controller402.
The dirty region logs430may be evaluated to determine that the incoming client write request442targets a non-dirty region within the primary volume408. Accordingly, the incoming client write request442may be locally implemented444upon the primary volume408. The incoming client write request442may be split by the splitter objects431into a replication client write request446that is sent to the second storage controller404for remote implementation448upon the secondary volume414. In another example, a second incoming client write request, not illustrated, may be received by the first storage controller402. The second incoming client write request may correspond to a partially dirty region that is associated with an overlap between one or more dirty blocks and one or more non-dirty blocks of the primary volume408. Accordingly, the first storage controller402may locally commit the entire second incoming client write request to the primary volume408, and the entire second incoming client write request may be replicated to the secondary volume414. FIG.4Fillustrates the splitter objects431performing client write request splitting. For example, an incoming client write request450may be received by the first storage controller402. The dirty region logs430may be evaluated to determine that the incoming client write request450targets a dirty region within the primary volume408. Accordingly, the incoming client write request450may be locally implemented452upon the primary volume408, but not replicated to the secondary volume414because a cutover scanner may subsequently replicate dirty data within the dirty region to the secondary volume414. FIG.4Gillustrates the cutover scanner460being initiated to scan the dirty region logs430for transferring dirty data of dirty regions from the primary volume408to the secondary volume414. For example, the cutover scanner460may scan the dirty region logs430to determine that the second region of file ABC is a dirty region. Accordingly, dirty data within the dirty region is replicated to the secondary volume414using a dirty region transfer462, and the dirty region logs430are modified to indicate that the second region is now clean. In this way, the cutover scanner460replicates dirty data to the secondary volume414so that the secondary volume414mirrors the primary volume408(e.g., to reduce or eliminate data divergence between the primary volume408and the secondary volume414in order to bring the primary volume408and the secondary volume414into sync). Additionally, the splitter objects431are splitting and replicating incoming client write requests to the primary volume408and the secondary volume414, which can also reduce or eliminate data divergence in order to bring the primary volume408and the secondary volume414into sync. Thus, once the cutover scanner460is complete, the primary volume408and the secondary volume414are designated as being in a synchronous replication relationship470where data consistency is maintained between the primary volume408and the secondary volume414(e.g., client write requests are committed to both the primary volume408and the secondary volume414before client requests are responded back to clients as being complete), as illustrated inFIG.4H. In an example, the primary volume408and the secondary volume414may become out of sync for various reasons, such as network issues, a storage controller failure, etc.
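The write-splitting decision illustrated inFIGS.4E and4F may be sketched, again conceptually and with hypothetical names, as follows:

DIRTY, CLEAN = 1, 0

def split_client_write(write, dirty_log, primary, secondary):
    # write: {"blocks": [block numbers], "data": {block number: bytes}}
    states = {dirty_log[blk] for blk in write["blocks"]}
    for blk in write["blocks"]:              # always commit locally first
        primary[blk] = write["data"][blk]
    if states == {DIRTY}:
        # Entirely dirty region: do not split; the cutover scanner will
        # replicate this region later (FIG. 4F).
        return
    # Non-dirty region (FIG. 4E), or a partially dirty overlap of dirty and
    # non-dirty blocks: the entire replication client write request is
    # remotely committed to the secondary volume.
    for blk in write["blocks"]:
        secondary[blk] = write["data"][blk]

Replicating the entire write for a partially dirty region keeps the decision simple: the dirty blocks it covers will be overwritten again by the cutover scanner, which is harmless, while the non-dirty blocks are kept in sync immediately. Returning to the scenario in which the volumes have fallen out of sync: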
Accordingly, a common snapshot between the primary volume408and the secondary volume414may be used to roll the secondary volume414back to a state where the secondary volume414mirrored the primary volume408. Once the secondary volume414has been rolled back, the synchronous replication relationship470may be reestablished using the techniques described above (e.g., method300ofFIG.3and/orFIGS.4A-4G) that were used to initially establish the synchronous replication relationship470. For example, the dirty region logs430, the splitter objects431, the synchronous transfer engine session, and/or the cutover scanner460may be used to reestablish the synchronous replication relationship470. In an example, the dirty region logs430, the splitter objects431, the synchronous transfer engine session, and/or the cutover scanner460may be used to perform a volume migration operation of the primary volume408. For example, the primary volume408may be migrated in a non-disruptive manner where a relatively smaller disruption interval is achieved. In this way, client access may be facilitated to the primary volume408during the volume migration operation. Still another embodiment involves a computer-readable medium comprising processor-executable instructions configured to implement one or more of the techniques presented herein. An example embodiment of a computer-readable medium or a computer-readable device that is devised in these ways is illustrated inFIG.5, wherein the implementation500comprises a computer-readable medium508, such as a CD-R, DVD-R, flash drive, a platter of a hard disk drive, etc., on which is encoded computer-readable data506. This computer-readable data506, such as binary data comprising at least one of a zero or a one, in turn comprises processor-executable computer instructions504configured to operate according to one or more of the principles set forth herein. In some embodiments, the processor-executable computer instructions504are configured to perform a method502, such as at least some of the exemplary method300ofFIG.3, for example. In some embodiments, the processor-executable computer instructions504are configured to implement a system, such as at least some of the exemplary system400ofFIGS.4A-4H, for example. Many such computer-readable media are contemplated to operate in accordance with the techniques presented herein. It will be appreciated that processes, architectures and/or procedures described herein can be implemented in hardware, firmware and/or software. It will also be appreciated that the provisions set forth herein may apply to any type of special-purpose computer (e.g., file host, storage server and/or storage serving appliance) and/or general-purpose computer, including a standalone computer or portion thereof, embodied as or including a storage system. Moreover, the teachings herein can be configured to a variety of storage system architectures including, but not limited to, a network-attached storage environment and/or a storage area network and disk assembly directly attached to a client or host computer. The term “storage system” should therefore be taken broadly to include such arrangements in addition to any subsystems configured to perform a storage function and associated with other equipment or systems. In some embodiments, methods described and/or illustrated in this disclosure may be realized in whole or in part on computer-readable media.
Computer readable media can include processor-executable instructions configured to implement one or more of the methods presented herein, and may include any mechanism for storing this data that can be thereafter read by a computer system. Examples of computer readable media include (hard) drives (e.g., accessible via network attached storage (NAS)), Storage Area Networks (SAN), volatile and non-volatile memory, such as read-only memory (ROM), random-access memory (RAM), EEPROM and/or flash memory, CD-ROMs, CD-Rs, CD-RWs, DVDs, cassettes, magnetic tape, magnetic disk storage, optical or non-optical data storage devices and/or any other medium which can be used to store data. Although the subject matter has been described in language specific to structural features or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing at least some of the claims. Various operations of embodiments are provided herein. The order in which some or all of the operations are described should not be construed to imply that these operations are necessarily order dependent. Alternative ordering will be appreciated given the benefit of this description. Further, it will be understood that not all operations are necessarily present in each embodiment provided herein. Also, it will be understood that not all operations are necessary in some embodiments. Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard application or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer application accessible from any computer-readable device, carrier, or media. Of course, many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter. As used in this application, the terms “component”, “module,” “system”, “interface”, and the like are generally intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component includes a process running on a processor, a processor, an object, an executable, a thread of execution, an application, or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within a process or thread of execution, and a component may be localized on one computer or distributed between two or more computers. Moreover, “exemplary” is used herein to mean serving as an example, instance, illustration, etc., and not necessarily as advantageous. As used in this application, “or” is intended to mean an inclusive “or” rather than an exclusive “or”. In addition, “a” and “an” as used in this application are generally to be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Also, at least one of A and B and/or the like generally means A or B and/or both A and B. Furthermore, to the extent that “includes”, “having”, “has”, “with”, or variants thereof are used, such terms are intended to be inclusive in a manner similar to the term “comprising”.
Many modifications may be made to the instant disclosure without departing from the scope or spirit of the claimed subject matter. Unless specified otherwise, “first,” “second,” or the like are not intended to imply a temporal aspect, a spatial aspect, an ordering, etc. Rather, such terms are merely used as identifiers, names, etc. for features, elements, items, etc. For example, a first set of information and a second set of information generally correspond to set of information A and set of information B or two different or two identical sets of information or the same set of information. Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The disclosure includes all such modifications and alterations and is limited only by the scope of the following claims. In particular regard to the various functions performed by the above described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application.
52,984
11860899
Like reference symbols in the various drawings indicate like elements. DETAILED DESCRIPTION Implementations of the present disclosure are directed to a synchronization framework for executing data updates in multi-tenant, service-based applications. More particularly, and as described in further detail herein, the synchronization framework of the present disclosure leverages a messaging queue and uses messaging queue partitioning and consumer groups to apply consistent and concurrent updates of data in multi-tenant, service-based applications. As described in further detail herein, implementations of the present disclosure enable data updates in database systems without database locking. In some implementations, actions include receiving, by a messaging system, a first message having a first key, the first key indicating a first tenant of a set of tenants, providing, by the messaging system, the first message in a first partition of a messaging queue, reading, by a first service instance, the first message from the first partition, the first service instance being in a set of service instances, each service instance executing a service of a service-based application, and in response to the first message, updating, by the first service instance, at least a portion of first data stored within a database system, the portion of first data being associated with the first tenant, the database system storing data of each tenant of the set of tenants. To provide further context for implementations of the present disclosure, and as introduced above, cloud computing can be described as Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand. Users can establish respective sessions, during which processing resources and bandwidth are consumed. During a session, for example, a user is provided on-demand access to a shared pool of configurable computing resources (e.g., computer networks, servers, storage, applications, and services). The computing resources can be provisioned and released (e.g., scaled) to meet user demand. A common architecture in cloud platforms includes services (also referred to as microservices), which have gained popularity in service-oriented architectures (SOAs). In such SOAs, applications are composed of multiple, independent services that are deployed in standalone containers with well-defined interfaces. The services are deployed and managed within the cloud platform and run on top of a cloud infrastructure. For example, a software vendor can provide an application that is composed of a set of services that are executed within a cloud platform. By way of non-limiting example, an electronic commerce (e-commerce) application can be composed of a set of 20-30 services, each service performing a respective function (e.g., order handling, email delivery, remarketing campaigns, and payment handling). Each service is itself an application (e.g., a Java application) and one or more instances of a service can execute within the cloud platform. In some examples, such as in the context of e-commerce, multiple tenants (e.g., users, enterprises) use the same application. For example, and in the context of e-commerce, while a brand (e.g., an enterprise) has its individual web-based storefront, all brands share the same underlying services.
Consequently, each service is multi-tenant aware (i.e., manages multiple tenants) and provides resource sharing (e.g., network throughput, database sharing, hypertext transfer protocol (HTTP) restful request handling on application programming interfaces (APIs)). In multi-tenant deployments of service-based applications, services intermittently need to merge incoming data. For example, catalog updates and/or intelligence metrics are intermittently merged into data records within database systems. When running multiple instances of a service, each service instance can have a scheduled thread that wakes up, determines all tenants that need updating of data (e.g., product catalog update), performs the calculations for merging the data, and merges the data. If multiple services are attempting to merge data at the same time, it is likely that there will be a conflict between the multiple services, which could result in corruption of data within the underlying database. To alleviate this, traditional approaches implement database locking, in which merging executed for a service locks at least a portion of the database from access by other services. This is problematic in that the other services are unable to execute certain functionality while database locking is engaged. In view of the above context, implementations of the present disclosure provide a synchronization framework that leverages a messaging queue and uses messaging queue partitioning and consumer groups to apply consistent and concurrent updates of data in multi-tenant, service-based applications. As described in further detail herein, implementations of the present disclosure enable data updates in database systems without database locking. Implementations of the present disclosure are described in further detail herein with reference to an example context. The example context includes an e-commerce context, in which multiple enterprises (e.g., brands) use an application provided by a software vendor, each of the enterprises being referred to as a tenant with respect to the application. The application is composed of a set of services that are executed within a container orchestration system (e.g., Kubernetes) of a cloud platform. In the example e-commerce context, the enterprises can include respective web-based front-ends, through which users (e.g., customers) can interact with the application. In some examples, each web-based front-end is provided as a progressive web application, which can be described as a type of application software (e.g., programmed using hypertext mark-up language (HTML), cascading style sheets (CSS), and/or JavaScript) delivered through the Internet, and intended to work on platforms that use standards-compliant browsers (e.g., desktop, mobile). In the e-commerce context, a user can browse a catalog of products and/or services and can make purchases through the web-based front-end. Further, back-ends (e.g., back offices, workbenches) are provided, which enable an enterprise to configure how the respective front-end (storefront) will be designed and what content to show where. That is, the back-end is where all of the strategies, product promotions, and the like, are configured by a respective enterprise. User interactions with the application result in requests being transmitted to one or more services for handling.
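Before continuing with the example context, the partition-per-key scheme summarized above may be sketched. The sketch below models the messaging queue with in-process queues and threads rather than an actual broker such as Kafka, and all names are hypothetical; it is intended only to show why keyed partitioning removes the need for database locking.

import queue
import threading
import time

NUM_PARTITIONS = 4
partitions = [queue.Queue() for _ in range(NUM_PARTITIONS)]

def send(tenant_key, message):
    # The key determines the partition, so every update for a given tenant
    # lands in the same partition and is therefore totally ordered.
    partitions[hash(tenant_key) % NUM_PARTITIONS].put((tenant_key, message))

def merge_update(tenant_key, update):
    # Placeholder for merging, e.g., a product catalog update for the tenant.
    print(f"merging {update!r} for {tenant_key}")

def consume(partition):
    # Within a consumer group, each partition is read by exactly one service
    # instance, so no two instances ever merge data for the same tenant
    # concurrently -- no database lock is required.
    while True:
        tenant_key, update = partition.get()
        merge_update(tenant_key, update)

for p in partitions:
    threading.Thread(target=consume, args=(p,), daemon=True).start()

send("tenant-a", {"catalog": "v2"})
send("tenant-b", {"metrics": [1, 2, 3]})
time.sleep(0.5)    # demo only: let the daemon consumers drain the queues

Because the partition is derived from the tenant key, per-tenant ordering and single-writer semantics follow from the queue topology itself rather than from locks in the database system. Returning to the example context introduced above: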
For example, user interactions can result in functionality that is performed by one or more services (e.g., order handling, email delivery, remarketing campaigns, and payment handling). Enterprises, however, periodically update product catalogs, which can be centrally stored in a database system. For example, product catalogs can be intermittently updated to reflect products that are out-of-stock, products that are discontinued, and/or newly offered products. Further, enterprises collect intelligence metrics that can be representative of performance of their e-commerce site, product performance (e.g., impressions, conversions), and the like. The intelligence metrics are intermittently updated within a database system, for example, to enable analytics and other functionality to be performed on the data. While implementations of the present disclosure are described herein with reference to the e-commerce context, it is contemplated that implementations of the present disclosure can be realized in any appropriate context. FIG.1depicts an example container orchestration architecture100in accordance with implementations of the present disclosure. In the depicted example, the example container orchestration architecture100represents deployment of a portion of the container orchestration system Kubernetes introduced above. More particularly, the example architecture100represents a basic structure of a cluster within Kubernetes. In the example ofFIG.1, the example architecture100includes a control plane102and a plurality of nodes104. Each node104can represent a physical worker machine and is configured to host pods. In Kubernetes, a pod is the smallest deployable unit of resources and each pod is provided as one or more containers with shared storage/network resources, and a specification for how to run the containers. In some examples, a pod can be referred to as a resource unit that includes an application container. The control plane102communicates with the nodes104and is configured to manage all of the nodes104and the pods therein. In further detail, the control plane102is configured to execute global decisions regarding the cluster as well as detecting and responding to cluster events. In the example ofFIG.1, the control plane102includes a controller manager110, one or more application programming interface (API) server(s)112, one or more scheduler(s)114, and a cluster data store116. The API server(s)112communicate with the nodes104and expose the API of Kubernetes to exchange information between the nodes104and the components in the control plane102(e.g., the cluster data store116). In some examples, the control plane102is set with more than one API server112to balance the traffic of information exchanged between the nodes104and the control plane102. The scheduler(s)114monitor the nodes104and execute scheduling processes for the nodes104. For example, the scheduler(s)114monitor events related to newly created pods and select one of the nodes104for execution, if the newly created pods are not assigned to any of the nodes104in the cluster. The cluster data store116is configured to operate as the central database of the cluster. In this example, resources of the cluster and/or definition of the resources (e.g., the required state and the actual state of the resources) can be stored in the cluster data store116. The controller manager110of the control plane102communicates with the nodes104through the API server(s)112and is configured to execute controller processes.
The controller processes can include a collection of controllers and each controller is responsible for managing at least some or all of the nodes104. The management can include, but is not limited to, noticing and responding to nodes when an event occurs, and monitoring the resources of each node (and the containers in each node). In some examples, the controller in the controller manager110monitors resources stored in the cluster data store116based on definitions of the resource. As introduced above, the controllers also verify whether the actual state of each resource matches the required state. The controller is able to modify or adjust the resources, so that actual state matches the required state depicted in the corresponding definition of the resources. In some examples, the controllers in the controller manager110should be logically independent of each other and be executed separately. In some examples, the controller processes are all compiled into one single binary that is executed in a single process to reduce system complexity. It is noted that the control plane102can be run/executed on any machine in the cluster. In some examples, the control plane102is run on a single physical worker machine that does not host any pods in the cluster. In the example ofFIG.1, each node104includes an agent120and a proxy122. The agent120is configured to ensure that the containers are appropriately executing within the pod of each node104. The agent120is referred to as a kubelet in Kubernetes. The proxy122of each node104is a network proxy that maintains network rules on nodes104. The network rules enable network communication to the pods in the nodes104from network sessions inside or outside of the cluster. The proxy122is a kube-proxy in Kubernetes. FIG.2depicts a conceptual representation of an application200hosted in a cloud platform. As described herein, the application200is a multi-tenant, service-based application. In the example ofFIG.2, a front-end202and a back-end204are depicted, the back-end204representing a cloud platform. The front-end202includes front-end components210,212,214associated with respective tenants (e.g., enterprises). For example, each of the front-end components210,212,214can be provided as a web-based front-end, through which users (e.g., customers) can interact with the application200. For example, in the e-commerce context, the front-end components210,212,214can be provided as browser-based store front-ends of respective enterprises that enable customers of the respective enterprises to peruse products and make purchases. In the example ofFIG.2, the application200is a service-based application and includes an application service220and services222,224,226,228. In the depicted example, the service222interacts with a database system230. In some examples, the application service220functions as a gateway between the front-end components210,212,214and the services222,224,226,228. For example, the front-end components210,212,214and the application service220can be provided by a software vendor, while one or more of the services222,224,226,228and/or the database system230can be provided by another software vendor. In some examples, the application service220handles requests received from one or more of the front-end components210,212,214. For example, the application service220can itself include logic that can process a request from one of the front-end components210,212,214and provide a response to the front-end component210,212,214. 
As another example, the application service220can include a request handler to forward requests to one or more of the services222,224,226,228. For example, each service222,224,226,228executes particularized functionality to provide a response to a request, or at least a portion of a request. In this sense, and for the functionality executed by and data received from the services222,224,226,228, the application service220functions as a proxy. In some examples, the application service220receives a request, breaks down the request into one or more sub-requests (e.g., specialized requests) and forwards the one or more sub-requests to respective services222,224,226,228. The application service220receives responses from the respective services222,224,226,228, and packages the responses into a single response back to the front-end component210,212,214that issued the original request. In this sense, the application service220can be described as an aggregator of data returned by the services222,224,226,228. In some examples, the application service220and the services222,224,226,228take incoming requests through multiple channels. An example channel includes representational state transfer (REST) controllers for HTTP requests (e.g., GET, PUT, POST, DELETE). Another example channel is through messaging systems (e.g., messaging queue listeners, consumers, such as Kafka and Rabbit). With reference to the non-limiting context of e-commerce, a request from the front-end component210can be received by the application service220. For example, the request can be issued by a user to request display of products that are available through the e-commerce site of an enterprise, here, a first tenant (Tenant 1). In response to the request, the application service220can determine that a page is to be displayed in the front-end component210and can determine that product information and product suggestions are to be displayed in the page. The application service220can issue a sub-request to the service222, which is, in this example, a product catalog service. The service222retrieves product catalog data from the database system230and provides a response to the application service220, which response includes the product catalog data. The application service220can issue a sub-request to the service224, which is, in this example, a product suggestions service (e.g., executing a recommendation engine). The service224provides product recommendation data in a response to the application service220. The application service220aggregates the product catalog data and the product recommendation data and provides a response to the front-end component210to display a page depicting the product catalog data and the product recommendation data (e.g., as images and/or text). Continuing with this example, the user can decide to purchase a product by selecting the product and providing payment information represented in a request to the application service220. The application service220can send a sub-request to the service226, which, in this example, is a payment handling service. In this example, the service226can send a sub-request to the service228, which, in this example, is a payment verification service (e.g., a credit card service that provides purchase approvals/denials). The service228provides a response to the service226, which provides a response to the application service220. The application service220provides a response to the front-end component210(e.g., instructions to display purchase approval/denial to the user). 
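For purposes of illustration only, the fan-out and aggregation behavior of the application service220described above might be sketched as follows. This is a minimal sketch, not the disclosed implementation; the CatalogClient and SuggestionClient interfaces and all names are hypothetical stand-ins for the services222and224.

    import java.util.concurrent.CompletableFuture;

    // Illustrative sketch: the application service issues sub-requests
    // concurrently and aggregates the responses into a single response.
    public class ApplicationServiceSketch {
        public interface CatalogClient { String fetchCatalog(String tenantId); }
        public interface SuggestionClient { String fetchSuggestions(String tenantId); }
        public record PageData(String catalogData, String suggestionData) {}

        private final CatalogClient catalogService;       // stand-in for service 222
        private final SuggestionClient suggestionService; // stand-in for service 224

        public ApplicationServiceSketch(CatalogClient c, SuggestionClient s) {
            this.catalogService = c;
            this.suggestionService = s;
        }

        // Break the page request into sub-requests, issue them concurrently,
        // and package both sub-responses into one response for the front-end.
        public CompletableFuture<PageData> handlePageRequest(String tenantId) {
            CompletableFuture<String> catalog = CompletableFuture.supplyAsync(
                    () -> catalogService.fetchCatalog(tenantId));
            CompletableFuture<String> suggestions = CompletableFuture.supplyAsync(
                    () -> suggestionService.fetchSuggestions(tenantId));
            return catalog.thenCombine(suggestions, PageData::new);
        }
    }

In this sketch the two sub-requests execute in parallel, mirroring the aggregator role described above; error handling, time-outs, and the messaging-based channels are intentionally elided.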
In the example ofFIG.2, the application200further includes a service240that interacts with a database system242. In some examples, the service240can be an analytics service that executes analytics over intelligence metrics (data) of respective enterprises. In some examples, the intelligence metrics are stored in the database system242and are intermittently updated. As noted above, product catalog data is provided in the database system230. The product catalog data is intermittently updated in the database system230. As a cloud-based application, the components of the back-end204are executed within containers of a container orchestration system, such as Kubernetes. As introduced above, a container is defined as a standalone unit of software that is packaged with application (service) code along with all dependencies of the application (service) to run as a single application (service). In the example ofFIG.2, because the application200supports multiple tenants (e.g., multiple enterprises having respective e-commerce sites powered by the application200), each of the components is tenant-aware. That is, for example, the application service220discerns a tenant of a set of tenants based on data provided in the request (e.g., data indicating which tenant is the source of the request) to, in response to the request, provide instructions to display a page that is particular to the tenant (e.g., includes the logo, branding, and the like of the particular tenant). As another example, the service222discerns a tenant of the set of tenants based on data provided in the sub-request (e.g., data indicating which tenant the sub-request is sent on behalf of) to, in response to the sub-request, provide product catalog data that is particular to the tenant (e.g., includes the products offered for sale by the particular tenant). As evidenced in the discussion ofFIG.2, in order to support front-end components (e.g., the front-end components210,212,214), services (e.g., the services220,222,224,226,228,240) execute on the cloud platform to provide the core functionality of the platform (e.g., e-commerce platform including the application200). As introduced above, one of the key challenges in so-configured platforms (e.g., multi-tenant, service-based applications) is to intermittently synchronize tenant-specific data stored in one or more back-end database systems with new data. This synchronization is done repeatedly for all of the tenants. In some examples, synchronization of data is executed at regular time intervals t. FIG.3Adepicts an example architecture300that uses database locking for data updates. In the example ofFIG.3A, the example architecture300includes a service302that intermittently updates data within a database system304. In the example ofFIG.3A, intelligence metrics306and updated data308(e.g., updated product catalog data) are depicted. In a cloud platform, multiple instances of the service302are executed in respective resource units. In the case of Kubernetes, a pod is a resource unit and includes a container, within which an instance of the service302is executed. In the example ofFIG.3A, three containers C1, C2, C3 (i.e., three pods) are depicted. In some examples, each container corresponds to a respective tenant. That is, a service instance is provided for each tenant. As depicted inFIG.3A, a thread of the service302wakes up after time t and acquires a database lock in the database system304. For example, a thread (Thread 1) can correspond to the container C1. 
At time t, the thread reads an update table within the database system304to determine whether data is to be updated for any tenant (e.g., do any of the enterprises have an update to a respective product catalog). In some examples, the update table includes a tenant column and an update column and each row indicates a respective tenant and an indicator as to whether data for the tenant is to be updated. In the example ofFIG.3A, the first thread can read the update table and determine that data is to be updated. In response, the first thread acquires a database lock on the update table within the database system304. In some examples, a row-level database lock is acquired on the update table (i.e., the table with the list of tenants eligible for an update operation). In the example ofFIG.3A, if a first tenant corresponding to the first thread is next in line to apply an update, the row containing the first tenant record is locked, such that no other thread can touch or update the first tenant. In some examples, the database lock can include a SQL lock on a relational table corresponding to a tenant. During synchronization, the SQL lock blocks access to the relational table from other threads that attempt to update data for the tenant. In some examples, the synchronization service acquires a PESSIMISTIC_WRITE lock, which also blocks read-only access (e.g., prevents other transactions from reading, updating, deleting). The database lock is released after the thread completes synchronization, such that other threads can continue processing the tenant. For example, and as depicted inFIG.3A, a thread of the container C2 (Thread 2) and a thread of the container C3 (Thread 3) are blocked from access until the lock is released. This approach, however, is very cumbersome and problematic, because, for example, it requires application-triggered locks on the database, which essentially block concurrent requests (even read-only requests) to the tenant while synchronization is ongoing. In view of this, implementations of the present disclosure provide a synchronization framework that leverages a messaging queue and uses messaging queue partitioning and consumer groups to apply consistent and concurrent updates of data in multi-tenant, service-based applications. In this manner, database locking is obviated. As described in further detail herein, the messaging system is used to simulate database locking in a concurrent manner. Implementations of the present disclosure ensure that, when constructing interim data for batch updates, exactly one interim data copy is created and a received counter stays correct. Implementations of the present disclosure ensure that only one instance of a thread is processing one tenant to avoid data duplication. In some examples, a messaging system is provided as Apache Kafka, also referred to herein as Kafka, which can be described as an open-source distributed event streaming messaging platform. It is contemplated, however, that any appropriate messaging system can be used. Kafka is a publish-subscribe messaging system that uses uniquely named topics to deliver feeds of messages from producers to consumers. For example, consumers can subscribe to a topic to receive a notification when a message of the topic is added to a messaging queue. Producers produce messages, which include key-value pairs, and can assign topics to messages. Kafka provides multiple partitions within the messaging queue for a topic to scale and achieve fault tolerance. 
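Before turning to the partitioning details, and purely for illustration, the traditional locking approach ofFIG.3Amight be sketched as follows. This is a minimal, non-limiting sketch assuming a JPA-based, resource-local service; the TenantUpdate entity, field names, and merge logic are hypothetical (older stacks would use javax.persistence rather than jakarta.persistence).

    import jakarta.persistence.Entity;
    import jakarta.persistence.EntityManager;
    import jakarta.persistence.Id;
    import jakarta.persistence.LockModeType;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Hypothetical row of the update table: one tenant, one pending-update flag.
    @Entity
    class TenantUpdate {
        @Id String tenant;
        boolean updatePending;
    }

    // Illustrative sketch of the FIG. 3A approach (not the disclosed framework).
    public class LockingSynchronizer {
        private final EntityManager em;
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        public LockingSynchronizer(EntityManager em) { this.em = em; }

        // A scheduled thread wakes up every t seconds and synchronizes.
        public void start(long tSeconds) {
            scheduler.scheduleAtFixedRate(this::syncOnce, tSeconds, tSeconds, TimeUnit.SECONDS);
        }

        private void syncOnce() {
            em.getTransaction().begin();
            try {
                // Row-level PESSIMISTIC_WRITE lock on the tenant's row; other
                // threads (even read-only ones) are blocked until release.
                TenantUpdate row = em.find(TenantUpdate.class, "Tenant1",
                                           LockModeType.PESSIMISTIC_WRITE);
                if (row != null && row.updatePending) {
                    mergeData(row);          // hypothetical merge calculations
                    row.updatePending = false;
                }
                em.getTransaction().commit();    // lock released on commit
            } catch (RuntimeException e) {
                em.getTransaction().rollback();  // lock released on rollback
            }
        }

        private void mergeData(TenantUpdate row) { /* merge elided */ }
    }

The sketch makes the drawback concrete: while one instance holds the row lock, every other instance's scheduled thread stalls on the same row, which is precisely the blocking behavior the disclosed framework avoids.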
Partitions enable data of a topic to be split across multiple broker servers for writes by multiple producers and reads by multiple consumers. Messages (and data within the messages) are stored in the partitions for subsequent consumption by consumers. Each message is assigned a key and each partition corresponds to a key. In this manner, messages having the same key are assigned to the same partition. On the consumer side, multiple consumers will read from the same topic and, as such, each message could be read more than once. However, only one consumer is allowed to read data from a given partition. To handle this, Kafka provides for grouping of consumers into consumer groups. For example, consumers can be grouped into a consumer group based on common functionality within a system. While Kafka allows only one consumer within a consumer group to read from a given partition, multiple consumer groups can read from the same partition. Here, each consumer is assigned to a consumer group and subscribes to a topic using a consumer group identifier. In accordance with implementations of the present disclosure, each container is a consumer and the containers are assigned to a consumer group. Messages associated with tenants are published with a key indicating the tenant. In this manner, all messages associated with a particular tenant go to the same partition. In other words, partitions are specific to tenants. For example, for a first tenant, a key <Tenant1> can be used and, for a second tenant, a key <Tenant2> can be used. Messages with the same key go to the same partition. This is represented in Table 1, below:

TABLE 1 - Message Distribution to Partitions
Message (Event)    Key        Partition
1                  Tenant1    P1
2                  Tenant2    P2
3                  Tenant1    P1
4                  Tenant3    P3

With multiple service instances (containers) that consume messages, the number of partitions can be set to the number of service instances. By assigning the service instances to a single, common consumer group, the messaging system ensures that only a single service instance (container) will ever consume any given message. This is represented in Table 2, below:

TABLE 2 - Consumer to Partitions Mapping
Message (Event)    Partition    Consumer
1                  P1           C1
2                  P2           C2
3                  P1           C1
4                  P3           C3

FIG.3Bdepicts an example architecture300′ that provides data updates in accordance with implementations of the present disclosure. In the example ofFIG.3B, the example architecture300′ includes the synchronization service302, the database system304, and a messaging system318(e.g., Apache Kafka). The messaging system318includes a messaging queue320that includes partitions322,324,326. Although three partitions are depicted, it is contemplated that any appropriate number of partitions can be used. Further, a set of consumer groups is defined within the messaging system318. The partitioning of the message queue320dictates how messages (events) are placed in, and consumed from, the message queue320. When an event comes into the messaging system318, the message is placed in one of the partitions322,324,326. For example, the partition322can correspond to C1, the partition324can correspond to C2, and the partition326can correspond to C3. To update data, a batch update is assigned a unique identifier <UUID> and a type <TYPE>. For example, the batch update can include a set of product catalog data that is to be updated in the database system304. Here, <TYPE> indicates an event type. For example, messages for intelligence metrics will have the type ‘metrics-update’ and messages for catalog updates have the type ‘catalog-update’. 
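To make the keying and consumer-group scheme described above concrete, a minimal sketch using the Apache Kafka Java client might look as follows. The topic name, bootstrap address, group identifier, and payload format are illustrative assumptions only, not part of the described embodiments. On the producing side, keying each record by tenant causes all of a tenant's messages to hash to the same partition:

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import java.util.Properties;

    public class UpdatePublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // illustrative address
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Key by tenant: every <Tenant1> message lands in the same partition.
                String payload = "{\"uuid\":\"<UUID>\",\"type\":\"catalog-update\"}";
                producer.send(new ProducerRecord<>("tenant-updates", "Tenant1", payload));
            }
        }
    }

On the consuming side, each container subscribes with the shared group identifier; within the group, Kafka assigns each partition to exactly one instance, so each tenant's updates are processed by one instance, in order:

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    public class UpdateConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "sync-service");  // shared consumer group identifier
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("tenant-updates"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> r : records) {
                        // r.key() identifies the tenant; the database update
                        // for that tenant would be applied here.
                        System.out.printf("tenant=%s update=%s%n", r.key(), r.value());
                    }
                }
            }
        }
    }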
In some examples, a message also includes a payload of data that is to be updated. A message including the <UUID> and the <TYPE> is put into a topic, and the message is keyed by tenant <TENANT>. Here, <TENANT> indicates the particular tenant, for which the data is being updated (e.g., product catalog data for the product catalog of Tenant1). This keying procedure ensures that records will be processed by the consumers in order. In the context of a cloud application, and as noted above, the application is made up of several services. If a first service sends a message to one or more second services, the first service is a producer and the second service(s) is/are consumer(s). In some examples, a catalog service is provided, which manages the catalog updates. The catalog service publishes a message with <UUID> and <TYPE> to mark all of the catalog messages with the same <TYPE> (e.g., ‘catalog-update’). A consumer reads the message from the respective partition and starts processing the update based on the information provided in the message. Because, by design, only one consumer can read any given message, once a message is read, no other consumer can read the message and attempt to process the update. For example, and with reference toFIG.3B, a message keyed to <Tenant1> is published and is added to the partition322. The container C1 reads the message from the partition322and begins processing the update for Tenant1. Once the message is consumed, the message cannot be consumed again. FIG.4depicts an example process400that can be executed in accordance with implementations of the present disclosure. In some examples, the example process400is provided using one or more computer-executable programs executed by one or more computing devices. A message is published to a message queue (402). For example, and as described herein, a catalog service can publish a message with <UUID> and <TYPE> to mark all of the catalog messages with the same <TYPE> (e.g., ‘catalog-update’). The message is keyed by tenant <TENANT>. Here, <TENANT> indicates the particular tenant, for which the data is being updated (e.g., product catalog data for the product catalog of Tenant1). As discussed herein, this keying procedure ensures that records will be processed by the consumers in order. The message is stored in a partition corresponding to the key (404) and the message is read by a service instance that is assigned to a consumer group of the partition (406). For example, and as described herein, a message keyed to <Tenant1> is published and is added to the partition322. The container C1 reads the message from the partition322and begins processing the update for Tenant1. Once the message is consumed, the message cannot be consumed again. Data is updated within the database system (408). For example, and as described herein, the container C1 updates data (e.g., product catalog data) within the database system304using a payload provided in the message. Referring now toFIG.5, a schematic diagram of an example computing system500is provided. The system500can be used for the operations described in association with the implementations described herein. For example, the system500may be included in any or all of the server components discussed herein. The system500includes a processor510, a memory520, a storage device530, and an input/output device540. The components510,520,530,540are interconnected using a system bus550. The processor510is capable of processing instructions for execution within the system500. 
In some implementations, the processor510is a single-threaded processor. In some implementations, the processor510is a multi-threaded processor. The processor510is capable of processing instructions stored in the memory520or on the storage device530to display graphical information for a user interface on the input/output device540. The memory520stores information within the system500. In some implementations, the memory520is a computer-readable medium. In some implementations, the memory520is a volatile memory unit. In some implementations, the memory520is a non-volatile memory unit. The storage device530is capable of providing mass storage for the system500. In some implementations, the storage device530is a computer-readable medium. In some implementations, the storage device530may be a floppy disk device, a hard disk device, an optical disk device, or a tape device. The input/output device540provides input/output operations for the system500. In some implementations, the input/output device540includes a keyboard and/or pointing device. In some implementations, the input/output device540includes a display unit for displaying graphical user interfaces. The features described can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The apparatus can be implemented in a computer program product tangibly embodied in an information carrier (e.g., in a machine-readable storage device, for execution by a programmable processor), and method steps can be performed by a programmable processor executing a program of instructions to perform functions of the described implementations by operating on input data and generating output. The described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors of any kind of computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. Elements of a computer can include a processor for executing instructions and one or more memories for storing instructions and data. Generally, a computer can also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. 
Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits). To provide for interaction with a user, the features can be implemented on a computer having a display device such as a CRT (cathode ray tube), LCD (liquid crystal display), or LED (light emitting diode) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer. The features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, for example, a LAN, a WAN, and the computers and networks forming the Internet. The computer system can include clients and servers. A client and server are generally remote from each other and typically interact through a network, such as the described one. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other implementations are within the scope of the following claims. A number of implementations of the present disclosure have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the present disclosure. Accordingly, other implementations are within the scope of the following claims.
While embodiments are described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that embodiments are not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit embodiments to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include,” “including,” and “includes” mean including, but not limited to. DETAILED DESCRIPTION Various embodiments of methods and apparatus for implementing distributed transactions using log-structured repositories of state transition records and a hybrid approach involving aspects of optimistic and pessimistic concurrency control mechanisms are described. The term “distributed transaction”, as used herein, refers to an atomic unit of work that comprises storage operations (e.g., reads and/or writes) directed to a plurality of data stores. Thus, for example, with respect to a given distributed transaction comprising one or more writes, either all the included writes are committed using the techniques described herein, or none of the included writes are committed, regardless of the number of distinct data stores involved or the types of data stores involved. Distributed transactions may also be referred to herein as “cross-data-store” transactions, in contrast to “single-data-store” transactions within each of which storage operations are directed to only a single data store. Cross-data-store transactions involving a wide variety of data store types may be supported in various embodiments, such as instances of relational database systems, non-relational databases, collections of storage objects implemented at an object storage service providing a web-services interface, and so on. A group of data stores, such as an instance RD-inst1 of a relational database system and an instance NRD-inst1 of a non-relational database system, may be registered as member stores of a storage system for which atomic cross-data-store transactions are to be supported. A given cross-data-store transaction may include several lower level modification operations, each of which may be considered a different state transition at a respective member store of the storage system in at least some embodiments. In some such embodiments, at least one persistent log-structured repository may be established to store records of cross-data-store transaction requests (CTRs) generated by client-side components of the storage system. Such client-side components may include, for example, application programs that utilize client libraries supported by the storage system to prepare and submit transaction requests. The CTR repository may be deemed log-structured in that entries may typically be logically appended to the repository, e.g., in order of corresponding acceptance decisions. In at least some implementations, the order of acceptance of the CTRs may be indicated by respective sequence numbers included in the entries of the repository. 
Such a repository may also be referred to herein as a “persistent log” for CTRs. At any given point in time, the CTR repository may represent the accumulated transitions of a state machine representing the sequence of cross-data-store transactions received and accepted for processing at the data store. Other similar log-structured repositories or persistent logs may also be established for state transitions at each member data store and used for local (i.e., data-store-level) conflict detection in some embodiments as described below in further detail. The terms “persistent log” and “log-structured repository” may be used synonymously herein. In some embodiments, a decision as to whether to accept or reject a CTR submitted by a client-side component may be made by a cross-data-store transaction admission controller of the storage system. A given CTR may include a number of different constituent elements in various embodiments as described below. These elements may include, for example, descriptors or entries representing write sets and write payloads of the transaction, corresponding read sets indicative of the data store objects on which the writes depend, logical constraint descriptors (e.g., to support idempotency of transactions and sequencing between transactions) and so on. The admission controller may perform one or more validation checks (e.g., to ensure that the requested CTR is not a duplicate of an earlier-submitted CTR and/or to ensure that the requested CTR does not violate transaction sequencing constraints) to determine whether to accept the CTR for further processing. If the CTR is accepted for processing, a record or entry corresponding to the CTR may be stored in the CTR log. It is noted that the cross-data-store transaction admission controller may not perform read-write conflict detection in at least some embodiments; instead, as described below, read-write conflict detection may be performed separately for individual data stores involved in the transaction. One or more cross-data-store transaction coordinators (CTCs) may be established to examine the entries of the CTR log in the order in which they were inserted in some embodiments. For each CTR examined, a CTC may coordinate a simplified two-phase commit (2PC) protocol to determine whether the changes indicated in the CTR are to be made permanent or not. In the first phase, the CTC may unpack the CTR and generate respective voting transition requests (VTRs) to be transmitted to local log-based transaction managers (LTMs) of the data stores to which storage operations of the CTR are directed. For example, if a given CTR includes writes W1 and W2 directed to two data stores DS1 and DS2, the CTC may generate respective VTRs VTR1 and VTR2 and transmit them to the LTMs of DS1 and DS2 respectively. Each VTR may include data-store level elements similar to the elements of the CTRs—e.g., a read set pertaining to one data store, a write set pertaining to the data store, and so on. In at least some embodiments, the LTMs may be responsible not only for participating in the 2PC protocol with respect to cross-data-store transactions, but also for responding to single-data-store transaction requests directed to the member data stores as described below. Upon receiving a VTR, in some embodiments an LTM may perform read-write conflict detection and/or one or more logical constraint checks to determine whether the VTR should be accepted or rejected. 
For example, the VTR may indicate a read set comprising one or more storage objects on whose values the writes of the VTR depend—that is, the writes of the VTR may have been generated on the basis of the values of the read set objects at the time the CTR was generated by the client-side component. If the LTM is able to determine, based on an analysis of the VTR elements and at least a subset of previously-committed transition records in its persistent log, that none of the read set objects have been modified since the CTR was generated, the VTR may be conditionally accepted in some embodiments. The acceptance may be conditional in the sense that the final disposition of the writes of the VTR (e.g., whether the writes are to be committed or aborted) may have to wait until the second phase of the 2PC protocol, during which the CTC sends a terminating transition request (TTR) indicating the commit or abort decision. To conclude the first phase of the 2PC protocol, the LTM at each data store involved in the transaction may send a respective “vote” response to the CTC, indicating whether the VTR transmitted to the LTM was conditionally accepted or not. In at least some embodiments, a record of the conditionally accepted VTR may be appended or inserted in the persistent log of the LTM. If the VTR cannot be accepted by an LTM (e.g., due to a read-write conflict or a logical constraint violation), the “vote” response may indicate that the VTR was rejected. During the period between the conditional acceptance of the VTR and the receipt by the LTM of a corresponding commit/abort TTR, in at least some embodiments, the record of conditional acceptance may be treated as the logical equivalent of a write lock on the read set and the write set of the VTR. During this period, a single-data-store transaction request (or another VTR) that attempts to modify the conditionally-accepted VTR's read/write sets may not be accepted by the LTM. Thus, the LTM may perform conflict detection between newly-received transition requests and the record of the conditionally-accepted VTR (as well as the records of previously-committed transitions) during the period pending the receipt of the TTR. In some embodiments, any conflicting transitions may be rejected immediately, while in other embodiments the processing of the conflicting transition requests may be deferred until the 2PC operations for the conditionally-accepted VTR are completed. Meanwhile, the responses from the various LTMs to the VTRs may be collected by the CTC. The first phase of the 2PC protocol may be considered complete when all the responses are received (or if a timeout occurs, which may be considered the equivalent of an abort response). If all the LTMs conditionally accepted their VTRs, the CTC may send a commit request as the TTR to each of the LTMs as part of the second phase of the 2PC protocol. If at least one LTM did not conditionally accept a VTR, the CTC may instead send an abort TTR. If a commit TTR is received, an LTM may unconditionally commit the corresponding VTR changes that were previously conditionally accepted—e.g., by modifying a field in the transition record or “lock” record that was stored for the VTR originally and converting the record to a committed transition record. The write payloads of the committed transitions may eventually (or synchronously) be propagated or applied to the storage devices of the data stores and/or additional write destinations such as materialized views of the data stores in various embodiments. 
If an abort TTR is received rather than a commit, the previously conditionally-accepted changes may be discarded (or simply ignored) without making them permanent. Regardless of the nature of the TTR that is received, the logical lock on the read/write set of the VTR may be released by the LTM. As mentioned earlier, the LTMs may continue to receive and process single-data-store transaction requests (STRs) during the phases of the 2PC protocol in at least some embodiments. With respect to acceptance/rejection decisions, an STR may be treated by an LTM in a manner similar to a VTR—that is, the transaction represented by the STR may be considered to comprise a single transition since it only applies to one data store. The contents of the STRs may include read sets, write sets and logical constraint descriptors for the particular data store to which they are directed. The LTM may perform conflict detection with respect to the STRs based on the read sets, the previously-committed transition requests and the pending conditionally-accepted VTRs. Details regarding the manner in which read-write conflicts may be detected for STRs and VTRs, as well as the way in which logical constraints such as de-duplication and sequencing constraints are implemented for STRs and VTRs in various embodiments, are provided below. At least some of the persistent logs used for CTRs and also for the individual data stores' LTMs may be implemented using groups of geographically dispersed nodes organized as replication DAGs (directed acyclic graphs) in some embodiments as described below. Such replication DAGs may provide a very high level of availability and data durability for the various log entries used for managing the different types of transactions. In at least some embodiments, the implementation of the CTCs and the CTR persistent logs may be optimized for increased throughput, e.g., by setting up respective logs and corresponding CTCs for different combinations of data stores, as also described below in further detail. In at least some environments, the fraction of VTRs and/or STRs for which read-write conflicts and/or logical constraint violations are detected may typically be quite low. Furthermore, in at least some operating environments, the number of cross-data-store transactions generated per unit time may typically be much smaller than the number of single-data-store transactions. Thus, using the log-based transaction management approach described, the majority of transactions may be committed efficiently, without requiring the overhead of maintaining locks for all the data objects in the various data stores. At the same time, the use of the 2PC technique may ensure the required level of consistency (e.g., sequential consistency) and atomicity for those operations that do involve writes to several different data stores. In at least some embodiments, one or more optimizations of the basic distributed commit technique described above may be implemented. For example, in one embodiment, multi-data-store read-only transactions (i.e., transactions that include no writes, but read objects of several data stores) may be handled using a single-phase protocol. The CTC may transmit VTRs with null write sets to the LTMs of the data stores to which the reads of the transaction are directed, and if all the LTMs accept the VTRs (e.g., if no logical constraint violations are found by any of the LTMs), a success indicator may be returned to the client-side component that issued the read-only CTR. 
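As a non-limiting illustration of the two-phase protocol just described, the coordinator-side logic might be sketched as follows. The interface, record, and enum names are hypothetical stand-ins for the CTC, the LTMs, and the protocol messages; timeout handling is reduced to a comment.

    import java.util.List;

    // Illustrative sketch of the CTC's two-phase distributed commit protocol.
    // LocalTransactionManager is a hypothetical stand-in for an LTM.
    public class CrossDataStoreTransactionCoordinator {

        public enum Vote { CONDITIONALLY_ACCEPTED, REJECTED }
        public enum Decision { COMMIT, ABORT }

        public record VotingTransitionRequest(String dataStoreId,
                                              List<String> readSet,
                                              List<String> writeSet) {}

        public interface LocalTransactionManager {
            Vote processVotingTransition(VotingTransitionRequest vtr);  // phase 1
            void processTerminatingTransition(Decision decision);       // phase 2 (TTR)
        }

        // Phase 1: send one VTR per data store and collect the votes.
        // Phase 2: commit only if every LTM conditionally accepted its VTR;
        // otherwise abort. A timeout would be treated like a rejection.
        public Decision execute(List<LocalTransactionManager> ltms,
                                List<VotingTransitionRequest> vtrs) {
            boolean allAccepted = true;
            for (int i = 0; i < ltms.size(); i++) {
                if (ltms.get(i).processVotingTransition(vtrs.get(i)) == Vote.REJECTED) {
                    allAccepted = false;
                }
            }
            Decision decision = allAccepted ? Decision.COMMIT : Decision.ABORT;
            for (LocalTransactionManager ltm : ltms) {
                ltm.processTerminatingTransition(decision);  // send the TTR
            }
            return decision;
        }
    }

In a real deployment the VTRs would be sent asynchronously and the votes gathered as they arrive; the sequential loop here is only meant to show the accept-all-or-abort structure of the protocol.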
In another optimization employed in some embodiments, the CTC may decompose at least some cross-data-store transactions into smaller units (e.g., resolving conflicts among the units at the CTC itself, and re-ordering the units if needed) which can each be atomically implemented at a single data store. In at least one embodiment, causal consistency (a weaker level of consistency than sequential consistency) may be supported for at least some transactions. Example System Environment FIG.1illustrates an example distributed storage system environment in which transactions directed to various combinations of data stores are managed using a plurality of persistent logs for state transition records, according to at least some embodiments. As shown, distributed storage system100includes a plurality of member data stores such as DS1, DS2 and DS3, each of which has an associated log-based transaction manager (LTM)151with a respective conflict detector155and a respective persistent log (i.e., a log-structured repository)156in which records of state transitions pertaining to that particular data store are placed. For example, LTM151A with persistent log156A and conflict detector155A is associated with DS1, LTM151B with persistent log156B and conflict detector155B is associated with DS2, and LTM151C with persistent log156C and conflict detector155C is associated with DS3 in the depicted system. Distributed transactions spanning instances of a variety of data store architectures may be supported in distributed system100in some embodiments—e.g., DS1 may be an instance of a relational database system, DS2 may be an instance of a non-relational database system, DS3 may comprise a collection of unstructured objects managed by a storage service and accessible via web-services interfaces, and so on. Several instances of the same type of data store may be included in a storage system100in some embodiments—e.g., DS1, DS2 and DS3 may each represent a different instance of a relational database system in some cases. A number of storage-related applications, such as application150, may include respective client-side components (CSCs)160of the storage system, at which transaction requests of various types directed to one or more data stores may be generated. In some embodiments, the distributed storage system may expose a set of transaction-related application programming interfaces (APIs), e.g., in the form of a library, which can be invoked by the CSCs to submit transaction requests. Broadly speaking, the client-side components160may generate two categories of transaction requests in the depicted embodiment: single-data-store transaction requests (STRs)142, and cross-data-store transaction requests (CTRs)141. An STR142may include write operations (and/or reads) directed to a single member data store, while a CTR may include writes (and/or reads) directed to more than one member data store. Thus, for example, STRs142A whose writes are directed only to DS1 may be sent by client-side component160to LTM151A, STRs142B with writes directed solely to DS2 may be sent to LTM151B, and STRs142C with writes directed only to DS3 may be sent to LTM151C. Each STR may include, in addition to information about the write sets (the objects being modified) and write payloads (the content of the modifications), a number of additional elements that may be used by the receiving LTM's conflict detector to determine whether the STR is to be accepted or rejected. 
Such elements may include read sets indicating the objects on which the writes depend, conflict check delimiters (e.g., state indicators or last-modified sequence numbers of the LTM's persistent logs, indicating the most recent state of the data store which was viewed by the CSC when preparing the STR), logical constraint descriptors for managing de-duplication and sequencing among transactions, and so on. Using the contents of the STRs and at least a subset of the transition records already stored in their persistent logs (with the subset being selected based on the conflict check delimiters, for example), a conflict detector155may decide to accept an STR for commit, or to reject/abort the STR. Respective transition records comprising at least some of the elements of accepted STRs may be stored in the associated persistent logs156, e.g., with respective commit sequence numbers indicating the order in which the STRs were accepted. After a commit transition record corresponding to an STR is stored, the corresponding write payload may be propagated or applied to the storage devices at which the contents of the data stores are located (not shown inFIG.1), and at least in some cases to additional destinations. Such additional destinations may, for example, include various materialized views162of the data stores, such as view162A of data store DS1, view162B of data store DS2, and view162C of data store DS3. The client-side components may in some embodiments use the materialized views162as the sources for their transactions' reads. The committed writes may be propagated to the materialized views162via respective write appliers164(e.g.,164A,164B or164C). In some embodiments, the operations of at least some write appliers164may be asynchronous with respect to the insertion of the transition records within the persistent logs156, while in other embodiments at least some write appliers164may propagate the writes synchronously with respect to the insertion of the transition records. The LTMs151may provide an indication of the disposition (commit or abort) of the STRs to the client-side components. In some embodiments, transition records corresponding to aborted/rejected STRs may also be stored in the persistent logs. The client-side component160may submit cross-data-store transaction requests141(each of which includes writes and/or reads directed to more than one data store) to a transaction admission controller135in the depicted embodiment, e.g., instead of transmitting the CTRs to any of the LTMs151associated with individual data stores. A given CTR141may include a plurality of write sets and corresponding write payloads, each directed to a respective data store, in the depicted embodiment. In addition, a plurality of read sets, conflict check delimiters, and/or logical constraint descriptors may be included in a CTR, as described below with respect toFIG.2. Some of the logical constraints (such as de-duplication requirements or sequencing requirements) may apply to cross-data-store operations in the depicted embodiment. For example, a de-duplication check may be required from the admission controller135to determine whether a CTR with identical elements was previously submitted, or a sequencing check may be required to verify that some specified previous cross-data-store transaction was committed prior to the acceptance of the currently-requested transaction. 
Other logical constraints included in a CTR may be defined at the individual data-store level in some embodiments, so that the constraint checking may be performed by the LTMs151rather than the admission controller135. In at least some embodiments, while the admission controller135may be responsible for verifying that cross-data-store level logical constraints are not violated by a CTR, the admission controller may not be required to perform read-write conflict detection. Instead, read-write conflicts may be detected at the individual data store level by the LTMs during the first phase of a distributed commit protocol as described below. If the admission controller135determines that a CTR is to be accepted for further processing using the distributed commit protocol, a transition record corresponding to the CTR may be stored in persistent log133. In some embodiments, each such transition record of CTR persistent log133may comprise a sequence number indicative of the order in which it was inserted, as well as at least some of the elements of the CTR itself. A distributed commit protocol, similar to a two-phase commit (2PC) protocol, may be initiated by a cross-data-store transaction coordinator (CTC)137for each of the accepted CTRs in the depicted embodiment. The admission controller135, associated persistent log133, and CTC137may collectively be referred to as distributed transaction management resources herein. The CTC may examine the CTR transition records of log133in the order in which they were inserted in at least some embodiments, and generate a set of voting transition requests (VTRs) corresponding to each examined transition record in the first phase of the distributed commit protocol175. The number of VTRs generated for a given CTR may equal the number of different data stores to which writes are directed in the CTR in at least some implementations. Thus, for example, if a CTR includes one write directed to DS1 and one write directed to DS2, the CTC137may generate two VTRs: VTR1 directed to LTM151A, and VTR2 directed to LTM151B. In effect, the CTC137may unpack the data-store-specific elements (e.g., write set, write payload, read set, logical constraints) of a CTR141which are relevant to DS1, and include those elements in VTR1. Similarly, the data-store-specific elements of the CTR141which are relevant to DS2 may be unpacked or extracted from the CTR and included in VTR2 sent to DS2. After transmitting the VTRs to the LTMs151, the CTC may wait for (typically asynchronous) responses from the LTMs. In response to receiving a VTR, an LTM151's conflict detector155may perform similar checks as would be performed when an STR142is received—e.g., read-write conflicts (if any) with respect to previously-stored transition records at the data store's persistent log156may be identified, the log may be checked for duplicate transitions and/or sequencing violations, and so on. If no conflicts or violations are identified, the VTR may be conditionally accepted (pending the completion of the distributed commit protocol for which the VTR was generated), and a new transition record indicative of the conditional or pending acceptance of the VTR may be stored in the persistent log156of the LTM151in some embodiments. In at least one implementation in which sequence numbers are stored in transition records of the persistent log156, a new sequence number may be added to the record representing the conditional acceptance of the VTR at this stage. 
The new transition record indicative of a conditional acceptance of a VTR may be considered the logical equivalent of an exclusive lock held on the read/write sets of the VTR until the terminating transition request (TTR) of the second phase of the distributed commit protocol is processed at the LTM. In the interim, i.e., until the TTR is received, any new STRs (or VTRs) that conflict with the conditionally accepted VTR (or any of the earlier-stored transition records of the persistent log156) may be rejected by the conflict detector155. An indication of whether the VTR was accepted or rejected may be sent back to the CTC137as part of the first phase of the protocol. Upon receiving the responses to the VTRs from the various LTMs151, the CTC137may determine the disposition or fate of the corresponding CTR141(and its constituent write operations). If all the responses indicated that the VTRs were conditionally accepted, the CTC137may initiate the second phase of the protocol, e.g., by transmitting respective commit TTRs to the LTMs151. If one or more of the VTRs was rejected (e.g., due to read-write conflicts or logical constraint violations), the CTC137may instead send abort TTRs to each of the LTMs151to which a VTR had been sent during the first phase. Upon receiving a commit TTR, in at least some embodiments the LTM151may store an indication in the corresponding VTR record in its persistent log156that the acceptance of the VTR is now unconditional—e.g., the conditional VTR acceptance record may be converted to a commit transition record similar to those created for committed STRs142. The modifications indicated in the transition record may subsequently be treated just as committed writes of STRs are treated, and the “logical lock” that was set on the VTR's read/write set may be removed. Upon receiving an abort TTR, in at least some embodiments the LTM may also release the logical lock. In some embodiments, the conditional VTR acceptance record that was stored in the persistent log156may be modified or deleted as well in response to the receipt of an abort TTR. After receiving the TTRs and performing the corresponding modifications, the LTMs151may send an acknowledgement of the TTRs back to the CTC137. In some embodiments, the CTC137may then provide a response to the client-side component160to indicate the result of the CTR (e.g., whether it was committed or aborted in the second phase of the distributed commit protocol). A number of variations of the technique described above may be implemented in various embodiments. As described below with respect toFIG.8, for example, in some embodiments separate CTCs (and corresponding admission controllers and CTR logs) may be established for CTRs that involve different combinations of data stores. One CTC may be set up solely for transactions involving writes to DS1 and DS2, for example, while another CTC may be set up for transactions involving writes to DS1 and DS3. In at least some embodiments, some or all of the persistent logs (e.g., the CTR persistent log, or the data-store-specific persistent logs) may be implemented using a DAG of replication nodes to provide higher levels of availability and durability. In at least one embodiment, each of the data stores may implement its own programmatic interfaces (e.g., APIs) for reads, and the client-side components may use those APIs to read data instead of relying on materialized views. 
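To complement the coordinator sketch above, the LTM-side handling of voting and terminating transitions might be sketched, purely for illustration, as follows. The class, field, and method names are hypothetical; conflict checks against previously committed records are elided here and sketched separately further below.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Illustrative sketch: the record of a conditionally accepted VTR acts
    // as a logical write lock on the VTR's read/write sets until the TTR.
    public class LocalTransactionManagerSketch {

        enum State { CONDITIONALLY_ACCEPTED, COMMITTED, ABORTED }

        private final Map<String, State> transitionLog = new HashMap<>();
        private final Set<Long> lockedLocationHashes = new HashSet<>();

        // Phase 1: conditionally accept a VTR only if its read/write sets do
        // not touch locations locked by another pending VTR.
        public synchronized boolean vote(String txId, Set<Long> readWriteHashes) {
            for (long h : readWriteHashes) {
                if (lockedLocationHashes.contains(h)) {
                    return false;  // conflicts with a conditionally accepted VTR
                }
            }
            lockedLocationHashes.addAll(readWriteHashes);   // logical lock
            transitionLog.put(txId, State.CONDITIONALLY_ACCEPTED);
            return true;
        }

        // Phase 2: a commit TTR converts the conditional record into a
        // committed transition; an abort TTR discards it. Either way, the
        // logical lock on the read/write sets is released.
        public synchronized void terminate(String txId, boolean commit,
                                           Set<Long> readWriteHashes) {
            transitionLog.put(txId, commit ? State.COMMITTED : State.ABORTED);
            lockedLocationHashes.removeAll(readWriteHashes);
        }
    }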
Transaction Request Contents FIG.2illustrates example elements of single-data-store transaction requests (STRs) and cross-data-store transaction requests (CTRs) that may be generated by client-side components of a distributed storage service, according to at least some embodiments. As shown, an STR244may include a conflict check delimiter (CCD)202, a read set descriptor (RSD)204, a write set descriptor (WSD)206, a write payload (WP)208and one or more optional logical constraint descriptors (LCDs)210in the depicted embodiment. A client library provided by the distributed storage service may be utilized to assemble or generate the STR244and/or the CTR284in the depicted embodiment. In at least some embodiments, the client library may automatically record the read locations from which data is read during the transaction, and/or the write location to which data is written. In some implementations, the client library may also obtain, from the data store to which the STR is directed and from which the data indicated in the RSD is read, a corresponding sequence number (SN) of the most recent transition whose writes have been applied at the data store. Such sequence numbers may also be referred to as “commit sequence numbers” herein. In one embodiment, the SN may be retrieved before any of the reads of the STR are issued. In the depicted embodiment, the SN that represents the state of the data store at the time of the reads may be used as the conflict check delimiter202. The conflict check delimiter202may also be referred to as a committed state identifier, as it represents a committed data store state upon which the requested transaction depends. In some embodiments, a selected hash function may be applied to each of the read locations to obtain a set of hash values to be included in read descriptor204. Similarly, a selected hash function (either the same function as was used for the read descriptor, or a different function, depending on the implementation) may be applied to the location of the write(s) to generate the write set descriptor206in at least one embodiment. In other embodiments, hashing may not be used; instead, for example, an un-hashed location identifier may be used for each of the read and write set entries. The write payload208may include a representation of the data that is to be written for each of the writes included in the STR. Optional logical constraints210may include signatures that are to be used for duplicate detection/elimination and/or for sequencing specified STRs before or after other transitions, as described below in further detail. Some or all of the contents of the transaction request descriptor244may be stored as part of the transition records stored in a persistent log by an LTM to which the STR is directed in some embodiments. It is noted that the read and write locations from which the read descriptors and write descriptors are generated may represent different storage granularities, or even different types of logical entities in various embodiments. For example, for a data store comprising a non-relational database in which a particular data object is represented by a combination of container name (e.g., a table name), a user name (indicating the container's owner), and some set of keys (e.g., a hash key and a range key), a read set may be obtained as a function of the tuple (container-ID, user-ID, hash key, range key). For a relational database, a tuple (table-ID, user-ID, row-ID) or (table-ID, user-ID) may be used. 
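As a non-limiting illustration of the request assembly just described, a client library might construct an STR roughly as follows. The record layout mirrors the elements of STR244(CCD202, RSD204, WSD206, WP208), but the names and the particular hash function are assumptions, not a disclosed wire format.

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Illustrative sketch of client-side STR assembly.
    public class StrBuilder {

        public record SingleStoreTransactionRequest(
                long conflictCheckDelimiter,   // sequence number of the observed state (CCD)
                Set<Long> readSetDescriptor,   // hashed read locations (RSD)
                Set<Long> writeSetDescriptor,  // hashed write locations (WSD)
                byte[] writePayload) {}        // data to be written (WP)

        // A selected hash function applied to a location string; any suitable
        // function could be used in practice (simple polynomial hash shown).
        static long locationHash(String location) {
            long h = 1125899906842597L;
            for (int i = 0; i < location.length(); i++) {
                h = 31 * h + location.charAt(i);
            }
            return h;
        }

        public static SingleStoreTransactionRequest build(long lastAppliedSequenceNumber,
                                                          List<String> readLocations,
                                                          List<String> writeLocations,
                                                          byte[] payload) {
            Set<Long> reads = new HashSet<>();
            readLocations.forEach(loc -> reads.add(locationHash(loc)));
            Set<Long> writes = new HashSet<>();
            writeLocations.forEach(loc -> writes.add(locationHash(loc)));
            // The sequence number retrieved before the reads were issued serves
            // as the conflict check delimiter (committed state identifier).
            return new SingleStoreTransactionRequest(lastAppliedSequenceNumber,
                                                     reads, writes, payload);
        }
    }

A location string here would be derived from the tuples discussed above, e.g., a concatenation of (container-ID, user-ID, hash key, range key) for a non-relational store.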
In various embodiments, the conflict detector155of the LTM151to which the STR is directed may be responsible, using the contents of an STR and the persistent log established for the data store, for identifying conflicts between the reads indicated in the STR and the writes indicated in the log. For relatively simple read operations, generating a hash value based on the location that was read, and comparing that read location's hash value with the hash values of writes indicated in the persistent log may suffice for detecting conflicts. For more complex read requests, using location-based hash values may not always suffice. For example, consider a scenario in which a read request R1 comprises the query "select product names from table T1 that begin with the letter 'G'", and the original result set was "Good-product1". If, by the time that an STR whose write W1 is dependent on R1's results is examined for acceptance, the product name "Great-product2" was inserted into the table, this would mean that the result set of R1 would have changed if R1 were re-run at the time the acceptance decision is made, even though the location of the "Good-product1" data object may not have been modified and may therefore not be indicated in the write records of the log. To handle read-write conflicts with respect to such read queries, or for read queries involving ranges of values (e.g., "select the set of product names of products with prices between $10 and $20"), logical or predicate-based read set descriptors may be used in some embodiments. The location-based read set indicators described above may thus be considered just one example category of result set change detection metadata that may be used in various embodiments for read-write conflict detection. A CTR284may include, corresponding to each data store that is affected by the storage operations to be performed atomically in the requested transaction, at least some elements similar to those of an STR244. Thus, for example, if three data stores were read from during the preparation of the CTR, an array291of three conflict check delimiters252A-252C may be included in the CTR. Similarly, an array292of read set descriptors254A-254C, an array293of write set descriptors256A-256C, and/or an array294of write payloads258A-258C may be included. Some logical constraints to be enforced for the transaction may be defined at the cross-data-store level (e.g., to check whether duplicate CTRs were previously sent to the same admission controller135), while others may be defined at the individual data store level in the depicted embodiment. Accordingly, the array of logical constraint descriptors295may comprise more elements260A-260E than the number of data stores for which read sets or write sets are included. It is noted that the different arrays shown in CTR284may not all have the same number of elements—e.g., the number of data stores read may differ from the number of data stores written to, so the RSD array may have a different size than the WSD or WP arrays. It is also noted that although, to simplify the presentation, the elements of CTR284are shown as arrays inFIG.2, other data structures such as linked lists or hash tables may be used in various embodiments to convey similar types of information.
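A CTR of the kind described above might be represented, purely for illustration, by a container with parallel per-data-store arrays; the field names in the following Python sketch are assumptions made for the example:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CrossStoreTransactionRequest:
        transaction_id: str
        conflict_check_delimiters: List[int] = field(default_factory=list)   # one per data store read
        read_set_descriptors: List[list] = field(default_factory=list)       # per-data-store read hashes
        write_set_descriptors: List[list] = field(default_factory=list)      # per-data-store write hashes
        write_payloads: List[dict] = field(default_factory=list)             # per-data-store write data
        logical_constraint_descriptors: List[dict] = field(default_factory=list)  # may outnumber the stores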
In some embodiments, the client-side component that prepares the CTR284may not necessarily identify which reads, writes or constraints apply to which specific data store—instead, for example, object identifiers of the read/written objects (e.g., table names or file names) may be used, and the CTC137may be responsible for determining the mappings between object names and the data stores. As described below, the CTC may extract the data-store-specific information from a CTR to generate voting transition requests (VTRs) to be directed to the log-based transaction managers of the respective data stores during the distributed commit protocol in at least some embodiments. Distributed Commit Protocol FIG.3illustrates example operations that may be performed prior to and during a first phase of a distributed commit protocol implemented for cross-data-store transactions at a storage system, according to at least some embodiments. As shown, a client-side component160of the storage system may submit STRs390directly to the log-based transaction manager (LTM)350(such as LTM350A) of the relevant data store, while cross-data-store transaction requests301may instead be directed to an admission controller335designated specifically for transactions that involve multiple data stores. Using the contents of the CTR and at least a subset of the CTR persistent log, the admission controller may decide whether to accept or reject a CTR301. In at least some embodiments, as mentioned earlier, the admission controller335may reject CTRs based on detected violation of idempotency-related (e.g., de-duplication) constraints or sequencing considerations, but may not perform any read-write conflict detection. Rejected CTRs362may be discarded in the depicted embodiment by the admission controller335, e.g., after informing the requesting client-side component. Corresponding to each CTR that is not rejected, an entry or transition record indicative of the acceptance may be inserted at the tail of CTR persistent log333in the depicted embodiment, as indicated by arrow310. In at least some implementations, a sequence number (e.g., a value of a monotonically-increasing counter or logical clock maintained by the admission controller) may be included in each transition record stored in the log333. The persistent log333may be considered the equivalent of a FIFO (first-in, first-out) queue in the depicted embodiment, into which newly accepted CTR records are inserted at one end (the tail), and from which the CTC137extracts records at the other end (the head) as indicated by arrow315. In some embodiments, instead of dealing with the CTRs stored in persistent log333in strict FIFO order, a CTC137may be able to process multiple CTRs in parallel under certain conditions. For example, the CTC may be able to examine the contents of the records or entries for two CTRs, CTR1 and CTR2 (e.g., the entry at the head of the log, and the next entry), and determine that the corresponding transactions are directed to non-overlapping portions of the data stores involved and therefore will not conflict with each other. In such scenarios, the CTC137may implement the two-phase distributed commit protocol described below at least partly in parallel for CTR1 and CTR2. In the depicted embodiment, the CTC337may be configured to complete both phases of a two-phase distributed commit protocol for a given CTR transition record, before it begins the first phase of the next CTR transition record.
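In embodiments where such parallelism is permitted, the non-overlap determination mentioned above might resemble the following illustrative sketch. The accessor methods (all_read_hashes, all_write_hashes) are hypothetical, and read/write sets are assumed to be comparable sets of location hashes.

    def may_process_in_parallel(ctr1, ctr2):
        """Two CTRs may safely overlap only if neither one's writes
        intersect the other's reads or writes."""
        w1, w2 = set(ctr1.all_write_hashes()), set(ctr2.all_write_hashes())
        r1, r2 = set(ctr1.all_read_hashes()), set(ctr2.all_read_hashes())
        return not (w1 & (r2 | w2)) and not (w2 & r1)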
During the first phase, the CTC337may unpack or extract the constituent elements of the CTR301into data-store-specific subgroups, and include one such data-store-specific group of elements in each of one or more voting transition requests (VTR)340. For example, the read set descriptors, write set descriptors, conflict check delimiters etc. pertinent to data store DS1 may be determined from CTR301and included in VTR340A sent to LTM350A. Similarly, the read set descriptors, write set descriptors, conflict check delimiters etc. pertinent to data store DS2 may be determined from CTR301and included in VTR340B sent to LTM350B during the first phase of the protocol. The conflict detectors of LTMs350may examine the submitted VTRs340and perform the appropriate read-write conflict detection analysis and/or logical constraint analysis. Corresponding to a VTR340for which no conflicts or constraint violations are found, a transition entry376indicating conditional acceptance may be stored in the LTM's data-store-specific persistent log356(e.g., log356A in the case of LTM350A, and log356B in the case of LTM350B), as indicated by arrows317A and317B. The conditional acceptance entries may represent logical locks (as indicated by the letter "L") that are held on the read/write sets of the VTRs temporarily, pending the receipt of an indication of the disposition of the CTR301(i.e., whether the CTR is to be aborted or committed) during the second phase of the distributed commit protocol. VTRs for which read-write conflicts or constraint violations are identified may be rejected by the LTMs350. Transition entries for rejected VTRs may not be stored in persistent logs356in at least some implementations. A respective voting transition response (e.g., response341A or341B) may be sent to the CTC337from the LTMs350A and350B to indicate whether the VTR was conditionally accepted or whether it was rejected based on data-store-specific analysis. After the CTC337receives the responses341, the second phase of the distributed commit protocol may be begun. It is noted that during the pendency of a conditionally-accepted VTR's record, the decision as to whether a given STR390is to be accepted at a given LTM may be made based at least in part on conflict detection with respect to the conditionally-accepted VTR (as well as the commit transition records377of the logs356). Similarly, any new VTRs that are received at the LTM during the first phase of the commit protocol may also be checked with respect to read-write conflicts with pending VTRs. Thus, in at least the depicted embodiment, it may sometimes be the case that an STR or a VTR may be rejected on the basis of a particular VTR which never gets committed. FIG.4aandFIG.4billustrate example operations that may be performed during a second phase of a distributed commit protocol, according to at least some embodiments. Upon receiving the responses to the VTRs from the LTMs of the individual data stores during the first phase of the protocol, the CTC337may determine whether at least one of the LTMs rejected a VTR. If none of the VTRs were rejected, the CTC337may initiate the second phase by submitting a commit terminating transition request (TTR)440to each LTM350(e.g., commit TTR440A to LTM350A, and commit TTR440B to LTM350B), as indicated inFIG.4a. Upon receiving the commit TTR, an LTM may convert the conditional acceptance or lock transition record for the VTR to a commit transition record, as indicated by the arrows476A and476B.
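The conditional-acceptance and commit-conversion steps described above might be sketched as follows. The record layout (a dictionary whose "conditional" flag stands in for the letter "L") is an assumption made purely for the example.

    def accept_vtr_conditionally(log, vtr, next_sn):
        record = {
            "sn": next_sn,
            "write_set": vtr.write_set,
            "write_payload": vtr.write_payload,
            "conditional": True,   # the "L" flag: logical lock held pending the TTR
            "txn_id": vtr.transaction_id,
        }
        log.append(record)
        return record

    def apply_commit_ttr(log, txn_id):
        for record in log:
            if record.get("txn_id") == txn_id and record["conditional"]:
                record["conditional"] = False  # now equivalent to a committed STR write
                return record
        raise KeyError("no conditional record found for transaction %s" % txn_id)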
In at least some embodiments, after the VTR's record has been modified, an acknowledgement message441corresponding to the TTR may be sent to the CTC from the LTMs. The CTC may then inform the client-side component of the result of the CTR to complete the distributed commit protocol, and proceed to initiate the first phase of the commit protocol for the next CTR in the persistent log433. If at least one of the LTMs350rejects a VTR340(or if no response is received from one of the LTMs before a timeout expires), the CTC337may determine that the terminating transition of the CTR is to be an abort rather than a commit. Accordingly, respective abort TTRs470A and470B may be sent to LTMs350A and350B, as indicated inFIG.4b. Upon receiving an abort TTR470, in the depicted embodiment the LTM350may release the logical lock corresponding to the conditionally-accepted VTR (as indicated by arrows477A and477B). In at least some embodiments, the log entries representing the conditionally-accepted VTRs may be modified to indicate the cancellation of the corresponding changes, and/or removed from the persistent logs356. A respective TTR acknowledgement471(e.g.,471A or471B) may be sent back to the CTC337by each LTM, and the CTC may then notify the client-side component of the rejection of the CTR to complete the distributed commit protocol. In at least one embodiment, some or all of the persistent logs (such as logs356A and356B of the data stores DS1 and DS2, and/or CTR persistent log333) may be implemented using a write-once approach towards log entries, so that any given log entry cannot be modified or overwritten after it is entered into the log. In such implementations, instead of modifying an entry that originally represented a conditionally-accepted VTR, an LTM such as350A may add a new termination transition entry in the data-store-specific persistent log (e.g., log356A in the case of LTM350A) to indicate whether the corresponding CTR was aborted or committed in the second phase of the protocol. The termination transition entry may, for example, include a pointer to the previously-added log entry representing the conditional acceptance of the corresponding VTR. Write appliers associated with the persistent logs of the various data stores may be able to use such termination transition entries to determine whether the writes of the corresponding VTRs are to be applied. In some embodiments, different types of log entries may be stored in the data-store-specific persistent logs for commit transitions of cross-data-store transactions than are stored for single-data-store commit transitions—e.g., a metadata field of a commit log entry may indicate the type of transaction (single-data-store versus cross-data-store) for which the entry is being stored. Log-Based Conflict Detection and Constraint Checking As mentioned earlier, in at least some embodiments the conflict detectors of the LTMs may examine portions of their persistent logs to determine whether transition requests (e.g., STRs or VTRs) are to be accepted or rejected. Over time, the number of entries stored in the logs may become quite large, and it may become inefficient to examine the entire log for each submitted transition request. 
Accordingly, one or more techniques may be implemented to limit the set of log records that have to be examined for a given transition request.FIG.5illustrates examples of the use of transition request elements in conjunction with selected subsets of persistent log records for conflict detection and logical constraint management, according to at least some embodiments. In the depicted embodiment, a transition request544may belong to one of three categories: a VTR (voting transition request), an STR (a single-data-store transaction) or a TTR (a terminating transition request). The category of the transition request may be indicated in transition type element592. As discussed earlier, STRs may be generated by client-side components of the storage service and transmitted directly to LTMs of the appropriate data stores. In contrast, in various embodiments VTRs and TTRs may be generated by cross-data-store transaction coordinators, e.g., using data-store-specific elements extracted from CTRs for which records have been stored in a CTR log, and transmitted to the appropriate LTMs. The decision as to whether to accept or reject the transition request544may be made on the basis of three types of analyses in the depicted embodiment: read-write conflict detection, duplication detection, and/or transition sequencing considerations. Transition records (TRs)552indicative of state transitions (e.g., writes) that have been applied at a given data store may be stored in sequence number order (i.e., in the order in which the writes were committed or applied) in persistent log510. Thus, TR552F represents the latest (most recent) state transition applied at the data store, while TRs552A-552E represent earlier transitions in the order in which they were applied, with TR552A being the least recent among the six TRs552A-552F. Each TR552may include at least an indication of a sequence number (SN)504and a write set descriptor (WSD)505of the corresponding transition. In addition, each TR552may also include a de-duplication signature506and/or a sequencing signature507in the depicted embodiment. At least some transition records of log510may also include an element (e.g., conditional flag599of TR552F) to indicate that they represent conditionally accepted VTRs. In some embodiments, the TRs552may also include read set descriptors of the corresponding transition requests. As implied by the name, a read-write conflict may be said to occur if the read set (on the basis of which the write set and/or write payload may have been generated) of the requested transition has changed (or has a non-zero probability of having been changed) since the transition request was generated. The transition request544may include respective indications or descriptors514and516of the read set and the write set of the transition. In addition, a read-write conflict check delimiter512(e.g., a sequence number representing the last committed transition of the data store as of the time the contents of the read set were examined by the client-side component) may also be included in the transition request544. Such a delimiter512may be used by the conflict detector of the LTM to identify the subset579of the TRs that have to be examined for read-write detection. 
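The delimiter-bounded read-write conflict check described above might, purely as an illustration, take the following form; transition records are assumed to expose sequence_number and write_set attributes, and read/write sets are assumed to be sets of location hashes.

    def rejects_for_read_write_conflict(request, persistent_log):
        read_hashes = set(request.read_set)
        for tr in persistent_log:
            if tr.sequence_number < request.conflict_check_delimiter:
                continue  # the request's reads already reflect this transition
            if read_hashes & set(tr.write_set):
                return True  # a newer write overlaps the read set
        return False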
For example, if the delimiter512indicates that all the changes represented by sequence numbers smaller than SN504C had already been committed at the data store before the transition request (or the corresponding CTR) was generated, this means that only the TRs552C,552D,552E and552F (with sequence numbers greater than or equal to SN504C) have to be examined for detecting possible read-write conflicts. In at least some embodiments, clients of the storage service may wish to enforce idempotency requirements on state transitions, e.g., to ensure that duplicate writes are not applied to one or more of the data stores. In order to avoid duplicate transitions, one or more exclusion signatures522(which may also be referred to as de-duplication signatures) may be generated by the client-side component, and included with a de-duplication check delimiter520in a de-duplication constraint descriptor518. To determine whether the requested transition is a duplicate of an earlier transition, another TR set559may be identified, e.g., by the LTM's conflict detector, in the depicted embodiment starting at a sequence number corresponding to de-duplication check delimiter520, and ending at the most recent transition record552F. For each of the transition records in set559, the conflict detector may check whether the de-duplication signature506stored in the transition record matches the exclusion signature(s)522of the requested transition. If such a match is found, this would indicate that the requested transition is a duplicate. Any of a number of different approaches to the detection of a duplicate transition may be taken in different embodiments. In one embodiment, in which the storage system implements idempotency semantics for transition requests, a duplicate transition request may be treated as follows. While no new work for applying the changes of the transition may be scheduled (since the requested changes have already been committed or conditionally accepted, as indicated by the presence of the match), a success indicator or positive acknowledgement may be provided to the requester of the transition. (Depending on the type of transition which is being checked for duplicates, the requester may be the client-side component or the CTC.) Consequently, in such embodiments, repeated submissions of the same transition request (or transaction request) would have the same net effect as a single submission. Idempotency with regard to duplicates may be especially important in distributed storage systems where network messages (e.g., messages containing transition requests) may sometimes get delayed or dropped, resulting in re-transmissions of the same requests. In other embodiments, a duplicate transition request may be explicitly rejected, e.g., even if no read-write conflicts were detected. If no match is found, and the transition is eventually committed, the transition request's exclusion signature may eventually be stored as the de-duplication signature in the transition record representing the commit. For some applications, clients may be interested in enforcing a commit order among specified sets of transactions or transitions—e.g., a client that submits three different STRs for transactions T1, T2 and T3 respectively may wish to have T1 committed before T2, and T3 to be committed only after T1 and T2 have both been committed. Such commit sequencing constraints may be enforced using sequencing constraint descriptor524in some embodiments.
The sequencing descriptor may contain required sequencing signature(s)528representing one or more transitions that are expected to be committed prior to the transition represented by request544, as well as a sequencing check delimiter526to demarcate the set of transition records in the log510that should be checked for sequencing verification. To determine whether the requested transition's sequencing constraints are met, another TR set509may be identified in the depicted embodiment starting at a sequence number corresponding to sequencing check delimiter526, and ending at the most recent transition record552F. The conflict detector may have to verify that respective transition records with sequencing signatures that match required signatures528exist within TR set509. If at least one of the required signatures is not found in TR set509, the sequencing constraint may be violated and the requested transition may be rejected, even if no read-write conflicts were detected. If all the required sequencing signatures are found in TR set509, and if no read-write conflicts or de-duplication constraint violations that are to result in explicit rejections are detected, the transition may be accepted conditionally (if it is a VTR) or for an unconditional commit (if it is an STR or TTR). In at least some embodiments, a de-duplication signature506may represent the data items written in the corresponding transition in a different way (e.g., with a hash value generated using a different hash function, or with a hash value stored using more bits) than the write set descriptors. Such different encodings of the write set may be used for de-duplication versus read-write conflict detection for any of a number of reasons. For example, for some applications, clients may be much more concerned about detecting duplicates accurately than they are about occasionally having to resubmit transactions as a result of a false-positive read-write conflict detection. For such applications, the acceptable rate of errors in read-write conflict detection may therefore be higher than the acceptable rate of duplicate-detection errors. Accordingly, in some implementations, cryptographic-strength hash functions whose output values take 128 or 256 bits may be used for de-duplication signatures, while simpler hash functions whose output is stored using 16 or 32 bits may be used for the write signatures included in the WSDs. In some scenarios, de-duplication may be required for a small subset of the data stores being used, while read-write conflicts may have to be checked for a much larger set of transitions. In such cases, storage and networking resource usage may be reduced by using smaller WSD signatures than de-duplication signatures in some embodiments. It may also be useful to logically separate the read-write conflict detection mechanism from the de-duplication detection mechanism instead of conflating the two for other reasons—e.g., to avoid confusion among users of the storage service, to be able to support separate billing for de-duplication, and so on. In other embodiments, the write set descriptors may be used for both read-write conflict detection and de-duplication purposes (e.g., separate exclusion signatures may not be used). Similarly, in some embodiments, the same sequence number value may be used as a read-write conflict check delimiter and a de-duplication check delimiter—i.e., the sets of commit records examined for read-write conflicts may also be checked for duplicates. 
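The de-duplication and sequencing checks described above might be sketched as follows. The attribute names (exclusion_signatures, dedup_check_delimiter, required_signatures, sequencing_check_delimiter, and the per-record dedup_signature and sequencing_signature) are assumptions made for the example.

    def violates_dedup_constraint(request, log):
        exclusion = set(request.exclusion_signatures)
        for record in log:
            if record.sequence_number < request.dedup_check_delimiter:
                continue  # outside the demarcated TR set
            if record.dedup_signature in exclusion:
                return True   # the requested transition duplicates an earlier one
        return False

    def violates_sequencing_constraint(request, log):
        required = set(request.required_signatures)
        for record in log:
            if record.sequence_number < request.sequencing_check_delimiter:
                continue  # outside the demarcated TR set
            required.discard(record.sequencing_signature)
        return bool(required)  # some prerequisite transition was not found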
In at least one embodiment, de-duplication may be performed by default, e.g., using the write set descriptors, without the need for inclusion of a logical constraint descriptor in the transition request. As in the case of de-duplication signatures, the sequencing signatures507stored within the TRs552may be generated using a variety of techniques in different embodiments. In some embodiments, they may be generated from the write sets of the transitions; in other embodiments, sequencing signatures may be based at least in part on other factors. For example, in some embodiments the identity of the requesting client may be encoded in the sequencing signatures in addition to the write signatures, the clock time at which the transaction was requested may be encoded, or an indication of the location from which the transaction was requested may be encoded, and so on. Similar considerations as described above regarding the use of different techniques for representing de-duplication signatures than write set signatures may apply in some embodiments. Accordingly, in some embodiments, a different technique may be used to generate sequencing signatures than is used for generating write set descriptor contents, even if both the sequencing signatures and the write set signatures are derived from the same underlying write locations. For example, a different hash function or a different hash value size may be used. In other embodiments, however, the write set descriptors may be used for both read-write conflict detection and sequencing enforcement purposes (e.g., separate sequencing signatures may not be used). Similarly, in some embodiments, the same sequence number value may be used as a read-write conflict check delimiter, a de-duplication check delimiter, and/or a sequencing check delimiter—i.e., the sets of commit records examined for read-write conflicts may also be checked for sequencing and de-duplication. In some cases arbitrary numbers or strings unrelated to write sets may be used as sequencing signatures. In some embodiments, in addition to lower bound sequence numbers for the set of TRs to be checked, upper bounds may also be specified within a transition request to indicate the range of TRs that should be examined for constraint checking. In various embodiments, a cross-data-store transaction admission controller may implement de-duplication and/or sequencing constraint verification using a technique similar to that described above for CTRs, VTRs and TTRs. For example, as indicated inFIG.2, a given CTR may include one or more transaction-level or cross-data-store logical constraint (e.g., sequencing or de-duplication) descriptors, and the transition records stored in the CTR persistent log may also include cross-data-store sequencing signatures and/or cross-data-store de-duplication signatures. The admission controller may use constraint check delimiters included in the CTR to identify the subset of records of the CTR repository that are to be examined for constraint verification, and reject the CTR if either a de-duplication constraint or a sequencing constraint is violated. De-duplication constraints and the straightforward sequencing constraints discussed in the context ofFIG.5represent two specific examples of logical constraints that may be imposed by clients of the storage system on state transitions. In some embodiments, more complex sequencing constraints may be enforced, either at the single-data-store level or at the cross-data-store level.
For example, instead of simply requesting the storage service to verify that two transitions T1 and T2 must have been committed (in any order) prior to the requested transition's commit, a client may be able to request that T1 must have been committed prior to T2. Similarly, in some embodiments a client may be able to request negative ordering requirements: e.g., that some set of transitions {T1, T2, Tk} should have been committed before the requested transition in some specified order (or in any order), and also that some other set of transitions {Tp, Ts} should not have been committed. Example Implementations of Persistent Logs In some embodiments, the persistent logs used for individual data stores and/or for CTR transition records may be replicated for enhanced data durability and/or higher levels of availability.FIG.6illustrates an example of a replication DAG (directed acyclic graph) that may be used to implement a persistent log used for transitions associated with the data stores of a storage system, according to at least some embodiments. In general, a replication DAG640may include one or more acceptor nodes610to which transition requests650(such as STRs, VTRs, or TTRs) may be submitted, one or more committer nodes614, zero or more intermediary nodes612each positioned along a replication pathway comprising DAG edges leading from an acceptor node to a committer node, and zero or more standby nodes616that are configured to quickly take over responsibilities of one of the other types of nodes in the event of a node failure. In some implementations, instead of being incorporated within an acceptor node, the conflict detector may be implemented as a separate entity. In at least some embodiments, each node of a particular replication DAG such as640may be responsible for replicating transition records for the corresponding state machine (e.g., either a state machine of a single data store, or a state machine representing the sequence of cross-data-store transactions processed at the storage service). The transition records may be propagated along a set of edges from an acceptor node to a committer node of the DAG along a replication pathway. InFIG.6, the current replication pathway starts at acceptor node610, and ends at committer node614via intermediary node612. For a given transition record, one replica may be stored at each of the nodes along the replication path, e.g., in transition sets672A,672B and672C. Each transition record propagated within the DAG may include a respective sequence number or a logical timestamp that is indicative of an order in which the corresponding transaction request was processed (e.g., at the acceptor node610). When a particular transition record reaches a committer node, e.g., after a sufficient number of replicas of the record have been saved along the replication pathway, the corresponding transition may be explicitly or implicitly committed. If for some reason a sufficient number of replicas cannot be created, the transition records may be removed in some embodiments from the nodes (if any) at which they have been replicated thus far. After the modification has been committed, one or more write appliers692may propagate the change to a set of destinations (such as materialized views, or storage devices at which the contents of the data stores are located) that have been configured to receive the state transitions, as described earlier.
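As a simplified, single-process illustration of record propagation along such a replication pathway (real DAG nodes would be separate processes communicating over a network, and a committer would typically require a count of durable replicas rather than mere arrival):

    class ReplicationNode:
        def __init__(self, name, successor=None):
            self.name, self.successor = name, successor
            self.transition_records = []

        def propagate(self, record):
            self.transition_records.append(record)   # store a local replica
            if self.successor is not None:
                self.successor.propagate(record)      # forward along the pathway
            else:
                record["committed"] = True            # committer node reached

    committer = ReplicationNode("committer")
    intermediary = ReplicationNode("intermediary", successor=committer)
    acceptor = ReplicationNode("acceptor", successor=intermediary)
    acceptor.propagate({"sn": 42, "write_set": [0x1A2B], "committed": False})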
In some implementations, only a subset of the DAG nodes may be read by the appliers692in order to propagate committed writes to their destinations. In other embodiments, the appliers may read commit records from any of the DAG nodes to propagate the changes. In at least one embodiment, write appliers may be implemented as respective threads or processes that may run at the same hosts as one or more of the DAG nodes. In other embodiments, write appliers may run on different hosts than the DAG nodes. A transition record may also be transmitted eventually to standby node616, and a replica of it may be stored in transition record set672D after it has been committed, so that the standby node616is able to replace a failed node of the DAG quickly if and when such a failover becomes necessary. A log configuration manager (LCM)664may be responsible for managing changes to DAG configuration (e.g., when nodes leave the DAG due to failures, or join/re-join the DAG) by propagating configuration-delta messages asynchronously to the DAG nodes in the depicted embodiment. Each configuration-delta message may indicate one or more changes to the DAG configuration that have been accepted or committed at the LCM664. In some embodiments, each replication node may implement a respective deterministic finite state machine, and the LCM may implement another deterministic finite state machine. The protocol used for managing DAG configuration changes may be designed to maximize the availability or "liveness" of the DAG in various embodiments. For example, the DAG nodes may not need to synchronize their views of the DAG's configuration in at least some embodiments; thus, the protocol used for transition record propagation may work correctly even if some of the nodes along a replication pathway have a different view of the current DAG configuration than other nodes. InFIG.6, each of the nodes may update its respective DAG configuration view674(e.g.,674A,674B,674C or674D) based on the particular sequence of configuration-delta messages it has received from the LCM664. It may thus be the case, in one simple example scenario, that one node A of a DAG640continues to perform its state transition processing responsibilities under the assumption that the DAG consists of nodes A, B, C and D in that order (i.e., with a replication pathway A-to-B-to-C-to-D), while another node D has already been informed as a result of a configuration-delta message that node C has left the DAG, and has therefore updated D's view of the DAG as comprising a changed pathway A-to-B-to-D. The LCM may not need to request the DAG nodes to pause processing of transitions in at least some embodiments, despite the potentially divergent views of the nodes regarding the current DAG configuration. Thus, the types of "stop-the-world" configuration synchronization periods that may be required in some state replication techniques may not be needed when using replication DAGs of the kind described herein to implement persistent logs for distributed transaction management. Although a linear replication pathway is shown inFIG.6, in general, a replication pathway may include branches at least at some points of time (e.g., during periods when some DAG nodes have received different configuration delta messages than others).
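The asynchronous, per-node view updates described above might be illustrated as follows; the configuration-delta message format is hypothetical and chosen only for the example:

    class DagNodeConfigView:
        def __init__(self, initial_pathway):
            self.pathway = list(initial_pathway)  # e.g., ["A", "B", "C", "D"]

        def apply_delta(self, delta):
            # Each delta describes one change accepted at the LCM.
            if delta["change"] == "leave":
                self.pathway.remove(delta["node"])
            elif delta["change"] == "join":
                self.pathway.insert(delta["position"], delta["node"])

    # Two nodes may transiently hold different views, as described above:
    view_at_a = DagNodeConfigView(["A", "B", "C", "D"])
    view_at_d = DagNodeConfigView(["A", "B", "C", "D"])
    view_at_d.apply_delta({"change": "leave", "node": "C"})  # D knows C left
    assert view_at_a.pathway != view_at_d.pathway             # A does not, yet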
Under most operating conditions, the techniques used for propagating DAG configuration change information may eventually result in a converged consistent view of the DAG's configuration at the various member nodes, while minimizing or eliminating any downtime associated with node failures/exits, node joins or node role changes. It is noted that in some embodiments, the transition records used for distributed transaction management may be stored without using the kinds of replication DAGs illustrated inFIG.6. In at least some embodiments, the member nodes of a replication DAG may each be implemented as a respective process or thread running at a respective host or hardware server. The hosts themselves may be physically dispersed, e.g., within various data centers of a provider network. Networks set up by an entity such as a company or a public sector organization to provide one or more services (such as various types of multi-tenant and/or single-tenant cloud-based computing or storage services) accessible via the Internet and/or other networks to a distributed set of clients or customers may be termed provider networks in this document. Provider networks may also be referred to as “public cloud” environments. Some or all of the data stores for which distributed transaction support is provided using the techniques described herein may be established using network-accessible database services and/or other storage services of a provider network in some embodiments. In one embodiment, a provider network at which support for distributed transactions is implemented may be organized into a plurality of geographical regions, and each region may include one or more availability containers, which may also be termed “availability zones” herein. An availability container in turn may comprise portions or all of one or more distinct locations or data centers, engineered in such a way (e.g., with independent infrastructure components such as power-related equipment, cooling equipment, or physical security components) that the resources in a given availability container are insulated from failures in other availability containers. A failure in one availability container may not be expected to result in a failure in any other availability container; thus, the availability profile of a given resource is intended to be independent of the availability profile of resources in a different availability container. FIG.7illustrates an example of a distribution of nodes of a persistent log's replication DAG across multiple availability containers of a provider network, according to at least some embodiments. Provider network702includes three availability containers766A,766B and766C in the depicted embodiment, with each availability container comprising some number of DAG node hosts710. Node host710A of availability container766A, for example, comprises a DAG node722A, local persistent storage (e.g., one or more disk-based devices)730A, and a proxy712A that may be used as a front end for communications with clients of the storage system. Similarly, node host710B in availability container766B comprises DAG node722B, local persistent storage730B, and a proxy712B, and node host710C in availability container766C includes DAG node722C, local persistent storage730C and a proxy712C. In the depicted embodiment, DAG nodes722(and/or proxies712) may each comprise one or more threads of execution, such as a set of one or more processes. 
The local persistent storage devices730may be used to store transition records as they are propagated along replication path791(and/or DAG configuration-delta message contents received at the DAG nodes722) in the depicted embodiment. The log configuration manager (LCM) of the DAG depicted in the embodiment ofFIG.7itself comprises a plurality of nodes distributed across multiple availability containers. As shown, a consensus-based LCM cluster790may be used, comprising LCM node772A with LCM storage775A located in availability container766A, and LCM node772B with LCM storage775B located in availability container766B. The depicted LCM may thus be considered fault-tolerant, at least with respect to failures that do not cross availability container boundaries. The nodes of such a fault-tolerant LCM may be referred to herein as “configuration nodes”, e.g., in contrast to the member nodes of the DAG being managed by the LCM. Changes to the DAG configuration (including, for example, node removals, additions or role changes) may be approved using a consensus-based protocol among the LCM nodes772. Representations of the DAG configuration may have to be stored in persistent storage by a plurality of LCM nodes before the corresponding configuration-delta messages are transmitted to the DAG nodes722. The number of availability containers used for the LCM and/or for a given replication DAG may vary in different embodiments and for different applications, depending for example on the availability requirements or data durability requirements of the applications. Partition-Based Cross-Data-Store Transaction Management In some embodiments, as mentioned earlier, a cross-data-store transaction coordinator (CTC) may wait until the distributed commit protocol for one transaction is complete before initiating the distributed commit protocol processing for the next transaction request in the CTR persistent log. Such an approach may be used, for example, to ensure that at least with respect to cross-data-store transactions, sequential consistency is enforced at the storage system. However, depending on the number of data stores in the system and the nature of the cross-data-store transactions (i.e., which combinations of data stores are modified in various transactions), an approach that uses multiple CTCs may be more efficient.FIG.8illustrates an example of a distributed transaction management architecture in which a plurality of transaction coordinators may be configured, with each coordinator responsible for managing cross-data-store transactions directed to a respective combination of data stores, according to at least some embodiments. In the depicted embodiment, client-side components860(such as860A or860B) may generate cross-data-store transaction requests, each directed at some subset of data stores DS1, DS2 and DS3 of a distributed storage system. However, instead of directing all the CTRs to a single admission controller, the client-side components may select one of four admission controllers835A-835D for any given CTR, depending on which combination of data stores is being read and/or modified in the transaction request. Admission controller835A, associated persistent log833A and CTC837A may be established to manage transactions that involve the combination of DS1 and DS2 in the depicted embodiment. 
Similarly, for transactions that read or write to the combination of DS1 and DS3, admission controller835B, log833B and CTC837B may be used, and for transactions that involve DS2 and DS3, admission controller835C, log833C and CTC837C may be configured in the depicted embodiment. Finally, for transactions that read and/or write data at all three data stores, admission controller835D, log833D and CTC837D may be used. In at least one embodiment, the CTC responsible for transactions directed to a given combination of data stores may be able to process more than one such transaction at a time under some conditions, instead of dealing with such transactions in strict sequence (in a manner similar to that described earlier for overlapping processing of CTRs in implementations in which a single persistent log is used for all combinations of data stores). For example, CTC837A may be able to detect, upon examining a set of two or more CTRs in log833A, that the CTRs of the set do not conflict with each other, and may therefore proceed with the distributed commit protocols for several or all of the transactions of the set in parallel. Such deployment of respective sets of distributed transaction management resources for different combinations of data stores may be advantageous in that the amount of time that a given CTR has to wait in a persistent log, before the distributed commit protocol for the CTR is begun, may thereby be reduced. For example, consider a scenario in which CTR1 involving DS1, DS2 and DS3 is ready for admission control at a time T0, and that a different request CTR2, which involves only DS1 and DS2, is ready for admission control a short time later, e.g., at T0+delta1. Assume that it takes time Tproc for the processing of the distributed commit protocol for CTR1, where Tproc is much larger than delta1. If a single persistent log and a single CTC were being used, then the first phase of CTR2's distributed commit protocol processing may not even be begun until T0+Tproc: that is, for (Tproc-delta1), no progress may be made on CTR2. If, instead, the partitioned approach to distributed transaction management illustrated inFIG.8is used, the processing of CTR2 may be begun much sooner (e.g., by CTC837A) in parallel with CTR1's processing (which may be handled by CTC837D). Parallelized handling of the distributed transactions, conceptually similar to the approach illustrated inFIG.8, may be implemented in other ways in different embodiments. In some embodiments, for example, a single admission controller may still be used, or a single persistent log may still be deployed, while distributing the commit protocol workload among several different coordinators. In some embodiments in which the storage system comprises a large number of data stores, separate sets of CTR management resources need not be set up for all the different combinations of data stores—instead, some CTCs (with associated persistent logs and/or admission controllers) may be established to handle more than one combination of data stores. Consider a storage system comprising four data stores DS1-DS4, so that 11 data store combinations are possible (6 combinations involving two data stores each, 4 involving three data stores each, and one involving all four data stores).
In one embodiment, the 11 combinations may be mapped to just three CTCs: CTC1 responsible for (DS1×DS2), (DS1×DS3) and (DS1×DS4), CTC2 responsible for (DS2×DS3), (DS2×DS4) and (DS3×DS4), and CTC3 responsible for the three-data-store combinations and the four-data-store combination. Methods for Supporting Distributed Transactions Using Persistent Change Logs FIG.9is a flow diagram illustrating aspects of operations that may be performed by cross-data-store transaction admission controllers and coordinators, according to at least some embodiments. As shown in element901, one or more repositories for cross-data-store transaction requests (CTRs) may be established at a distributed storage system comprising a plurality of data stores with respective log-based transaction managers (LTMs). An admission controller may be configured for making decisions as to which CTRs should be added or inserted into the repository. In some embodiments, the repositories may themselves be log-structured, similar to the persistent logs used for storing state transition records at the individual data stores. In other embodiments, other storage mechanisms (such as implementations of FIFO queues or linked lists) may be used for the CTR repositories. As indicated in element904, an admission controller may receive a CTR from a client-side component of the system, e.g., via a programmatic interface exposed as a client library of the distributed transaction management environment. The CTR may comprise some combination of data-store-level elements (such as, for one or more data stores, respective read set descriptors, write set descriptors, conflict-check delimiters, logical constraint descriptors comprising exclusion signatures or required signatures) and/or transaction-level elements (e.g., de-duplication or sequencing constraint descriptors for the transaction as a whole) in the depicted embodiment. In some embodiments, the admission controller may perform one or more checks to determine whether the CTR is to be accepted for further processing: e.g., if any global logical constraint descriptors are included in the CTR, the admission controller may verify that the constraints would not be violated if the CTR were to be accepted. To do so, the admission controller may, for example, examine at least a subset of the transition records stored in the CTR persistent log, comparing exclusion signatures of the CTR with de-duplication signatures of the previously-stored log records to identify duplicates, and comparing required signatures of the CTR with sequencing signatures of the previously-stored log records to verify commit sequencing. If the CTR does not violate any of the constraints checked by the admission controller (as indicated in element907), a transition record indicating that the CTR has been accepted for processing may be added to the persistent log (element913). In at least some implementations, a logical timestamp or sequence number indicative of the order in which the CTR was approved relative to other CTRs may be included in the transition record, in addition to some or all of the data-store-specific elements included in the CTR. If the CTR cannot be accepted, e.g., due to a constraint violation, different actions may be taken depending on the nature of the violation and the idempotency policies being supported. In some cases, as indicated in element910, a message indicating that the CTR has been rejected may be transmitted to the client-side component. 
In other cases, e.g., if the CTR was identified as representing a duplicate of an earlier-committed transaction and if idempotency for such duplicate requests is being supported, an indication that the CTR was committed may be provided to the client-side component by the admission controller. Regardless of whether the CTR was approved or rejected, the admission controller may then wait for subsequent CTRs, and repeat the operations corresponding to elements904onwards for the next CTR received. It is noted that some CTRs may not include transaction-level or global logical constraints, in which case the admission controller may not have to perform any constraint-checking, and may simply insert corresponding transition records into the CTR persistent log. In the depicted embodiment, a CTC may be assigned to examine entries of the persistent log (e.g., in insertion or FIFO order) and initiate distributed commit protocol processing for each entry examined. During each iteration of its operations, the CTC may examine the next CTR transition record (i.e., the most recent record that has not yet been examined) in the CTR persistent log (element951). To start the first phase of the distributed commit protocol, the CTC may unpack or extract the data-store-specific elements (read sets, write sets, etc.) of the CTR, and generate respective VTRs (voting transition requests) for the one or more data stores to which the operations of the CTR are directed. The VTRs may then be transmitted to the LTMs of the respective data stores. The CTC may then wait to receive responses to the VTRs from the LTMs to complete the first phase of the commit protocol for the CTR being processed. In other embodiments, as discussed earlier, the CTC may schedule the distributed commit protocol operations for more than one CTR in parallel (e.g., if the CTC is able to verify that the CTRs do not conflict with one another) instead of processing the log entries in strict sequential order. During the second phase of the commit protocol, the CTC may transmit one of two types of terminating transition requests (TTRs) to the LTMs of the data stores. If all the responses from the data store LTMs indicate that the VTRs were conditionally approved (as detected in element954), the CTC may send a commit TTR to each LTM (element957) to indicate that the modifications indicated in the corresponding VTR are to be made permanent. In contrast, if one or more of the LTMs reject their VTR (as also detected in element954), the CTC may send an abort TTR to each LTM (element960) to indicate (to those LTMs that may have conditionally accepted their VTRs) that the modifications indicated in the VTRs are not to be made permanent. In at least one implementation, the CTC may treat the sustained absence of a response to a VTR from an LTM as the equivalent of a rejection—e.g., if a timeout period associated with a VTR expires and a particular LTM has not yet responded, the CTC may send abort TTRs to one or more of the LTMs. In some embodiments, the second phase of the commit protocol may be considered complete when a respective response (e.g., an acknowledgement) of the TTR is received from each LTM. In such embodiments, the CTC may provide an indication of the disposition of the CTR to the client-side component (e.g., whether the transaction was aborted or committed). The CTC may then examine the persistent log to begin the next iteration of its processing, and repeat operations corresponding to elements951onwards for the next CTR examined.
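Combining the pieces above, the admission-control flow described above may be sketched as follows. This illustration reuses the constraint-check helpers sketched earlier and assumes the same hypothetical attribute names; the monotonic sequence counter stands in for the logical clock mentioned previously.

    import itertools
    from types import SimpleNamespace

    class AdmissionController:
        def __init__(self, ctr_log):
            self.ctr_log = ctr_log            # previously accepted CTR records
            self._seq = itertools.count(1)    # stand-in for a logical clock

        def handle_ctr(self, ctr):
            # Constraint checks applied here at the cross-data-store level.
            if violates_dedup_constraint(ctr, self.ctr_log):
                return "already-committed"    # idempotent answer for duplicates
            if violates_sequencing_constraint(ctr, self.ctr_log):
                return "rejected"
            record = SimpleNamespace(
                sequence_number=next(self._seq),
                dedup_signature=(ctr.exclusion_signatures[0]
                                 if ctr.exclusion_signatures else None),
                sequencing_signature=getattr(ctr, "sequencing_signature", None),
                ctr=ctr,
            )
            self.ctr_log.append(record)       # insert at the tail for the CTC
            return "accepted"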
FIG.10is a flow diagram illustrating aspects of operations that may be performed by a log-based transaction manager (LTM) associated with a data store supporting distributed transactions, according to at least some embodiments. A particular LTM, LTM1, may be established for handling state transition requests directed to a particular data store DS1 of a distributed storage system comprising a plurality of data stores. As indicated in element1001, LTM1 may receive an indication of a particular transition request TR1, which may comprise either a single-data-store transaction request (STR) submitted by a client-side component of the system, or one of two types of transition requests (VTRs or TTRs) submitted by a cross-data-store transaction coordinator (CTC). In the depicted example, TR1 may include elements (e.g., a read set descriptor, a write set descriptor and/or a conflict check delimiter) that can be used to determine whether a read-write conflict exists between TR1 and previously-stored records of the LTM's persistent log. For example, TR1's read set may indicate one or more objects that were read in order to determine the contents of the TR1 write set (or write payload), and TR1's read-write conflict check delimiter may indicate a sequence number corresponding to a committed state of DS1 at the time that the one or more objects were read. In one embodiment, if any writes directed to the read set subsequent to the sequence number indicated as the conflict check delimiter have been accepted (either conditionally or unconditionally), a determination may be made by the LTM that a read-write conflict has been detected. Similarly, logical constraint descriptors of the kinds described earlier (e.g., de-duplication constraint descriptors or sequencing constraint descriptors) may contain exclusion signatures, required signatures, and constraint-checking delimiters that may be usable by the LTM (together with at least a subset of previously-stored unconditional or conditional transition records) to determine whether TR1 violates a logical constraint in the depicted embodiment. If either a read-write conflict or a logical constraint violation is detected with respect to TR1 (as determined in element1004), a selected type of conflict detection response may be sent to the source of TR1 (the client-side component if TR1 were an STR, or a CTC if TR1 were a VTR or TTR) in the depicted embodiment (element1028). The response may be an explicit rejection, e.g., if a read-write conflict were detected, a sequencing constraint violation were detected, or if a policy to respond to duplicate TRs with explicit rejections were in use. If duplicate transition requests are to be handled in accordance with idempotency semantics, a commit ACK corresponding to a duplicate TR may be sent to the source of TR1 instead of a rejection. If no conflict or constraint violation is detected (as also determined in element1004), different operations may be performed depending on whether TR1 is an STR, a VTR or a TTR in the depicted embodiment. If TR1 is a VTR (as determined in element1007), a transition record indicative of conditional acceptance of the VTR may be added to the LTM's persistent log (element1010), and an indication of the conditional acceptance may be provided to the source CTC from which the VTR was received. As indicated earlier, the conditional acceptance may be considered the logical equivalent of acquiring a lock on the read/write sets of the VTR.
Any new VTRs or STRs that are received prior to the corresponding TTR may be rejected if the new VTR/STR conflicts with the conditionally-accepted VTR (or other records of the persistent log) in the depicted embodiment. If TR1 is a commit TTR corresponding to an earlier-stored conditional acceptance transition record for some VTR (VTRk) (as determined in element1013), the LTM may modify the conditionally-accepted transition record (or store a new commit transition record) in its persistent log to indicate that the writes of VTRk are being committed (element1016). In some embodiments, an acknowledgement of the commit TTR may be sent to the source CTC from which the commit TTR was received. If TR1 is an abort TTR corresponding to an earlier-stored conditional acceptance transition record for some VTR (VTRk) (as determined in element1019), the LTM may modify the persistent log to indicate that the writes of VTRk have been rejected (element1022), and that the corresponding logical lock is to be released. In some embodiments, the conditional acceptance record of VTRk may simply be deleted from the persistent log, while in other embodiments a field within the transition record may be modified to indicate that the transaction corresponding to VTRk has been aborted. An abort acknowledgement response may be sent to the CTC in some embodiments. If TR1 is an STR (as would be the case if it is neither a VTR nor a TTR, which would also be ultimately determined in operations corresponding to element1019), a commit transition record representing the writes of TR1 may be stored in the persistent log of the LTM (element1025). In some embodiments, a response may be provided to the client-side component from which the STR was received to indicate that the STR has been approved. In at least some embodiments in which a replication DAG is used to implement the persistent logs used by the LTMs and the CTCs, a sufficient number of replicas of the state transition records may have to be stored to persistent storage before the transition is considered effective. It is noted that in various embodiments, operations other than those illustrated in the flow diagrams ofFIG.9andFIG.10may be used to implement at least some of the techniques for supporting distributed transactions discussed herein. Some of the operations shown may not be implemented in some embodiments, may be implemented in a different order than illustrated inFIG.9orFIG.10, or in parallel rather than sequentially. Use Cases The techniques described above, of providing support for distributed transactions that span multiple data stores, may be useful in a variety of scenarios. In some provider network environments, many different types of storage and/or database architectures may be supported. One or more of the databases or storage services may not even provide support for atomic multi-write transactions, while others may support transactions with several writes only if all the writes of a given transaction are directed to a single database instance. In at least some provider networks, internal-use-only applications developed for administering the resources (e.g., guest virtual machines and physical machines used at a virtual computing service, or network configurations of various types) of the provider network may themselves require atomicity for groups of operations that span different internal data stores.
In addition, various applications developed by provider network customers may also be designed to utilize a variety of data stores, such as a mix of relational and non-relational databases. The ability to interact with several different data stores with respective data models and/or programmatic interfaces may be especially valuable as applications scale to larger customer sets and data sets than can be supported by single-instance databases. Providing built-in robust transaction support for arbitrary combinations of data stores may help attract additional customers to the provider network, and may also improve the ease of administration of network resources. Illustrative Computer System In at least some embodiments, a server that implements one or more of the techniques described above for supporting distributed or cross-data-store transactions may include a general-purpose computer system that includes or is configured to access one or more computer-accessible media.FIG.11illustrates such a general-purpose computing device9000. In the illustrated embodiment, computing device9000includes one or more processors9010coupled to a system memory9020(which may comprise both non-volatile and volatile memory modules) via an input/output (I/O) interface9030. Computing device9000further includes a network interface9040coupled to I/O interface9030. In various embodiments, computing device9000may be a uniprocessor system including one processor9010, or a multiprocessor system including several processors9010(e.g., two, four, eight, or another suitable number). Processors9010may be any suitable processors capable of executing instructions. For example, in various embodiments, processors9010may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors9010may commonly, but not necessarily, implement the same ISA. In some implementations, graphics processing units (GPUs) may be used instead of, or in addition to, conventional processors. System memory9020may be configured to store instructions and data accessible by processor(s)9010. In at least some embodiments, the system memory9020may comprise both volatile and non-volatile portions; in other embodiments, only volatile memory may be used. In various embodiments, the volatile portion of system memory9020may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM or any other type of memory. For the non-volatile portion of system memory (which may comprise one or more NVDIMMs, for example), in some embodiments flash-based memory devices, including NAND-flash devices, may be used. In at least some embodiments, the non-volatile portion of the system memory may include a power source, such as a supercapacitor or other power storage device (e.g., a battery). In various embodiments, memristor based resistive random access memory (ReRAM), three-dimensional NAND technologies, Ferroelectric RAM, magnetoresistive RAM (MRAM), or any of various types of phase change memory (PCM) may be used at least for the non-volatile portion of system memory. In the illustrated embodiment, program instructions and data implementing one or more desired functions, such as those methods, techniques, and data described above, are shown stored within system memory9020as code9025and data9026. 
In one embodiment, I/O interface9030may be configured to coordinate I/O traffic between processor9010, system memory9020, network interface9040or other peripheral interfaces such as various types of persistent and/or volatile storage devices. In some embodiments, I/O interface9030may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory9020) into a format suitable for use by another component (e.g., processor9010). In some embodiments, I/O interface9030may include support for devices attached through various types of peripheral buses, such as a Low Pin Count (LPC) bus, a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface9030may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface9030, such as an interface to system memory9020, may be incorporated directly into processor9010. Network interface9040may be configured to allow data to be exchanged between computing device9000and other devices9060attached to a network or networks9050, such as other computer systems or devices as illustrated inFIG.1throughFIG.10, for example. In various embodiments, network interface9040may support communication via any suitable wired or wireless general data networks, such as types of Ethernet network, for example. Additionally, network interface9040may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol. In some embodiments, system memory9020may be one embodiment of a computer-accessible medium configured to store program instructions and data as described above forFIG.1throughFIG.10for implementing embodiments of the corresponding methods and apparatus. However, in other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media. Generally speaking, a computer-accessible medium may include non-transitory storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD coupled to computing device9000via I/O interface9030. A non-transitory computer-accessible storage medium may also include any volatile or non-volatile media such as RAM (e.g. SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computing device9000as system memory9020or another type of memory. Further, a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface9040. Portions or all of multiple computing devices such as that illustrated inFIG.11may be used to implement the described functionality in various embodiments; for example, software components running on a variety of different devices and servers may collaborate to provide the functionality. In some embodiments, portions of the described functionality may be implemented using storage devices, network devices, or special-purpose computer systems, in addition to or instead of being implemented using general-purpose computer systems. 
The term “computing device”, as used herein, refers to at least all these types of devices, and is not limited to these types of devices. CONCLUSION Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Generally speaking, a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM, volatile or non-volatile media such as RAM (e.g. SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link. The various methods as illustrated in the Figures and described herein represent exemplary embodiments of methods. The methods may be implemented in software, hardware, or a combination thereof. The order of the methods may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure. It is intended to embrace all such modifications and changes and, accordingly, the above description is to be regarded in an illustrative rather than a restrictive sense.
11860901
DETAILED DESCRIPTION Various embodiments for query execution in a provider network are described. According to some embodiments, a service (e.g., a “query processing service”) is disclosed that enables users to execute Structured Query Language (SQL) queries against relational databases over HyperText Transfer Protocol (HTTP) using connection pooling. The query processing service may execute user-provided queries against data stored by (or accessible to) a database service, which may be part of a same provider network as the query processing service. In an embodiment, the database service is a relational database service, or implements one or more relational databases. The database service may implement, for example, one or more “traditional” query processing systems (e.g., MySQL, MariaDB, PostgreSQL, H2, Microsoft SQL Server, Oracle, etc.), one or more NoSQL databases, one or more object database management systems, one or more object-relational database systems, one or more data warehouse systems (e.g., Amazon Redshift), a “serverless” interactive query service (e.g., Amazon Athena), a distributed “Big Data” processing system (e.g., Apache Spark), etc. The query processing service can be queried using a query written in one or more query languages (as defined by a query language definition), such as one or more of the many dialects, extensions, and implementations of SQL, such as Transact-SQL (T-SQL), Procedural Language/SQL (PL/SQL), PL/pgSQL (Procedural Language/PostgreSQL), SQL-86, SQL-92, SQL:2016, etc. Traditionally, applications built using native SQL protocols require a persistent connection to a database and the use of language-specific drivers to connect to and query the database. This may require users to manage database connection pools within their application and connection pooling frameworks to establish connections to the databases. Embodiments of the disclosed query processing service eliminate the need for users to manage connections to the databases and/or the use of language-specific drivers to connect to databases while executing queries against the databases. In some embodiments, the query processing service configures a web service endpoint that client applications may connect to in order to execute queries against the databases through the endpoint. The web service endpoint is easily accessible to the client application and abstracts the concept of database connections and connection pooling for users. Thus, users can query information from the databases without having to configure or manage connections to the databases. In some examples, the query processing service receives a query request at the web service endpoint and identifies a connection to a particular target database. The query request may comprise a HTTP message that includes a statement to be executed by the target relational database within a provider network. The statement could be an SQL statement, a Data Manipulation Language (DML) statement, an SQL query, or other command that could be executed by the target relational database. The query processing service transmits the statement for execution at the target database via an existing connection and obtains a query result based on the execution of the statement. The query processing service transforms the query result into a format suitable for the client and transmits a query response to the client. FIG.1is a block diagram illustrating an environment for providing a query processing service according to an embodiment of the present disclosure. 
In an embodiment, a query processing service, an authentication service, and a database service operate as part of a service provider network100and each comprises one or more software modules executed by one or more electronic devices at one or more data centers and geographic locations. The service provider network100inFIG.1shows only a select number of services for illustrative purposes; in general, a service provider network100may provide many different types of computing services as part of the provider network. A provider network100provides users with the ability to utilize one or more of a variety of types of computing-related resources such as compute resources (e.g., executing virtual machine (VM) instances and/or containers, executing batch jobs, executing code without provisioning servers), data/storage resources (e.g., object storage, block-level storage, data archival storage, databases and database tables, etc.), network-related resources (e.g., configuring virtual networks including groups of compute resources, content delivery networks (CDNs), Domain Name Service (DNS)), application resources (e.g., databases, application build/deployment services), access policies or roles, identity policies or roles, machine images, routers and other data processing resources, etc. These and other computing resources may be provided as services, such as a hardware virtualization service that can execute compute instances, a storage service that can store data objects, etc. The users (or “customers”) of provider networks100may utilize one or more user accounts that are associated with a customer account, though these terms may be used somewhat interchangeably depending upon the context of use. Users may interact with a provider network100across one or more intermediate networks104(e.g., the internet) via one or more interface(s), such as through use of Application Programming Interface (API) calls, via a console implemented as a website or application, etc. The interface(s) may be part of, or serve as a front-end to, a control plane112of the provider network100that includes “backend” services supporting and enabling the services that may be more directly offered to customers. To provide these and other computing resource services, provider networks100often rely upon virtualization techniques. For example, virtualization technologies may be used to provide users the ability to control or utilize compute instances (e.g., a VM using a guest operating system (O/S) that operates using a hypervisor that may or may not further operate on top of an underlying host O/S, a container that may or may not operate in a VM, an instance that can execute on “bare metal” hardware without an underlying hypervisor), where one or multiple compute instances can be implemented using a single electronic device. Thus, a user may directly utilize a compute instance hosted by the provider network to perform a variety of computing tasks or may indirectly utilize a compute instance by submitting code to be executed by the provider network, which in turn utilizes a compute instance to execute the code (typically without the user having any control of or knowledge of the underlying compute instance(s) involved). According to some embodiments described herein, a query processing service102enables users to execute queries against target customer databases without users having to configure or manage the connections to the databases.
The query processing service sets up web service endpoints to the databases that enables client applications to access the databases through these endpoints. For example, a web service endpoint can correspond to a unique identifier (e.g., an Internet Protocol (IP) address) that is exposed to a client application for executing query requests from users. In some embodiments, the query processing service receives query requests originated by the client application and identifies a connection to a particular target database. In some examples, the query request comprises a HTTP message carrying a payload. The payload of the HTTP message comprises a query to be executed by a target database instance within the provider network, which may be expressed as SQL, for example. The query processing service102transmits the query for execution at the target database via the connection and obtains a query result based on the execution of the query. The query processing service102transforms the query result into a format suitable for the client and transmits a query response to the client. The query processing service102may be implemented in whole or in part within the provider network100. Additional operations performed by the query processing service102are described in more detail below. At circle “1,” the database service138sets up target customer databases140. The target customer databases140may comprise one or more target database instances140A to140N. Each target database instance (e.g.,140A) can store data related to a tenant or customer or user of the provider network100. In some examples, a target database instance can be made up of a cluster of one or more database instances. In other examples, a target database instance (e.g.,140N) can comprise one or more individual databases. Each database, for instance, may be a schema that represents a collection of tables in the target database instance. The database service138may be implemented as a relational database service138in one example, and the target database instances may be implemented as relational database instances in the provider network100. Each database instance may be identified by a database instance identifier and may be associated with a user account and/or one or more permissions indicating which users may access and/or query a particular database instance in which ways (e.g., read only, read and write, etc.). Once provisioned, the database instances may be queried using SQL to perform typical database operations such as create, delete, select, update, insert, etc., with tables in the database instance. In certain embodiments, prior to receiving query requests from users, at circle “2,” a user108B (e.g., an administrator of a target database instance (e.g.,140A)) may interact with a user interface (UI) of an electronic device106A to cause the electronic device106A to transmit a configuration request110to setup a web service endpoint for a target database instance, though in other embodiments a client application executed by the electronic device106A may transmit such a configuration request110without any instant user108B interaction. Examples of an electronic device106A include personal computers (PCs), cell phones, handheld messaging devices, laptop computers, set-top boxes, personal data assistants, electronic book readers, wearable electronic devices (e.g., glasses, wristbands, monitors), and the like. 
The web service endpoint may be (or be associated with) a unique identifier that, for example, a client142A of electronic device106B can use to execute query requests. The web service endpoint may be set up by the user108B by submitting an endpoint configuration request110to the query processing service102. In one example, the endpoint configuration request comprises a HTTP request message carrying a payload comprising one or more parameters, such as an identifier of the target database instance, the number of connections associated with the target database instance, and/or access permissions to access the target database instance. In one implementation, the endpoint configuration request is modeled as an API request and the endpoint configuration request may be submitted by the user via an API call to the query processing service102.FIG.4illustrates an example of a web service endpoint creation API request (e.g., configuration request110), which will be described later herein. At “3A”, the query processing service102configures the web service endpoint to the target database instance identified in the endpoint configuration request110. A client (e.g., client142A or client142B) can connect to the associated target database instance (e.g., target database instance140N) using the web service endpoint via a web service API114in the control plane112. At “3B,” the query processing service102may communicate with an authentication service134to set up the necessary permissions required for users to access the web service endpoint prior to receiving query requests from the users. At circle “4A”, a query request117A is received by a control plane112of the provider network. In one example, the query request117A may be originated by a client application142A of electronic device106B, which may potentially occur responsive to a user108A interacting with the client142A. In some examples, the query request117A may be received via the API114in the control plane112which may then transmit the query request to the controller116in the query processing service102. The control plane112handles many of the tasks involved in accepting and processing requests from users, including traffic management, authorization and access control, monitoring, and API management. For example, in some embodiments the control plane112creates, publishes, maintains, and monitors various APIs for users to access and interact with services of the provider network100. In certain examples, the query request117A comprises a HTTP message carrying a payload comprising a query to be executed by a target database instance (e.g.,140N) within the provider network100. In one implementation, the query request117A can be modeled as a web service API request. For example, a user may interact with a client142A (e.g., as part of an application), causing the client142A to submit an “execute SQL” web service API query request117A to execute an SQL query against the target customer database.FIG.5illustrates one example of an “execute SQL” API query request117A, which will be described in greater detail later herein. The control plane112(and/or controller116) may also implement a variety of other APIs for query execution by users.
These APIs may include one or more of, for example, a “get items” API that fetches one or more rows from a table in the customer database instance using a SQL predicate provided by the user, an “insert item” API for inserting values for one or more rows in the customer database instance, an “update items” API that updates the values of one or more rows in the customer database instance, a “delete item” API that deletes zero or more items using an indexed column from a table in the customer database instance, and the like. In other examples, as shown by the circle labeled “4B,” a query request117A can also originate from another client142B implemented within another service150in the provider network100such as an on-demand code execution service, a hardware virtualization service, or another service implemented by the provider network. For example, the client142B may be a “serverless” function that may include code provided by a user or other entity that can be executed on demand. Serverless functions may be maintained within provider network100and may be associated with a particular user or account, or may be generally accessible to multiple users and/or multiple accounts. Each serverless function may be associated with a URL, URI, or other reference, which may be used to call the serverless function. Each serverless function may be executed by a compute instance, such as a virtual machine, container, etc., when triggered or invoked. In some embodiments, a serverless function can be invoked through an API call or a specially formatted HTTP request message. Accordingly, users can define serverless functions that can be executed on demand, without requiring the user to maintain dedicated infrastructure to execute the serverless function. Instead, the serverless functions can be executed on demand using resources maintained by the provider network100. In some embodiments, these resources may be maintained in a “ready” state (e.g., having a pre-initialized runtime environment configured to execute the serverless functions), allowing the serverless functions to be executed in near real-time. At circle “5,” the control plane112submits the query request117A to a controller116of the query processing service102, which may be implemented as software, hardware, or a combination of both. The query processing service102then performs a series of operations to associate the target database instance with a connection pool server, establish a connection to the target database instance using the connection pool server, initiate the execution of the user's query against the target database instance and provide query results to the user. The operations performed by the query processing service102are discussed in detail below. At “6A,” the controller116determines if there is an existing connection to the target database instance. For example, an existing connection can be determined by performing a lookup in an endpoint-to-connection pool server lookup table120(or other data structure known to those of skill in the art for performing lookups or otherwise determining associations) in a metadata database118. In some embodiments, the lookup table120stores information that associates the web service endpoint to a connection pool server (CPS)128that can establish a connection to the target database instance for the query request.
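As a concrete illustration of the metadata just described (and of the CPS attributes detailed in the next paragraph), entries of the lookup table120and the available CPS list121might be shaped roughly as follows. Every field name below is a hypothetical stand-in; the embodiments leave the actual schema open.

# Hypothetical entry shapes only; no schema is prescribed by the embodiments.
endpoint_to_cps_lookup = {
    "endpoint-abc123": {            # web service endpoint identifier
        "cps_id": "cps-5001",       # CPS currently assigned to this endpoint
        "target_db_id": "db-140N",  # target database instance identifier
    },
}
available_cps_list = [
    {
        "cps_id": "cps-5002",        # a distinct port number identifies each CPS
        "target_db_id": "db-140A",   # target customer database identifier
        "connection_slots": 5,       # concurrent connections the CPS provides
    },
]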
In certain embodiments, each CPS is identified by a distinct port number and can be a separate connection point to a particular target database instance in the set of target database instances (140A-140N). A CPS may be implemented, for example, using a web server and connection pool libraries known to those of skill in the art, such as the HikariCP JDBC connection pool library, Vibur DBCP connection pool library, the Apache Tomcat JDBC connection pool, etc. In some embodiments, each CPS may comprise a set of one or more connection slots. Each connection slot may correspond to a connection established by the CPS to an individual database within a particular target database instance (e.g.,140N). In some examples, the number of connection slots in a CPS may correspond to the number of query requests that can arrive concurrently to the CPS to execute queries against different databases within a particular target database instance. The metadata database118stores information related to endpoints and unused CPSs. For example, information related to endpoints may include a target identifier of a target database instance or database cluster associated with the endpoint, and target database user authentication credentials. Information related to unused CPSs may include, for example, a CPS identifier for the CPS, the target customer database identifier associated with the CPS, user authorization details, and a number of connection slots provided by the CPS. In some embodiments, the metadata database118could include an in-memory cache (not shown inFIG.1) to perform lookups or otherwise determine associations between web service endpoints and CPSs. If an existing connection to a target database instance can be found by the controller116, for instance, by determining using the lookup table120that there is an association of the target database instance to a first CPS in the lookup table, then, in certain embodiments, the controller116may associate the target database instance with that CPS in the lookup table. At “6B”, the controller may communicate with the authentication service134to authenticate the user's access to the target database instance prior to transmitting the query for execution to the target database instance via the CPS, e.g., based on data included with the query request117A (e.g., credentials, encrypted material, etc., as is known in the art for performing user identification and authorization). At “7”, the controller116directly transmits the user's query to the identified CPS in the connection pooling system132. However, if an existing connection to the target database instance cannot be found (e.g., based on a lookup in the lookup table120), then at “6C,” the controller116selects and identifies a CPS from an available CPS list121that can establish a connection to the target database instance and updates the lookup table120to assign/associate the selected CPS with the target database instance. At this point, the controller116can communicate with the authentication service134at circle “6B” to authenticate the user's access to the target database instance using the selected CPS. In certain embodiments, the controller116may be implemented as a fleet of instances that can be horizontally scaled and managed by the query processing service102.
Each instance in the fleet of instances may comprise logic to determine whether to route a query request to an existing connection (CPS) or assign a new connection (CPS) for a query request if a connection to a target database instance is not available. In some embodiments, a connection pool monitoring fleet144monitors a set of database connections to the target database instances140A-140N. For example, at circle labeled “9,” which may occur prior to or after receipt of a query from the controller116, the connection pooling system132may communicate with a connection pool monitoring fleet144to initialize and launch a set of one or more CPSs to the target database instances. The creation of a new CPS may include, for example, launching a new web server together with an application utilizing a connection library as described herein. In some embodiments, the CPS may be deployed on a common compute instance146A (e.g., VM) as one or more CPSs128A-128N, and in some embodiments multiple such instances146A-146Z across one or potentially many host computing devices may be used by the query processing service102. In certain embodiments, the creation of a new CPS may involve assigning a port number to be used for the CPS to reach the particular target database instance. A newly created CPS may also include an identifier of the particular target database instance (e.g., an IP address, a unique database identifier within the context of the database service138, etc.) to connect to, a minimum number of connections that the CPS is to form with the target database instance, a maximum number of connections that the CPS is to form with the target database instance, an identifier of a location where a secret value is kept within the provider network100that can be used by the target database instance for query authorization and/or authentication, etc. As described herein, the minimum connection value may refer to a minimum number of connection slots provided by the CPS, whereas the maximum connection value may refer to a maximum number of connection slots provided by the CPS. In one example, the minimum connection value can be one connection slot and the maximum connection value can be five connection slots. In certain embodiments, the connection pool monitoring fleet144monitors a pool of database connections to the target database instances140A-140N. The connection pool monitoring fleet144may also monitor a variety of information related to the pool of connections. For example, it may monitor, among other parameters, the maximum number of active connections, the maximum number of idle connections, the number of active connections that are being used at once, the number of connections that are being requested at once, the time to acquire a connection, SQL statement execution time, and so on. The connection pool monitoring fleet144may then, in some embodiments, automatically create database connection pools for users, dynamically scale the number of connections to adapt to varying workloads of query requests submitted to the target database instances140A-140N, scale up the amount of computing resources available for connection pool servers, etc. At circle “7” the controller116may transmit the query request117A (or a subset thereof, such as the query itself and perhaps user authentication/authorization information carried therein) issued on behalf of the user to the identified CPS (e.g., CPS128A).
The CPS may be identified by a unique port number and the controller116may transmit query request117A to this port number. The port number identifying a CPS may be mapped to a particular target database instance (e.g.,140N) or to different databases within the target database instance. At circle “8” the CPS initiates the execution of the query against the target database instance by sending the query (and optionally, the additional information such as user authentication/authorization information) over the existing connection(s)—e.g., via proxy service, or directly to the target database instance. At circle “9” the target database instance executes the query, and at circle “10” the target database instance provides a query result to the CPS128A. At circle “11,” a query adaptor130in the CPS may transform the query result into a query format that is suitable for the client142A—e.g., the query adapter130may transform a database engine-specific query result format into a common format such as eXtensible Markup Language (XML), JavaScript Object Notation (JSON), etc. At circle “12,” the CPS128A transmits a query response148to the client142A based on the query result—for example, as shown, the CPS128A may send the query response148at circle “12” to the controller116, which then sends the query response148at circle “12A” back to the client142A. The query response148may comprise one or more HTTP messages carrying a payload with the “transformed” query result. For example, the query adaptor130may transform a query result into an XML with a top level “SqlStatementResults” field carrying a list of SqlStatementResult values. A SqlStatementResult may include two values, where only one is populated—a ResultSet struct value that is returned if the query was a “regular” database query, and a NumberOfRowsUpdated value indicating a number of rows updated if the statement was an insert, update, or delete query (or specialized API call for performing one of these operations, as described herein). Each ResultSet struct may include a ResultSetMetadata struct including metadata about the results in the form of a list of columns and their types and may include a Rows list of Row structs. The ResultSetMetadata may include a ColumnInfos field carrying a list of ColumnInfo structs, where a ColumnInfo struct includes fields for Name (of column), Type (of column), Nullable (NOT_NULL or NULLABLE or UNKNOWN), and/or Precision (e.g., number of floating-point digits). Each Row struct may be an array (or list) of Data values, where each is cast as a string. Of course, variations from the above prescribed format may be flexibly used in different embodiments. By way of example, in some embodiments the query response may carry the transformed query result as JSON, and may be something akin to the following, which includes both metadata about the columns that are returned and the data itself: [{"ColumnInfo": {"Name": "Sports", "Type": "varchar(255)"}}, {"Resultset": [{"Row": "hockey"}, {"Row": "football"}, {"Row": "volleyball"}, {"Row": "tennis"}, {"Row": "basketball"}]}] In other examples, the query response may carry the transformed query result in other serialized formats known in the art such as the “protocol buffer” format which is a language and platform independent format for serializing structured data in a query result. In some examples, the query response may be streamed back to the client.
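As a sketch of the transformation performed by the query adaptor130, the following Python function produces the common JSON shape outlined above for both regular queries and row-count statements. The struct and field names follow the description, but the function itself is a hypothetical illustration rather than a documented interface.

import json

def to_common_format(columns, rows, rows_updated=None):
    # columns: list of (name, type, nullable) tuples describing the result set;
    # rows: list of value tuples; rows_updated: row count for insert/update/delete.
    if rows_updated is not None:
        return json.dumps({"SqlStatementResults": [
            {"NumberOfRowsUpdated": rows_updated}]})
    result_set = {
        "ResultSetMetadata": {"ColumnInfos": [
            {"Name": name, "Type": ctype, "Nullable": nullable}
            for name, ctype, nullable in columns]},
        "Rows": [{"Data": [str(value) for value in row]} for row in rows],
    }
    return json.dumps({"SqlStatementResults": [{"ResultSet": result_set}]})

# Example: a "regular" query result with one varchar column
print(to_common_format(
    columns=[("Sports", "varchar(255)", "NOT_NULL")],
    rows=[("hockey",), ("football",), ("volleyball",)]))

Casting every Data value to a string, as above, matches the note that each Row struct carries its values cast as strings; a richer adapter might preserve native JSON types instead.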
Beneficially, by returning query results in a common format, client application code can more simply interact with a variety of different types of databases despite these different types of databases natively returning results in different formats. Instead, the query processing service102can present a unified format for these results. The communication of information between one or more components inFIG.1to process queries from users is further described in relation toFIGS.2and3below. For example,FIG.2shows an example messaging flow between one or more components inFIG.1when a first query request is received from a user.FIG.3shows an example messaging flow between one or more components inFIG.1when subsequent query requests are received from a user. FIG.2is a diagram illustrating exemplary messaging between components of an environment for processing a first query request from a user according to some embodiments.FIG.2shows an example messaging flow between one or more components inFIG.1such as a client142executed by an electronic device, a controller116of query processing service, an endpoint-to-CPS lookup table120, an authentication service134, a connection pooling system132, and a database instance140N of a database service. It is to be understood that this messaging flow is only one messaging flow that could be used to implement some embodiments, and various alternate formulations with more or fewer messages, messages in different orderings, more or fewer or different components, etc., could alternatively be used and implemented by one of ordinary skill in the art in possession of this disclosure to implement various embodiments described herein. A user (via, e.g., a client142application) may initiate this process by sending a first query request117A-1to the controller116of the query processing service. The query request117A-1may be sent as an HTTP request and is received at a web service endpoint in the query processing service. In response to the query request, at202, the controller116performs a lookup in the endpoint-to-CPS lookup table120in the metadata database118to identify a CPS associated with the target database instance. At204, the controller116determines that there is no available connection (i.e., there is no CPS) associated with the target database instance. In response, at206, the controller116may select and identify a CPS from the available CPS list121in the metadata database118and communicate with the authentication service134to authenticate the user making the query request prior to establishing a connection to the target database instance. Upon successful authentication, at208, the authentication service134returns the user's credentials to the controller116. The controller116then directly transmits the query request at210to the identified CPS in the connection pooling system132. At212, the identified CPS in the connection pooling system132initiates the execution of the query against the target database instance (e.g.,140N) by sending the query (and optionally, the additional information such as user authentication/authorization information) directly to the target database instance. At214, the database instance140N returns a query result to the CPS in the connection pooling system. Thereafter, the CPS (e.g., via a query adapter) transforms the query result into a query response148-1and transmits the query response to the controller116, which sends it on to the client142.
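The first-request path ofFIG.2and the subsequent-request path ofFIG.3differ only in whether the lookup at202/302finds an assigned CPS. One minimal, hypothetical rendering of that decision (with authentication and error handling elided) is:

# Hypothetical stand-ins for the endpoint-to-CPS lookup table 120 and the
# available CPS list 121; real entries would live in the metadata database 118.
endpoint_to_cps = {}
available_cps = ["cps-port-5001", "cps-port-5002"]

def route_query(endpoint_id, query):
    cps = endpoint_to_cps.get(endpoint_id)   # 202 / 302: lookup
    if cps is None:                          # 204: no existing connection
        cps = available_cps.pop(0)           # 206: select an unused CPS
        # 206-208: authentication with service 134 would occur here
        endpoint_to_cps[endpoint_id] = cps   # remember the assignment
    return forward_to_cps(cps, query)        # 210 / 306: forward the query

def forward_to_cps(cps, query):
    # Stand-in for transmitting the query to the CPS's port and receiving
    # the transformed query response (212-214 / 308-310).
    return "%s executed: %s" % (cps, query)

print(route_query("endpoint-A", "SELECT * FROM sports"))  # FIG. 2 path (miss)
print(route_query("endpoint-A", "SELECT * FROM teams"))   # FIG. 3 path (hit)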
FIG.3is a diagram illustrating exemplary messaging between components of an environment for processing subsequent query requests from a user according to some embodiments.FIG.3shows an example messaging flow between one or more components inFIG.1such as a client142at an electronic device, a controller116of a query processing service, an endpoint-to-CPS lookup table120, a connection pool server128, and a database instance140N of a database service. It is to be understood that this messaging flow is only one messaging flow that could be used to implement some embodiments, and various alternate formulations with more or fewer messages, messages in different orderings, more or fewer or different components, etc., could alternatively be used and implemented by one of ordinary skill in the art in possession of this disclosure to implement various embodiments described herein. A client142application may initiate this process by sending a subsequent query request300to the controller116. The query request300is received at a web service endpoint in the provider network. In one example, the query request may be sent as an HTTP request. In response to the query request, at302, the controller116performs a lookup in the endpoint-to-CPS lookup table120to identify a CPS associated with the endpoint. At304, a CPS identifier is returned to the controller116based on the lookup. At306, the controller116forwards the query to a connection pool server (e.g., CPS128A) through use of the CPS identifier (e.g., the destination port and/or destination IP address of the CPS). It may be observed that in the processing of subsequent query requests from a user to a particular target database instance, the controller116can directly forward the query request to the identified CPS without identifying and selecting a CPS from the available CPS list121because an existing connection (CPS) to the target database already exists. At308, the connection pool server128identifies the database instance and transmits the query to the database instance140N for execution. At310, the database instance140N provides a query result to the CPS128. Thereafter, the CPS transforms the query result into a query response148and transmits the query response to the user. FIG.4illustrates an example of a web service endpoint creation API request400in accordance with an embodiment of the present disclosure. The example endpoint creation API request400can be included in an HTTP request that is sent to the query processing service102(shown inFIG.1). As shown inFIG.4, the endpoint creation API request400includes header information specifying, for example, a target database instance identifier for the endpoint405, the number of connections that can be established for the target database instance410, and an identifier of authentication information (target database user authentication credentials)415, among other possible parameters. In some embodiments, this create endpoint API may be implemented by the query processing service, which may also implement a variety of other APIs related to endpoints. As an example, the query processing service may implement an endpoint deletion API that may be invoked by an administrator to delete a particular web service endpoint. FIG.5illustrates an example of an API request to execute a query in accordance with an embodiment of the present disclosure. The example API request500may be included in an HTTP request sent to the query processing service. 
As shown inFIG.5, this exemplary “execute SQL” API request includes header information specifying, for example, a user-provided name for the target database505, the target database instance510, the target schema515, the SQL statement string520, and target database user authentication credentials535, among other possible parameters. In some embodiments, the “execute SQL” API request may be implemented by the query processing service102, which as noted above, may also implement a variety of other APIs for query execution by users. These APIs may include, for example, a “get items” API that fetches one or more rows from a table in the customer database instance using a SQL predicate provided by the user, an “insert item” API for inserting values for one or more rows in the customer database instance, an “update items” API that updates the values of one or more rows in the customer database instance, a “delete item” request API that deletes zero or more items using an indexed column from a table in the customer database instance, etc. FIG.6is a flow diagram illustrating operations600of a method for executing queries against a relational database according to some embodiments. Some or all of the operations600(or other processes described herein, or variations, and/or combinations thereof) are performed under the control of one or more computer systems configured with executable instructions and are implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. The code is stored on a computer-readable storage medium, for example, in the form of a computer program comprising instructions executable by one or more processors. The computer-readable storage medium is non-transitory. In some embodiments, one or more (or all) of the operations600are performed by one or more components (e.g., the control plane112, the controller116, the endpoint-to-CPS lookup table120, and the connection pooling system132) of the provider network100shown inFIG.1. The operations600include, at block602, receiving a query request at a provider network. In one example, the query request can originate from a client (e.g.,142A shown inFIG.1) when a user (e.g.,108A) submits a query request. The query request comprises a HTTP message carrying a payload. The payload comprises a query (e.g., an SQL query) to be executed by a target database instance within the provider network100. The SQL query may be modeled as a web service API request, in certain examples. In one example, the target database instance is a relational database. The target database instance may also comprise a database cluster comprising a plurality of relational database servers, in other examples. In some embodiments, the operations performed at block602may include, prior to receiving the query request, receiving an endpoint configuration request to create a web service endpoint to the target database instance and creating the web service endpoint to the target database instance. The endpoint configuration request comprises, among potentially other parameters, an identifier of the target database instance, the number of connections associated with the target database instance, and an identifier of authentication information associated with accessing the target database instance.
The operations performed at block602may also include authenticating a user's access to a web service endpoint to the target database instance prior to receiving the query request. The operations600further include, at block604, identifying an existing connection to the target database instance. The operations performed at block604may include, prior to receiving the query request, initializing and/or launching a set of one or more connection pool servers (CPSs) to a plurality of target database instances. The plurality of target database instances includes the target database instance. The operations performed at block604may further include associating the target database instance with a first connection pool server in the set of connection pool servers in a lookup data structure (e.g., the endpoint-to-CPS lookup table120) and determining using the lookup structure that there is an association of the target database instance to the first connection pool server in the set of connection pool servers. The operations performed at block604may also include authenticating a user's access to the target database instance prior to transmitting the query for execution at the target database instance via the existing connection. In certain embodiments, the operations performed at block604may include determining, after the receipt of the query request, that there is no association of the target database instance to any connection pool server in the set of connection pool servers in the lookup data structure and associating a first connection pool server with the target database instance in the lookup data structure. The operations600further include, at block606, transmitting the query for execution at the target database instance via the existing connection. The operations600include, at block608, obtaining a query result based on the execution of the query. The operations at block608may include transforming the query result into a query format associated with the client prior to transmitting the query response to the client. The operations600further include, at block610, transmitting a query response to the client based on the query result. FIG.7illustrates an example provider network (or “service provider system”) environment according to some embodiments. A provider network700may provide resource virtualization to customers via one or more virtualization services710that allow customers to purchase, rent, or otherwise obtain instances712of virtualized resources, including but not limited to computation and storage resources, implemented on devices within the provider network or networks in one or more data centers. Local Internet Protocol (IP) addresses716may be associated with the resource instances712; the local IP addresses are the internal network addresses of the resource instances712on the provider network700. In some embodiments, the provider network700may also provide public IP addresses714and/or public IP address ranges (e.g., Internet Protocol version 4 (IPv4) or Internet Protocol version 6 (IPv6) addresses) that customers may obtain from the provider700. Conventionally, the provider network700, via the virtualization services710, may allow a customer of the service provider (e.g., a customer that operates one or more client networks750A-750C including one or more customer device(s)752) to dynamically associate at least some public IP addresses714assigned or allocated to the customer with particular resource instances712assigned to the customer.
The provider network700may also allow the customer to remap a public IP address714, previously mapped to one virtualized computing resource instance712allocated to the customer, to another virtualized computing resource instance712that is also allocated to the customer. Using the virtualized computing resource instances712and public IP addresses714provided by the service provider, a customer of the service provider such as the operator of customer network(s)750A-750C may, for example, implement customer-specific applications and present the customer's applications on an intermediate network740, such as the Internet. Other network entities720on the intermediate network740may then generate traffic to a destination public IP address714published by the customer network(s)750A-750C; the traffic is routed to the service provider data center, and at the data center is routed, via a network substrate, to the local IP address716of the virtualized computing resource instance712currently mapped to the destination public IP address714. Similarly, response traffic from the virtualized computing resource instance712may be routed via the network substrate back onto the intermediate network740to the source entity720. Local IP addresses, as used herein, refer to the internal or “private” network addresses, for example, of resource instances in a provider network. Local IP addresses can be within address blocks reserved by Internet Engineering Task Force (IETF) Request for Comments (RFC) 1918 and/or of an address format specified by IETF RFC 4193, and may be mutable within the provider network. Network traffic originating outside the provider network is not directly routed to local IP addresses; instead, the traffic uses public IP addresses that are mapped to the local IP addresses of the resource instances. The provider network may include networking devices or appliances that provide network address translation (NAT) or similar functionality to perform the mapping from public IP addresses to local IP addresses and vice versa. Public IP addresses are Internet mutable network addresses that are assigned to resource instances, either by the service provider or by the customer. Traffic routed to a public IP address is translated, for example via 1:1 NAT, and forwarded to the respective local IP address of a resource instance. Some public IP addresses may be assigned by the provider network infrastructure to particular resource instances; these public IP addresses may be referred to as standard public IP addresses, or simply standard IP addresses. In some embodiments, the mapping of a standard IP address to a local IP address of a resource instance is the default launch configuration for all resource instance types. At least some public IP addresses may be allocated to or obtained by customers of the provider network700; a customer may then assign their allocated public IP addresses to particular resource instances allocated to the customer. These public IP addresses may be referred to as customer public IP addresses, or simply customer IP addresses. Instead of being assigned by the provider network700to resource instances as in the case of standard IP addresses, customer IP addresses may be assigned to resource instances by the customers, for example via an API provided by the service provider. Unlike standard IP addresses, customer IP addresses are allocated to customer accounts and can be remapped to other resource instances by the respective customers as necessary or desired. 
A customer IP address is associated with a customer's account, not a particular resource instance, and the customer controls that IP address until the customer chooses to release it. Unlike conventional static IP addresses, customer IP addresses allow the customer to mask resource instance or availability zone failures by remapping the customer's public IP addresses to any resource instance associated with the customer's account. The customer IP addresses, for example, enable a customer to engineer around problems with the customer's resource instances or software by remapping customer IP addresses to replacement resource instances. FIG.8is a block diagram of an example provider network that provides a storage service and a hardware virtualization service to customers, according to some embodiments. Hardware virtualization service820provides multiple computation resources824(e.g., VMs) to customers. The computation resources824may, for example, be rented or leased to customers of the provider network800(e.g., to a customer that implements customer network850). Each computation resource824may be provided with one or more local IP addresses. Provider network800may be configured to route packets from the local IP addresses of the computation resources824to public Internet destinations, and from public Internet sources to the local IP addresses of computation resources824. Provider network800may provide a customer network850, for example coupled to intermediate network840via local network856, the ability to implement virtual computing systems892via hardware virtualization service820coupled to intermediate network840and to provider network800. In some embodiments, hardware virtualization service820may provide one or more APIs802, for example a web services interface, via which a customer network850may access functionality provided by the hardware virtualization service820, for example via a console894(e.g., a web-based application, standalone application, mobile application, etc.). In some embodiments, at the provider network800, each virtual computing system892at customer network850may correspond to a computation resource824that is leased, rented, or otherwise provided to customer network850. From an instance of a virtual computing system892and/or another customer device890(e.g., via console894), the customer may access the functionality of storage service810, for example via one or more APIs802, to access data from and store data to storage resources818A-818N of a virtual data store816(e.g., a folder or “bucket”, a virtualized volume, a database, etc.) provided by the provider network800. In some embodiments, a virtualized data store gateway (not shown) may be provided at the customer network850that may locally cache at least some data, for example frequently-accessed or critical data, and that may communicate with storage service810via one or more communications channels to upload new or modified data from a local cache so that the primary store of data (virtualized data store816) is maintained. In some embodiments, a user, via a virtual computing system892and/or on another customer device890, may mount and access virtual data store816volumes via storage service810acting as a storage virtualization service, and these volumes may appear to the user as local (virtualized) storage898. While not shown inFIG.8, the virtualization service(s) may also be accessed from resource instances within the provider network800via API(s)802. 
For example, a customer, appliance service provider, or other entity may access a virtualization service from within a respective virtual network on the provider network800via an API802to request allocation of one or more resource instances within the virtual network or within another virtual network. Illustrative System In some embodiments, a system that implements a portion or all of the techniques for query execution in relational database services as described herein may include a general-purpose computer system that includes or is configured to access one or more computer-accessible media, such as computer system900illustrated inFIG.9. In the illustrated embodiment, computer system900includes one or more processors910coupled to a system memory920via an input/output (I/O) interface930. Computer system900further includes a network interface940coupled to I/O interface930. WhileFIG.9shows computer system900as a single computing device, in various embodiments a computer system900may include one computing device or any number of computing devices configured to work together as a single computer system900. In various embodiments, computer system900may be a uniprocessor system including one processor910, or a multiprocessor system including several processors910(e.g., two, four, eight, or another suitable number). Processors910may be any suitable processors capable of executing instructions. For example, in various embodiments, processors910may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, ARM, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors910may commonly, but not necessarily, implement the same ISA. System memory920may store instructions and data accessible by processor(s)910. In various embodiments, system memory920may be implemented using any suitable memory technology, such as random-access memory (RAM), static RAM (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory. In the illustrated embodiment, program instructions and data implementing one or more desired functions, such as those methods, techniques, and data described above are shown stored within system memory920as code925and data926. In one embodiment, I/O interface930may be configured to coordinate I/O traffic between processor910, system memory920, and any peripheral devices in the device, including network interface940or other peripheral interfaces. In some embodiments, I/O interface930may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory920) into a format suitable for use by another component (e.g., processor910). In some embodiments, I/O interface930may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface930may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface930, such as an interface to system memory920, may be incorporated directly into processor910. 
Network interface940may be configured to allow data to be exchanged between computer system900and other devices960attached to a network or networks950, such as other computer systems or devices as illustrated inFIG.1, for example. In various embodiments, network interface940may support communication via any suitable wired or wireless general data networks, such as types of Ethernet network, for example. Additionally, network interface940may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks (SANs) such as Fibre Channel SANs, or via any other suitable type of network and/or protocol. In some embodiments, a computer system900includes one or more offload cards970(including one or more processors975, and possibly including the one or more network interfaces940) that are connected using an I/O interface930(e.g., a bus implementing a version of the Peripheral Component Interconnect-Express (PCI-E) standard, or another interconnect such as a QuickPath interconnect (QPI) or UltraPath interconnect (UPI)). For example, in some embodiments the computer system900may act as a host electronic device (e.g., operating as part of a hardware virtualization service) that hosts compute instances, and the one or more offload cards970execute a virtualization manager that can manage compute instances that execute on the host electronic device. As an example, in some embodiments the offload card(s)970can perform compute instance management operations such as pausing and/or un-pausing compute instances, launching and/or terminating compute instances, performing memory transfer/copying operations, etc. These management operations may, in some embodiments, be performed by the offload card(s)970in coordination with a hypervisor (e.g., upon a request from a hypervisor) that is executed by the other processors910A-910N of the computer system900. However, in some embodiments the virtualization manager implemented by the offload card(s)970can accommodate requests from other entities (e.g., from compute instances themselves), and may not coordinate with (or service) any separate hypervisor. In some embodiments, system memory920may be one embodiment of a computer-accessible medium configured to store program instructions and data as described above. However, in other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media. Generally speaking, a computer-accessible medium may include non-transitory storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD coupled to computer system900via I/O interface930. A non-transitory computer-accessible storage medium may also include any volatile or non-volatile media such as RAM (e.g., SDRAM, double data rate (DDR) SDRAM, SRAM, etc.), read only memory (ROM), etc., that may be included in some embodiments of computer system900as system memory920or another type of memory. Further, a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface940.
Various embodiments discussed or suggested herein can be implemented in a wide variety of operating environments, which in some cases can include one or more user computers, computing devices, or processing devices which can be used to operate any of a number of applications. User or client devices can include any of a number of general purpose personal computers, such as desktop or laptop computers running a standard operating system, as well as cellular, wireless, and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols. Such a system also can include a number of workstations running any of a variety of commercially-available operating systems and other known applications for purposes such as development and database management. These devices also can include other electronic devices, such as dummy terminals, thin-clients, gaming systems, and/or other devices capable of communicating via a network. Most embodiments utilize at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of commercially-available protocols, such as Transmission Control Protocol/Internet Protocol (TCP/IP), File Transfer Protocol (FTP), Universal Plug and Play (UPnP), Network File System (NFS), Common Internet File System (CIFS), Extensible Messaging and Presence Protocol (XMPP), AppleTalk, etc. The network(s) can include, for example, a local area network (LAN), a wide-area network (WAN), a virtual private network (VPN), the Internet, an intranet, an extranet, a public switched telephone network (PSTN), an infrared network, a wireless network, and any combination thereof. In embodiments utilizing a web server, the web server can run any of a variety of server or mid-tier applications, including HTTP servers, File Transfer Protocol (FTP) servers, Common Gateway Interface (CGI) servers, data servers, Java servers, business application servers, etc. The server(s) also may be capable of executing programs or scripts in response to requests from user devices, such as by executing one or more Web applications that may be implemented as one or more scripts or programs written in any programming language, such as Java®, C, C# or C++, or any scripting language, such as Perl, Python, PHP, or TCL, as well as combinations thereof. The server(s) may also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase®, IBM®, etc. The database servers may be relational or non-relational (e.g., “NoSQL”), distributed or non-distributed, etc. The environment can include a variety of data stores and other memory and storage media as discussed above. These can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In a particular set of embodiments, the information may reside in a storage-area network (SAN) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers, servers, or other network devices may be stored locally and/or remotely, as appropriate.
Where a system includes computerized devices, each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit (CPU), at least one input device (e.g., a mouse, keyboard, controller, touch screen, or keypad), and/or at least one output device (e.g., a display device, printer, or speaker). Such a system may also include one or more storage devices, such as disk drives, optical storage devices, and solid-state storage devices such as random-access memory (RAM) or read-only memory (ROM), as well as removable media devices, memory cards, flash cards, etc. Such devices also can include a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device, etc.), and working memory as described above. The computer-readable storage media reader can be connected with, or configured to receive, a computer-readable storage medium, representing remote, local, fixed, and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information. The system and various devices also typically will include a number of software applications, modules, services, or other elements located within at least one working memory device, including an operating system and application programs, such as a client application or web browser. It should be appreciated that alternate embodiments may have numerous variations from that described above. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, software (including portable software, such as applets), or both. Further, connection to other computing devices such as network input/output devices may be employed. Storage media and computer readable media for containing code, or portions of code, can include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules, or other data, including RAM, ROM, Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc-Read Only Memory (CD-ROM), Digital Versatile Disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a system device. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments. In the preceding description, various embodiments are described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the embodiments. However, it will also be apparent to one skilled in the art that the embodiments may be practiced without the specific details. Furthermore, well-known features may be omitted or simplified in order not to obscure the embodiment being described. 
Bracketed text and blocks with dashed borders (e.g., large dashes, small dashes, dot-dash, and dots) are used herein to illustrate optional operations that add additional features to some embodiments. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments. Reference numerals with suffix letters (e.g.,818A-818N) may be used to indicate that there can be one or multiple instances of the referenced entity in various embodiments, and when there are multiple instances, each does not need to be identical but may instead share some general traits or act in common ways. Further, the particular suffixes used are not meant to imply that a particular amount of the entity exists unless specifically indicated to the contrary. Thus, two entities using the same or different suffix letters may or may not have the same number of instances in various embodiments. References to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Moreover, in the various embodiments described above, unless specifically noted otherwise, disjunctive language such as the phrase “at least one of A, B, or C” is intended to be understood to mean either A, B, or C, or any combination thereof (e.g., A, B, and/or C). As such, disjunctive language is not intended to, nor should it be understood to, imply that a given embodiment requires at least one of A, at least one of B, or at least one of C to each be present. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the disclosure as set forth in the claims.
63,906
11860902
DETAILED DESCRIPTION In accordance with the present principles, systems and methods of data indexing are provided. An objective function is formulated to index a dataset, which may include patient medical information. A portion of the dataset includes supervision information identifying data pairs that are similar and dissimilar. The remaining portion of the dataset does not include supervision information. The objective function includes a data property component and a supervision component. A tradeoff parameter is also included to balance the contributions from each component. The data property component utilizes a property of the dataset to group patients for all of the dataset (i.e., both supervised and unsupervised). In one embodiment, the data property component is determined by maximizing the variance of the dataset. Maximizing variance may include, e.g., principal component analysis and maximum variance unfolding. In another embodiment, the data property component is determined by separating clusters to form two balanced data clusters around the median of the projected data. The supervision component utilizes the supervision information to group patients using the supervised portion of the dataset. In one embodiment, the supervision component is determined by minimizing the pairwise distances for similar data pairs while maximizing the pairwise distances for dissimilar data pairs. In another embodiment, the supervision component is determined by treating the projection of the dataset as a linear prediction function. Other embodiments of the data property component and the supervision component are also contemplated. The objective function is optimized based on the determined data property component and the supervision component to partition a node into a plurality (e.g., 2) of child nodes. The optimization is recursively performed on each node to provide a binary space partitioning tree. Advantageously, the present principles provide for a more robust and accurate indexing tree without a significant increase in search time and tree construction time. As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. 
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. Referring now to the drawings in which like numerals represent the same or similar elements and initially toFIG.1, a block/flow diagram showing a high level diagram100of an application of the present principles is illustratively depicted in accordance with one embodiment. The diagram100shows an exemplary application of the present principles in a physician decision support system to identify similar patients from a query patient. It should be understood that the present principles are not limited to patient similarity applications or medical applications. Other applications are also contemplated within the scope of the present principles. A patient database102is used to initialize a patient similarity104. Patient database may include, e.g., Electronic Medical Records (EMR), which may be big data. An initial patient similarity is deployed to evaluate similar patients based on unprocessed data of the patient database102. An optimized data partitioning106is then performed in accordance with the present principles. Advantageously, the inventive optimized data partitioning provides a semi-supervised framework to learn each data partition, which leverages both expert knowledge and data properties and characteristics. These learned partitions will be used for constructing the indexing tree. Based on the optimized data partitioning, the patient similarity is refined108. 
Node purity threshold checking110is performed to compare the initial patient similarity with the refined patient similarity to evaluate the optimized data partitioning. Other evaluation techniques are also contemplated, such as, e.g., retrieval precision. If node purity does not pass, optimized data partitioning106is repeated. If node purity does pass, an indexing tree is generated112. The index tree may be used in a physician decision support system114to retrieve similar patients118from a query patient116. Referring now toFIG.2, a block/flow diagram of a system for data indexing200is illustratively depicted in accordance with one embodiment. A data indexing system may include a workstation or system202. System202preferably includes one or more processors208and memory210for storing applications, modules, medical records and other data. The memory210may include a variety of computer readable media. Such media may include any media that is accessible by the system202, and includes both volatile and non-volatile media, removable and non-removable media. System202may include one or more displays204for viewing. The displays204may permit a user to interact with the system202and its components and functions. This may be further facilitated by a user interface206, which may include a mouse, joystick, or any other peripheral or control to permit user interaction with the system202and/or its devices. It should be understood that the components and functions of the system202may be integrated into one or more systems or workstations. System202may receive an input212, which may include a patient database214. Patient database214may include medical information for a set of patients and may be big data. For patient database214, the set of patients is represented as $\mathcal{X}=\{x_i\}_{i=1}^{n}$, where $x_i\in\mathbb{R}^d$ is the i-th data vector. $X\in\mathbb{R}^{d\times n}$ is used to represent the entire dataset, where the i-th column vector corresponds to the sample $x_i$. The goal is to build a tree structure to index the data points so that the nearest neighbors of a query vector q can be rapidly found. In medical scenarios, the data vectors in $\mathcal{X}$ include patient profiles (e.g., EMRs) in the patient database and q is the profile of a query patient. Patient database214preferably includes some pairwise supervision information on $\mathcal{X}$ in terms of must- and cannot-links. For instance, if patient i and j are similar to each other, then a must-link is placed between $x_i$ and $x_j$. Similarly, if patient i and j are dissimilar to each other, a cannot-link is placed between them. A must-link set $\mathcal{M}$ is constructed by collecting all data pairs with must-links, and a cannot-link set $\mathcal{C}$ is constructed by collecting all data pairs with cannot-links. It is assumed that there are a total of l data points with pairwise labels represented as $X_l\in\mathbb{R}^{d\times l}$, where each data point in $X_l$ has at least one must- or cannot-link. Other forms of supervision information are also contemplated. The data partitioning system202constructs binary space partitioning trees. From the root, the data points in $\mathcal{X}$ are split into two halves by a partition hyperplane, and each half is assigned to one child node. Then each child node is recursively split in the same manner to create the tree. Memory210may include formulation module215configured to formulate an objective function to index a dataset.
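The recursive construction just described can be sketched as follows, deferring the choice of partition hyperplane to the optimization developed next. This is a minimal sketch assuming NumPy; `learn_w` is a callback returning the hyperplane for a node (for example, the Rayleigh quotient solver sketched further below), and the median split and size-based stopping rule follow the text. All names are illustrative.

```python
import numpy as np

def build_tree(X, indices, learn_w, min_size):
    """Recursively split the columns of X into a binary space partitioning tree.

    X is the d x n data matrix; indices lists the columns in this node;
    learn_w(indices) returns the partition hyperplane w for the node;
    min_size is the population threshold that stops the recursion.
    """
    if len(indices) <= min_size:
        return {"leaf": True, "points": indices}
    w = learn_w(indices)
    proj = w @ X[:, indices]                 # projected coordinates w^T x_i
    median = np.median(proj)
    left = [i for i, p in zip(indices, proj) if p <= median]
    right = [i for i, p in zip(indices, proj) if p > median]
    if not left or not right:                # degenerate split; stop here
        return {"leaf": True, "points": indices}
    return {"leaf": False, "w": w, "median": median,
            "left": build_tree(X, left, learn_w, min_size),
            "right": build_tree(X, right, learn_w, min_size)}
```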
At each node, the partition hyperplane w is determined by optimizing the objective of equation (1) as follows:

$\mathcal{J}(w)=\mathcal{J}_S(w)+\lambda\,\mathcal{J}_U(w)$  (1)

where $\mathcal{J}_S(w)$ is some supervised term involving expert knowledge and $\mathcal{J}_U(w)$ is a pure data-dependent term without integrating any medical supervision. The supervision component $\mathcal{J}_S(w)$ and the data property component $\mathcal{J}_U(w)$ may be constructed using different implementations, according to user preferences or other factors. The objective of equation (1) may also include a constant λ as a tradeoff parameter to balance the contributions from each term. Formulation module215includes data property module216configured to construct the data property component $\mathcal{J}_U(w)$ of the objective of equation (1) using all of the dataset (i.e., both supervised and unsupervised portions), but does not integrate the supervision information. Constructing $\mathcal{J}_U(w)$ may include maximizing the data variance after projection or maximally separating the data clusters in its intrinsic space. The projection refers to the inner product of the data matrix X and the learned partition hyperplane w, which corresponds to the projected coordinates of X on the direction of w. Other implementations of maximizing the data variance are also contemplated. Variance maximization218is configured to maximize the data variance after projection. The goal of this type of approach is to find the direction under which the variance of the projected data is maximized such that the binary partition of those directions will more likely produce a balanced tree. Therefore the constructed tree will not be as deep, so that the nearest neighbors of a query data point can be quickly found. In one embodiment, variance maximization218includes performing principal component analysis (PCA). PCA obtains the eigenvector of the data covariance matrix with the largest corresponding eigenvalue. Suppose the data set has been centralized. $\mathcal{J}_U^{PCA}(w)$ can be constructed as in equation (2):

$\mathcal{J}_U^{PCA}(w)=w^T X X^T w.$  (2)

In another embodiment, variance maximization218includes performing maximum variance unfolding (MVU). MVU maximizes the overall pairwise distances in the projected space by maximizing equation (3):

$\mathcal{J}_U^{MVU}(w)=\sum_{ij}(w^T x_i - w^T x_j)^2 = w^T X(nI-ee^T)X^T w,$  (3)

where I is the order-n identity matrix, and e is the n-dimensional all-one vector. $I-\frac{1}{n}ee^T$ is the data centralization matrix such that $X(I-\frac{1}{n}ee^T)X^T$ is equivalent to the data covariance matrix. Thus, the MVU objective $\mathcal{J}_U^{MVU}(w)$ and the PCA objective $\mathcal{J}_U^{PCA}(w)$ are the same up to a scaling factor. Other implementations of variance maximization218are also contemplated. Cluster separation220is configured to maximally separate the data clusters in intrinsic space. Cluster separation220seeks a projection direction under which two balanced data clusters can be formed and equally distributed on the two sides of the median of the projected data. In other embodiments, data clusters may be distributed around other statistical measures, such as, e.g., the mean or weighted mean. Data clusters can be obtained by minimizing the following objective of equation (4):

$\mathcal{J}_U^{CLU}(w)=\sum_{x_i\in\mathcal{G}_1,\,x_j\in\mathcal{G}_2} W_{ij} = w^T X(D-W)X^T w$  (4)

where $\mathcal{G}_1$ and $\mathcal{G}_2$ are the two data clusters. $W\in\mathbb{R}^{n\times n}$ is the data similarity matrix with its (i, j)-th entry computed as

$W_{ij}=\exp(-\alpha\|x_i-x_j\|^2).$  (5)

$D\in\mathbb{R}^{n\times n}$ is a diagonal matrix with $D_{ii}=\sum_j W_{ij}$. One point to note is that $\mathcal{J}_U^{CLU}(w)$ is minimized, instead of maximized. A potential bottleneck for constructing $\mathcal{J}_U^{CLU}(w)$ is that an n×n data similarity matrix W is to be computed, as well as matrix multiplication on X and D−W, which could be very time consuming when n is huge.
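Each of the three data-property constructions above reduces to a d×d matrix inside a quadratic form $w^T M w$. A minimal sketch of those matrices, assuming NumPy, a d×n data matrix X, and centralized data as in the text:

```python
import numpy as np

def pca_matrix(X):
    """J_U^PCA(w) = w^T (X X^T) w, with X assumed centralized."""
    return X @ X.T

def mvu_matrix(X):
    """J_U^MVU(w) = w^T X (n I - e e^T) X^T w; equivalent to PCA up to scale."""
    n = X.shape[1]
    e = np.ones((n, 1))
    return X @ (n * np.eye(n) - e @ e.T) @ X.T

def cluster_separation_matrix(X, alpha=1.0):
    """J_U^CLU(w) = w^T X (D - W) X^T w, W_ij = exp(-alpha ||x_i - x_j||^2).

    Note: this term is minimized, and forming the n x n similarity matrix W
    is the bottleneck the text mentions for large n.
    """
    sq = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)  # pairwise ||xi - xj||^2
    W = np.exp(-alpha * sq)
    D = np.diag(W.sum(axis=1))
    return X @ (D - W) @ X.T
```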
Formulation module215also includes supervision module222configured to construct the expert supervision term $\mathcal{J}_S(w)$ of the objective of equation (1) using the supervised portion of the data set. The construction of $\mathcal{J}_S(w)$ incorporates experts' supervision information on the data (i.e., leveraging the knowledge contained in $\mathcal{M}$ and $\mathcal{C}$). Supervision module222may include projection perspective and prediction perspective. Other embodiments are also contemplated. In one embodiment, the supervision term $\mathcal{J}_S(w)$ is constructed by projection perspective224. Projection perspective224treats w as a pure projection that maps the data onto a one-dimensional space. After the projection, the goal is to have the data pairs in $\mathcal{M}$ be distributed as compactly as possible, while the data pairs in $\mathcal{C}$ are distributed as scattered as possible. One straightforward criterion is to minimize the overall pairwise distances for the data pairs in $\mathcal{M}$ while maximizing the overall distances for the data pairs in $\mathcal{C}$ by minimizing the objective of equation (6):

$\mathcal{J}_S^{proj}(w)=\frac{1}{|\mathcal{M}|}\sum_{(x_i,x_j)\in\mathcal{M}}(w^T x_i - w^T x_j)^2 - \frac{1}{|\mathcal{C}|}\sum_{(x_i,x_j)\in\mathcal{C}}(w^T x_i - w^T x_j)^2 = \sum_{ij}(w^T x_i - w^T x_j)^2 S_{ij} = w^T X_l(E-S)X_l^T w,$  (6)

where S is an l×l matrix and its (i, j)-th entry is provided as

$S_{ij}=\begin{cases}\frac{1}{|\mathcal{M}|}, & \text{if }(x_i,x_j)\in\mathcal{M}\\ -\frac{1}{|\mathcal{C}|}, & \text{if }(x_i,x_j)\in\mathcal{C}\\ 0, & \text{otherwise}\end{cases}$  (7)

and |⋅| denotes the cardinality of a set. E is an l×l diagonal matrix with $E_{ii}=\sum_j S_{ij}$. In another embodiment, the expert supervision term $\mathcal{J}_S(w)$ is constructed by prediction perspective226. Prediction perspective treats the projection as a linear prediction function ƒ(x), such that the sign of ƒ(x) indicates the class of x. If it is assumed that the data in each node are centralized, then the bias b in ƒ(x) can be neglected such that $f(x)=w^T x$. The supervised term $\mathcal{J}_S^{pred}(w)$ under the prediction perspective may be provided as follows:

$\mathcal{J}_S^{pred}(w)=\frac{1}{|\mathcal{M}|}\sum_{(x_i,x_j)\in\mathcal{M}} w^T x_i x_j^T w - \frac{1}{|\mathcal{C}|}\sum_{(x_i,x_j)\in\mathcal{C}} w^T x_i x_j^T w = w^T\Big(\frac{1}{|\mathcal{M}|}\sum_{(x_i,x_j)\in\mathcal{M}} x_i x_j^T - \frac{1}{|\mathcal{C}|}\sum_{(x_i,x_j)\in\mathcal{C}} x_i x_j^T\Big)w = w^T X_l S X_l^T w$  (8)

where $S\in\mathbb{R}^{l\times l}$ is a symmetric matrix with its (i, j)-th entry provided as in equation (7). Note that the unsupervised components $\mathcal{J}_U^{PCA}$ and $\mathcal{J}_U^{MVU}$ and the supervised component $\mathcal{J}_S^{pred}$ apply maximization, while the unsupervised component $\mathcal{J}_U^{CLU}$ and the supervised component $\mathcal{J}_S^{proj}$ apply minimization. To combine both components to form a semi-supervised scheme, the sign of the tradeoff parameter λ is adjusted to achieve consistency. For instance, if $\mathcal{J}_U^{PCA}$ is used as the data property component and $\mathcal{J}_S^{pred}$ as the supervision component, equation (1) can be rewritten as in equation (9):

$\mathcal{J}=\mathcal{J}_S^{pred}+\lambda\,\mathcal{J}_U^{PCA}=w^T(X_l S X_l^T + \lambda X X^T)w = w^T A w,$  (9)

where $A=X_l S X_l^T+\lambda X X^T$ is a constant matrix absorbing both data property and supervision information. In this example, the coefficient λ is nonnegative. Memory210also includes optimization module configured to optimize the objective of equation (1) based on the construction of the data property component $\mathcal{J}_U(w)$ and the supervision component $\mathcal{J}_S(w)$. No matter what the choices of $\mathcal{J}_S(w)$ and $\mathcal{J}_U(w)$ are, the final objective $\mathcal{J}$ can always be rewritten in the form $\mathcal{J}=w^T A w$, as shown in the example of equation (9). The matrix $A\in\mathbb{R}^{d\times d}$ is some matrix having different concrete forms depending on the specific choices of both supervised and unsupervised components:

$\max_w\; w^T A w \quad \text{s.t.}\; w^T w = 1.$  (10)

Equation (10) becomes a Rayleigh quotient optimization problem, and the optimal solution w* can be obtained as the eigenvector of A whose corresponding eigenvalue is the largest (or smallest if a minimization problem is formed). Optimal partitioning is performed over the patient data for a given definition of patient similarity. Patient similarity may include any distance metric in the projected space, such as, e.g., the Euclidean distance.
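Putting the pieces together, the sketch below builds the link matrix S of equation (7), assembles A as in equation (9), and solves the Rayleigh quotient problem of equation (10). It assumes NumPy; `np.linalg.eigh` returns eigenvalues in ascending order, so the last eigenvector maximizes $w^T A w$. Names are illustrative.

```python
import numpy as np

def link_matrix(labeled, must_links, cannot_links):
    """S from equation (7), indexed over the l labeled points.

    labeled: sorted list of the column indices touched by at least one link;
    must_links / cannot_links: lists of (i, j) index pairs into X.
    """
    pos = {idx: k for k, idx in enumerate(labeled)}
    S = np.zeros((len(labeled), len(labeled)))
    for i, j in must_links:                       # entries 1/|M|
        S[pos[i], pos[j]] = S[pos[j], pos[i]] = 1.0 / len(must_links)
    for i, j in cannot_links:                     # entries -1/|C|
        S[pos[i], pos[j]] = S[pos[j], pos[i]] = -1.0 / len(cannot_links)
    return S

def partition_hyperplane(X, X_l, S, lam):
    """Maximize w^T A w with A = X_l S X_l^T + lam X X^T, subject to ||w|| = 1."""
    A = X_l @ S @ X_l.T + lam * X @ X.T
    eigvals, eigvecs = np.linalg.eigh(A)          # ascending eigenvalues
    return eigvecs[:, -1]                         # eigenvector of largest eigenvalue
```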
The similarity measure is different for each partition because of the different projections. Optimization is performed to recursively partition the patient database214until the sub-patient population becomes more homogeneous (i.e., the projected patient vectors are more similar to each other), or the size of the population reaches a threshold value (e.g., 1 percent of the entire patient population). The optimization of the objective of equation (1) results in an indexing tree232of the patient database214, which may be part of output230. The space and time complexities of the data partitioning system202have been analyzed. For space complexity, the data median for each node is stored for partitioning, which has O(n) space. Moreover, additional storage is utilized to store the projection for each node, resulting in a complexity of O(dn). Thus, the complete space complexity is O((d+1)n). The computational cost of the data partitioning system202lies in the following aspects: 1) construction of the matrix A over the entire dataset; 2) extracting the projections by eigen-decomposition; and 3) computing the projected points. Here, the time cost for calculating $\mathcal{J}_S$ is omitted since it is assumed that l≪n for general semi-supervised settings. Specifically, it takes $O(nd^2)$ to compute the data covariance matrix (as in variance maximization218) or $O(n^2 d)$ to compute the data similarity matrix (as in cluster separation220). To decompose a matrix of size d×d to derive the principal projections, it takes $O(d^3)$. Finally, given the learned projection, it takes O(nd) to perform the multiplication to derive the one-dimension projected points. Note that the above time complexity is estimated for the upper bound since, as the recursive partition goes on, each node reduces exponentially. Performance of the indexing approaches discussed above may be evaluated by node purity and retrieval precision. Other evaluation approaches are also contemplated. Node purity measures the label consistency of the data points contained in individual tree nodes. Node purity may be expressed as in equation (11):

$\text{Purity}(e)=\max_c\,|\{x_i\in e,\ l_i=c\}|\,/\,|e|$  (11)

where e is the set of data points in a node on the indexing tree and |e| is its cardinality. $|\{x_i\in e,\ l_i=c\}|$ indicates the number of data points in e with the same class label c. The computed node purity for leaf nodes is called leaf purity, which directly reflects the search accuracy. For a given query data point x, its nearest neighbors can be found by traversing x over the tree from the root node to the leaf node. The retrieval precision for the query x is provided in equation (12):

$\text{Precision}(x)=|\{x_i\in e(x),\ l_i=l(x)\}|\,/\,|e(x)|$  (12)

where e(x) is the set of nearest neighbors of x, which is obtained from the leaf node which x falls in. Let l(x) indicate the label of x. Then $|\{x_i\in e(x),\ l_i=l(x)\}|$ is the number of data points in e(x) with the same label as x. Referring now toFIG.3, a block/flow diagram showing a method for data indexing is illustratively depicted in accordance with one embodiment. In block302, an objective function is formulated to index a dataset. The dataset may include patient medical information, such as, e.g., EMR. A portion of the dataset includes supervision information while the remaining portion does not include supervision information. Supervision information may include pairwise must-links and cannot-links. Must-links identify a pair of data (e.g., patients) that are similar to each other. Cannot-links identify a pair of data that are dissimilar to each other.
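Returning briefly to the evaluation measures of equations (11) and (12), both translate directly into code. A minimal sketch, assuming class labels are available per data point:

```python
from collections import Counter

def node_purity(labels_in_node):
    """Purity(e) = max_c |{x_i in e : l_i = c}| / |e|."""
    counts = Counter(labels_in_node)
    return max(counts.values()) / len(labels_in_node)

def retrieval_precision(query_label, neighbor_labels):
    """Precision(x) = |{x_i in e(x) : l_i = l(x)}| / |e(x)|,
    where neighbor_labels are the labels in the leaf node x falls in."""
    hits = sum(1 for l in neighbor_labels if l == query_label)
    return hits / len(neighbor_labels)
```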
The objective function includes a supervised component, which only uses the supervised portion of the data along with their supervision information, and a data property component, which uses both supervised and unsupervised data. The objective function also includes a tradeoff parameter to balance the contributions of the supervised component and the data property component. In block304, a data property component of the objective function is determined. The data property component utilizes a property of the dataset to group data of the dataset. The data property component uses both supervised and unsupervised portions of the dataset, but does not integrate any of the supervision information. The data property component may be determined by maximizing the variance306or separating clusters308. Other methods are also contemplated. In one embodiment, in block306, the data property component is determined by maximizing the variance of the dataset. The variance of the projected data is maximized such that the binary partition produces a balanced tree. Variance maximization may include applying principal component analysis (PCA) to obtain the eigenvector of the data covariance matrix with the largest corresponding eigenvalue. Variance maximization may also include applying maximum variance unfolding to maximize overall pairwise distances, which is mathematically equivalent to PCA up to a scaling factor. Other methods of variance maximization are also contemplated. In another embodiment, in block308, the data property component is determined by equally distributing data clusters around the median of the dataset. Other statistical measures may also be applied, such as, e.g., mean, weighted mean, etc. Clusters are separated such that two balanced data clusters are formed and equally distributed around the median of the projected data. In block310, a supervised component of the objective function is determined. The supervised component utilizes the supervision information to group data of the dataset. In one embodiment, in block312, the supervised component is determined by projection perspective. The goal of projection perspective is to distribute similar data pairs as compactly as possible while distributing dissimilar data pairs as scattered as possible. Projection perspective includes minimizing pairwise distances of data identified as similar by the supervision information and maximizing pairwise distances of data identified as dissimilar by the supervision information. In another embodiment, in block314, the supervised component is determined by prediction perspective. The projections of patient vectors of the dataset can be viewed as linear predictions of their corresponding outcomes. In block316, the objective function is optimized based on the data property component and the supervised component to partition a node into a plurality of child nodes. Preferably, the plurality of child nodes is two child nodes. In block318, the objective function is recursively optimized at each node to provide a binary space partitioning tree. Partitioning is recursively performed until the sub-patient population becomes more homogeneous or the size of the population reaches a threshold. Having described preferred embodiments of a system and method for indexing of a large scale patient set (which are intended to be illustrative and not limiting), it is noted that modifications and variations can be made by persons skilled in the art in light of the above teachings.
It is therefore to be understood that changes may be made in the particular embodiments disclosed which are within the scope of the invention as outlined by the appended claims. Having thus described aspects of the invention, with the details and particularity required by the patent laws, what is claimed and desired protected by Letters Patent is set forth in the appended claims.
27,821
11860903
DETAILED DESCRIPTION In the following description, for purposes of explanation, numerous examples and specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be evident, however, to one skilled in the art that various embodiments of the present disclosure as defined by the claims may include some or all of the features in these examples alone or in combination with other features described below, and may further include modifications and equivalents of the features and concepts described herein. 1. Overview Described herein are techniques for clustering data based on a visual model. In some embodiments, these techniques involve processing multiple text documents using a visual model. The visual model may be configured to detect and classify objects in documents. To process a text document, the document is converted into an image. The visual model is used to determine a vector representation of the document based on the image. Next, the documents are grouped into different groups based on the vector representations of the documents. In some embodiments, a random sample of documents can be selected from the different groups of documents in order to train a machine learning model that is used for automatedly annotating training data. 2. Clustering and Sampling Manager FIG.1illustrates a clustering and sampling manager100according to some embodiments. As shown, clustering and sampling manager100includes data converter105, vector manager110, clustering manager115, and data sampler120. Data converter105handles incoming documents. For instance,FIG.1shows clustering and sampling manager100receiving, as input, several documents125. In some embodiments, each of the documents125is a text document. For example, documents125may be portable document format (PDF) files. In this example, when clustering and sampling manager100receives documents125, clustering and sampling manager100sends them to data converter105. Upon receiving documents125, data converter105converts each document125into a set of images. In some cases, a document125can have several pages. For each page in a document125, data converter105converts the page into an image. For instance, if a document125has five pages, data converter105converts the document125into five images, one image for each page. After converting documents125into sets of images, data converter105sends the images to vector manager110for further processing. Vector manager110is configured to generate vector representations for documents125based on images of the document125that vector manager110receives from data converter105. To generate a vector representation for a document125, vector manager110may use a visual model configured to detect and classify objects in images. In some embodiments, the visual model can be implemented using a convolutional neural network (CNN). Examples of CNN architectures used to implement the visual model include a visual geometry group (VGG)-16 architecture, a VGG-19 architecture, a residual neural network (ResNet) architecture, a dense convolutional network (DenseNet) architecture, etc. FIG.2illustrates an example convolutional neural network200according to some embodiments. Specifically,FIG.2illustrates an example CNN that may be used to implement the visual model used by vector manager110to generate vector representations of documents125. As shown, CNN200includes convolutional layer205, activation layer210, pooling layer215, and fully connected layer220.
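Before turning to the details of CNN200, the page-to-image-to-vector flow handled by data converter105and vector manager110can be sketched as follows. This assumes the third-party `pdf2image` package (which requires the poppler utilities) for page rasterization and a pretrained `torchvision` CNN as the visual model; ResNet-18 is an illustrative choice among the architectures named above, and normalization and batching are omitted for brevity.

```python
import torch
from pdf2image import convert_from_path          # requires poppler installed
from torchvision import models, transforms

# Pretrained CNN with the classifier head removed, so a forward pass
# yields a feature vector (the page's vector representation).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def page_vectors(pdf_path):
    """Convert each page of a PDF into one vector via the visual model."""
    pages = convert_from_path(pdf_path)          # one PIL image per page
    with torch.no_grad():
        return [backbone(prep(p).unsqueeze(0)).squeeze(0) for p in pages]
```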
Convolutional layer205may be a filter that is passed over image225and views several pixels at a time (e.g., 3×3 or 5×5). A convolution operation is performed by calculating a dot product of the original pixel values with weights defined in the filter. The results are summed up into one number that represents all the pixels observed by the filter. Convolutional layer205can generate a matrix that is smaller in size than the pixel resolution of image225. Activation layer210analyzes the matrix generated by convolutional layer205by introducing non-linearity so that CNN200can train itself using a backpropagation algorithm. In some embodiments, the activation function used in the backpropagation algorithm may be a rectified linear unit (ReLU) function. Pooling layer215may downsample and reduce the size of the matrix. A filter is passed over the results of the previous layer and selects one number out of each group of values (e.g., the maximum value). Pooling layer215allows CNN200to train faster by focusing on the most important information in each feature of the image. Fully connected layer220can be a multilayer perceptron structure. The input to fully connected layer220is a one-dimensional vector representing the output of the previous layers (e.g., convolutional layer205, activation layer210, and pooling layer215). The output of fully connected layer220is output vector230, a one-dimensional vector. Output vector230is the vector representation of image225. FIG.2illustrates a CNN with one group of layers that includes a convolutional layer, an activation layer, and a pooling layer. One of ordinary skill in the art will realize that, in some embodiments, any number of additional groups of such layers may be included and sequentially arranged in CNN200. For example, CNN200may include a first group of layers that includes a first convolutional layer, a first activation layer, and a first pooling layer, followed by a second group of layers that includes a second convolutional layer, a second activation layer, and a second pooling layer, and so on and so forth. Additionally, CNN200can include any number of additional fully connected layers after the groups of layers that include convolutional, activation, and pooling layers. The output of the last fully connected layer is output vector230. Returning toFIG.1, vector manager110can generate a vector representation for a document125by generating a vector (e.g., output vector230) for each image in the set of images into which data converter105converted the document125. In some embodiments, a vector of a page of a document is a numerical representation of the visual appearance of the page in terms of formatting, layout, styles, etc. After generating vectors for the pages of each document125, vector manager110sends the vectors to clustering manager115for additional processing. Clustering manager115is responsible for grouping (e.g., clustering) documents125into different groups (e.g., clusters) based on the vector representations of documents125. As illustrated inFIG.1, clustering manager115has grouped documents125into clusters130a-130k. In some embodiments, clustering manager115groups documents into clusters130a-130kby grouping each page of documents125into clusters130a-130kbased on the vectors of each page of the documents125. Clustering manager115groups documents (or pages of documents) with similar vectors into the same cluster130.
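Returning to the structure of FIG.2, a minimal PyTorch sketch of one convolutional/activation/pooling group followed by a fully connected layer producing the output vector; the channel counts, kernel size, and output dimension are illustrative values, not numbers from the disclosure.

```python
import torch
import torch.nn as nn

class MiniCNN(nn.Module):
    """One conv/activation/pooling group plus a fully connected output layer."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3)    # 3x3 filter over the image
        self.act = nn.ReLU()                          # non-linearity for training
        self.pool = nn.MaxPool2d(2)                   # keep the max of each 2x2 group
        self.fc = nn.Linear(8 * 111 * 111, out_dim)   # flattened features -> vector

    def forward(self, image):                         # image: (N, 3, 224, 224)
        x = self.pool(self.act(self.conv(image)))     # 224 -> 222 -> 111 per side
        x = torch.flatten(x, start_dim=1)             # one-dimensional per sample
        return self.fc(x)                             # the output vector

vec = MiniCNN()(torch.randn(1, 3, 224, 224))          # shape (1, 128)
```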
In some embodiments, clustering manager115determines that two vectors are similar if the vector distance between the two vectors is less than a defined distance value. In some such embodiments, clustering manager115determines the vector distance between the two vectors by calculating the cosine similarity between the two vectors. Cosine similarity uses values between 0 and 1 to represent the similarity between two vectors, where 0 represents the least amount of similarity and 1 represents the most amount of similarity. As such, clustering manager115determines that two vectors are similar if the cosine similarity value is greater than a defined value (e.g., 0.8). Clustering manager115can use any number of different clustering algorithms to group documents125into clusters130a-130kbased on the vector representations of documents125. For example, in some embodiments, clustering manager115uses a hierarchical clustering algorithm to group documents125into clusters130a-130k. In other embodiments, clustering manager115uses a K-means algorithm to group documents125into clusters130a-130k. In yet some other embodiments, clustering manager115uses a density-based spatial clustering of applications with noise (DBSCAN) algorithm to group documents125into clusters130a-130k. As mentioned above, a vector of a page of a document is a numerical representation of the visual appearance of the page in terms of formatting, layout, styles, etc. Thus, the documents125that clustering manager115groups in a particular cluster130are essentially all visually similar in terms of formatting, layout, styles, etc. Once clustering manager115finishes grouping documents125into clusters130a-130k, clustering manager115sends the groupings to data sampler120. Data sampler120is configured to sample data from the clusters determined by clustering manager115. For example, when data sampler120receives clusters130a-130kfrom clustering manager115, data sampler120determines a sample set of documents135from documents125based on the set of clusters130a-130k. Data sampler120can randomly select a defined number of documents (or pages of documents) from each of the clusters130a-130k. The selected documents from each of the clusters130a-130kform the sample set of documents135. In some embodiments, data sampler120randomly selects a first defined number of documents (e.g., five documents, ten documents, etc.) from a particular cluster130if the number of documents (or pages of documents) in the particular cluster130is greater than a defined threshold number of documents (e.g., twenty documents, fifty documents, etc.). If the number of documents in the particular cluster130is not greater than the defined threshold number of documents, data sampler120randomly selects a second defined number of documents (e.g., two documents, three documents, etc.) from the particular cluster130. FIG.3illustrates a process300for clustering and sampling data according to some embodiments. In some embodiments, clustering and sampling manager100performs process300. Process300begins by receiving, at310, a plurality of documents. Referring toFIG.1as an example, clustering and sampling manager100can receive a plurality of documents125. Once received, clustering and sampling manager100may send documents125to data converter105to convert to images. Next, process300uses, at320, a visual model to generate a vector representation for each document in the plurality of documents.
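A sketch of the similarity test and the clustering options named above, assuming scikit-learn; the 0.8 threshold mirrors the example in the text, and the DBSCAN `eps` value and cluster count `k` are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN, KMeans, AgglomerativeClustering
from sklearn.metrics.pairwise import cosine_similarity

def similar(u, v, threshold=0.8):
    """Two page vectors are similar if cosine similarity exceeds the threshold."""
    return cosine_similarity(u.reshape(1, -1), v.reshape(1, -1))[0, 0] > threshold

def cluster_pages(vectors, method="kmeans", k=10):
    """Group page vectors with any of the algorithms named in the text."""
    if method == "kmeans":
        return KMeans(n_clusters=k, n_init=10).fit_predict(vectors)
    if method == "hierarchical":
        return AgglomerativeClustering(n_clusters=k).fit_predict(vectors)
    return DBSCAN(metric="cosine", eps=0.2).fit_predict(vectors)  # eps illustrative
```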
Referring toFIGS.1and2as an example, vector manager110may use a visual model to generate vector representations for documents125. The visual model used by vector manager110can be implemented using CNN200. As illustrated inFIG.2, the values of the pixels in image225are input into CNN200and propagated through convolutional layer205, activation layer210, pooling layer215, and fully connected layer220. The values output by fully connected layer220form output vector230, which is the vector representation of image225. As such, vector manager110can generate vector representations for documents125using the images into which data converter105converted documents125. Process300then clusters, at330, the plurality of documents into a set of clusters based on the vector representations of the plurality of documents. Referring toFIG.1as an example, clustering manager115can cluster documents125into clusters130a-130kbased on the vector representations of documents125. Clustering manager115may group documents (or pages of documents) with similar vectors into the same cluster130. Finally, process300determines, at340, a sample set of documents from the plurality of documents based on the set of clusters. Referring toFIG.1as an example, data sampler120can determine a sample set of documents135from the plurality of documents125based on clusters130a-130k. Data sampler120may randomly select a defined number of documents (or pages of documents) from each of the clusters130a-130k. The selected documents collectively form the sample set of documents135. In some embodiments, data sampler120can randomly select a different defined number of documents from a particular cluster130based on the number of documents (or pages of documents) in the particular cluster130. For example, if the number of documents in the particular cluster130is greater than a defined threshold number of documents, data sampler120randomly selects a first defined number of documents from the particular cluster130. Otherwise, data sampler120randomly selects a second defined number of documents, which is different from the first defined number of documents, from the particular cluster130. 3. Example Active Learning System The section above describes a clustering and sampling manager that clusters data (e.g., documents) based on a visual model and randomly samples the data based on the clustered data. The clustering and sampling manager can be used in any number of different scenarios. For example, in some embodiments, the clustering and sampling manager may be used in an active learning system that automates many of the active learning operations for artificial intelligence (AI) and machine learning algorithms. FIG.4illustrates a system400for facilitating active learning according to some embodiments. In particular,FIG.4illustrates an example of a scenario in which a clustering and sampling manager may be used. As shown, system400includes production data405, active learning framework410, AI models storage435, client device440, and client device445. Active learning framework410may receive production data405. In some embodiments, production data405includes text documents (e.g., documents125, etc.) that are received from users of a document processing system (e.g., a medical document processing system) and stored in a data storage (e.g., a database). AI models storage435stores trained AI models that are configured to automatedly annotate documents (e.g., production data405) without human intervention.
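Returning to the size-dependent sampling rule of process300described above, the rule can be sketched directly; the threshold and per-cluster counts mirror the examples in the text and are not fixed by the disclosure.

```python
import random

def sample_cluster(docs, threshold=20, n_large=5, n_small=2):
    """Randomly sample more documents from large clusters, fewer from small ones."""
    n = n_large if len(docs) > threshold else n_small
    return random.sample(docs, min(n, len(docs)))

def sample_set(clusters):
    """The union of the per-cluster random samples forms the sample set."""
    return [doc for docs in clusters for doc in sample_cluster(docs)]
```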
Client devices440and445are configured to interact and communicate with active learning framework410. For example, users of client devices440and445can access annotation platform420to annotate documents and review annotated documents. As illustrated inFIG.4, active learning framework410includes clustering and sampling manager415, annotation platform420, training pipeline425, and AI model manager430. Clustering and sampling manager415can be implemented by clustering and sampling manager100. As such, clustering and sampling manager415receives, as input, production data405. As mentioned above, in some embodiments, production data405includes text documents that are received from users of a document processing system and stored in a data storage. In some such embodiments, clustering and sampling manager415retrieves production data405from the data storage at defined intervals (e.g., once a week, once every two weeks, once a month, etc.). Then, clustering and sampling manager415uses a visual model to generate vector representations of production data405, cluster production data405into groups based on the vector representations, and determine a sample of production data405(sampled data450in this example) based on the groups of production data405. Clustering and sampling manager415can also store the vector representations of production data405in a data storage (e.g., a database). Then, the next time clustering and sampling manager415receives production data that has not been annotated, clustering and sampling manager415can compare the similarity of the vector representations of the newly received production data with the vector representations of previously processed production data. Clustering and sampling manager415does not consider any production data in the newly received production data that is similar to previously processed production data when determining sampled data450. As such, when clustering and sampling manager415samples the newly received production data to generate sampled data450, clustering and sampling manager415samples from the remaining production data that does not include data that is similar to previously processed production data. Clustering and sampling manager415sends sampled data450to annotation platform420. Annotation platform420provides tools and services for a user (annotator447in this example) of client device445to annotate sampled data450. For instance, annotation platform420can organize sampled data450and provide client device445a graphical user interface (GUI) for presenting sampled data450to annotator447. In this way, annotator447is able to provide annotation platform420, via the GUI presented on client device445, annotations and/or labels to sampled data450. In some embodiments, the sampled data450that annotator447annotates has already been automatedly annotated by AI model manager430. In some such embodiments, annotator447reviews the automatedly annotated sampled data450and ensures that the annotations are correct. After annotator447is done annotating sampled data450, annotation platform420may organize the annotated sampled data450and provide client device440a GUI for presenting the annotated sampled data450to expert442. Expert442can review the annotated sampled data450to ensure that the annotations and/or labels are correct. Having an expert review sampled data450annotated by annotator447is particularly useful in certain domains (e.g., medical domain, legal domain, engineering domain, etc.) that require specialized knowledge.
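Returning to the deduplication step described above, a sketch of how newly received production data might be screened against the stored vector representations of previously processed data, assuming scikit-learn and a cosine-similarity threshold like the one used earlier; the threshold value is illustrative.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def filter_new_data(new_vecs, seen_vecs, threshold=0.8):
    """Keep only newly received items whose vectors are not similar to any
    previously processed vector, so sampling draws from genuinely new data.
    Returns the indices of the rows of new_vecs that survive the screen."""
    if len(seen_vecs) == 0:
        return list(range(len(new_vecs)))
    sims = cosine_similarity(new_vecs, seen_vecs)     # (new, seen) matrix
    return [i for i in range(len(new_vecs)) if sims[i].max() <= threshold]
```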
Once expert442has finished reviewing the annotated sampled data450, client device440can send the reviewed sampled data450to client device445for annotator447to view. This way, annotator447can see the corrections made to the annotated sampled data450and prevent such mistakes from happening for future annotations of data. In addition, after expert442has finished reviewing the annotated sampled data450, annotation platform420sends the reviewed and annotated sampled data450to training pipeline425for further processing. Training pipeline425is configured to train AI models. For instance, to train a new AI model, training pipeline425may generate the AI model and then train the AI model using the reviewed and annotated sampled data450that training pipeline425receives from annotation platform420. Once training pipeline425completes the training of the AI model based on the reviewed and annotated sampled data450, training pipeline425stores it in AI models storage435. When training pipeline425receives more sampled data from annotation platform420, training pipeline425retrieves the AI model from AI models storage435, trains it using the received sampled data, and stores the AI model back in AI models storage435after training pipeline425finishes training the AI model with the sampled data. In some embodiments, the AI model that is trained by training pipeline425and used by AI model manager430is the visual model of the sectionizer described in concurrently filed U.S. patent application Ser. No. ______, titled “Sectionizing Documents Based On Visual And Language Models,” filed on Dec. 3, 2019, which is herein incorporated by reference in its entirety. In other embodiments, the AI model that is trained by training pipeline425and used by AI model manager430may be an image classification model, an instance segmentation model, etc. AI model manager430is responsible for automatedly annotating data. For example, AI model manager430can receive from annotation platform420data and a request to annotate the data. In response to the request, AI model manager430retrieves the appropriate AI model from AI models storage435. In the case of data that is similar to production data405, AI model manager430retrieves the AI model from AI models storage435that has been trained using sampled data450and is configured to annotate data. Next, AI model manager430uses the retrieved AI model to automatedly annotate sampled data450received from annotation platform420. After automatedly annotating sampled data450, AI model manager430sends the annotated data to annotation platform420. 3. Example Systems FIG.5illustrates an exemplary computer system500for implementing various embodiments described above. For example, computer system500may be used to implement clustering and sampling manager100, active learning framework410, clustering and sampling manager415, annotation platform420, training pipeline425, AI model manager430, client device440, and client device445. Computer system500may be a desktop computer, a laptop, a server computer, or any other type of computer system or combination thereof. Some or all elements of data converter105, vector manager110, clustering manager115, data sampler120, clustering and sampling manager415, annotation platform420, training pipeline425, AI model manager430, or combinations thereof can be included or implemented in computer system500. In addition, computer system500can implement many of the operations, methods, and/or processes described above (e.g., process300).
As shown inFIG.5, computer system500includes processing subsystem502, which communicates, via bus subsystem526, with input/output (I/O) subsystem508, storage subsystem510, and communication subsystem524. Bus subsystem526is configured to facilitate communication among the various components and subsystems of computer system500. While bus subsystem526is illustrated inFIG.5as a single bus, one of ordinary skill in the art will understand that bus subsystem526may be implemented as multiple buses. Bus subsystem526may be any of several types of bus structures (e.g., a memory bus or memory controller, a peripheral bus, a local bus, etc.) using any of a variety of bus architectures. Examples of bus architectures may include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, a Peripheral Component Interconnect (PCI) bus, a Universal Serial Bus (USB), etc. Processing subsystem502, which can be implemented as one or more integrated circuits (e.g., a conventional microprocessor or microcontroller), controls the operation of computer system500. Processing subsystem502may include one or more processors504. Each processor504may include one processing unit506(e.g., a single core processor such as processor504-1) or several processing units506(e.g., a multicore processor such as processor504-2). In some embodiments, processors504of processing subsystem502may be implemented as independent processors while, in other embodiments, processors504of processing subsystem502may be implemented as multiple processors integrated into a single chip or multiple chips. Still, in some embodiments, processors504of processing subsystem502may be implemented as a combination of independent processors and multiple processors integrated into a single chip or multiple chips. In some embodiments, processing subsystem502can execute a variety of programs or processes in response to program code and can maintain multiple concurrently executing programs or processes. At any given time, some or all of the program code to be executed can reside in processing subsystem502and/or in storage subsystem510. Through suitable programming, processing subsystem502can provide various functionalities, such as the functionalities described above by reference to process300, etc. I/O subsystem508may include any number of user interface input devices and/or user interface output devices. User interface input devices may include a keyboard, pointing devices (e.g., a mouse, a trackball, etc.), a touchpad, a touch screen incorporated into a display, a scroll wheel, a click wheel, a dial, a button, a switch, a keypad, audio input devices with voice recognition systems, microphones, image/video capture devices (e.g., webcams, image scanners, barcode readers, etc.), motion sensing devices, gesture recognition devices, eye gesture (e.g., blinking) recognition devices, biometric input devices, and/or any other types of input devices. User interface output devices may include visual output devices (e.g., a display subsystem, indicator lights, etc.), audio output devices (e.g., speakers, headphones, etc.), etc. Examples of a display subsystem may include a cathode ray tube (CRT), a flat-panel device (e.g., a liquid crystal display (LCD), a plasma display, etc.), a projection device, a touch screen, and/or any other types of devices and mechanisms for outputting information from computer system500to a user or another device (e.g., a printer).
As illustrated inFIG.5, storage subsystem510includes system memory512, computer-readable storage medium520, and computer-readable storage medium reader522. System memory512may be configured to store software in the form of program instructions that are loadable and executable by processing subsystem502as well as data generated during the execution of program instructions. In some embodiments, system memory512may include volatile memory (e.g., random access memory (RAM)) and/or non-volatile memory (e.g., read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, etc.). System memory512may include different types of memory, such as static random access memory (SRAM) and/or dynamic random access memory (DRAM). System memory512may include a basic input/output system (BIOS), in some embodiments, that is configured to store basic routines to facilitate transferring information between elements within computer system500(e.g., during start-up). Such a BIOS may be stored in ROM (e.g., a ROM chip), flash memory, or any other type of memory that may be configured to store the BIOS. As shown inFIG.5, system memory512includes application programs514, program data516, and operating system (OS)518. OS518may be one of various versions of Microsoft Windows, Apple Mac OS, Apple OS X, Apple macOS, and/or Linux operating systems, a variety of commercially-available UNIX or UNIX-like operating systems (including without limitation the variety of GNU/Linux operating systems, the Google Chrome® OS, and the like) and/or mobile operating systems such as Apple iOS, Windows Phone, Windows Mobile, Android, BlackBerry OS, Blackberry10, and Palm OS, WebOS operating systems. Computer-readable storage medium520may be a non-transitory computer-readable medium configured to store software (e.g., programs, code modules, data constructs, instructions, etc.). Many of the components (e.g., data converter105, vector manager110, clustering manager115, data sampler120, clustering and sampling manager415, annotation platform420, training pipeline425, and AI model manager430) and/or processes (e.g., process300) described above may be implemented as software that when executed by a processor or processing unit (e.g., a processor or processing unit of processing subsystem502) performs the operations of such components and/or processes. Storage subsystem510may also store data used for, or generated during, the execution of the software. Storage subsystem510may also include computer-readable storage medium reader522that is configured to communicate with computer-readable storage medium520. Together and, optionally, in combination with system memory512, computer-readable storage medium520may comprehensively represent remote, local, fixed, and/or removable storage devices plus storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information. Computer-readable storage medium520may be any appropriate media known or used in the art, including storage media such as volatile, non-volatile, removable, non-removable media implemented in any method or technology for storage and/or transmission of information. 
Examples of such storage media include RAM, ROM, EEPROM, flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disk (DVD), Blu-ray Disc (BD), magnetic cassettes, magnetic tape, magnetic disk storage (e.g., hard disk drives), Zip drives, solid-state drives (SSD), flash memory cards (e.g., secure digital (SD) cards, CompactFlash cards, etc.), USB flash drives, or any other type of computer-readable storage media or device. Communication subsystem524serves as an interface for receiving data from, and transmitting data to, other devices, computer systems, and networks. For example, communication subsystem524may allow computer system500to connect to one or more devices via a network (e.g., a personal area network (PAN), a local area network (LAN), a storage area network (SAN), a campus area network (CAN), a metropolitan area network (MAN), a wide area network (WAN), a global area network (GAN), an intranet, the Internet, a network of any number of different types of networks, etc.). Communication subsystem524can include any number of different communication components. Examples of such components may include radio frequency (RF) transceiver components for accessing wireless voice and/or data networks (e.g., using cellular technologies such as 2G, 3G, 4G, 5G, etc., wireless data technologies such as Wi-Fi, Bluetooth, ZigBee, etc., or any combination thereof), global positioning system (GPS) receiver components, and/or other components. In some embodiments, communication subsystem524may provide components configured for wired communication (e.g., Ethernet) in addition to or instead of components configured for wireless communication. One of ordinary skill in the art will realize that the architecture shown inFIG.5is only an example architecture of computer system500, and that computer system500may have additional or fewer components than shown, or a different configuration of components. The various components shown inFIG.5may be implemented in hardware, software, firmware or any combination thereof, including one or more signal processing and/or application specific integrated circuits. FIG.6illustrates an exemplary computing device600for implementing various embodiments described above. For example, computing device600may be used to implement client device440and client device445. Computing device600may be a cellphone, a smartphone, a wearable device, an activity tracker or manager, a tablet, a personal digital assistant (PDA), a media player, or any other type of mobile computing device or combination thereof. As shown inFIG.6, computing device600includes processing system602, input/output (I/O) system608, communication system618, and storage system620. These components may be coupled by one or more communication buses or signal lines. Processing system602, which can be implemented as one or more integrated circuits (e.g., a conventional microprocessor or microcontroller), controls the operation of computing device600. As shown, processing system602includes one or more processors604and memory606. Processors604are configured to run or execute various software and/or sets of instructions stored in memory606to perform various functions for computing device600and to process data. Each processor of processors604may include one processing unit (e.g., a single core processor) or several processing units (e.g., a multicore processor).
In some embodiments, processors604of processing system602may be implemented as independent processors while, in other embodiments, processors604of processing system602may be implemented as multiple processors integrated into a single chip. Still, in some embodiments, processors604of processing system602may be implemented as a combination of independent processors and multiple processors integrated into a single chip. Memory606may be configured to receive and store software (e.g., operating system622, applications624, I/O module626, communication module628, etc. from storage system620) in the form of program instructions that are loadable and executable by processors604as well as data generated during the execution of program instructions. In some embodiments, memory606may include volatile memory (e.g., random access memory (RAM)), non-volatile memory (e.g., read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, etc.), or a combination thereof. I/O system608is responsible for receiving input through various components and providing output through various components. As shown for this example, I/O system608includes display610, one or more sensors612, speaker614, and microphone616. Display610is configured to output visual information (e.g., a graphical user interface (GUI) generated and/or rendered by processors604). In some embodiments, display610is a touch screen that is configured to also receive touch-based input. Display610may be implemented using liquid crystal display (LCD) technology, light-emitting diode (LED) technology, organic LED (OLED) technology, organic electro luminescence (OEL) technology, or any other type of display technologies. Sensors612may include any number of different types of sensors for measuring a physical quantity (e.g., temperature, force, pressure, acceleration, orientation, light, radiation, etc.). Speaker614is configured to output audio information and microphone616is configured to receive audio input. One of ordinary skill in the art will appreciate that I/O system608may include any number of additional, fewer, and/or different components. For instance, I/O system608may include a keypad or keyboard for receiving input, a port for transmitting data, receiving data and/or power, and/or communicating with another device or component, an image capture component for capturing photos and/or videos, etc. Communication system618serves as an interface for receiving data from, and transmitting data to, other devices, computer systems, and networks. For example, communication system618may allow computing device600to connect to one or more devices via a network (e.g., a personal area network (PAN), a local area network (LAN), a storage area network (SAN), a campus area network (CAN), a metropolitan area network (MAN), a wide area network (WAN), a global area network (GAN), an intranet, the Internet, a network of any number of different types of networks, etc.). Communication system618can include any number of different communication components. Examples of such components may include radio frequency (RF) transceiver components for accessing wireless voice and/or data networks (e.g., using cellular technologies such as 2G, 3G, 4G, 5G, etc., wireless data technologies such as Wi-Fi, Bluetooth, ZigBee, etc., or any combination thereof), global positioning system (GPS) receiver components, and/or other components.
In some embodiments, communication system618may provide components configured for wired communication (e.g., Ethernet) in addition to or instead of components configured for wireless communication. Storage system620handles the storage and management of data for computing device600. Storage system620may be implemented by one or more non-transitory machine-readable mediums that are configured to store software (e.g., programs, code modules, data constructs, instructions, etc.) and store data used for, or generated during, the execution of the software. In this example, storage system620includes operating system622, one or more applications624, I/O module626, and communication module628. Operating system622includes various procedures, sets of instructions, software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components. Operating system622may be one of various versions of Microsoft Windows, Apple Mac OS, Apple OS X, Apple macOS, and/or Linux operating systems, a variety of commercially-available UNIX or UNIX-like operating systems (including without limitation the variety of GNU/Linux operating systems, the Google Chrome® OS, and the like) and/or mobile operating systems such as Apple iOS, Windows Phone, Windows Mobile, Android, BlackBerry OS, Blackberry10, and Palm OS, WebOS operating systems. Applications624can include any number of different applications installed on computing device600. Examples of such applications may include a browser application, an address book application, a contact list application, an email application, an instant messaging application, a word processing application, JAVA-enabled applications, an encryption application, a digital rights management application, a voice recognition application, a location determination application, a mapping application, a music player application, etc. I/O module626manages information received via input components (e.g., display610, sensors612, and microphone616) and information to be outputted via output components (e.g., display610and speaker614). Communication module628facilitates communication with other devices via communication system618and includes various software components for handling data received from communication system618. One of ordinary skill in the art will realize that the architecture shown inFIG.6is only an example architecture of computing device600, and that computing device600may have additional or fewer components than shown, or a different configuration of components. The various components shown inFIG.6may be implemented in hardware, software, firmware or any combination thereof, including one or more signal processing and/or application specific integrated circuits. FIG.7illustrates an exemplary system700for implementing various embodiments described above. For example, one of client devices702-708may be used to implement client device440, one of client devices702-708may be used to implement client device445, and cloud computing system712may be used to implement active learning framework410. As shown, system700includes client devices702-708, one or more networks710, and cloud computing system712. Cloud computing system712is configured to provide resources and data to client devices702-708via networks710.
In some embodiments, cloud computing system712provides resources to any number of different users (e.g., customers, tenants, organizations, etc.). Cloud computing system712may be implemented by one or more computer systems (e.g., servers), virtual machines operating on a computer system, or a combination thereof. As shown, cloud computing system712includes one or more applications714, one or more services716, and one or more databases718. Cloud computing system712may provide applications714, services716, and databases718to any number of different customers in a self-service, subscription-based, elastically scalable, reliable, highly available, and secure manner. In some embodiments, cloud computing system712may be adapted to automatically provision, manage, and track a customer's subscriptions to services offered by cloud computing system712. Cloud computing system712may provide cloud services via different deployment models. For example, cloud services may be provided under a public cloud model in which cloud computing system712is owned by an organization selling cloud services and the cloud services are made available to the general public or different industry enterprises. As another example, cloud services may be provided under a private cloud model in which cloud computing system712is operated solely for a single organization and may provide cloud services for one or more entities within the organization. The cloud services may also be provided under a community cloud model in which cloud computing system712and the cloud services provided by cloud computing system712are shared by several organizations in a related community. The cloud services may also be provided under a hybrid cloud model, which is a combination of two or more of the aforementioned different models. In some instances, any one of applications714, services716, and databases718made available to client devices702-708via networks710from cloud computing system712is referred to as a “cloud service.” Typically, servers and systems that make up cloud computing system712are different from the on-premises servers and systems of a customer. For example, cloud computing system712may host an application and a user of one of client devices702-708may order and use the application via networks710. Applications714may include software applications that are configured to execute on cloud computing system712(e.g., a computer system or a virtual machine operating on a computer system) and be accessed, controlled, managed, etc. via client devices702-708. In some embodiments, applications714may include server applications and/or mid-tier applications (e.g., HTTP (hypertext transport protocol) server applications, FTP (file transfer protocol) server applications, CGI (common gateway interface) server applications, JAVA server applications, etc.). Services716are software components, modules, applications, etc. that are configured to execute on cloud computing system712and provide functionalities to client devices702-708via networks710. Services716may be web-based services or on-demand cloud services. Databases718are configured to store and/or manage data that is accessed by applications714, services716, and/or client devices702-708. For instance, AI models storage435may be stored in databases718. Databases718may reside on a non-transitory storage medium local to (and/or resident in) cloud computing system712, in a storage-area network (SAN), or on a non-transitory storage medium located remotely from cloud computing system712.
In some embodiments, databases718may include relational databases that are managed by a relational database management system (RDBMS). Databases718may be column-oriented databases, row-oriented databases, or a combination thereof. In some embodiments, some or all of databases718are in-memory databases. That is, in some such embodiments, data for databases718are stored and managed in memory (e.g., random access memory (RAM)). Client devices702-708are configured to execute and operate a client application (e.g., a web browser, a proprietary client application, etc.) that communicates with applications714, services716, and/or databases718via networks710. This way, client devices702-708may access the various functionalities provided by applications714, services716, and databases718while applications714, services716, and databases718are operating (e.g., hosted) on cloud computing system712. Client devices702-708may be computer system500or computing device600, as described above by reference toFIGS.5and6, respectively. Although system700is shown with four client devices, any number of client devices may be supported. Networks710may be any type of network configured to facilitate data communications among client devices702-708and cloud computing system712using any of a variety of network protocols. Networks710may be a personal area network (PAN), a local area network (LAN), a storage area network (SAN), a campus area network (CAN), a metropolitan area network (MAN), a wide area network (WAN), a global area network (GAN), an intranet, the Internet, a network of any number of different types of networks, etc. The above description illustrates various embodiments of the present disclosure along with examples of how aspects of the present disclosure may be implemented. The above examples and embodiments should not be deemed to be the only embodiments, and are presented to illustrate the flexibility and advantages of various embodiments of the present disclosure as defined by the following claims. Based on the above disclosure and the following claims, other arrangements, embodiments, implementations and equivalents will be evident to those skilled in the art and may be employed without departing from the spirit and scope of the present disclosure as defined by the claims.
DETAILED DESCRIPTION The descriptions of the various embodiments of the present invention are presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. Embodiments may have the beneficial effect of providing an efficient method for assigning and propagating High-Level Classification (HLC) assignments in a set of information assets. Thereby, an efficient and effective approach for assigning information assets to high-level classes may be implemented. Based on high-level classification assignments provided by applying the high-level classification assignment rules, embodiments of the present invention can provide high-level classification assignments to superordinate information assets. For this purpose, embodiments of the present invention can propagate high-level classification assignments provided using the high-level classification assignment rules upwards within the first containment hierarchy to superordinate information assets using the high-level classification propagation rules. For example, the set of information assets may be provided in the form of a governed data lake. The assigning and propagating of HLC assignments may be implemented using an automatic approach. The term information asset refers to a set of data (i.e., a dataset) comprising information. Datasets may be organized according to a hierarchical structure (e.g., a containment hierarchy). The datasets may be used to organize data (e.g., data of a data lake). A dataset of a higher hierarchical level of the hierarchical structure may comprise datasets of a lower hierarchical level. Thus, an information asset of a higher hierarchical level (i.e., a superordinate information asset) may comprise information assets of a lower hierarchical level (i.e., subordinate information assets). A low-level class may be defined for classifying information assets of low hierarchical levels of the containment hierarchy of the set of information assets (e.g., information assets of the lowest hierarchical level). High-level classes may be used for classifying information assets of all hierarchical levels of the containment hierarchy of the set of information assets. In particular, high-level classes may be used for also classifying information assets of high hierarchical levels of the containment hierarchy of the set of information assets. For example, low-level classes can define specific characteristics of the information comprised by an information asset. In further examples, high-level classes can define abstract characteristics of the information comprised by an information asset. At least some of the information assets of the set of information assets may be provided with low-level classification assignments. For example, information assets at a lowest hierarchical level of the containment hierarchy of the set of information assets may be provided with low-level classification assignments. In various examples, low-level classification assignments to low-level classes can be assigned manually, semi-automatically, or fully automatically.
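As a concrete illustration of the concepts introduced above, the following is a minimal sketch, with assumed names, of how information assets typed by asset type identifiers and ordered in a containment hierarchy might be modeled; it is an illustrative data model, not the patented implementation.

```python
from dataclasses import dataclass, field

@dataclass
class InformationAsset:
    name: str
    asset_type: str                                       # e.g., "column", "table", "schema"
    low_level_classes: set = field(default_factory=set)   # e.g., business terms
    high_level_classes: set = field(default_factory=set)  # e.g., "PII", "SPII"
    children: list = field(default_factory=list)          # subordinate information assets

# A table (superordinate asset) contains columns (subordinate assets).
column = InformationAsset("surname", "column", low_level_classes={"PersonName"})
table = InformationAsset("customers", "table", children=[column])
```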
Low-level classes may be described by low-level notions. For example, the low-level notions may include notions like “customer data” or “reference data.” In additional examples, the low-level notions describing the low-level classes can comprise so-called business terms. For managing low-level notions of a set of low-level classes, a glossary may be provided. In example embodiments, the HLC may use classes defined by or being based on definitions provided by regulations and laws (e.g., the Foreign Account Tax Compliance Act (FATCA), the Payment Card Industry Data Security Standard (PCI Compliance), the Health Insurance Portability and Accountability Act (HIPAA), the Financial Services Modernization Act of 1999 (GLBA), the Sarbanes-Oxley Act of 2002 (SOX), the Federal Rules of Civil Procedure, the General Data Protection Regulation (GDPR), or the California Consumer Privacy Act (CCPA)), etc. Thus, using HLC the information governance system may be configured to govern the information assets in compliance with those regulations and laws. For example, a regulation like GDPR may require a HLC taking into account classes like “Personally Identifiable Information” (“PII”) or “Sensitive Information” (“SI”). For those differentiations, embodiments of the present invention recognize that low-level classifications may not be suited, in particular since terms of low-level classes may often be subject to change. Even if low-level classes are organized in some sort of hierarchy or categories, there is often no single point in such a hierarchy which denotes or may be associated with a high-level class of an HLC. For example, low-level classes may rather be orthogonal to each other (i.e., two different low-level classes may each be associated with a different high-level class, with the respective two different high-level classes not being subclasses of a common high-level class). Moreover, a HLC assignment may be required to be propagated from fine granular information assets (i.e., subordinate information assets, such as data fields or columns) to coarse granular information assets (i.e., superordinate information assets, such as tables, schemas or databases). For instance, in the case of PII, if a subordinate information asset (e.g., a column) comprises personally identifiable information and is assigned to the high-level class PII, then a superordinate information asset (e.g., a table comprising the respective column) also contains personally identifiable information and may have to be assigned to the high-level class PII as well. This kind of information may be important in order to comply with regulations, like GDPR or similar regulations and laws. Embodiments of the present approach may enable a propagation even in non-trivial cases, where a HLC assignment of a superordinate information asset (e.g., a table) is not automatically the combination of all HLC assignments of subordinate information assets comprised by the respective superordinate information asset (e.g., such as the columns of the respective table). For example, high-level classes may comprise the aforementioned class of “Personally Identifiable Information” (PII) as well as classes like “Personally Identifiable Information in Public Domain” (PIIPD) (i.e., personally identifiable information that can be inferred from publicly available sources), “Sensitive Personally Identifiable Information” (SPII), and “Highly Sensitive Personally Identifiable Information” (HSPII).
For example, a first column of a table may comprise personally identifiable information in public domain and be assigned with the HLC PIIPD. A second column of the table may comprise highly sensitive personally identifiable information and be assigned with the HLC HSPII. The table comprising the two aforementioned columns may have to be assigned with the HLC HSPII, not PIIPD. Propagating HLCs upward may be a customized (e.g., user-defined) function that maps one or more HLC assignments of one or more information assets on a lower hierarchical level (e.g., columns) to one or more HLC assignments of an information asset on a higher hierarchical level (e.g., a table). For example, HLC assignments of subordinate information assets may result in a HLC assignment of a superordinate information asset that differs from the HLC assignments of the subordinate information assets. For example, a first column A may be assigned a HLC assignment PIIPD, while a second column B may be assigned a HLC assignment SPII. A table comprising the two columns A and B may be assigned HLC assignment HSPII, because it may only be possible to uniquely identify a person using a combination of information in columns A and B. The method for governing a set of information assets described herein may introduce new concepts into the information governance system. A notion of high-level classes such as PII, PIIPD, SPII, and HSPII may be introduced, enabling HLCs. Furthermore, a mechanism of extensible rules (e.g., extensible automation rules) may be introduced that runs at classification time and generates HLCs from low-level class assignments (e.g., from business term assignments and other information). The mechanism of extensible rules may further be configured to propagate HLC assignments from fine granular information assets (e.g., a column) to a coarser granular information asset comprising the respective fine granular information assets (e.g., a table or database comprising the respective column). Thus, in some examples HLC assignments may be adapted automatically to deletions of information assets. In other examples, HLC assignments can be adapted automatically to an amendment of a low-level class assignment. For example, data comprised by an information asset may be amended or replaced, or additional data may be added, resulting in an amended low-level class assignment of the respective information asset. For example, in an information asset (e.g., a column) data of a different syntactic or semantic type may be inserted. Thus, the low-level class assignment of the respective information asset and, in consequence, the HLC assignment of the respective information asset may have to be amended as well. In various example embodiments, the set of information assets may be used to govern a data lake. For example, the information governance system for governing the set of information assets may be configured to provide the following characteristics and functionality. A meta data store may be provided comprising all the information assets of the set of information assets (e.g., identifiers of all the information assets). For example, the set of information assets may include all the information assets of an enterprise. Each information asset is assigned with an information asset type identifier identifying the information asset type of the respective information asset (e.g., column, table, file, etc.).
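The customized upward-propagation function described above, which maps the HLC assignments of columns A and B to the HLC assignment of the containing table, might be sketched as follows. The rule that PIIPD combined with SPII yields HSPII mirrors the example in this description; the restrictiveness ordering and everything else are illustrative assumptions.

```python
def propagate_to_table(column_hlc_sets):
    """Map the HLC assignments of a table's columns to the table's HLCs."""
    assigned = set().union(*column_hlc_sets) if column_hlc_sets else set()
    if not assigned:
        return set()
    # A person may be uniquely identifiable only from the combination of
    # columns, so PIIPD together with SPII yields the more restrictive HSPII.
    if {"PIIPD", "SPII"} <= assigned:
        return {"HSPII"}
    # Otherwise propagate the most restrictive class found among the columns.
    restrictiveness = ["PII", "PIIPD", "SPII", "HSPII"]  # assumed ordering
    return {max(assigned, key=restrictiveness.index)}

propagate_to_table([{"PIIPD"}, {"SPII"}])  # -> {"HSPII"}
```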
The information assets are organized in a containment hierarchy (i.e., the set of information assets is an at least partially ordered set forming a containment hierarchy). For instance, a relational database may comprise schemas which comprise tables which comprise columns. In example embodiments, the information governance system may further comprise a glossary, such as a hierarchical glossary, of low-level classes to manage low-level classes and assign information assets with low-level classifications to the low-level classes. The glossary may define a set of low-level classes. The low-level classes may describe syntactic properties of information comprised by an information asset (e.g., names, addresses, postal codes, email addresses, phone numbers, credit card numbers, airport codes, etc.). The low-level classes may describe semantic properties of information comprised by an information asset (e.g., a business term defining business relevant properties of the respective information asset). For example, the glossary may be a business glossary. The information governance system may be configured for performing a data analysis of the data comprised by the information assets of the set of information assets. For example, the data may be analyzed through a pipeline of algorithms that perform a low-level classification of information assets. The low-level classification may comprise reading and classifying data and/or metadata of the respective information assets. The information assets may be assigned with low-level assignments to low-level classes defined by syntactic and/or semantic definitions or terms provided by the glossary. The information governance system may comprise automation rules. The automation rules may define a condition under which they are executed and an action which is performed by executing the respective rules. For example, the information governance system may ensure that the action defined by an automation rule is executed whenever the condition defined by the respective automation rule is met. For instance, an automation rule may define: if an information asset of the information type column is assigned to the class X, then in addition assign the information asset to the class Y. For example, class X may be defined by a syntactic definition, while class Y may be defined by a semantic definition provided by the glossary. The concept of HLCs as a first-class concept in an information governance system may be introduced. For example, default HLC notions (e.g., the aforementioned PII and SPII) may be provided. Additionally, or alternatively, customized (e.g., also user-defined) additional high-level classes may be provided for HLCs. For example, the HLCs may be hierarchical. For example, PII might have children PIIPD, SPII and/or HSPII, which may be more restrictive than PII. The HLC assignment rules may define how HLCs are created from low-level classifications and how HLCs are assigned based on the low-level classifications (e.g., such as basic data classifications and/or term assignments). For example, a HLC assignment rule may use an information asset type together with one or more of the low-level classifications as input and produce a list of one or more HLC assignments to one or more high-level classes as output. For example, HLC assignment rules may be applied after low-level classifications have been performed either manually or using an automated process. The HLC assignment rule may be of any formalism.
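The condition-action automation rules described above (for instance, "if a column is assigned to class X, then in addition assign it to class Y") could be realized with a minimal rule registry such as the following sketch; since any formalism is permitted, the dict-based asset representation and the registry shape are assumptions made for illustration.

```python
automation_rules = []

def add_rule(condition, action):
    automation_rules.append((condition, action))

# "If an information asset of the information type column is assigned to
#  the class X, then in addition assign the information asset to the class Y."
add_rule(lambda asset: asset["type"] == "column" and "X" in asset["classes"],
         lambda asset: asset["classes"].add("Y"))

def apply_rules(asset):
    # The governance system performs every action whose condition is met.
    for condition, action in automation_rules:
        if condition(asset):
            action(asset)

asset = {"type": "column", "classes": {"X"}}
apply_rules(asset)  # asset["classes"] is now {"X", "Y"}
```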
For example, the rules may be provided using a simple, JavaScript-like programming language. For example, a HLC assignment rule may have a form similar to “if a column is assigned to the low-level classes ‘PersonName’ and ‘Customer’, then the column is to be assigned to the high-level class ‘PII’.” The HLC propagation rules may define how existing HLC assignments of information assets are propagated upward (i.e., to one or more superordinate information assets that contain the information asset in question). For instance, a column may be contained by a table which is, in turn, contained by a schema, which again is, in turn, contained by a database, etc. A HLC propagation rule may use an information asset type and one or more HLC assignments of a subordinate information asset as an input and provide one or more HLC assignments of one or more superordinate information assets as an output. The HLC assignments according to the HLC propagation rules may be applied to information assets of superordinate information asset types that comprise the subordinate information asset type of the information asset provided as input to the respective HLC propagation rule. In other words, a propagation rule may define how HLC assignments propagate one level up in a containment hierarchy of the information assets. For example, HLC propagation rules may be applied after the HLC assignment rules have been applied or after a user has assigned or amended a HLC assignment to an information asset manually. As above, HLC propagation rules may be provided using a simple, JavaScript-like programming language. For example, a HLC propagation rule may have a form similar to “if there are one or more information assets of information asset type ‘column’ and assigned to the high-level classes ‘PII’ and ‘SPII’, then a superordinate information asset of information asset type ‘table’ containing the respective one or more information assets of information asset type ‘column’ has to be assigned to the high-level class ‘HSPII’.” For example, the HLC assignment rules and HLC propagation rules may be applied every time an information asset is classified to a low-level class, either through an automated analysis or manually by a user. For example, the HLC assignment rules and HLC propagation rules may be applied every time a low-level class is deleted or modified. For example, the applying of the one or more high-level classification propagation rules may be performed recursively hierarchical level by hierarchical level upwards through the first containment hierarchy. The application process may have the beneficial effect of successively assigning each information asset of the set of information assets to a high-level class. A HLC assignment of a superordinate information asset may depend on HLC assignments of subordinate information assets comprised by the superordinate information asset according to the containment hierarchy. Thus, the HLC assignments of a superordinate information asset may be as restrictive as or more restrictive than the HLC assignments of the respective subordinate information assets. For example, the HLC assignment rules and the HLC propagation rules may be applied recursively from bottom to top of the containment hierarchy of the set of information assets.
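Expressed in code rather than in a JavaScript-like rule language, the two example rules quoted above might look like the following sketch; the dict-based asset representation is an assumption made for illustration.

```python
def hlc_assignment_rule(asset):
    # "If a column is assigned to the low-level classes 'PersonName' and
    #  'Customer', then the column is to be assigned to the high-level
    #  class 'PII'."
    if asset["type"] == "column" and {"PersonName", "Customer"} <= asset["low"]:
        asset["hlc"].add("PII")

def hlc_propagation_rule(table, columns):
    # "If one or more columns are assigned to the high-level classes 'PII'
    #  and 'SPII', then the table containing those columns has to be
    #  assigned to the high-level class 'HSPII'."
    assigned = set().union(*(c["hlc"] for c in columns)) if columns else set()
    if {"PII", "SPII"} <= assigned:
        table["hlc"].add("HSPII")
```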
For instance, if a low-level class is deleted, all information assets assigned to the respective low-level class are determined, the HLC assignment rules are applied (again), and the HLC propagation rules are applied to the respective information assets as well as to all siblings (i.e., to all the information assets on the same hierarchical level). For example, the applying of the one or more high-level classification propagation rules may start with the one or more information assets of the set of information assets provided with the high-level classification assignments and may end with an uppermost superordinate information asset at the top of the first containment hierarchy. The process may have the beneficial effect that HLC assignment rules may be applied to information assets throughout the containment hierarchy. For example, each information asset may be provided with a HLC assignment. For example, low-level classification assignments may only be applied to information assets assigned to particular hierarchical levels of the containment hierarchy (i.e., assigned with information asset type identifiers of the respective hierarchical levels of the containment hierarchy). For example, low-level classification assignments may only be applied to information assets at the bottom of the containment hierarchy (e.g., to the lowest level or the lowest levels of the containment hierarchy). HLC assignment rules may only be applied to information assets assigned with a low-level classification assignment in order to assign the respective information assets with HLC assignments. Thus, HLC propagation rules may be used to assign all the remaining information assets assigned to higher hierarchical levels of the containment hierarchy (i.e., assigned with information asset type identifiers of the respective higher hierarchical levels of the containment hierarchy) with HLC assignments. For example, the applying of the one or more high-level propagation rules may comprise applying the one or more high-level propagation rules to all information assets of the set of information assets that are at a same hierarchical level within the first containment hierarchy and share a common superordinate information asset of the set of information assets. The process may have the beneficial effect of taking into account the HLC assignments of all the subordinate information assets comprised by a superordinate information asset when propagating the HLC assignments upwards to the respective superordinate information asset. For example, the set of high-level classes may comprise one or more default high-level classes, which may have the beneficial effect of providing standardized high-level classes defined for satisfying particular requirements (e.g., defined by common regulations or necessities). For example, the providing of the set of high-level classes may comprise receiving one or more customized high-level classes, which may have the beneficial effect of providing individual high-level classes defined for a particular individual purpose. For example, the set of high-level classification assignment rules may comprise one or more default high-level classification assignment rules, which may have the beneficial effect of providing standardized high-level classification assignment rules defined for satisfying particular requirements (e.g., defined by common regulations or necessities).
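The recursive, level-by-level upward application described above can be sketched as a post-order traversal of the containment hierarchy; the dict-based asset representation and the propagation_rule callable are assumptions made for illustration.

```python
def propagate_upwards(asset, propagation_rule):
    # Post-order traversal: subordinate information assets are fully
    # processed before their HLC assignments are propagated one level up,
    # so propagation runs from the bottom of the containment hierarchy to
    # the uppermost superordinate information asset.
    for child in asset["children"]:
        propagate_upwards(child, propagation_rule)
    if asset["children"]:
        child_hlcs = [child["hlc"] for child in asset["children"]]
        asset["hlc"] |= propagation_rule(asset["type"], child_hlcs)
```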
For example, the providing of the set of high-level classification assignment rules may comprise receiving one or more customized high-level classification assignment rules, which may have the beneficial effect of providing individual high-level classification assignment rules defined for a particular individual purpose. For example, the set of high-level classes may be an at least partially ordered set of high-level classes with at least some of the high-level classes comprising a hierarchical relationship to each other, which may have the beneficial effect that the hierarchical relationship may be used to determine HLC propagation rules (e.g., automatically). For example, a high-level class being superordinate to one or more subordinate high-level classes according to the hierarchical relationship may be more restrictive than the one or more subordinate high-level classes. Thus, a combination of two or more subordinate high-level classes may result in a superordinate high-level class at a next higher level of the hierarchical relationship. In case different subordinate information assets comprised by a superordinate information asset are assigned to two or more high-level classes at the same hierarchical level within the set of high-level classes, the superordinate information asset may be assigned to a superordinate high-level class at a next higher level of the hierarchical relationship above the aforementioned two or more high-level classes at the same hierarchical level. For example, the at least partially ordered set of high-level classes may form a second containment hierarchy, which may have the beneficial effect that all the high-level classes may be part of the containment hierarchy and HLC propagation rules may be determined using the second containment hierarchy. For example, if different subordinate information assets comprised by a superordinate information asset are assigned to two or more high-level classes at the same hierarchical level of the second containment hierarchy, then the superordinate information asset may be assigned to a superordinate high-level class at a next higher level of the second containment hierarchy above the aforementioned two or more high-level classes at the same hierarchical level. In case all the high-level classes of the set of high-level classes are comprised by the second containment hierarchy, assignment propagation rules taking into account all the high-level classes of the second containment hierarchy may be determined using the hierarchical structure of the second containment hierarchy. For example, the providing of the set of high-level classification propagation rules may comprise determining one or more high-level classification propagation rules using an analysis of the second containment hierarchy. The process may have the beneficial effect that high-level classification propagation rules may be determined automatically using the second containment hierarchy. For example, the partially ordered set of high-level classes may form a lattice. For example, the partially ordered set of high-level classes may form a complete lattice with each subset of the set having a supremum. Consider a superordinate information asset comprising a set of subordinate information assets, which are assigned to a subset of high-level classes of the set of high-level classes comprised by the lattice structure.
A HLC propagation rule may be determined which defines that the superordinate information asset is assigned to a high-level class of the set of high-level classes which is the supremum of the subset of high-level classes. Thus, by identifying suprema for different subsets of high-level classes, HLC propagation rules for propagating HLC assignments may be identified. A lattice is a partially ordered set with each subset of the set having a least upper bound, also referred to as a supremum. In the present context, for example, if there is a subset S of a lattice with high-level classes, then there is always one high-level class (which is not necessarily a member of S) that is the most restrictive high-level class. For instance, the supremum of the high-level classes PII and HSPII may be HSPII. For instance, the supremum of the high-level classes PII and SPII may be HSPII. In the latter case, the supremum of two high-level classes is a different high-level class than the original two high-level classes. If there is such a structure, then HLC propagation rules may be inferred automatically, in that the high-level class of a table, say, is the supremum of the high-level classes of the columns comprised by the respective table. Moreover, the high-level class of a superordinate information asset may be the supremum of all high-level classes of all subordinate information assets comprised by the respective superordinate information asset. Thus, HLC propagation rules may not need to be recursively called, but rather a single supremum identification operation may suffice. Thus, providing and/or applying HLC propagation rules determined based on a lattice structure of high-level classes may be much more efficient. For example, embodiments of the present invention can utilize customized (e.g., user-defined) HLC propagation rules and/or HLC propagation rules determined (e.g., automatically) using a lattice structure of the high-level classifications. For example, the providing of the set of high-level classification propagation rules may comprise using an ordering of the high-level classes within the lattice (e.g., complete lattice) for determining one or more high-level classification propagation rules of the set of high-level classification propagation rules. The process may have the beneficial effect that a supremum may be used for determining the high-level propagation rules. The high-level propagation rules may define that a superordinate information asset is provided with a high-level classification assignment to a high-level class of the set of high-level classes that is the supremum of the subset of high-level classes to which information assets comprised by the superordinate information asset are assigned. For example, an automated determining of HLC propagation rules may be enabled. For example, for each combination of high-level classes to which subordinate information assets comprised by a common superordinate information asset are assigned, a supremum may be determined. The HLC propagation rules may define assigning the respective common superordinate information asset to the high-level class being the supremum of the combination of high-level classes to which the subordinate information assets are assigned. For example, the high-level classification propagation rules may be applied to a plurality of information assets of different hierarchical levels using a single supremum-based operation.
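A minimal, self-contained sketch of such a supremum-based operation follows. It assumes the four high-level classes named in this description form a lattice with PII below PIIPD and SPII, which are in turn below their join HSPII; the exact ordering and the encoding via upward-closed sets are assumptions made for illustration.

```python
from functools import reduce

# For each class, the set of classes at least as restrictive as it
# (its upward-closed set under the assumed order PII < {PIIPD, SPII} < HSPII).
UPPER = {
    "PII":   {"PII", "PIIPD", "SPII", "HSPII"},
    "PIIPD": {"PIIPD", "HSPII"},
    "SPII":  {"SPII", "HSPII"},
    "HSPII": {"HSPII"},
}

def supremum(classes):
    """Least upper bound of the given high-level classes."""
    common = reduce(set.intersection, (UPPER[c] for c in classes))
    # Within the common upper set, the least element is the one with the
    # largest upward-closed set.
    return max(common, key=lambda c: len(UPPER[c]))

# The HLC of a table is the supremum of the HLCs of all of its columns;
# a single supremum operation replaces recursive rule application.
supremum({"PIIPD", "SPII"})  # -> "HSPII"
```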
The process may have the beneficial effect that determining suprema may be used to determine propagations of HLC assignments within the first containment hierarchy of the set of information assets to arbitrary hierarchical levels of the containment hierarchy directly. For example, the providing of the set of high-level classification propagation rules may comprise receiving one or more user-defined high-level classification propagation rules. Thus, embodiments of the present invention can provide customized high-level classification propagation rules that are optimized for the need of a particular usage. For example, embodiments of the present invention can perform the applying of the one or more high-level classification assignment rules and the applying of the one or more high-level classification propagation rules upon detecting a triggering event. For example, the triggering event may be one of the following: adding an information asset to the set of information assets, amending an information asset of the set of information assets, deleting an information asset from the set of information assets. For example, amending an information asset may comprise amending a content of the information asset or amending a low-level classification assignment of the information asset. For example, amending a low-level classification assignment may comprise adding an additional low-level classification assignment, deleting a low-level classification assignment, or amending a low-level class of a low-level classification assignment. For example, amending a definition of low-level classes and/or high-level classes may require immediate action to re-classify all impacted information assets to assure that classifications, in particular HLCs, are always up-to-date. For example, processing of the information assets may be restricted based on high-level classification assignments of the respective information assets. The processing can include one or more of the following: storing, archiving, deleting, and accessing. The restriction may have the beneficial effect that the processing of information assets may depend on the high-level classes to which the respective information assets are assigned. For example, two information assets may each per se comprise information based on which no individual persons may be identified, or which cannot be related to individual persons. However, in combination the information provided by the two information assets may make it possible to identify individual persons or may be related to individual persons. For example, the two information assets may be stored at different (e.g., independent) storage locations to reduce the risk that individual persons can be identified in case a storage location is compromised. For example, the access rights may define one or more of the following permissions: read permission, change permission, write permission, delete permission. The definition of access rights may have the beneficial effect that embodiments of the present invention can grant access rights for information assets based on high-level classes to which the respective information assets are assigned. For example, the types of information assets identified by the information asset type identifiers may comprise one or more of the following: data field, column, table, schema, database, machine, cluster. For example, the types of information assets identified by the information asset type identifiers may comprise files and/or folders.
A folder, which may also be referred to as a directory, comprises a set of subordinate folders and/or a set of files. Folders allow files to be grouped into separate collections. The directories may be organized in the form of a directory structure, such as a hierarchical tree structure of directories and files of a file system. For example, the high-level classes may comprise one or more of the following classes: personally identifiable information, personally identifiable information in public domain, sensitive personally identifiable information, highly sensitive personally identifiable information. In example embodiments, implementations of the method described herein may provide a notion of information assets ordered by a containment hierarchy. Low-level classes (e.g., such as business terms and/or data specific classes) may be provided for classifying information assets. For example, the low-level classes may at least be used to classify information assets at a low hierarchical level of the containment hierarchy (e.g., information assets comprising no or only a few other types of information assets). In further example embodiments, implementations of the method described herein may provide a notion of HLCs used for classifying information assets, such as PII and SPII (e.g., inspired by regulations). In additional example embodiments, implementations of the method described herein may provide a mechanism of extensible automation rules that may assign information assets to HLCs based on low-level classes and other information associated with the respective information assets, that may propagate HLC assignments from subordinate information assets to superordinate information assets in the containment hierarchy, and that may automatically adapt HLC assignments to changes of the low-level assignments and other information. For example, the computer program product further comprises machine-executable program instructions configured to implement any of the embodiments of the method for governing a set of information assets using an information governance system described herein. For example, the computer system further is configured to execute any of the embodiments of the method for governing a set of information assets using an information governance system described herein. FIG.1depicts an exemplary computer system100configured for governing a set of information assets using an information governance system, in accordance with embodiments of the present invention. It will be appreciated that the computer system100described herein may be any type of computerized system comprising a plurality of processor chips, a plurality of memory buffer chips and a memory. The computer system100may for example be implemented in the form of a general-purpose digital computer, such as a personal computer, a workstation, or a minicomputer. In exemplary embodiments, in terms of hardware architecture, as shown inFIG.1, the computer system100includes a processor105, memory (main memory)110coupled to a memory controller115, and one or more input and/or output (I/O) devices10(or peripherals),145that are communicatively coupled via a local input/output controller135. The input/output controller135can be, but is not limited to, one or more buses or other wired or wireless connections, as is known in the art. The input/output controller135may have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, to enable communications.
Further, the local interface may include address, control, and/or data connections to enable appropriate communications among the aforementioned components. The processor105is a hardware device for executing software, particularly that stored in memory110. The processor105can be any custom made or commercially available processor, a central processing unit (CPU), an auxiliary processor among several processors associated with the computer system100, a semiconductor-based microprocessor (in the form of a microchip or chip set), a macroprocessor, or generally any device for executing software instructions. The memory110can include any one or combination of volatile memory modules (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)) and nonvolatile memory modules (e.g., ROM, erasable programmable read only memory (EPROM), electronically erasable programmable read only memory (EEPROM), or programmable read only memory (PROM)). Note that the memory110can have a distributed architecture, where additional modules are situated remote from one another, but can be accessed by the processor105. The software in memory110may include one or more separate programs, each of which comprises an ordered listing of executable instructions for implementing logical functions, notably functions involved in embodiments of this invention. The executable instructions may further be configured for governing a set of information assets using an information governance system. For example, the executable instructions may be configured to generate and/or apply HLC assignment rules and HLC propagation rules. The software in memory110may further include a suitable operating system (OS)111. The OS111essentially controls the execution of other computer programs, such as possibly software112. If the computer system100is a PC, workstation, intelligent device or the like, the software in the memory110may further include a basic input output system (BIOS)122. The BIOS is a set of essential software routines that initialize and test hardware at startup, start the OS111, and support the transfer of data among the hardware devices. The BIOS is stored in ROM so that the BIOS can be executed when the computer system100is activated. When the computer system100is in operation, the processor105is configured for executing software112stored within the memory110, to communicate data to and from the memory110, and to generally control operations of the computer system100pursuant to the software. The methods described herein and the OS111, in whole or in part, but typically the latter, are read by the processor105, possibly buffered within the processor105, and then executed. Software112may further be stored on any computer readable medium, such as storage120, for use by or in connection with any computer related system or method. The storage120may comprise a disk storage such as HDD storage. The information assets governed using the information governance system may be stored on the computer system100using an internal storage, like storage120, or a peripheral storage, like storage medium145. Alternatively, or additionally, information assets may be stored on other computer systems (e.g., such as server200) accessible to the computer system100via a network (e.g., such as network210). Alternatively, or additionally, definitions of the information assets and their identifiers as well as their assignments may be stored on or be accessible to the computer system100.
For example, a conventional keyboard150and mouse155can be coupled to the input/output controller135. Other I/O devices10may include input devices, for example but not limited to, a printer, a scanner, a microphone, and the like. Finally, the I/O devices10,145may further include devices that communicate both inputs and outputs, for instance but not limited to, a network interface card (NIC) or modulator/demodulator (for accessing other files, devices, systems, or a network), a radio frequency (RF) or other transceiver, a telephonic interface, a bridge, a router, and the like. The I/O devices10,145may be any generalized cryptographic card or smart card known in the art. The computer system100can further include a display controller125coupled to a display130. For example, the computer system100can further include a network interface for coupling to a network210, such as an intranet or the Internet. The network can be an IP-based network for communication between the computer system100and any external server, such as server200, or other client and the like via a broadband connection. The network210transmits and receives data between the computer system100and server200. For example, network210may be a managed IP network administered by a service provider. The network210may be implemented in a wireless fashion, e.g., using wireless protocols and technologies, such as Wi-Fi, WiMAX, etc. The network210may also be a packet-switched network such as a local area network, wide area network, metropolitan area network, Internet network, or other similar type of network environment. The network may be a fixed wireless network, a wireless local area network (LAN), a wireless wide area network (WAN), a personal area network (PAN), a virtual private network (VPN), an intranet or other suitable network system and includes equipment for receiving and transmitting signals. FIG.2depicts an exemplary set of information assets221, in accordance with embodiments of the present invention. The information assets may be organized to form a containment hierarchy. For example, columns220may be comprised by a table222. Tables222may be comprised by a schema224. Schemas224may be comprised by a database226. Databases226may be comprised by a machine228. Machines228may be comprised by a cluster229. For example, all the information assets on all hierarchical levels may be provided with information asset type identifiers identifying the information asset type of the respective information assets. For example, the information assets on the lowest hierarchical level (e.g., the columns) may be assigned with low-level classification assignments. FIG.3depicts an exemplary set of information assets221, in accordance with embodiments of the present invention. A plurality of columns220may be provided. The columns220may be grouped forming tables222. The tables222may be grouped forming schemas224. The schemas224may be grouped forming databases226. In example embodiments, a plurality of databases226may be hosted on a machine228. In further examples, the machines228may be grouped to form a cluster229. FIG.4depicts an exemplary set of high-level classes230, in accordance with embodiments of the present invention.
In example embodiments, the high-level classes of the set of high-level classes230may comprise a class232with personally identifiable information in public domain "PIIPD", a class234with personally identifiable information "PII", a class236with sensitive personally identifiable information "SPII", and a class238with highly sensitive personally identifiable information "HSPII". In an example embodiment, the set of high-level classes230is organized hierarchically. The arrows denote the order of the high-level classes. In case of the exemplary types of classes depicted inFIG.4, the arrows may indicate "requires more attention" (i.e., PII data requires more attention than PIIPD). For example, HLC assignments may be propagated recursively level-by-level through the hierarchy of the information assets (e.g., from the level of columns to the level of tables and subsequently to the levels of schemas, databases, host machines and beyond). For each information asset a high-level class may be determined based on the high-level classes to which the subordinate information assets comprised by the respective information asset are assigned, using the hierarchical structure of the set of high-level classes230. For example, the last high-level class in the direction of the arrows (i.e., the one requiring most attention) to which one of the subordinate information assets is assigned may be selected. However, more complex relationships between high-level classes may also be taken into account (e.g., organizing the high-level classes in a complete lattice). FIG.5depicts an exemplary set of high-level classes231organized in a lattice, in accordance with embodiments of the present invention. In example embodiments, the high-level classes of the set of high-level classes231may comprise a class232with personally identifiable information in public domain "PIIPD", a class234with personally identifiable information "PII", a class236with sensitive personally identifiable information "SPII", and a class238with highly sensitive personally identifiable information "HSPII". The high-level classes of the set of high-level classes231form a lattice. A lattice refers to a mathematical structure of a set of elements in which any two elements always have a unique least upper bound, also referred to as a supremum, and a unique greatest lower bound, also referred to as an infimum. A special case of a lattice is a so-called complete lattice, in which all subsets of elements have a unique supremum and infimum. Using the lattice structure of the set of high-level classes231may enable a direct determination of HLC assignments. Using the lattice, the HLC assignments of information assets (e.g., such as columns, tables, etc.) may be taken and a most applicable HLC assignment may be determined in one step. If the HLC assignments are organized as a complete lattice, then the HLC assignment of an information asset (e.g., a database) may be the supremum of all the HLC assignments of all the corresponding children (i.e., of all the information assets, such as schemas, tables, columns) comprised by the respective information asset to be provided with a HLC assignment. Considering the lattice structure of the set of high-level classes231depicted inFIG.5, the arrows denote the order of the high-level classes. In case of the exemplary types of classes depicted inFIG.5, the arrows may indicate "requires more attention" (i.e., PII data requires more attention than PIIPD).
To find the supremum of a subset of N elements, a unique element is determined which is reachable from all N elements (i.e., an upper bound) and which is not reachable from any other upper bound (i.e., the least upper bound). For example, the supremum of "PIIPD" and "PII" according to the lattice structure shown inFIG.5is "PII", the supremum of "PIIPD" and "SPII" is "SPII", the supremum of "PII" and "SPII" is "HSPII", and the supremum of "PII", "SPII", "HSPII", and "PIIPD" is "HSPII". "HSPII" is an upper bound of "PII" and "PIIPD", but not the least, since "PII" is on a lower level of the order. FIG.6depicts an exemplary information governance system300, in accordance with embodiments of the present invention. The information governance system300may comprise a set of high-level classes306, a set of HLC assignment rules302, and a set of HLC propagation rules304. The set of high-level classes306may comprise a plurality of high-level classes. For example, the set of high-level classes306may be an at least partially ordered set of high-level classes. For example, the high-level classes of the set of high-level classes306may be hierarchically ordered. For example, the high-level classes of the set of high-level classes306may form a containment hierarchy. For example, the high-level classes of the set of high-level classes306may form a lattice (e.g., a complete lattice). The HLC assignment rules of the set of HLC assignment rules302may be configured for assigning information assets to the high-level classes of the set of high-level classes306. The HLC assignment rules302may assign an information asset to a high-level class of the set of high-level classes based on an information asset type identifier identifying the information asset type of the respective information asset (e.g., column, etc.). Furthermore, the HLC assignment rules302may use low-level classification assignments of the respective information assets to provide HLC assignments. The HLC propagation rules of the set of HLC propagation rules304may be configured for propagating HLC assignments of subordinate information assets provided with HLC assignments to one or more superordinate information assets comprising the respective subordinate information assets. For identifying the hierarchical level of the containment hierarchy to which the respective subordinate and superordinate information assets are assigned, information asset type identifiers may be used. For example, the HLC assignment rules may be used to provide HLC assignments for information assets at a lowest hierarchical level of a hierarchical set of information assets. The HLC propagation rules may be used to provide HLC assignments to information assets at higher levels of a hierarchical set of information assets (i.e., to propagate lower level HLC assignments, such as the HLC assignments provided using the HLC assignment rules, to higher levels of a hierarchical set of information assets). FIG.7depicts a schematic flow diagram of an exemplary method, method700, for governing a set of information assets, in accordance with embodiments of the present invention. In various embodiments, an information governance system (e.g., information governance system300operating on or in conjunction with a computing system, such as computer system100) can execute processing steps of method700, in accordance with embodiments of the present invention. In example embodiments, the set of information assets is an at least partially ordered set, which forms a containment hierarchy.
The information assets are provided with information asset type identifiers. Furthermore, at least some of the information assets are provided with low-level classification assignments to low-level classes. For example, at least the information assets on the lowest hierarchical level of the containment hierarchy may be assigned with low-level classification assignments. For example, only the information assets on the lowest hierarchical level of the containment hierarchy may be assigned with low-level classification assignments. In block400of method700, the information governance system may provide a set of high-level classes. In block402of method700, the information governance system may provide a set of HLC assignment rules. The HLC assignment rules may be configured for assigning the information assets of the set of information assets to the high-level classes of the set of high-level classes using the information asset type identifiers and the low-level classification assignments of the respective information assets. In block404of method700, the information governance system may provide a set of one or more HLC propagation rules. The HLC propagation rules may be configured for propagating HLC assignments of subordinate information assets to superordinate information assets comprising the respective subordinate information assets. In block406of method700, the information governance system may apply the HLC assignment rules to information assets. For example, the information governance system applies the HLC assignment rules to information assets at a lowest hierarchical level of the containment hierarchy of the information assets. Information asset type identifiers and low-level classification assignments of the respective information assets may be used as input to provide HLC assignments for the respective information assets to high-level classes of the set of high-level classes as output. In block408of method700, the information governance system may apply the HLC propagation rules to information assets. In example embodiments, the information governance system applies the HLC propagation rules to information assets provided with the high-level classification assignments in order to propagate the HLC assignments of the respective information assets upwards within the containment hierarchy of the information assets. For example, the information governance system can propagate the HLC assignments of the respective information assets to one or more superordinate information assets. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. The programs described herein are identified based upon the application for which they are implemented in a specific embodiment of the invention. However, it should be appreciated that any particular program nomenclature herein is used merely for convenience, and thus the invention should not be limited to use solely in any specific application identified and/or implied by such nomenclature. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration.
The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. 
In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. 
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The terminology used herein was chosen to best explain the principles of the embodiment, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. Possible combinations of features described above may be the following:
1. A method for governing a set of information assets using an information governance system, the set of information assets being an at least partially ordered set forming a first containment hierarchy, the information assets being provided with information asset type identifiers and at least some of the information assets being provided with low-level classification assignments to low-level classes, the method comprising using the information governance system for:
providing a set of high-level classes,
providing a set of high-level classification assignment rules for assigning the information assets of the set of information assets to the high-level classes of the set of high-level classes using the information asset type identifiers and the low-level classification assignments of the respective information assets,
providing a set of one or more high-level classification propagation rules for propagating the high-level classification assignments of one or more information assets of the set of information assets, which are subordinate to one or more superordinate information assets of the set of information assets, to the one or more superordinate information assets,
applying one or more high-level classification assignment rules of the set of high-level classification assignment rules to one or more information assets of the set of information assets using the information asset type identifiers and the low-level classification assignments of the respective one or more information assets as input to provide one or more high-level classification assignments of the respective one or more information assets to one or more of the high-level classes of the set of high-level classes as output,
applying one or more high-level classification propagation rules of the set of high-level classification propagation rules to the one or more information assets of the set of information assets provided with the high-level classification assignments for propagating the respective high-level classification assignments upwards within the first containment hierarchy to one or more superordinate information assets of the set of information assets.
2. The method of item 1, wherein the applying of the one or more high-level classification propagation rules is performed recursively hierarchical level by hierarchical level upwards through the first containment hierarchy.
3.
The method of any of the previous items, wherein the applying of the one or more high-level propagation rules comprises applying the one or more high-level propagation rules to all information assets of the set of information assets which are at a same hierarchical level within the first containment hierarchy and which share a common superordinate information asset of the set of information assets.
4. The method of any of the previous items, wherein the set of high-level classes comprises one or more default high-level classes.
5. The method of any of the previous items, wherein the providing of the set of high-level classes comprises receiving one or more customized high-level classes.
6. The method of any of the previous items, wherein the set of high-level classification assignment rules comprises one or more default high-level classification assignment rules.
7. The method of any of the previous items, wherein the providing of the set of high-level classification assignment rules comprises receiving one or more customized high-level classification assignment rules.
8. The method of any of the previous items, wherein the set of high-level classes is an at least partially ordered set of high-level classes with at least some of the high-level classes comprising a hierarchical relationship to each other.
9. The method of item 8, wherein the at least partially ordered set of high-level classes forms a second containment hierarchy.
10. The method of any of items 8 and 9, wherein the partially ordered set of high-level classes forms a complete lattice with each subset of the set having a supremum.
11. The method of item 10, wherein the providing of the set of high-level classification propagation rules comprises using an ordering of the high-level classes within the complete lattice for determining one or more high-level classification propagation rules of the set of high-level classification propagation rules.
12. The method of any of items 8 to 11, wherein the high-level classification propagation rules are applied to a plurality of information assets of different hierarchical levels using a single supremum-based operation.
13. The method of any of the previous items, wherein the providing of the set of high-level classification propagation rules comprises receiving one or more user-defined high-level classification propagation rules.
14. The method of any of the previous items, wherein the applying of the one or more high-level classification assignment rules and the applying of the one or more high-level classification propagation rules is performed upon detecting a triggering event.
15. The method of item 14, wherein the triggering event is one of the following: adding an information asset to the set of information assets, amending an information asset of the set of information assets, deleting an information asset from the set of information assets.
16. The method of any of the previous items, wherein processing of the information assets is restricted based on high-level classification assignments of the respective information assets, wherein the processing comprises one or more of the following: storing, archiving, deleting and accessing.
17. The method of any of the previous items, wherein the types of information assets identified by the information asset type identifiers comprise one or more of the following: data field, column, table, schema, database, machine, cluster.
18.
The method of any of the previous items, wherein the high-level classes comprise one or more of the following classes: personally identifiable information, personally identifiable information in public domain, sensitive personally identifiable information, highly sensitive personally identifiable information.
19. A computer program product comprising a non-volatile computer-readable storage medium having machine-executable program instructions embodied therewith for governing a set of information assets using an information governance system, the set of information assets being an at least partially ordered set forming a containment hierarchy, the information assets being provided with information asset type identifiers and at least some of the information assets being provided with low-level classification assignments to low-level classes, execution of the program instructions by a processor of a computer system causing the processor to control the computer system to use the information governance system for:
providing a set of high-level classes,
providing a set of high-level classification assignment rules for assigning the information assets of the set of information assets to the high-level classes of the set of high-level classes using the information asset type identifiers and the low-level classification assignments of the respective information assets,
providing a set of one or more high-level classification propagation rules for propagating the high-level classification assignments of one or more information assets of the set of information assets, which are subordinate to one or more superordinate information assets of the set of information assets, to the one or more superordinate information assets,
applying one or more high-level classification assignment rules of the set of high-level classification assignment rules to one or more information assets of the set of information assets using the information asset type identifiers and the low-level classification assignments of the respective one or more information assets as input to provide one or more high-level classification assignments of the respective one or more information assets to one or more of the high-level classes of the set of high-level classes as output,
applying one or more high-level classification propagation rules of the set of high-level classification propagation rules to the one or more information assets of the set of information assets provided with the high-level classification assignments for propagating the respective high-level classification assignments upwards within the containment hierarchy to one or more superordinate information assets of the set of information assets.
20.
A computer system for governing a set of information assets using an information governance system, the set of information assets being an at least partially ordered set forming a containment hierarchy, the information assets being provided with information asset type identifiers and at least some of the information assets being provided with low-level classification assignments to low-level classes, the computer system comprising a processor and a memory storing machine-executable program instructions, execution of the program instructions by the processor causing the processor to control the computer system to use the information governance system for:
providing a set of high-level classes,
providing a set of high-level classification assignment rules for assigning the information assets of the set of information assets to the high-level classes of the set of high-level classes using the information asset type identifiers and the low-level classification assignments of the respective information assets,
providing a set of one or more high-level classification propagation rules for propagating the high-level classification assignments of one or more information assets of the set of information assets, which are subordinate to one or more superordinate information assets of the set of information assets, to the one or more superordinate information assets,
applying one or more high-level classification assignment rules of the set of high-level classification assignment rules to one or more information assets of the set of information assets using the information asset type identifiers and the low-level classification assignments of the respective one or more information assets as input to provide one or more high-level classification assignments of the respective one or more information assets to one or more of the high-level classes of the set of high-level classes as output,
applying one or more high-level classification propagation rules of the set of high-level classification propagation rules to the one or more information assets of the set of information assets provided with the high-level classification assignments for propagating the respective high-level classification assignments upwards within the containment hierarchy to one or more superordinate information assets of the set of information assets.
70,476
11860905
This disclosure includes references to “one embodiment” or “an embodiment.” The appearances of the phrases “in one embodiment” or “in an embodiment” do not necessarily refer to the same embodiment. Particular features, structures, or characteristics may be combined in any suitable manner consistent with this disclosure. Within this disclosure, different entities (which may variously be referred to as “units,” “circuits,” other components, etc.) may be described or claimed as “configured” to perform one or more tasks or operations. This formulation—[entity] configured to [perform one or more tasks]—is used herein to refer to structure (i.e., something physical, such as an electronic circuit). More specifically, this formulation is used to indicate that this structure is arranged to perform the one or more tasks during operation. A structure can be said to be “configured to” perform some task even if the structure is not currently being operated. A “computer system configured to scan” is intended to cover, for example, a computer system that has circuitry that performs this function during operation, even if the computer system in question is not currently being used (e.g., a power supply is not connected to it). Thus, an entity described or recited as “configured to” perform some task refers to something physical, such as a device, circuit, memory storing program instructions executable to implement the task, etc. This phrase is not used herein to refer to something intangible. Thus, the “configured to” construct is not used herein to refer to a software entity such as an application programming interface (API). The term “configured to” is not intended to mean “configurable to.” An unprogrammed FPGA, for example, would not be considered to be “configured to” perform some specific function, although it may be “configurable to” perform that function and may be “configured to” perform the function after programming. Reciting in the appended claims that a structure is “configured to” perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112(f) for that claim element. Accordingly, none of the claims in this application as filed are intended to be interpreted as having means-plus-function elements. Should Applicant wish to invoke Section 112(f) during prosecution, it will recite claim elements using the “means for” [performing a function] construct. As used herein, the terms “first,” “second,” etc. are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.) unless specifically stated. For example, references to “first” and “second” sets of scan objectives would not imply an ordering between the two unless otherwise stated. As used herein, the term “based on” is used to describe one or more factors that affect a determination. This term does not foreclose the possibility that additional factors may affect a determination. That is, a determination may be solely based on specified factors or based on the specified factors as well as other, unspecified factors. Consider the phrase “determine A based on B.” This phrase specifies that B is a factor that is used to determine A or that affects the determination of A. This phrase does not foreclose that the determination of A may also be based on some other factor, such as C. This phrase is also intended to cover an embodiment in which A is determined based solely on B.
As used herein, the phrase “based on” is thus synonymous with the phrase “based at least in part on.” As used herein, the term “platform” refers to an environment that includes a set of resources that enables some functionality (for example, in the context of the present disclosure, automated decision making). In some cases, this set of resources may be software resources, such that a platform may be said to be constituted solely of software. In other instances, the set of resources may include software and the hardware on which the software executes. Still further, the resources may constitute specialized hardware that performs the functionality; such specialized hardware may, in some cases, utilize firmware and/or microcode in order to execute. (“Modules” are one type of resource; a given module is operable to perform some portion of the overall functionality of a platform.) The term “platform” is thus a broad term that can be used to refer to a variety of implementations. Unless otherwise stated, use of the term “platform” in this disclosure will be understood to cover all possible types of implementations. Note that a platform need not be capable by itself of performing the specified functionality. Rather, it need only provide the capability of performing the functionality. For example, an automated decision-making platform according to the present disclosure provides resources for performing automated decision making; users may utilize the platform to carry out instances of automated decision making. Embodiments of the automated decision-making platform described herein thus enable the functionality of automated decision making to be performed. As used herein, a “module” refers to software and/or hardware that is operable to perform a specified set of operations. A module may in some instances refer to a set of software instructions that are executable by a computer system to perform the set of operations. Alternatively, a module may refer to hardware that is configured to perform the set of operations. A hardware module may constitute general-purpose hardware as well as a non-transitory computer-readable medium that stores program instructions, or specialized hardware such as a customized ASIC.
DETAILED DESCRIPTION
Identifying a particular type of information, such as sensitive information, stored in numerous datastores can be a complex problem that frequently has many strategic concerns that must be balanced, most notably scan quality versus scan duration and resource usage. Moreover, complying with data governance requirements relating to PII adds an additional level of complexity. An intensive, high quality scan having high scan coverage and a high statistical confidence level may identify all sensitive information in a datastore, but it may take too long to perform on available computer resources to be practical. Conversely, a shorter scan may result in a statistical confidence level that is too low to adequately ensure compliance with regulations or benchmarks. Moreover, data records stored in datastores to be scanned may be widely varied with many different kinds of data stored in many different ways. Further, the amount of data that is stored may far exceed what the available processing power can scan in a reasonable amount of time, so different sampling techniques may be useful to more efficiently utilize existing resources to perform a reasonably high-quality scan in a reasonable amount of time.
Finally, because in many instances new data records are added to datastores constantly, it is important to repeat scans to continue compliance. The techniques disclosed herein enable a user to balance scan quality with scan duration in view of the resources that are available to perform a scan, to tailor the scan to the type of data records being scanned, to manage repeated scans that utilize the results of previous scans to improve subsequent scans, and to generate reports and metrics that are indicative of scan coverage, classification quality, and statistical confidence level of the scan as well as performance metrics of the scanning process. Accordingly, the disclosed techniques enable users to scan datastores for data of interest (e.g., PII) and to perform various actions on this data (e.g., to implement a “right to be forgotten” as required by relevant regulations). FIG.1is a block diagram illustrating an embodiment of a computer system100configured to scan for sensitive information. Computer system100is one or more computer systems operable to implement user interface102and scanner104. In various embodiments, computer system100also implements a data management module108. In various embodiments, computer system100is implemented by software running on a computer system (e.g., a desktop computer, a laptop computer, a tablet computer, a mobile phone, a server) or a plurality of computer systems (e.g., a network of servers operating as a cloud). In other embodiments, computer system100is implemented in specialized hardware (e.g., on an FPGA) or in a combination of hardware and software. In various embodiments, computer system100is operable to perform other functions in addition to implementing user interface102and scanner104. User interface102is operable to present information to user110and receive information from user110. In various embodiments, user interface102includes one or more input/output devices including but not limited to one or more visual displays (e.g., a monitor, a touchscreen), one or more speakers, one or more microphones, a haptic interface, a pointing device (e.g., mouse, trackball, trackpad, etc.), a keyboard, or any combination thereof. Scanner104is a platform that enables the preparation and implementation of one or more scanning plans106to identify information stored in datastores120. As discussed herein, scanner104is operable to identify any particular type of information in various embodiments including but not limited to sensitive information such as personally identifiable information (PII). As used herein, “personally identifiable information” is any data that could potentially be used to identify a particular person. Examples include but are not limited to a full name, a Social Security number, a driver's license number, a bank account number, a passport number, and an email address. As discussed herein, scanner104is operable to receive indications of one or more datastores120to be scanned for a particular type of information during a scan, to determine one or more classifiers to apply to the one or more datastores during the scan to identify the particular type of information, and to determine a plurality of scan objectives for the scan.
In various embodiments, scan objectives include but are not limited to a target confidence level for the scan, one or more sampling strategies for the scan, indications of portions of a PII scan logic library to be used in the scan (e.g., indications of scan logic corresponding to one or more particular PII regulations, indications of one or more data categories of data to be scanned), etc. Further, scanner104is operable to determine available computer resources (e.g., processing power, memory, etc. of computer system100) to perform the scan and to estimate scan quality metrics and execution duration for the scan based on the scan objectives and the available computer resources. Using user interface102, scanner104is also operable to present user110with indications of the estimated scan quality metrics and estimated execution duration for the scan and receive one or more commands from user110. Scanner104is also operable to perform the scan in response to one or more commands from user110. In various embodiments, scanner104is operable to receive modifications to the scanning plan106from user110, update the estimated scan quality metrics and/or estimated execution duration, and present the updated estimates to user110. In some embodiments, estimated scan quality metrics and/or estimated execution duration corresponding to multiple different scanning plans106are generated and presented to user110, user110selects or modifies one of these scanning plans106, and the selected scanning plan106is used to conduct a scan of datastores120. In various embodiments, user110is presented with a schedule indicative of repeated iterations of a particular scanning plan106. Scanner104is also operable to track various metrics related to the performance of various scanning plans106that are useable to generate reports to prove compliance with various PII regulations, data security audits, or other requirements in various embodiments. As discussed herein, scanner104is operable to generate one or more scanning plans106. The individual scanning plans106indicate what and where to scan (e.g., which datastores120, which folders or tables within datastore120, etc.), when to scan (e.g., one holistic scan, two or more repeated scans, scanning when a datastore120reaches a threshold amount of unscanned data records), why to scan (e.g., which regulations are applicable), and/or how to scan (e.g., a target confidence level, sampling strategies to employ, what metrics to record). As discussed herein, in various embodiments, various scanning plans106are performed iteratively with subsequent scans using the results of previous scans to adjust the subsequent scan. As discussed herein, subsequent scans using a particular scanning plan106may be performed using different classifiers and/or scan objectives than a previous scan using the same particular scanning plan106. For clarity, as used herein, the term “proposed scanning plan” refers to a scanning plan106that has been specified but has not been executed and may be subject to input by user110in which scanning objectives are changed. Scanner104and its various components are discussed herein in reference toFIGS.2-7and15, various methods of using scanner104to create and perform scanning plans106are discussed herein in reference toFIGS.8,13, and14, and example screens from user interface102presented in connection with the disclosed techniques are discussed herein in reference toFIGS.9-12. In various embodiments, computer system100includes a data management module108.
Data management module108is operable to prepare reports for user110that are indicative of sensitive information identified during one or more scans and/or to manage data records containing sensitive information (including deletion of such data records). In various embodiments, for example, data management module108is operable to provide to user110a report indicative of PII corresponding to user110that is stored in datastores120(e.g., as required in various PII regulations such as the GDPR). In various embodiments, data management module108is operable to delete some or all PII corresponding to a user110from one or more datastores120in response to receiving a deletion request (e.g., as required by a “right to be forgotten” clause in a PII regulation). In various embodiments, a deletion report is generated for user110once the PII has been deleted as requested. User110is an individual who seeks to configure a scanning plan106using scanner104, and, in various instances, to run a scan according to the scanning plan106, or to access information (or reports) generated by scanner104or data management module108. User110may be a natural person, a group of natural persons, or an organization in various embodiments. In various instances, user110controls, maintains, and/or services one or more datastores120. In various instances, user110is a chief privacy officer, reports to a chief privacy officer, or is otherwise tasked with ensuring compliance with one or more PII regulations such as the EU's GDPR, the California Consumer Privacy Act (CCPA), etc. One or more datastores120are any of a number of electronic datastores implemented on any of a number of electronic storage media useable to store information. In various embodiments, datastores120may be stored on any suitable memory device or system, including a single memory (e.g., a hard drive, solid-state drive, etc.), an array of memories, or a storage computer system. In some embodiments, the one or more datastores120include one or more restricted datastores that are configured to permit only local access to the information stored thereon. For example, such restricted datastores120permit access via a shared I/O bus or via requests from computer systems on the same LAN, but not over a WAN. In some of such embodiments, restricted datastores are installed within computer system100and only accesses by computer system100are permitted. In either embodiment, scanner104is operable to prepare a scanning plan to scan such restricted datastores120and to execute the scanning plan on the restricted datastores120. In other embodiments, datastores120may be a storage service remote from computer system100. The various datastores120may store any type of information including structured data, unstructured data, and media data (e.g., image, audio, video). In various embodiments, datastores120store massive amounts of data (e.g., hundreds or thousands of terabytes) and new data records may be added to datastores120at high velocity (e.g., thousands or millions or more data records added per day). For example, datastores120might include any of a wide array of different types of records including but not limited to records of chat logs from customer service interactions, names and addresses, sales records, social media comments, images uploaded in customer reviews, etc.
Some datastores120may be maintained by user110(e.g., a structured database of names and addresses) but others may simply store information as it is input by customers, service representatives, or others. In some embodiments, some data records are persistent and stay in datastore120for a relatively long period of time (e.g., months or years) whereas other data records are temporary (e.g., deleted after 30 or 60 days). According to the techniques discussed herein, computer system100is operable to provide user110with a comprehensive user experience to prepare, compare, modify, and perform scanning plans106to identify a particular type of information stored in datastores120. As discussed herein, in various embodiments various scanning plans106are useable to identify PII in order to comply with various PII regulations and to maintain compliance by repeating scans according to a schedule. Additionally, computer system100enables a user110to receive reports about sensitive information corresponding to them that is stored on datastore120and, in various instances, to request the deletion of such information using data management module108. The metrics generated by scanner104may be used to demonstrate compliance with various regulations, and identified information may be provided to third-parties (e.g., government inspectors, individuals requesting a record of their stored PII, etc.). Because all of these capabilities are incorporated into scanner104, user110is able to perform various data security tasks using a single user interface. Moreover, because scanner104is implemented as a platform, additional capabilities to comply with additional regulations (e.g., additional PII regulations) or requirements may be added. Further, because scanner104may be implemented as an installed application or script running in a data center storing sensitive information, such sensitive information need not be exposed to the Internet or an unsecure WAN in various embodiments. Scanner104also provides flexibility that enables various different users110with different requirements to generate and execute scanning plans106that meet such requirements. In various instances, for example, PII regulations in different jurisdictions may differ greatly in scope. Scanner104provides a flexible platform that can provide various target information classifiers (e.g., classifiers206discussed herein) that are operable to satisfy different obligations under different PII regulations, and under different risk appetites of different users110. Further, scanner104is operable to prepare a scanning plan106that balances the hardware capabilities of computer system100with the level of scan quality indicated by user110such that the highest quality scan can be performed in an acceptable amount of time and with acceptable resource usage. Accordingly, a user110who prepares a scanning plan106does not necessarily need technical training or need to configure computer system100and can instead focus on requirements like what scan quality is acceptable and what datastores120to scan rather than how the scan will be performed. FIG.2is an expanded block diagram of scanner104in accordance with various embodiments. In various embodiments, scanner104includes a plurality of processor modules200, a plurality of controller modules230, a visualization module250, and a metadata service module260. As defined herein, scanner104is a platform implemented by computer system100, and the various components illustrated inFIG.2are implemented as modules.
While the various modules shown inFIG.2are represented as discrete modules, it will be understood that the various operations performed by the various modules may be subdivided into additional modules or combined with other modules in various configurations. In various embodiments, processor modules200include a uniform data model and management module202, a data ingestion module204having one or more connectors205, a pluggable classification container (PCC)210operable to receive one or more classifier modules206(e.g., classifier A206A, classifier B206B, classifier n206n), a data enhancement module208, a regional detection module212, and an identification module214.

In various embodiments, uniform data model and management module202is operable to provide a unified access interface for scanning results generated by the various scanning plans106. As discussed herein, the scanning results are retained in storage (e.g., stored in a datastore120), and in various embodiments uniform data model and management module202is an API that is operable to read these scanning results and present a summary to user110.

Data ingestion module204is operable to perform pre-processing on data records stored in datastores120to facilitate scanning by scanner104. In various embodiments, data ingestion module204includes a plurality of connectors205that are useable by data ingestion module204to facilitate ingestion of information from particular datastores120(e.g., a first connector205useable to facilitate ingestion of data records in a first datastore120, a second connector205useable to facilitate ingestion of data records in a second datastore120, etc.).

PCC210is a standardized interface into which a plurality of classifiers206can be plugged. As discussed further in connection withFIG.5, PCC210provides a standardized interface that defines the kinds of calls or requests that can be made by classifiers206, how to make calls and requests, the data formats that should be used, the conventions to follow, etc. Accordingly, PCC210provides an extension capability in the ability of scanner104to classify data records that user110can use to add additional classifiers206as desired.

The various classifiers206are different classification algorithms that can be applied to data records in datastores120being scanned to detect a particular type of information (e.g., PII or other sensitive information). As shown inFIG.2, any number of classifiers206may be present including classifier A206A, classifier B206B, and a classifier n206n. Such classifiers206may use various classification strategies to detect a particular type of information including but not limited to linear classification (e.g., logistic regression algorithms, naïve Bayes algorithms), quadratic classification, stochastic gradient descent algorithms, kernel estimation (e.g., k-nearest neighbor algorithms), decision tree algorithms, random forest algorithms, support vector machines algorithms, champion-challenger algorithms, and neural network classification (e.g., Enigma classification). In some embodiments, one or more classifiers206use natural language processing algorithms to determine whether unstructured data contains target information. In various embodiments, some classifiers206may be included as part of scanner104as a default, but other classifiers206may be added by user110. Such added classifiers206may be third-party classifiers206(e.g., classifiers206that user110has licensed for use in scanner104) or may be built by user110(or on behalf of user110).
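As an illustration of how a user-built classifier206might be structured to plug into a standardized container such as PCC210, consider the following minimal Python sketch. The interface and method names (BaseClassifier, classify) are hypothetical assumptions, and the regular-expression strategy shown is chosen purely for brevity; real classifiers206may use any of the classification strategies listed above.

    import re
    from abc import ABC, abstractmethod

    class BaseClassifier(ABC):
        # Hypothetical standardized interface of a PCC-style container.
        @abstractmethod
        def classify(self, record: str) -> dict:
            # Return a mapping of detected target-information types to confidence scores.
            ...

    class EmailPhoneClassifier(BaseClassifier):
        # Example user-built classifier tuned for chat-log style records.
        EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
        PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

        def classify(self, record: str) -> dict:
            hits = {}
            if self.EMAIL.search(record):
                hits["email"] = 0.95  # pattern matches treated as high confidence
            if self.PHONE.search(record):
                hits["phone"] = 0.90
            return hits

    print(EmailPhoneClassifier().classify("please call 555-123-4567"))  # -> {'phone': 0.9}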
A user110may use knowledge of the datastores120he or she manages to prepare a classifier206tailored to the datastore120. For example, if user110suspects that a datastore mostly contains records of customer service interactions, a custom classifier206built by user110for scanning this datastore120might favor classifications of names, addresses, and email addresses over classifications of images of faces or recognition of credit card numbers. The various functions and sub-components of PCC210are discussed in further detail in reference toFIGS.3and5.

Regional detection module212is operable to attempt to determine a geographic region associated with various records in datastores120. In various instances, regulations such as PII governance regulations are applicable to information associated with a particular region. For example, the EU's GDPR applies to residents of the EU and information stored in the EU. Thus, in various instances if a record includes PII or other sensitive information and corresponds to a particular geographic region, regional detection module212is operable to flag that data record as containing information that is subject to regulations for that particular region. In various embodiments, regional detection module212includes scan logic for various regions (and regulations that are applicable to such regions) including but not limited to the EU, California, Brazil, Japan, South Korea, Argentina, and Kenya. As discussed herein, in various embodiments, one or more scan objectives may indicate a particular region (and therefore a particular regulation) to scan for in a particular scanning plan106. Accordingly, scan logic corresponding to these regions would be included in scanning plan106. Alternatively or additionally, scan objectives may indicate a particular region to exclude from a particular scanning plan106(e.g., user110knows that no Brazilian data records are in datastores120). Accordingly, scan logic corresponding to these excluded regions would be left out of scanning plan106. In various embodiments, such inclusions and/or exclusions may be input by user110(seeFIGS.10A and10B).

Identification module214is operable to attempt to determine a particular individual (or entity) to whom sensitive information is associated. In instances where the sensitive information is PII, identification module214is operable to identify the particular person to whom the PII is associated. In various embodiments, identification module214builds a dossier for various particular individuals and references such dossiers to associate a particular record with a particular person. For example, a dossier for John Smith might include his name, address, telephone number, email address, and user account name, and identification module214is operable to associate data records with John Smith based on matches between such data records and the dossier. Additionally, in various embodiments identification module214employs machine learning techniques such as classification algorithms, clustering algorithms, and fuzzy logic to generate approximate matches between sensitive information and a particular individual. An illustrative sketch of such dossier matching is presented following the overview of controller modules230below.

In various embodiments, controller modules230include a scan central planning unit module (SCPU)232, a performance measurement framework module (PMF)234, a quality measurement framework module (QMF)236, a backend service and API gateway module238, and a scanner admin console240.
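As an illustration only, the following is a minimal Python sketch of the dossier-based association performed by identification module214. The record and dossier fields, the exact-match strategy, and the two-field threshold are assumptions made here for brevity; as noted above, practical matching may rely on fuzzy logic and machine learning rather than exact equality.

    from typing import Optional

    def match_dossier(record: dict, dossiers: list) -> Optional[str]:
        # Associate a data record with an individual when enough dossier
        # fields match exactly (illustrative only; real matching may be fuzzy).
        for dossier in dossiers:
            fields = ("name", "email", "phone", "address")
            matches = sum(1 for f in fields
                          if f in record and record.get(f) == dossier.get(f))
            if matches >= 2:  # illustrative threshold: two matching fields
                return dossier["name"]
        return None

    dossiers = [{"name": "John Smith", "email": "jsmith@example.com",
                 "phone": "555-0100", "address": "1 Main St"}]
    print(match_dossier({"email": "jsmith@example.com", "phone": "555-0100"},
                        dossiers))  # -> "John Smith"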
SCPU232is operable to coordinate the various functions of the various processor modules200, controller modules230, and metadata service module260. In various instances, SCPU232is operable to take in inputs from the various processor modules200, controller modules230, and metadata service module260and attempt to balance between scan quality and performance to determine a scanning plan106. In various instances, SCPU232is operable to propose a scanning plan106based on scan objectives input by user110such as a maximum scan duration and a target sampling confidence level. For example, user110may input scan objectives relating to maximum scan duration, number of times to iterate a scan, sampling strategies, focus regions, particular classifiers206to employ (e.g., classifiers for phone numbers, emails, Social Security numbers), selection of datastores120that must be scanned, and various thresholds (e.g., sampling thresholds discussed in connection withFIG.4). From these scan objectives, SCPU232is operable to propose a physical execution plan (e.g., what data records to scan and in what order) for the scanning plan106. In such embodiments, SCPU232is operable to propose one or more scanning plans106that meet such scan objectives and present the one or more proposed scanning plans106to user110for the user110to select. In various embodiments, the various proposed scanning plans106differ in various ways including but not limited to employing different sampling strategies, using different classifiers206, using different scan logic (e.g., omitting scan logic for particular geographic regions, omitting identification scan logic on one or more initial iterations of a scan), and employing different scan priorities (e.g., a hierarchy in which processing resources are allocated between concurrently running scanning plans106). SCPU232is also operable to perform a scan of datastore120using a particular scanning plan106and record the results (e.g., identified data records, metrics for the scanning plan106, etc.). SCPU232is also operable to use the results of prior scans to improve subsequent scans (e.g., by determining to skip a datastore120that has not changed since the last iteration). Such improvements may be rule-based (e.g., if X is above a threshold, then skip datastore120in a subsequent scan) or may be based on machine learning models. The various functions and sub-components of SCPU232are discussed in further detail in reference toFIGS.3and4.

PMF234is operable (a) to estimate scan performance metrics based on a particular proposed scanning plan106and/or (b) to collect scan performance metrics for an ongoing scan that is being performed according to a scanning plan106and to evaluate the scan performance metrics for a completed scanning plan106(or a completed iteration of a repeating scanning plan106). Such scan performance metrics include but are not limited to metadata collection, scan progress statistics, scan health status (e.g., whether a scan has incurred an error and had to be terminated prematurely), system computer resource usage (e.g., degree of parallelism, number of processors used in a single scan iteration and/or in total, size of memories used in a single scan iteration and/or in total), and scan velocity. PMF234is also operable to calculate an estimated scan duration for a proposed scanning plan106based on scanning objectives for the proposed scanning plan106. In various embodiments, PMF234is operable to send estimated and/or collected scan performance metrics to QMF236.
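As a simplified illustration of the kind of scan duration estimate PMF234might produce, the following Python sketch derives an estimated duration from a record count, a sampling rate, and a benchmark scan velocity. The function and parameter names are hypothetical, and the linear model is an assumption made here for brevity rather than a description of the actual estimation logic.

    def estimate_scan_duration(num_records: int,
                               sampling_rate: float,
                               records_per_second: float,
                               parallel_workers: int = 1) -> float:
        # Estimated duration in seconds, assuming (for illustration) a fixed
        # benchmark velocity and work dividing evenly across workers.
        records_to_scan = num_records * sampling_rate
        return records_to_scan / (records_per_second * parallel_workers)

    # Example: 10 million records, 5% sampling, 2,000 records/s, 8 workers
    print(estimate_scan_duration(10_000_000, 0.05, 2_000.0, 8))  # -> 31.25 seconds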
In various embodiments, PMF234is operable to collect scan performance metrics during an ongoing scan and make such scan performance metrics available to visualization module250directly (or via SCPU232) such that visualization module250is operable to generate a visualization of the progress of an ongoing scan (e.g., a visualization showing scan velocity of an ongoing scan, system computer resource usage, and an estimated remaining scan duration). Similarly, PMF234is operable to make scan performance data (or evaluations thereof) of a completed scan (or a completed iteration of a scan) available to visualization module250directly, or via SCPU232, such that visualization module250is operable to generate a visualization of the performance of the completed scan or scan iteration (e.g., an indication of the system computer resource usage, total scan duration, average scan quality, and scan quality statistics). The various functions and sub-components of PMF234are discussed in further detail in reference toFIGS.3and7.

QMF236is operable (a) to estimate scan quality metrics based on a particular proposed scanning plan106and/or (b) to prepare scan quality metrics for an ongoing scan that is being performed according to a scanning plan106and to evaluate scan quality metrics for a completed scanning plan106(or a completed iteration of a repeating scanning plan106). Such scan quality metrics include but are not limited to precision, recall, negative predictive value, sampling confidence, detection confidence, scanning coverage, and scan accuracy (e.g., F1 score). Such scan quality metrics may be aggregated at various levels (e.g., model level, dataset level). In various embodiments, QMF236is operable to receive user confirmation of scan results to establish benchmarks, to receive scan performance metrics from PMF234, and to analyze the scan performance metrics against benchmarks to generate the scan quality metrics. In some embodiments, such benchmarks may also be used to provide estimated scan quality metrics. In various instances, such analysis includes but is not limited to performing various statistical analyses, using machine-learning algorithms to extrapolate scan quality metrics based on prior scan quality metrics, and/or applying scan performance metrics to preexisting models that output scan quality metrics. QMF236is operable to provide the estimated scan quality metrics based on a first set of scan objectives to SCPU232, receive one or more indications of changes to the scan objectives that result in a second set of scan objectives, and then provide estimated scan quality metrics based on the second set of scan objectives.

In various embodiments, QMF236is operable to collect scan quality metrics during an ongoing scan and make such scan quality metrics available to visualization module250directly, or via SCPU232, such that visualization module250is operable to generate a visualization of the progress of an ongoing scan. Similarly, QMF236is operable to make scan performance data (or evaluations thereof) of a completed scan (or a completed iteration of a scan) available to visualization module250directly, or via SCPU232, such that visualization module250is operable to generate a visualization of the performance of the completed scan or scan iteration (e.g., an indication of the actual sampling confidence, actual detection confidence, and accuracy of the completed scan). The various functions and sub-components of QMF236are discussed in further detail in reference toFIGS.3and6.
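The disclosure does not prescribe a particular statistical model for sampling confidence, but as one hedged illustration, a target confidence level can be translated into a required sample size using the standard normal-approximation formula for estimating a proportion, with a finite-population correction. The Python sketch below is an assumption made here for illustration only and is not the method required by QMF236.

    from statistics import NormalDist

    def required_sample_size(population: int,
                             confidence: float = 0.95,
                             margin_of_error: float = 0.05,
                             proportion: float = 0.5) -> int:
        # Standard (Cochran-style) sample size for estimating a proportion,
        # with a finite-population correction; illustrative assumption only.
        z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # ~1.96 at 95%
        n0 = (z ** 2) * proportion * (1 - proportion) / (margin_of_error ** 2)
        n = n0 / (1 + (n0 - 1) / population)
        return min(population, round(n))

    # Example: roughly 384 samples suffice for a one-million-row table at 95% confidence
    print(required_sample_size(1_000_000))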
In various embodiments, backend service and API gateway module238enables access to scanner104via other applications (e.g., user interface102). In various embodiments, backend service and API gateway module238is operable to provide an API to other applications, receive API requests for scanner104to perform a function, and return a result to the other application.

In various embodiments, scanner104includes a scanner admin console240that is operable to provide administrator-level access to scanner104. In various embodiments, administrator-level access enables a user110to manage scanning plans106(e.g., adding, changing, copying, or deleting scanning plans106), configure various aspects of scanner104(e.g., by adding or removing components such as classifiers206, adjusting the API), access logs or error reports generated by various components, monitor the execution of currently-running scanning plans106, etc.

Visualization module250is operable to convert output from controller modules230into representations to present to user110(e.g., on user interface102). In various embodiments, such representations may be visual (e.g., charts, tables, etc.), audible (e.g., computer-generated speech), tactile, or any combination thereof.

Metadata service module260is operable to prepare metadata corresponding to datastores120that are identified for scanning and update such metadata to reflect the results of ongoing and completed scans. Metadata service module260is operable to store metadata about the various scanning plans106including but not limited to information about how many iterations have been performed for a given scanning plan106, how many iterations remain for a given scanning plan, whether various ongoing scanning plans106have encountered errors or other issues, how many datastores120have been ingested, and how many tasks are running. Such information may be made available to various components of scanner104as discussed herein. Metadata service module260is also operable to store metadata related to datastores120. Such metadata for a particular datastore120includes but is not limited to data source types (e.g., type of database such as MySQL or ORACLE), data item types (e.g., string, number, CLOB, BLOB, etc.), number of objects/tables/columns/rows in a particular datastore120, data length/size, and a number of preexisting scanning plans106that scan the particular datastore120. After a particular datastore120has been scanned at least once, information learned from the previous scans may also be recorded with metadata service module260including but not limited to a list of data classes detected in the particular datastore120, a list of regions previously identified for the particular datastore120, a number of linking fields previously identified in the particular datastore120, the resource usage of the previous scan, and the execution duration of the previous scans.

FIG.3is a flowchart illustrating information flows between PCC210, SCPU232, PMF234, and QMF236in accordance with various embodiments. As illustrated by line300, PCC210and SCPU232exchange information to prepare a proposed scanning plan106or to determine how to perform a subsequent scan using a preexisting scanning plan106.
Such information includes but is not limited to determining which classifiers206to use during a scan, adjusting the functions of one or more classifiers (e.g., revising decision trees, adjusting weights of models), and adjusting how samples are taken from datastores120(e.g., by increasing or decreasing sampling rates, by skipping certain portions of datastores120). As discussed herein, a determination of which classifiers206to use during a subsequent scan performed using a particular scanning plan106may be based on prior results (e.g., scan performance metrics and/or scan quality metrics from a previous scan). For example, if SCPU232determines that one or more previous scans of a particular datastore120indicated that a particular type of data record (e.g., images, data records with particular types of information like names and addresses) is more or less prevalent in datastore120than previously expected, the classifiers206used during a subsequent scan may be changed accordingly (e.g., using a classifier206more attuned to image processing, using a classifier206that is more attuned to separating names and addresses that are PII from names and addresses that are not, such as names and addresses of businesses). As a second example, the function of classifiers206and/or the sampling of data records may be adjusted. For example, if SCPU232determines that one or more previous scans of a particular datastore120indicated that a particular type of data record is more or less prevalent in datastore120than previously expected, the sampling rate may be increased (e.g., if there is more PII than expected) or decreased (e.g., if there is less PII than expected). Further, if a portion of a datastore120is unchanged from the previous scan, that portion may be skipped and not sampled at all during a subsequent scan.

As illustrated by line302, PCC210and QMF236exchange information. In particular, PCC210is operable to provide one or more measurements of classification quality to QMF236. Such measurements of classification quality include but are not limited to true positive rate, true negative rate, and false positive rate.

As illustrated by line304, QMF236exchanges information with SCPU232. In particular, QMF236sends scan quality metrics to SCPU232to prepare a proposed scanning plan106or to determine how to perform a subsequent scan using a preexisting scanning plan106. As a first example, estimated scan quality metrics may be sent to SCPU232for SCPU232to use to determine (with or without input from user110) whether scan objectives should be adjusted to meet scan quality and scan duration preferences input by user110. As a second example, the scan quality metrics from one iteration of a scanning plan106may be sent to SCPU232to use to make adjustments for subsequent scans using that scanning plan106(or other scanning plans106).

As illustrated by line306, PCC210and PMF234exchange information. In particular, in various instances PCC210may send scan performance information collected as a result of sampling datastores120and/or applying classifiers206to PMF234.
Such scan performance information may include but is not limited to: metadata collection, scan progress statistics (e.g., what datastores120have been sampled, how much has been sampled, how much sampled data has been successfully classified), scan health status, system computer resource usage by classifiers206, scan velocity, and scan quality statistics.

As illustrated by line308, PMF234exchanges information with QMF236. In particular, PMF234sends various collected scan performance metrics to QMF236for use in the calculation of scan quality metrics as discussed herein in reference toFIG.6.

As illustrated by line310, PMF234exchanges information with SCPU232. In particular, PMF234sends estimated and calculated scan performance metrics to SCPU232including but not limited to providing a scan velocity and/or scan duration estimation to SCPU232for use in forming a proposed scanning plan106and providing an indication to user110of an estimated duration for a particular proposed scanning plan106.

FIG.4is an expanded block diagram of SCPU232in accordance with various embodiments. As defined herein, SCPU232and its various components are implemented as modules. In various embodiments, SCPU232includes a scan plan definition layer module400, an execution engine410, and an intelligent planning engine420. In various embodiments, scan plan definition layer400includes a scan plan manager402, an execution estimation manager404, a scan execution manager406, and an execution monitor408. As discussed herein, scan plan definition layer400is operable to perform the various actions relating to preparing a proposed scanning plan106and to managing and monitoring existing scanning plans106.

Scan plan manager402is operable to perform various actions associated with saving, modifying, and/or deleting information defining various scanning plans. Execution estimation manager404is operable to determine various estimates for the execution of a proposed scanning plan106based on information received from PCC210, PMF234, and/or QMF236such that these estimates may be presented to user110. For example, execution estimation manager404is operable to determine an estimated scan duration for a proposed scanning plan based on scan velocity information received from PMF234and estimated scan quality metrics received from QMF236. Scan execution manager406is operable to assemble the various tasks associated with performing a scan (e.g., when to run which classifier206on which datastore120) into a scanning plan106. Execution monitor408is operable to monitor the health and status of existing scanning plans106(e.g., monitoring whether a scanning plan106has completed all scans and whether a scanning plan106has had to be prematurely terminated).

Execution engine410is operable to execute the various tasks associated with performing the scan according to a scanning plan106at runtime. In various instances, this includes running the scanning tasks performed with classifiers206, regional detection module212, and identification module214.

In various embodiments, intelligent planning engine420includes a results/measurements intake module422, a strategy definition module424, a learning and algorithm module426, and a strategy execution module428.
As discussed herein, intelligent planning engine420is operable to perform the various actions relating to using the results of previous scans (or previous iterations of scans) to inform or improve the performance of a subsequent scan according to a particular scanning plan106. Results/measurements intake module422is operable to receive scan performance metrics from PMF234and scan quality metrics from QMF236. Strategy definition module424is operable to store definitions of various strategies for the execution of scan iterations. For example, strategy definition module424is operable to store the various sampling strategies discussed in connection withFIG.10B. Learning and algorithm module426is operable to adjust the performance of subsequent scans using machine learning techniques based on results of prior scans. For example, a prediction model may be generated from the results of prior scans, an indication of changes made after the prior scans, and the results of subsequent scans. This prediction model may be used to evaluate the results of a scan made with a particular scanning plan106and predict how changes (e.g., changing the classifiers206that are used, changing sampling rates, changing sampling strategies) would affect subsequent scans with the particular scanning plan106. Strategy execution module428is operable to apply the metrics received by results/measurements intake module422to the various strategies maintained by strategy definition module424to compile the logical execution plan for the scan iteration strategy. In various embodiments, the generation of the scan iteration strategy is rule-based (e.g., if X then Y) and/or may be generated using machine-learning techniques (e.g., when conditions A, B, and C, then a machine-learning model indicates D).

As discussed herein, in many instances a particular scanning plan106is iteratively performed. In various instances, SCPU232coordinates such iterative scans. In various embodiments, SCPU232is operable (using strategy definition module424and strategy execution module428) to apply various post-scanning strategies to iterative scans in an effort to reduce the amount of effort spent rescanning data records. For example, SCPU232may employ a first strategy to reduce heavy identification efforts or a second strategy to reduce re-scanning previously-scanned data records:

Example Strategy 1—Reduce Heavy Identification Efforts

In the first example strategy, in which a reduction in heavy identification efforts is prioritized, a particular datastore120(or particular portion of a datastore120) that is included in a particular scanning plan106has been fully or partially scanned (e.g., with sampling). When the scan is iterated, SCPU232uses the prior results to determine if that particular datastore120(or particular portion of a datastore120) has been classified as “None” (i.e., having no sensitive information). If the particular datastore120(or particular portion of a datastore120) has been classified as “None,” SCPU232checks the detection confidence calculated by QMF236. If the detection confidence is above a threshold (e.g., 60%), the particular datastore120(or particular portion of a datastore120) is skipped in the next iteration, but if it is below the threshold, it is re-sampled and scanned in the next iteration.
Conversely, if the particular datastore120(or particular portion of a datastore120) has not been classified as “None,” then SCPU232uses the prior results to determine what percentage of the sensitive information is of a particular type (e.g., the sensitive information is PII that falls under a particular PII governance regulation such as being PII of EU residents). If the percentage is above a first threshold (e.g., 50% of the identified data records correspond to EU residents), then the particular datastore120(or particular portion of a datastore120) is fully scanned to attempt to identify all relevant data records in the next iteration. If the percentage is below a second threshold (e.g., 10% of the identified data records correspond to EU residents), then the particular datastore120(or particular portion of a datastore120) is skipped in the next iteration. If the percentage is between the first threshold and the second threshold, the particular datastore120(or particular portion of a datastore120) is re-sampled and scanned in the next iteration.

Example Strategy 2—Reduce Re-Scanning Efforts

In the second example strategy, in which a reduction of re-scanning efforts is prioritized, a particular datastore120(or particular portion of a datastore120) that is included in a particular scanning plan106has been fully or partially scanned (e.g., with sampling). When the scan is iterated, SCPU232uses the prior results to determine if there has been a schema change for the particular datastore120(or particular portion of a datastore120). If there has been no schema change, SCPU232determines if the particular datastore120(or particular portion of a datastore120) has had a change in its row count. If the row count has changed, the particular datastore120(or particular portion of a datastore120) is rescanned on the next iteration. If the row count has not changed, then SCPU232checks the sampling confidence determined by QMF236. If the sampling confidence is above a threshold, the particular datastore120(or particular portion of a datastore120) is skipped in the next iteration, but if the sampling confidence is below the threshold, the particular datastore120(or particular portion of a datastore120) is rescanned in the next iteration.

Conversely, if the schema (e.g., data structure) for the particular datastore120(or particular portion of a datastore120) has changed, SCPU232determines if the particular datastore120(or particular portion of a datastore120) also has had a change in its row count. If the row count has changed, the particular datastore120(or particular portion of a datastore120) is rescanned on the next iteration. If the row count has not changed, then SCPU232checks the sampling confidence determined by QMF236. If the sampling confidence is above a threshold, SCPU232determines that a new column has been added to the particular datastore120(or particular portion of a datastore120), adjusts metadata about the particular datastore120accordingly, and rescans the particular datastore120on the next iteration using the updated metadata. If the sampling confidence is below the threshold, the particular datastore120(or particular portion of a datastore120) is simply rescanned in the next iteration.

As discussed herein, SCPU232is operable to act as the “brain” of scanner104, coordinating the various components of scanner104to propose and perform scanning plans106.
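As a minimal sketch of how post-scanning strategies such as Example Strategy 1 might be expressed in code, the following Python function mirrors the threshold logic described above. The input dictionary, field names, and default threshold values are illustrative assumptions rather than requirements of this disclosure.

    def next_iteration_action(prior: dict, detect_thresh: float = 0.60,
                              full_scan_thresh: float = 0.50,
                              skip_thresh: float = 0.10) -> str:
        # Decide how to treat a datastore (or portion thereof) on the next
        # iteration, following the threshold logic of Example Strategy 1.
        if prior["classification"] == "None":  # no sensitive information found
            if prior["detection_confidence"] >= detect_thresh:
                return "skip"
            return "resample"
        # Sensitive information was found; branch on the regional percentage
        pct = prior["regional_percentage"]  # e.g., share of records that are EU PII
        if pct >= full_scan_thresh:
            return "full_scan"
        if pct <= skip_thresh:
            return "skip"
        return "resample"

    # Example: 30% of identified records fall under the target regulation
    print(next_iteration_action({"classification": "PII",
                                 "detection_confidence": 0.8,
                                 "regional_percentage": 0.30}))  # -> "resample"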
As discussed in further detail in reference toFIG.8, SCPU232receives indications of scanning scope, receives metadata for the one or more datastores120and results from prior scans, and determines scan objectives based on input from user110and from its own determinations based on the results from prior scans. SCPU232is also operable to receive information from PMF234that is indicative of resource availability, resource usage, resource management, and scanning performance estimation; receive information from PCC210indicative of the classifiers206selected for a scan and of the execution of classifications; and receive from QMF236information indicative of confidence levels (e.g., scanning confidence, sampling confidence) and of accuracy of a scan. Using this received information, SCPU232is operable to present information to user110as discussed herein, to execute scans, and to present results of scans to user110in various reports.

FIG.5is an expanded block diagram of PCC210in accordance with various embodiments. As defined herein, PCC210and its various components are implemented as modules. In various embodiments, PCC210receives a plurality of classifier modules206using a standardized classifier integration interface layer500. In various embodiments, PCC210also includes a runtime execution management layer510.

In various embodiments, standardized classifier integration interface layer500is operable to provide a software interface for any number of various classifiers206, and includes a model interface502, an input data interface504, and a results interface506. In various embodiments, model interface502provides an interface for models provided by classifiers206, input data interface504facilitates passing information from data records from datastores120(e.g., data records that have been sampled for classification) to the models provided by classifiers206, and results interface506facilitates receiving results determined by the models provided by classifiers206from the input data.

In various embodiments, runtime execution management layer510includes a classifier selection controller512, a model data controller514, a classification execution controller516, a data manager518, a parallel controller520, and a resource controller522. Classifier selection controller512is operable to determine one or more classifiers206to apply to the datastores120during an initial scan. Such a determination may be based, for example, on metadata corresponding to datastore120and scan objectives input by user110in various instances. Classifier selection controller512is also able to determine one or more classifiers206to apply to the datastores120during an iteration of a scan in various embodiments. As discussed herein, if the results of a previous scan indicate that applying additional or alternative classifiers in the next iteration of the scan would improve scan performance, classifier selection controller512is operable to determine which classifiers206to apply. For example, if a first scan indicates the presence of name and address data in a datastore120, then classifier selection controller512may replace the previously-used classifiers206with a classifier206attuned to recognition of sensitive data in data records including name and address information.
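A minimal Python sketch of the kind of rule-based re-selection just described follows. The registry structure, classifier names, and result fields are hypothetical names introduced here for illustration only.

    def select_classifiers(prior_results: dict, registry: dict) -> list:
        # Choose classifier names for the next iteration based on the data
        # classes observed in the prior scan (illustrative rule-based logic).
        selected = list(registry["default"])
        if prior_results.get("image_records_found"):
            selected.append("image_classifier")
        if prior_results.get("name_address_found"):
            # Swap in a classifier attuned to separating personal names and
            # addresses from business names and addresses, as described above
            selected = [c for c in selected if c != "generic_text_classifier"]
            selected.append("name_address_pii_classifier")
        return selected

    registry = {"default": ["generic_text_classifier", "number_pattern_classifier"]}
    print(select_classifiers({"name_address_found": True}, registry))
    # -> ['number_pattern_classifier', 'name_address_pii_classifier']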
Further, the determination of classifiers206to apply in a subsequent iteration of a scan may also be based on input from user110(e.g., a selection of a particular classifier206to apply in a subsequent scan, a selection of a user command to increase detection confidence at the cost of increased scan duration on the next iteration, etc.).

FIG.6is an expanded block diagram of QMF236in accordance with various embodiments. As defined herein, QMF236and its various components are implemented as modules. In various embodiments, QMF236includes a quality metrics producer module600, a dataset manager module610, and a validation manager module630. As discussed herein, QMF236is operable to receive scan performance metrics from PMF234and classification quality measurements from PCC210. In various instances, QMF236is operable to calculate various scan quality metrics based on the received scan performance metrics and/or classification quality measurements.

In various embodiments, quality metrics producer module600includes a sampling confidence calculator602, a detection confidence calculator604, a quality metrics generator606, and a quality metrics aggregator608. Sampling confidence calculator602and detection confidence calculator604are useable to calculate, respectively, the sampling confidence level (i.e., the confidence level of the sampling technique used to take samples from datastores120, which may indicate whether the sampling has provided enough information to be considered representative of the larger dataset) and the detection confidence level (i.e., the confidence level of the detection performed by classifiers206; for example, if a classifier206has determined that a column contains phone numbers, how confident classifier206is of that classification) for a particular iteration of a scan and/or multiple iterations of the scan.

In various embodiments, quality metrics generator606is operable to generate various scan quality metrics including but not limited to negative predictive value (NPV), positive predictive value (PPV, also referred to as precision), recall, and F1-score (also referred to as accuracy). NPV is useable to measure a confidence level of completeness of detection for the particular type of target information (e.g., PII). NPV is useable by SCPU232and user110to evaluate a likelihood that regulations (e.g., a PII governance regulation) are complied with and to guard against false negatives. PPV is useable to measure the cost-efficiency (in terms of computer system resources and time spent scanning) of protecting the particular type of target information. PPV is useable by SCPU232and user110to determine how much of the data protection efforts are being spent to protect target information and to guard against false positives. Recall is useable to measure what percentage of the actually positive data records (i.e., data records that include the particular type of target information) were correctly identified by the classifiers206used. This metric, for example, can be used by SCPU232to determine whether to remove a low-performing classifier206from a subsequent iteration of a particular scanning plan106. F-1 score represents the harmonic mean of recall and precision, and is useable to evaluate the accuracy of a particular scanning plan106. As discussed herein, quality metrics generator606is operable to calculate these metrics using the results of a scan (or an iteration of a scan) compared with the labeled data sources provided by end-users.
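The scan quality metrics named above follow their standard confusion-matrix definitions, which a short Python sketch can make concrete; the function name is illustrative.

    def scan_quality_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
        # Standard confusion-matrix metrics as described above.
        ppv = tp / (tp + fp)      # precision: guards against false positives
        npv = tn / (tn + fn)      # confidence in completeness of detection
        recall = tp / (tp + fn)   # share of actual positives correctly found
        f1 = 2 * ppv * recall / (ppv + recall)  # harmonic mean of PPV and recall
        return {"PPV": ppv, "NPV": npv, "recall": recall, "F1": f1}

    # Example: 90 true positives, 10 false positives, 880 true negatives, 20 false negatives
    print(scan_quality_metrics(90, 10, 880, 20))  # F1 is approximately 0.857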
Quality metrics aggregator608is operable to aggregate scan quality metrics calculated at a first level of analysis into a second, different level of analysis. For example, F1 score, precision, recall, and NPV calculated from a particular set of data records can be aggregated into analyses of an entire datastore120at a macro level or into analyses across an entire model (e.g., a model in a classifier206). Moreover, macro-level data may be divided into more micro-level analyses such as by data records sharing one or more common characteristics (e.g., data records indicative of shared periods of time).

In various embodiments, dataset manager module610includes a label dataset sampler612, a label results manager614, a benchmark manager616, a metadata manager618, and a formula manager620. Label dataset sampler612is operable to enable end-users (e.g., users110) to randomly sample portions of scan results (e.g., a record that has been flagged as containing target information, and in particular Jane Doe's phone number) and to perform manual confirmation of the sampled scan results. Label results manager614is operable to coordinate the reception of user-confirmed scanning results (e.g., confirming whether the sampled scan result does or does not include target information) to enable the generation of the actual scan quality metrics. In various instances, label results manager614receives user confirmations, maps the user confirmations to the portions of scan results sampled by label dataset sampler612, and stores the results. Benchmark manager616is operable to maintain benchmark scan quality metrics useable to evaluate scan quality metrics calculated by QMF236. Such benchmarks are also usable to estimate a target scan quality level when a scanning plan106is being defined. Metadata manager618is operable to manage metadata used by QMF236including but not limited to metadata for data records in datastore120, metadata for the sampled portions of scan results, metadata for user-confirmed scanning results, and metadata for calculated scan quality metrics. Formula manager620is operable to receive and maintain the formulas useable to calculate scan quality metrics (e.g., the formula for F1 score). In various instances, additional formulas may be added to formula manager620, enabling additional scan quality metrics to be calculated.

Validation manager module630includes end-user labeling module632. Validation manager module630is operable to facilitate user confirmation of sampled scan results (e.g., a sample generated by label dataset sampler612), with end-user labeling module632operable to present samples to the user via user interface102and to receive user input indicative of user confirmations.

FIG.7is an expanded block diagram of PMF234in accordance with various embodiments. As defined herein, PMF234and its various components are implemented as modules. In various embodiments, PMF234includes a performance estimator module700, a performance calculator module710, a performance metrics collection layer720, and an operator layer740. In various embodiments, performance estimator module700includes one or more scan speed benchmarks702, a scan speed estimator module704, an iteration duration estimator706, and a resource manager708. In various embodiments, performance estimator module700receives information from PCC210about the resources used by the classifiers206that are selected to run in a scan.
Scan speed estimator module704and iteration duration estimator706are operable to use this information, along with the scan speed benchmarks702and an indication of the available resources for the scan generated by resource manager708, to generate estimations of scan velocity for the scan and a total duration for the scan (or an iteration of the scan) according to a particular scanning plan106. As used herein, “scan velocity” refers to one or more performance metrics of a scan per unit of time (e.g., data records scanned per minute, bytes scanned per second, etc.).

Performance metrics collection layer720is operable to collect metrics indicative of an ongoing or completed scan conducted according to a particular scanning plan106. In various embodiments, performance metrics collection layer720includes: (a) a metadata collection module722that collects metadata generated during the scan; (b) a scan progress statistical collection module724that collects information indicative of how much of the scan has been performed and how much of the datastores120to be scanned have been scanned; (c) a scan health status collection module726which monitors the execution of the scan for errors, faults, or other runtime issues; (d) a system resources usage collection module728that interfaces with operator layer740to log information indicative of the amount of system resources (e.g., computer processor cycles, size of memory) used during a scan; and (e) a scan quality statistical collection module730that is operable to collect information indicative of the true positive rate, true negative rate, false positive rate, and false negative rate of the scan that are useable by QMF236to prepare scan quality metrics (e.g., F1, precision, etc.).

Performance calculator module710is operable to use the information collected by performance metrics collection layer720to calculate scan performance metrics including but not limited to scan progress statistics, scan health status (e.g., whether a scan has incurred an error and had to be terminated prematurely), system computer resource usage (e.g., degree of parallelism, number of processors used in a single scan iteration and/or in total, size of memories used in a single scan iteration and/or in total), and scan velocity.

Operator layer740is operable to monitor computer system100as various operations are performed during a scan. In various embodiments, operator layer740includes a plurality of operator packages750that include various modules running along with a particular operator756. In various embodiments, operator packages750include a memory/CPU usage module752, a pre-collection module754, and a post-collection module758. As shown inFIG.7, any number of operator packages750may be present (e.g., operator package750A, operator package750n) to run any number of various modules (e.g., memory/CPU usage module752A, memory/CPU usage module752n) that operate along with any number of operators756(e.g., operator756A, operator756n). As discussed herein, various tasks are performed during a scan. These tasks are represented as operators756. For example, if a particular classifier206is being run, various operations are performed to attempt to classify data records as discussed herein. In various embodiments, operator packages750are generated dynamically as operations are performed. As the operator756runs, memory/CPU usage module752is operable to capture system resource usage including but not limited to memory usage and CPU usage incurred by operator756in various embodiments.
In various embodiments, pre-collection module754and post-collection module758are operable to collect snapshots of computer system100just prior to and just after operator756runs, respectively, including but not limited to the data records taken as inputs and data records produced as outputs. Accordingly, the various modules752,754,758gather information relating to the memory usage of operator756, the CPU usage of operator756, information about the data records accessed by operator756as inputs, and information about the data records produced by operator756as outputs. This information is reported to performance metrics collection layer720. Using this information, PMF234is operable to understand, on a per-operation level, the resource usage of various operations and to understand that, with a given amount of computer system resources, a certain number of operations can be processed. In various instances, this information is usable to estimate the amount of time needed to perform various operations given the available resources. Further, this information can be used for resource planning to determine whether to speed up a scan by making additional resources available or to slow down a scan by allocating some resources to other processes. Further, the framework provided by operator packages750is flexible such that additional data-collection modules can be added per operator756to gather additional information as desired.

In various embodiments in which the target information for a scan is PII, PMF234and QMF236are operable to generate a set of scan performance metrics and scan quality metrics usable by user110to evaluate the efficacy and efficiency of a PII scan performed according to a particular scanning plan106. These metrics include metrics that are indicative of the detection and identification of sensitive information and metrics that are indicative of the operation and administration of scans.

Detection and Identification Metrics

In various embodiments, detection and identification metrics include two categories: (a) asset level privacy measurements and (b) privacy and individual measurements. Asset level privacy measurements include asset volume analyses and privacy data analyses in various embodiments. Any of these measurements may be broken down into separate metrics for structured, unstructured, and image data records. Asset level privacy metrics are indicative of the number of individual environments, databases, tables, records, columns, files, etc. in datastore120to be scanned. Asset level privacy metrics are usable to show the overall scope of data assets that will be scanned. Privacy data analyses are indicative of a number of types of PII detected (e.g., names, addresses, and email addresses are separate PII types), a number of data objects by privacy types and distribution at different levels of datastores120(e.g., levels of a hierarchical data structure stored in datastore120), and a percentage of columns or objects with PII relative to overall data at different levels of datastores120. In various instances, PII may be detected at a high level of granularity, with some tables, columns, or rows identified as containing PII but other tables, columns, or rows being identified as not having PII. In various instances in which unstructured data is scanned for PII, PII might be detected within a specific folder, a specific file, or even a specific portion of a specific file according to some embodiments.
Privacy data analyses are usable to show a high-level number of privacy data detected, a distribution of privacy data, and a percentage of overall data that includes PII. Privacy and individual measurements include privacy data object regional ratio analyses, individual regional ratio analyses, and individual privacy profiling analyses in various embodiments. Any of these measurements may be broken down into separate metrics for structured, unstructured, and image data records. Privacy data object regional ratio analyses are indicative of a number of data objects including PII broken down by region and by percentage (e.g., 50% EU PII, 25% California PII, 25% South Korea PII) and a region ratio distribution by ratio range. Individual regional ratio analyses are indicative of the number of identified individuals by region and by percentage (e.g., 100 EU individuals, or 20%, and 50 California individuals, or 10%, in a corpus of 500 total individuals). Individual privacy profiling analyses are useable to show a number of individuals broken down by different types of PII detected and different percentages of individuals associated with different types of PII, and a number of individuals broken down into segments by the number of data objects that include PII corresponding to the various individuals (e.g., John Smith is associated with 100 data objects, Jane Johnson is associated with 200 data objects).

Operation and Administration Metrics

In various embodiments, operation and administration metrics include four categories: (a) scanning health, progress and coverage measurements, (b) scanning quality performance and confidence measurements, (c) cost efficiency analysis measurements, and (d) system health monitoring measurements.

Scanning health, progress and coverage measurements include scanning health monitoring, scanning progress reports, and scanning coverage analyses in various embodiments. Any of these measurements may be broken down into separate metrics for structured, unstructured, and image data records. Scanning health monitoring is indicative of a total number of scanning jobs, a number of successfully completed scanning jobs, a number of failed scanning jobs, and a success rate. In various embodiments, scanning health monitoring is also indicative of scanning jobs that have been running longer than the historical average. Scanning progress reports are indicative of a number of finished scanning iterations broken down by individual scanning plan106and the percentage of the overall scan that has been completed (e.g., a scanning plan106for which 50% of the iterations have been completed). Scanning progress reports are also indicative of the total number of active scanning plans106, finished scanning plans106, and disabled scanning plans106in various embodiments. Scanning coverage analyses are indicative of the number of data sources scanned, the number of scanned data objects at different levels as a percentage of the target for that level, and the percentage of scanned data objects in an overall data volume or datastore120.

In various embodiments, scanning quality performance and confidence measurements include scanning quality performance reports and scanning confidence analyses. Scanning quality performance reports can be broken down by classifier quality performance and dataset scanning quality performance, both of which may be further broken down into separate metrics for structured, unstructured, and image data records.
Classifier quality performance is indicative of NPV, PPV, recall, and F-1 score broken down by individual classifier206. Dataset scanning quality is indicative of NPV, PPV, recall, and F-1 score broken down by individual datastore120(or portions of datastore120). Scanning confidence analysis is only applicable to structured data records in various embodiments. Scanning confidence analysis is indicative of a percentage and distribution by range of confidence level and a percentage of high/middle/low confidence level data objects. Scanning confidence analysis is useable to show the overall scanning confidence distribution for all classifiers206for structured data records and to identify scanning gaps from the confidence level distribution.

In various embodiments, cost efficiency analysis is indicative of a number of tables that are fully scanned, sampling scanned, or not scanned; reports indicative of estimated scanning durations for scanning plans106; a list of a number of longest-running jobs broken down by classifier category (e.g., classifiers for structured, unstructured, and image data records); and a list of a number of jobs that have increased in duration broken down by classifier category (e.g., classifiers for structured, unstructured, and image data records).

In various embodiments, system health monitoring measurements are indicative of an accounting of computer system resources including a number of total computer systems, active computer systems, and inactive computer systems; a number of total services, active services, and inactive services; a number of live datastores120; system resource usage percentages (CPU/memory/disk, etc.); and a number of API calls broken down by service and period of time. Accordingly, system health monitoring measurements are usable to show the overall environmental health of computer system100and datastores120.

As defined herein, scanner104is implemented as a platform and the various components shown inFIGS.2-7are implemented as modules. The separation of the various components into discrete modules inFIG.2is non-limiting and is merely for the purposes of discussion: various modules represented as separate components could be implemented together as one module, for example. Moreover, the operations performed by a particular module may be further divided into various sub-modules.

FIG.8is a flowchart illustrating an embodiment of a scanning plan generation and implementation method800in accordance with various embodiments. In the embodiment shown inFIG.8, the various actions associated with method800are implemented by scanner104. At block802, a user110defines the scope of the scanning plan106including indicating one or more datastores120to be scanned. In various instances, datastores120are indicated by name, IP address, or network location. In various embodiments, user110selects the datastores120to be scanned as shown inFIG.9.

At block804, SCPU232proposes one or more classifiers206to apply during the proposed scanning plan106(or a subsequent iteration of a particular scanning plan106). In various instances, the classifiers206to apply during the scan are determined based on information from PCC210about the classifiers206that are installed and available for use. In various embodiments, the determination of which classifiers206to apply during a scan is based on metadata about datastores120to be scanned.
Such metadata includes but is not limited to the types of data sources in datastores120(e.g., tables, databases, etc.); the data item types in datastores120(e.g., string, number, CLOB, BLOB, etc.); a number of data objects, tables, columns, and/or rows to be scanned; a length and size of the data to be scanned; and a number of preexisting scanning plans106that scan the data to be scanned. In some embodiments where the scan is an iteration of a previously-performed scanning plan106, the determination of which classifiers206to apply during a scan is also based on results from one or more prior scans including but not limited to a list of data classes detected during the previous scan(s); a number of different regions detected (e.g., different regions covered by different PII regulations); a number of linking fields detected; the computer system resource usage of the prior scan(s); and the execution durations of the prior scan(s).

At block806, scan objectives are defined. In some instances, one or more scan objectives are input by user110via user interface102as shown inFIGS.10A and10B. In some instances, scan objectives are also defined based on the results from one or more prior scans including but not limited to a list of data classes detected during the previous scan(s); a number of different regions detected (e.g., different regions covered by different PII regulations); a number of linking fields detected; the computer system resource usage of the prior scan(s); and the execution durations of the prior scan(s).

At block808, PMF234collects system-wide parameters that are available as resources to be used to perform the scan and sends this information to SCPU232. This information about available resources includes but is not limited to the current system workload (e.g., the number of scan jobs currently running and/or scheduled to run when the scan is scheduled to run), the available computer system resources (e.g., number of CPUs/GPUs, total size of memories, number and specifications of available nodes, number of idle CPUs/GPUs in each node in upcoming time periods, and a size of free memories in each node in upcoming time periods), and environmental parameters (e.g., the install state of scanner104(cluster versus standalone), available time windows for data ingestion, and the size of the driver dataset (e.g., a dataset of individual dossiers with names, addresses, etc. of individuals used to facilitate identification of a particular person with identified PII)). In various embodiments in which scanner104is implemented using a distributed framework, collected system-wide parameters may also include a number and specifications of worker nodes, and external distributed resource management computer environment specifications.

At block810, SCPU232receives estimates for the scan performance metrics and scan quality metrics of the proposed scanning plan106(or the iteration of an existing scanning plan106) including the accuracy of the scan, scan coverage of the scan, sampling confidence of the scan, detection confidence of the scan, and duration of the scan based on the currently-selected scan objectives. In some embodiments, SCPU232receives estimates for the scan performance metrics and scan quality metrics of the proposed scanning plan106based on one or more different sets of scan objectives as well.
In various embodiments, user110is presented (e.g., via user interface102) with indications of estimated scan performance metrics (including scan duration) and estimated scan quality metrics for the currently-selected scan objectives and, in various instances, other alternative scan objectives as shown inFIGS.10A and10B. In various embodiments, user110is presented with a visual representation of the estimated scan performance metrics and scan quality metrics in a graph such as the radar graphs shown inFIGS.10A and10B. At block812, user110and/or SCPU232adjust the scan objectives based on the estimated scan performance metrics and/or scan quality metrics. For example, user110may determine that the estimated scan duration is too long or the scan coverage is too small to meet the user's needs and may change scan objectives accordingly. Alternatively, if user110has already set parameters such as a maximum scan duration, maximum number of iterations, and/or minimum scan quality (e.g., minimum scan coverage, minimum detection confidence level), then SCPU232may adjust scan objectives to balance scan quality with scan performance. At block814, the proposed scanning plan106(or the iteration of an existing scanning plan106) is finalized, and the scan is initiated. In some instances, the scan is performed according to the original scan objectives determined at block806, but in other instances the scan is performed according to the modified scan objectives determined at block812. In various instances, the proposed scanning plan106(or the iteration of an existing scanning plan106) is initiated in response to a command from user110. In some embodiments, user110inputs a schedule for the start time and number of iterations for the scanning plan106as shown inFIG.11. When the scan has been finalized, the user is presented with a success screen as shown inFIG.12, and the scan is performed as requested. As discussed herein, the results of a scan are recorded and fed back into the scan planning process at block804to affect the planning and execution of subsequent scans. FIGS.9,10A,10B,11, and12are screenshots of visual information presented on user interface102during an example process of preparing a scanning plan106in accordance with various embodiments. As discussed herein, user interface102is operable to present information (e.g., visual information, audible information) to user110and receive input from user110to facilitate the preparation of a scanning plan106. Referring now toFIG.9, a selection screen900usable by user110to select one or more datastores120is shown according to various embodiments. Screen900includes a progress indicator902that represents progress through the process of preparing a scanning plan106. InFIG.9, progress indicator902shows that the process is at the first step, which corresponds to block802ofFIG.8. Screen900includes a selection region910. Selection region910includes at least two sub-regions, data source selection region912and datastore selection region914. In data source selection region912, user110is provided with a list of data sources (e.g., physical memories or data storage clouds) on which datastores120are provisioned. After selecting various data sources, user110is provided with a list of datastores120corresponding to the selected data sources that may be scanned during the scanning plan106in datastore selection region914. A user may select some or all of available datastores120and select "Next" to proceed to the next step.
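For purposes of illustration only, a radar graph of the kind shown inFIGS.10A and10Bcan be rendered with a standard plotting library. The following Python sketch uses matplotlib; the metric values are invented placeholders rather than values taken from the figures:

import numpy as np
import matplotlib.pyplot as plt

metrics = ["F-1 score", "Detection coverage", "Sampling confidence",
           "Detection confidence", "Duration"]
auto_mode = [0.90, 0.80, 0.99, 0.85, 0.40]        # placeholder values
small_business8 = [0.95, 1.00, 1.00, 0.95, 0.90]  # placeholder values

angles = np.linspace(0, 2 * np.pi, len(metrics), endpoint=False).tolist()
angles += angles[:1]  # repeat the first angle to close the polygon

fig, ax = plt.subplots(subplot_kw={"polar": True})
for label, values in [("auto mode", auto_mode),
                      ("Small Business8", small_business8)]:
    closed = values + values[:1]
    ax.plot(angles, closed, label=label)
    ax.fill(angles, closed, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(metrics)
ax.legend(loc="lower right")
plt.show()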
Referring now toFIG.10A, a scan objective selection screen1000is shown in accordance with various embodiments. Scan objective selection screen1000is useable by user110to input one or more scan objectives and to compare the estimated performance of various alternative scanning plans106having different sets of scan objectives. Progress indicator902now shows that the process is in the "Scan Objective" phase, which corresponds with blocks806,808,810, and812ofFIG.8. Screen1000includes a scan objective set selection region1002, an objective compare performance radar1010, and an objective compare table1020. In various embodiments, scan objective set selection region1002allows user110to add one or more sets of scan objectives to objective compare table1020and/or to add the sets of scan objectives to the scanning plan106by selecting the particular set of scan objectives. In various embodiments, user110is able to select from among customized sets1004and/or built-in sets1006of scan objectives. In various embodiments, built-in sets1006include a "fast touch" set that is useable to detect target information (e.g., privacy data, PII) without determining regionality, an "auto mode" set that is useable to scan for target information and determine regionality on samples from selected datastore120, and a "full scale" set that is useable to scan all of the selected datastores120. Objective compare performance radar1010includes a visual representation of estimated scan quality metrics and/or estimated scan performance metrics for various selected sets of scan objectives. In some embodiments, this visual representation is a radar graph. In the embodiment shown, the radar graph includes representations of F-1 score, detection coverage, sampling confidence, detection confidence, and duration with various sets of scan objectives plotted on the radar graph. For example, plot1012represents the "auto mode" set of scan objectives and plot1014represents the "Small Business8" set of scan objectives. As can be seen quickly by inspecting the radar graph, comparing plot1012to plot1014indicates that the "auto mode" set of scan objectives results in a shorter duration than the "Small Business8" set of scan objectives. Objective compare table1020includes a list of selected sets of scan objectives for comparison with additional details1022. Additional details1022include the target sampling confidence levels, selected data categories for scanning (e.g., image, unstructured, structured, or a combination), scan priority, and iteration duration for the various sets of scan objectives. Additional details1022also indicate whether region detection is enabled or disabled and whether the generation of an account privacy profile is enabled for the sets of scan objectives. In various embodiments, account privacy profiles are pointers to various identified PII that can be used for fast location in a subsequent access (e.g., an access to delete a data record with PII). While each of the iteration durations is shown as "0 Hours" in the example shown inFIG.10A, this is because this screen shot is merely an example. In various instances, the iteration duration for the fast touch set of scan objectives would be shorter than the iteration duration for auto mode, and the iteration duration for "Small Business8" may be the longest. In various instances, the sampling confidence level and iteration duration are correlated such that when the sampling confidence level increases, the iteration duration also increases.
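The correlation between the target sampling confidence level and the iteration duration can be illustrated with a standard sample-size calculation. The disclosure does not specify the statistics used by scanner104; purely as an illustrative assumption, the following Python sketch applies the conventional formula for estimating a proportion within a margin of error:

from statistics import NormalDist

def sample_size(confidence, margin=0.01, p=0.5):
    # Rows to sample so a proportion estimate is within `margin`
    # of the true value at the given confidence level.
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided critical value
    return round(z * z * p * (1 - p) / (margin * margin))

for conf in (0.50, 0.95, 0.99):
    print(f"{conf:.0%} confidence -> sample about {sample_size(conf)} rows")

Because the required sample grows rapidly with the confidence level (roughly 1,137 rows at 50% versus roughly 16,587 rows at 99% under these assumptions), a higher target sampling confidence implies more rows to scan and therefore a longer iteration duration.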
Referring now toFIG.10B, scan objective customization screen1040is shown in accordance with various embodiments. Scan objective customization screen1040is useable by user110to fine-tune a custom set of scan objectives. Screen1040includes an objective compare performance radar1010that is operable to compare the previously-saved version of the set of scan objectives to an estimate reflecting the current revisions. Sampling confidence level selection region1042enables user110to select between commonly-used target sampling confidence levels (e.g., 50%, 99%, 100%) or enter a custom target sampling confidence level. Data category selection region1044enables user110to select whether to scan structured, unstructured, and/or image data in the selected datastores120. Scan priority selection region1046enables user110to indicate the priority that computer system100would use to process this set of scan objectives (i.e., a higher priority scan is allocated resources over lower priority scans). Account privacy profile selection region1048enables user110to select whether the set of scan objectives will generate one or more account privacy profiles. Region detection selection region1050enables user110to enable or disable region detection. Region1050also enables user110to manually input regions that user110believes may be present in selected datastore120such that scanner104focuses on these regions. In some embodiments, user110is able to exclude particular regions from the scan (e.g., including the EU but excluding South Korea). Sampling strategy selection region1060enables user110to select various sampling strategies to apply during scans performed according to the set of scan objectives. Such sampling strategies include but are not limited to: strategy1062(do not scan empty tables), strategy1064(full scan for tables for which the table size is smaller than the sampling size), strategies1066(apply the selected sampling method to small, midsize, large, or huge tables), and strategy1068(refine new iterations based on source schema changes, source row count changes, etc.). Referring now toFIG.11, a scheduling screen1100is shown. Progress indicator902now shows that the process is in the "Schedule" phase, which corresponds with block814ofFIG.8. Region1102is useable by user110to enter a start time and number of iterations for the scanning plan. Region1102also includes an indication of the projected completion of detection, and (when enabled) the projected completion of the account privacy profile. Region1104includes a graphical representation of various portions of each iteration including data ingress, detection, and account privacy profile generation portions. Referring now toFIG.12, scan planning process complete screen1200indicates that the scanning plan106has been completed and will proceed as scheduled. FIGS.13and14illustrate various flowcharts representing various disclosed methods implemented with computer system100. Referring now toFIG.13, a flowchart depicting a scanning plan generation and implementation method1300is depicted. In the embodiment shown inFIG.13, the various actions associated with method1300are implemented by computer system100. At block1302, computer system100receives indications of one or more datastores120to be scanned for a particular type of information during a first scan. At block1304, computer system100determines one or more classifiers206to apply to the one or more datastores120during the first scan to identify the particular type of information (e.g., PII).
At block1306, computer system100determines a first plurality of scan objectives for the first scan, wherein the first plurality of scan objectives include a target sampling confidence level for the first scan and one or more sampling strategies for the first scan. At block1308, computer system100determines available computer resources to perform the first scan. At block1310, computer system100estimates one or more scan quality metrics and an estimated execution duration for the first scan based on the scan objectives and the available computer resources. At block1312, computer system100presents, to user110, indications of the one or more estimated scan quality metrics and estimated execution duration for the first scan. At block1314, in response to one or more commands from user110, computer system100performs the first scan. Referring now toFIG.14, a flowchart depicting a scanning plan generation method1400is depicted. In the embodiment shown inFIG.14, the various actions associated with method1400are implemented by computer system100. At block1402, computer system100prepares a personally identifiable information (PII) scanning plan106. At block1404, computer system100determines classifiers206for use in the PII scanning plan106. At block1406, computer system100determines scan objectives for the PII scanning plan106. At block1408, computer system100calculates one or more estimated performance metrics of the PII scanning plan106. At block1410, computer system100calculates one or more estimated quality metrics of the PII scanning plan106. At block1412, computer system100presents, to user110, estimated results of the PII scanning plan106based on the classifiers, scan objectives, estimated performance metrics, and estimated quality metrics. In an example according to the disclosed techniques, user110(who may be a data protection compliance manager for a plurality of datastores120) desires to scan some of these datastores120. A first datastore120includes unstructured logs of customer service interactions in which users may have disclosed PII. A second datastore120includes scans of driver's licenses. A third datastore120includes names and addresses and a plurality of tables titled "cust." User110selects all three datastores120and commences to input scan objectives. User110observes that the fast touch scan is faster but user110determines that he would also like to have region detection enabled. Accordingly, user110selects the auto mode scan, and chooses to perform 10 iterations at an estimated 4 hours each. As the auto mode scan is performed, scanner104stores results. After a few iterations, scanner104has determined that the tables titled "cust" are customer record tables that are full of PII. Due to the high percentage of PII in these tables, scanner104adjusts the sampling strategy on subsequent scans to fully scan each of these tables. Additionally, scanner104determines that the first datastore120includes many volumes that do not include any PII. Based on the 99% target sampling confidence of the auto mode, scanner104determines that these volumes without PII can be skipped in subsequent scans. Accordingly, in various instances subsequent iterations have a shorter duration than prior iterations as scanner104learns more about the information in the three datastores120. After the scan iterations have been completed, user110elects to perform the scans every month going forward to capture subsequently recorded PII.
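For purposes of illustration only, the iteration-refinement behavior in this example can be sketched in Python as follows; the table names, PII-rate thresholds, and strategy labels are hypothetical assumptions rather than values from the disclosure:

def refine_strategies(prior_results):
    # prior_results maps each scanned object to the fraction of its
    # sampled rows that contained PII during the previous iteration.
    plan = {}
    for table, pii_rate in prior_results.items():
        if pii_rate == 0.0:
            plan[table] = "skip"       # e.g., volumes in which no PII was found
        elif pii_rate > 0.5:
            plan[table] = "full-scan"  # e.g., the "cust" tables full of PII
        else:
            plan[table] = "sample"     # keep sampling at the target confidence
    return plan

print(refine_strategies({"cust_01": 0.92, "service_logs": 0.12, "tmp_vol": 0.0}))
# {'cust_01': 'full-scan', 'service_logs': 'sample', 'tmp_vol': 'skip'}

Pruning objects whose samples contained no PII and promoting PII-dense tables to full scans is consistent with the shortening iteration durations described in this example.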
The user110also receives a report indicative of the results of the scan broken down by type of PII, region, and number of individuals. Exemplary Computer System Turning now toFIG.15, a block diagram of an exemplary computer system1500, which may implement the various components of computer system100(e.g., user interface102, scanner104, datastore120) is depicted. Computer system1500includes a processor subsystem1580that is coupled to a system memory1520and I/O interface(s)1540via an interconnect1560(e.g., a system bus). I/O interface(s)1540is coupled to one or more I/O devices1550. Computer system1500may be any of various types of devices, including, but not limited to, a server system, personal computer system, desktop computer, laptop or notebook computer, mainframe computer system, tablet computer, handheld computer, workstation, network computer, or a consumer device such as a mobile phone, music player, or personal digital assistant (PDA). Although a single computer system1500is shown inFIG.15for convenience, computer system1500may also be implemented as two or more computer systems operating together. Processor subsystem1580may include one or more processors or processing units. In various embodiments of computer system1500, multiple instances of processor subsystem1580may be coupled to interconnect1560. In various embodiments, processor subsystem1580(or each processor unit within processor subsystem1580) may contain a cache or other form of on-board memory. System memory1520is usable to store program instructions executable by processor subsystem1580to cause computer system1500to perform various operations described herein. System memory1520may be implemented using different physical memory media, such as hard disk storage, floppy disk storage, removable disk storage, flash memory, random access memory (RAM—SRAM, EDO RAM, SDRAM, DDR SDRAM, RAMBUS RAM, etc.), read only memory (PROM, EEPROM, etc.), and so on. Memory in computer system1500is not limited to primary storage such as system memory1520. Rather, computer system1500may also include other forms of storage such as cache memory in processor subsystem1580and secondary storage on I/O devices1550(e.g., a hard drive, storage array, etc.). In some embodiments, these other forms of storage may also store program instructions executable by processor subsystem1580. I/O interfaces1540may be any of various types of interfaces configured to couple to and communicate with other devices, according to various embodiments. In one embodiment, I/O interface1540is a bridge chip (e.g., Southbridge) from a front-side to one or more back-side buses. I/O interfaces1540may be coupled to one or more I/O devices1550via one or more corresponding buses or other interfaces. Examples of I/O devices1550include storage devices (hard drive, optical drive, removable flash drive, storage array, SAN, or their associated controller), network interface devices (e.g., to a local or wide-area network), or other devices (e.g., graphics, user interface devices, etc.). In one embodiment, computer system1500is coupled to a network via an I/O device1550that provides a network interface (e.g., a device configured to communicate over WiFi, Bluetooth, Ethernet, etc.). Although specific embodiments have been described above, these embodiments are not intended to limit the scope of the present disclosure, even where only a single embodiment is described with respect to a particular feature.
Examples of features provided in the disclosure are intended to be illustrative rather than restrictive unless stated otherwise. The above description is intended to cover such alternatives, modifications, and equivalents as would be apparent to a person skilled in the art having the benefit of this disclosure. The scope of the present disclosure includes any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. Accordingly, new claims may be formulated during prosecution of this application (or an application claiming priority thereto) to any such combination of features. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims.
11860906
When practical, similar reference numbers denote similar structures, features, or elements. DETAILED DESCRIPTION A data table can be divided into multiple partitions. Partitioning a table generally becomes necessary when the data stored in the table exceeds a certain size. Partitioning the table can also be used in other situations, such as when a table is distributed over several hosts. Each partition generally has three fragments: a main fragment, a delta1fragment, and a delta2fragment. The main fragment may be a read-optimized fragment that holds the majority of the data. The delta1fragment is a small write-optimized fragment that contains all updates to the data since the last delta merge during which all changes from the delta1fragment are committed to the main fragment. The delta2fragment is another small write-optimized fragment that is used to contain updates to the data that were made during an ongoing delta merge. The delta merge operation may create a new main fragment from tuples in the current main fragment and from committed changes in the delta1fragment. In a production environment, the main fragment is generally much larger than the delta1fragment and the delta2fragment. In a database, such as a column-store database or a column-oriented database, the values in the columns of a table may be compressed using a dictionary or dictionary encoding on each fragment (e.g., the main fragment, the delta1fragment, the delta2fragment, etc.). Generally, the dictionaries include columns that store an index value referencing the actual data. The stored index values are value identifiers (also referred to herein as valueIds or VIDs), such as numeric values and/or identifiers that include a numeric value, to map onto the data so it is not necessary to search the data itself, which would be cumbersome and computationally expensive. Instead, a column of the dictionary may be a list of index values (e.g., the VIDs). Each VID in the column represents a position in the dictionary. For example, a VID with a value of "5" points to the sixth value in the corresponding dictionary, since the VIDs begin with the value "0" pointing to the first value in the corresponding dictionary. In order to avoid having to materialize the data values, a GroupBy operator may use the VIDs for grouping whenever possible and beneficial. The GroupBy operator combines all rows that have the same value (e.g., a GroupBy on column x groups all rows having the same value in column x). To execute the GroupBy operator, all values from all rows in each fragment can be compared to determine which rows have the same value and can be grouped. Since different VIDs from separate fragments on a partition can translate into the same data value, a post-grouping step is generally needed to group the result of the operator by value in each fragment. In this scenario, VID grouping would generally only be beneficial where the output is very small compared to the input, making the post-grouping step cheap with respect to computation time. In a worst case, such as when VID grouping does not reduce the input size, grouping the data values across fragments storing a large amount of data would require each of the values to be read and compared, because different VIDs from separate fragments on a partition can translate into the same data value. This would be impractical and significantly increase computing time and resources. Accordingly, in some instances, VID grouping can reduce the size of the data for searching.
But in other instances, it does not, such as when there are many distinct values stored in the fragment. In either case, the decision to group values would need to be made based on limited estimated information and before the data values are known. Thus, grouping the data values may reduce efficiency and increase computing time. The query processing system consistent with embodiments of the current subject matter provides a mapping from fragment-local VIDs to partition-local partition value identifiers (also referred to herein as partition value identifiers, PartVIDs, or PartValueIds) that are maintained by an attribute engine. The partition value identifiers generated by the query processing system may allow for mapping across fragments and may eliminate issues that would arise from mismatched VIDs across the fragments in a particular partition. Accordingly, the query processing system described herein may reduce needed computational resources for processing queries and may improve processing speed when executing a query that includes certain query operators, such as GroupBy operators. FIG.1depicts a system diagram illustrating a query processing system100, in accordance with some example embodiments. Referring toFIG.1, the query processing system100may include an attribute engine110, a client120, and a database140. As shown inFIG.1, the attribute engine110, the client120, and the database140may be communicatively coupled via a network130. The network130may be any wired and/or wireless network including, for example, a wide area network (WAN), a local area network (LAN), a virtual local area network (VLAN), a public land mobile network (PLMN), the Internet, and/or the like. As described herein, the attribute engine110may process queries received from the client120, such as requests to group data stored in a partitioned table stored on the database140, requests that require data stored in the partitioned table to be grouped, or the like. The attribute engine110may include at least one processor and/or at least one memory storing instructions for execution by the at least one processor. The attribute engine110may generate a partition value identifier (as described in more detail below), maintain the database140and data stored thereon, receive and execute queries, and/or the like. In some example embodiments, the client120may be a mobile device including, for example, a smartphone, a tablet computer, a wearable apparatus, and/or the like. However, it should be appreciated that the client120may be any processor-based device including, for example, a laptop computer, a workstation, and/or the like. In some implementations, the client120includes an application, such as a mobile application, which may be a type of application software configured to run on a mobile device or any processor-based device. Moreover, the application of the client may be a web application configured to provide access, at the client120, to the attribute engine110. In some embodiments, the client120includes a graphical user interface. The user may interact with the graphical user interface. The database140may be any type of database including, for example, a graph database, an in-memory database, a relational database, a non-relational (NoSQL) database, and/or the like. The database140may include one or more (e.g., one, two, three, four, five, or more) databases. To further illustrate,FIG.2depicts a block diagram illustrating another example of the query processing system100, in accordance with some example embodiments.
As shown inFIG.2, a partitioned table200may be stored on the database140. The partitioned table200may relate to an attribute. The partitioned table200may store data, such as one or more data vectors, string values, and/or the like. The partitioned table200may include at least one partition202, such as a single partition, two partitions, three partitions, four partitions, five partitions, ten partitions, one hundred partitions, or the like. Consistent with embodiments of the current subject matter, the partition202(e.g., each partition of the at least one partition202) may include a plurality of fragments. For example, the partition202may include a first fragment, such as a main fragment204, a second fragment, such as a delta1fragment206, and/or a third fragment, such as a delta2fragment208. The main fragment204may be a read-optimized fragment that holds the majority of the data stored in the partitioned table200. The delta1fragment206may be a small write-optimized fragment that contains all updates to the data since the last delta merge operation, during which all changes from the delta1fragment206are written to the main fragment204. The delta2fragment208may include another small write-optimized fragment that is used to contain updates to the data that were made during an ongoing delta merge operation. The delta merge operation may create a new main fragment (not shown) from tuples in the current main fragment and from committed changes in the delta1fragment206. The data values stored in each of the main fragment204, the delta1fragment206, and the delta2fragment208may be compressed using a dictionary or dictionary encoding on each fragment. For example, the main fragment204may include and/or be communicatively coupled to a main dictionary210, the delta1fragment206may include and/or be communicatively coupled to a first delta dictionary212, and the delta2fragment208may include and/or be communicatively coupled to a second delta dictionary214. The main dictionary210, the first delta dictionary212, and the second delta dictionary214may encode data values (e.g., data vectors, string values, or the like) of the data stored on each respective fragment as value identifiers and may store the mapping between the value identifiers and the data values. As described herein, the value identifiers indicate a position of the values of the data in the dictionary and generally allow for quickly searching each fragment. FIG.3shows an example of the main dictionary210and the first delta dictionary212according to some example embodiments. As shown inFIG.3, the main dictionary210may include a first column310that includes the value identifiers for the main fragment204and a second column312that includes the encoded data values of the main fragment204. The main dictionary210also includes a plurality of rows320. The plurality of rows320each include a data value and the corresponding value identifier. For example, the value "AAA" is encoded using the main dictionary210value identifier "0." The other values "BB", "CC", and "DEF" are also encoded using the main dictionary210. Again referring toFIG.3, the first delta dictionary212may include a first column314that includes the value identifiers for the delta1fragment206and a second column316that includes the encoded data values of the delta1fragment206. The first delta dictionary212also includes a plurality of rows322. The plurality of rows322each include a data value and the corresponding value identifier. For example, the value "CC" is encoded using the first delta dictionary212value identifier "1." The other value "XYZ" is also encoded using the first delta dictionary212.
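For purposes of illustration only, the fragment-local dictionary encoding ofFIG.3can be sketched in Python as follows. The main dictionary values are taken from the description above; the entry at value identifier "0" of the first delta dictionary212is not named in the text, so the value "AB" is used here as a stand-in assumption:

def encode(dictionary, value):
    # A value's VID is its position in the fragment's dictionary.
    return dictionary.index(value)

def decode(dictionary, vid):
    return dictionary[vid]

main_dictionary = ["AAA", "BB", "CC", "DEF"]  # VIDs 0..3, per FIG. 3
delta1_dictionary = ["AB", "CC", "XYZ"]       # VIDs 0..2 ("AB" is assumed)

assert encode(main_dictionary, "CC") == 2
assert encode(delta1_dictionary, "CC") == 1   # same value, different VID
assert decode(delta1_dictionary, 2) == "XYZ"

As the assertions show, the same data value "CC" receives different fragment-local value identifiers in the two dictionaries, which is the mismatch that the mapping220described below resolves.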
For example, the value “CC” is encoded using the first delta dictionary212value identifier “1.” The other value “XYZ” is also encoded using the first delta dictionary212. It should be appreciated that the main dictionary210and the first delta dictionary212may include any number of rows. It should also be appreciated that while only the first delta dictionary212is shown, the second delta dictionary214may be the same as the first delta dictionary212, but include the same and/or different data values corresponding to each value identifier. Referring back toFIG.2, the partition202may include a mapping220. The mapping220may store an association between a partition value identifier generated by the attribute engine110and at least one value identifier (e.g., a local value identifier) from the main dictionary210(e.g., a main value identifier from the main dictionary), the first delta dictionary212(e.g., a delta or first delta value identifier from the first delta dictionary), and/or the second delta dictionary214(e.g., a delta or a second delta value identifier from the second delta dictionary). The mapping between the fragment-local value identifiers to partition-local partition value identifiers (partVIDs) may additionally or alternatively be generated and/or maintained by the attribute engine110. The attribute engine110may update the mapping220at the beginning of a query, prior to processing or execution of the query, and/or after or during a delta merge. For example, after a delta merge, the value identifiers in the main dictionary210, the first delta dictionary212, and/or the second delta dictionary214may be reset and/or the encoded data values may be updated. As a result, the partition value identifiers may also be reset. The attribute engine110may generate the partition value identifiers. The partition value identifiers may be a numeric value and/or include a numeric value. In some embodiments, the attribute engine110generates a partition value identifier for each unique data value stored across the fragments (e.g., the main fragment204, the delta1fragment206, and/or the delta2fragment208) and maps the generated partition value identifier to the one or more value identifiers corresponding to each unique data value. For example, the attribute engine110may generate the partition value identifiers based on the encoded data values and/or the value identifiers across fragments (e.g., the main fragment204, the delta1fragment206, and/or the delta2fragment208). In some embodiments, the attribute engine110assigns or sets the value identifier for a particular encoded value stored in the main fragment204as the partition value identifier when the particular encoded value is stored only in the main fragment204(e.g., the data value is not stored on any other fragment such as the delta1fragment206). Additionally and/or alternatively, the attribute engine110assigns or sets the value identifier for a particular encoded value stored in the main fragment204as the partition value identifier when the particular encoded data value is the same in the compared fragments (e.g., the main fragment204and the delta1fragment206or the delta2fragment208) or otherwise is stored in each of the compared fragments. Additionally and/or alternatively, the attribute engine110computes a partition value identifier in other scenarios, such as when a particular encoded data value is stored only on the delta1fragment206or the delta2fragment208, rather than on the main fragment204. 
In particular, the attribute engine110may assign or set the partition value identifier as a summation of a size of the main dictionary210and an additional value based on a determination that the particular encoded data value is stored only on the delta1fragment206or the delta2fragment208, rather than on the main fragment204. The size of the main dictionary210may be defined as a maximum quantity of the rows320of the main dictionary210, a maximum quantity of the value identifiers stored in the main dictionary210, and/or the like. In some embodiments, the additional value added to the size of the main dictionary210is the value identifier corresponding to the particular encoded data value stored only on the delta1fragment206or the delta2fragment208. Accordingly, the partition value identifier can be generated for each of the encoded data values (e.g., unique data values) stored across the main fragment204, the delta1fragment206, and/or the delta2fragment208. FIG.3also shows an example of the mapping220. The mapping220includes the partition value identifiers300that have been generated by the attribute engine110based on the main dictionary210and the first delta dictionary212shown inFIG.3. In this example, the attribute engine110set the value identifier "0" from the main dictionary210as the partition value identifier "0" because the value "AAA" corresponding to the value identifier "0" is stored only in the main fragment204. Similarly, the attribute engine110set the value identifier "1" from the main dictionary210as the partition value identifier "1" because the value "BB" corresponding to the value identifier "1" is stored only in the main fragment204. The attribute engine110also set the value identifier "3" from the main dictionary210as the partition value identifier "3" because the value "DEF" corresponding to the value identifier "3" is stored only in the main fragment204. As shown inFIG.3, the attribute engine110set the value identifier "2" from the main dictionary210as the partition value identifier "2" based on a determination that the value "CC" is the same in both the main fragment204and the delta1fragment206. Thus, even though the value identifier corresponding to the value "CC" is stored as "1" in the first delta dictionary212, the partition value identifier associated with the value "CC" is assigned as "2" by the attribute engine110to match the value identifier corresponding to the same value "CC" from the main dictionary210. Referring again toFIG.3, the attribute engine110determined that the value "XYZ" is stored only on the delta1fragment206. As a result, the attribute engine110generated the corresponding partition value identifier as a summation of a size of the main dictionary210and the value identifier "2" corresponding to the value "XYZ" stored in the first delta dictionary212. In this example, the size of the main dictionary210is 4 because the maximum quantity of the rows320and/or the maximum quantity of the value identifiers stored in the main dictionary210is 4. Accordingly, the attribute engine110set the partition value identifier as "4+2" or "6" as the summation of the size "4" and the corresponding value identifier "2" from the first delta dictionary212.
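For purposes of illustration only, the mapping rules just described can be sketched in Python as follows. The function name and the "AB" entry carried over from the earlier sketch are assumptions; the two rules themselves follow the description above:

def build_partition_vids(main_dict, delta_dict):
    # Map each unique value across the fragments to a partition value
    # identifier (PartVID).
    mapping = {}
    for vid, value in enumerate(main_dict):
        mapping[value] = vid  # the main VID doubles as the PartVID
    for vid, value in enumerate(delta_dict):
        if value not in mapping:  # stored only on the delta fragment:
            mapping[value] = len(main_dict) + vid  # main-dictionary size + delta VID
    return mapping

main_dictionary = ["AAA", "BB", "CC", "DEF"]  # VIDs 0..3
delta1_dictionary = ["AB", "CC", "XYZ"]       # VIDs 0..2 ("AB" is assumed)
print(build_partition_vids(main_dictionary, delta1_dictionary))
# {'AAA': 0, 'BB': 1, 'CC': 2, 'DEF': 3, 'AB': 4, 'XYZ': 6}

Consistent with the example above, "CC" keeps the main value identifier "2" as its partition value identifier, and "XYZ", stored only on the delta1fragment206, receives 4+2 or "6".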
FIG.4illustrates an input to a query operator, such as a GroupBy operator (e.g., a query to group by the partition value identifiers and aggregate a count of the partition value identifiers). As shown inFIG.4, an input221may be provided to the query operator. The input221may include the generated partition value identifiers from the mapping220and one or more value identifiers that correspond to the generated partition value identifiers. For example, if the generated partition value identifier corresponds to an encoded data value that is stored only on the main fragment204or an encoded data value that is stored only on the delta1fragment206or the delta2fragment208, the association between the generated partition value identifier and only the value identifier corresponding to that encoded data value would be included in the input221. However, if the generated partition value identifier corresponds to an encoded data value that is stored on both the main fragment204and at least one of the delta1fragment206and the delta2fragment208, the association between the generated partition value identifier and all of the value identifiers from each respective dictionary corresponding to the particular encoded data value in each respective fragment would be included in the input221. In some embodiments, the input221includes the generated partition value identifier in a row of a plurality of rows228. The row of the input221may also include the corresponding value identifier and/or a source of the corresponding value identifier. The source may include the main dictionary210, the first delta dictionary212, and/or the second delta dictionary214in which the corresponding value identifier is stored. In some embodiments, each row of the input221may include the generated partition value identifier and each of the corresponding value identifiers associated with the generated partition value identifier. In this example, each row of the input221would include the generated partition value identifier and the associated value identifier from the main dictionary210and the associated value identifier from the first delta dictionary212or the second delta dictionary214. As described herein, the input221includes a plurality of columns and a plurality of rows228. The plurality of columns includes a value identifier222from the main dictionary210, the first delta dictionary212, and/or the second delta dictionary214, a source224(e.g., the main dictionary210, the first delta dictionary212, and/or the second delta dictionary214) of the value identifier222, and the generated partition value identifier226. As shown in the example input221, a first row includes the value identifier "1" from the source "Main" or "main dictionary" and the corresponding generated partition value identifier "1". As another example, a third and fifth row of the input221both include the partition value identifier "2", but include the associated value identifiers from each of the dictionaries (e.g., value identifier "2" from the "main" source and value identifier "1" from the "delta1" source). This allows for only the partition value identifiers to be searched in a particular partition and across fragments within the partition without needing to read the actual values of the encoded data values in each fragment, significantly improving computing speed, decreasing required computing resources, and improving computing efficiency. As shown inFIG.4, an aggregation table400may be generated as an output of the query operator. For example, a query may include the following query operator: "GroupBy(PartVID) + count(*) aggregation". In this example, the query includes a request to group by the partition value identifiers and aggregate a count of the partition value identifiers.
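For purposes of illustration only, executing such a query operator over input221-style rows can be sketched in Python as follows. The row tuples echoFIG.4, but the figure's full contents are not reproduced in the text, so the second and fourth rows below are assumed filler:

from collections import Counter

# Each tuple: (value identifier, source dictionary, partition value identifier)
input_221 = [
    (1, "main",   1),  # first row, per the description above
    (0, "main",   0),  # assumed filler row
    (2, "main",   2),  # third row: the value "CC" seen via the main fragment
    (3, "main",   3),  # assumed filler row
    (1, "delta1", 2),  # fifth row: the same value "CC" seen via delta1
]

# Group purely on the PartVID column; no data values are materialized.
counts = Counter(part_vid for _vid, _source, part_vid in input_221)
print(sorted(counts.items()))  # [(0, 1), (1, 1), (2, 2), (3, 1)]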
The aggregation table400may be generated based on the input221and/or the mapping220to include a first column232listing the generated partition value identifiers and a second column234listing a count. The count may include a total number of input rows containing a particular partition value identifier. As an example, the third row has a count of "2" because two of the input rows in the input221include a partition value identifier of "2". As described herein, a delta merge may occur to commit the changes stored in the delta1fragment206to the main fragment204. In doing so, the delta fragment and the main fragment would be combined. When the delta merge occurs, the data stored in the main fragment changes and the main fragment becomes a "new" main fragment. The delta2fragment208also takes the place of the delta1fragment206. As a result, new value identifiers are assigned to the encoded data values stored in each of the resulting fragments and the new value identifiers are stored in corresponding dictionaries of the resulting fragments. Accordingly, after a delta merge occurs, the attribute engine110resets the partition value identifiers stored in the mapping220and generates new partition value identifiers. In some embodiments, the attribute engine110generates a new mapping220to store the newly generated partition value identifiers. FIG.5depicts a flowchart illustrating a process500for executing a query using a generated partition value identifier, in accordance with some example embodiments. Referring toFIGS.1-4, one or more aspects of the process500may be performed by the query processing system100, such as the client120and/or the attribute engine110and, in some embodiments, using the database140. The query processing system100described herein may efficiently and quickly process queries, such as GroupBy operators. For example, the query processing system100may process queries without pre-grouping and/or post-grouping data stored in various fragments of a table. The query processing system100may additionally and/or alternatively process the queries without reading or accessing the data stored in each fragment. Instead, the query processing system100may generate the partition value identifier and use the partition value identifier to reference value identifiers from each fragment that are in turn associated with values stored in each fragment. At502, the query processing system (e.g., via the attribute engine110) may generate a partition value identifier for a partitioned table (e.g., the partitioned table200). In some embodiments, the partitioned table includes only a single partition. In other embodiments, the partitioned table includes a plurality of partitions. The partitioned table may include a plurality of fragments. The plurality of fragments may include a main fragment (e.g., the main fragment204) and at least one delta fragment (e.g., the delta1fragment206and/or the delta2fragment208). The main fragment may include a main dictionary (e.g., the main dictionary210) storing a plurality of first values and a plurality of main value identifiers corresponding to the plurality of first values. In some embodiments, the main fragment may include a main dictionary storing a first value (e.g., of the plurality of first values) and a main value identifier (e.g., of the plurality of main value identifiers) corresponding to the first value.
The delta fragment may include a delta dictionary (e.g., the first delta dictionary212and/or the second delta dictionary214) storing a plurality of second values and a plurality of delta value identifiers corresponding to the plurality of second values. In some embodiments, the delta fragment may include a delta dictionary storing a second value (e.g., of the plurality of second values) and a delta value identifier (e.g., of the delta value identifiers) corresponding to the second value. The partition value identifier may be generated by setting the partition value identifier based at least in part on the first value (e.g., the plurality of first values) and the second value (e.g., the plurality of second values). In some embodiments, generating the partition value identifier includes at least one of setting the main value identifier as the partition value identifier based on a determination that the first value is the same as the second value or the first value is stored only in the main fragment, and setting the partition value identifier as a summation of a size of the main dictionary and an additional value based on a determination that the second value is stored only on the delta fragment. This allows for unique partition value identifiers to be generated and used for referencing value identifiers across fragments of a partition of a table. The size of the main dictionary may include a maximum quantity of the main value identifiers and/or a number of rows stored in the main dictionary. In some embodiments, the additional value is the delta value identifier corresponding to the second value, such as a numeric value of the delta value identifier. At504, the query processing system (e.g., via the attribute engine110) may maintain a mapping between the generated partition value identifier and a corresponding value identifier. For example, the generated partition value identifier and a corresponding one of the main value identifier and the delta value identifier may be stored in the mapping. The mapping may include a plurality of rows each including at least the generated partition value identifier and corresponding value identifier. In some embodiments, each row includes a single value identifier corresponding to each partition value identifier or a plurality of value identifiers corresponding to each partition value identifier. In some embodiments, the input includes a source of the first value or the second value associated with the corresponding one of the main value identifier and the delta value identifier. In this example, the source may include the corresponding main fragment or delta fragment. At506, the query processing system may receive a query including a request to group data stored in the partitioned table across the fragments of the partitioned table. For example, the query processing system may receive a query including a request to group data stored in the partitioned table across the main fragment and the delta fragment. The query may include a query operator, such as a GroupBy query operator that requires data from various fragments to be grouped. In some embodiments, the query includes a GroupBy (PartVID) operator to allow for grouping of data based on the generated partition value identifier. At508, the query processing system may execute the query by at least using the mapping.
The generated partition value identifier allows for quick and efficient grouping of data stored in the table without needing to first read the data in each fragment and/or later perform a post-processing operation to group the data for efficient query processing. The query processing system may execute the query by grouping the data according to the query operator and query criteria and/or perform another action based on the query criteria. In some embodiments, the query processing system may execute the query operator and send the result of the execution to another system or component of the query processing system for further processing, such as during execution of a query plan. In view of the above-described implementations of subject matter this application discloses the following list of examples, wherein one feature of an example in isolation or more than one feature of said example taken in combination and, optionally, in combination with one or more features of one or more further examples are further examples also falling within the disclosure of this application: Example 1: A system, comprising: at least one data processor; and at least one memory storing instructions which, when executed by the at least one data processor, result in operations comprising: generating a partition value identifier for a partitioned table comprising: a main fragment including a main dictionary storing a first mapping between a first value and a main value identifier corresponding to the first value; and a delta fragment including a delta dictionary storing a second mapping between a second value and a delta value identifier corresponding to the second value, wherein the delta value identifier is different from the main value identifier; wherein generating the partition value identifier comprises: setting the partition value identifier based at least in part on the first value and the second value; maintaining a mapping between the generated partition value identifier and a corresponding one of the main value identifier and the delta value identifier; receiving a query including a request to group data stored in the partitioned table across the main fragment and the delta fragment; and executing the query by at least using the mapping. Example 2: The system of example 1, wherein the generating the partition value identifier further comprises at least one of setting the main value identifier as the partition value identifier based on a determination that the first value is the same as the second value or the first value is stored only in the main fragment; and setting the partition value identifier as a summation of a size of the main dictionary and an additional value based on a determination that the second value is stored only on the delta fragment. Example 3: The system of example 2, wherein the additional value is the delta value identifier corresponding to the second value. Example 4: The system of example 2, wherein the size of the main dictionary is a maximum quantity of the main value identifier stored in the main dictionary. Example 5: The system of any one of examples 1 to 4, wherein the partitioned table comprises a single partition. Example 6: The system of any one of examples 1 to 5, wherein the operations further comprise: combining the delta fragment and the main fragment during a delta merge; and resetting, based on the delta merge, the partition value identifier. 
Example 7: The system of any one of examples 1 to 6, wherein the query is further executed by at least using a source of the first value or the second value associated with the corresponding one of the main value identifier and the delta value identifier. Example 8: The system of any one of examples 1 to 7, wherein the source is the corresponding main fragment or delta fragment. Example 9: A computer-implemented method, comprising: generating a partition value identifier for a partitioned table comprising: a main fragment including a main dictionary storing a first mapping between a first value and a main value identifier corresponding to the first value; and a delta fragment including a delta dictionary storing a second mapping between a second value and a delta value identifier corresponding to the second value, wherein the delta value identifier is different from the main value identifier; wherein generating the partition value identifier comprises: setting the partition value identifier based at least in part on the first value and the second value; maintaining a mapping between the generated partition value identifier and a corresponding one of the main value identifier and the delta value identifier; receiving a query including a request to group data stored in the partitioned table across the main fragment and the delta fragment; and executing the query by at least using the mapping. Example 10: The method of example 9, wherein the generating the partition value identifier further comprises at least one of setting the main value identifier as the partition value identifier based on a determination that the first value is the same as the second value or the first value is stored only in the main fragment; and setting the partition value identifier as a summation of a size of the main dictionary and an additional value based on a determination that the second value is stored only on the delta fragment. Example 11: The method of example 10, wherein the additional value is the delta value identifier corresponding to the second value. Example 12: The method of example 10, wherein the size of the main dictionary is a maximum quantity of the main value identifier stored in the main dictionary. Example 13: The method of any one of examples 9 to 12, wherein the partitioned table comprises a single partition. Example 14: The method of any one of examples 9 to 13, further comprising: combining the delta fragment and the main fragment during a delta merge; and resetting, based on the delta merge, the partition value identifier. Example 15: The method of any one of examples 9 to 14, wherein the query is further executed by at least using a source of the first value or the second value associated with the corresponding one of the main value identifier and the delta value identifier. Example 16: The method of any one of examples 9 to 15, wherein the source is the corresponding main fragment or delta fragment. 
Example 17: A non-transitory computer-readable medium storing instructions, which when executed by at least one data processor, result in operations comprising: generating a partition value identifier for a partitioned table comprising: a main fragment including a main dictionary storing a first mapping between a first value and a main value identifier corresponding to the first value; and a delta fragment including a delta dictionary storing a second mapping between a second value and a delta value identifier corresponding to the second value, wherein the delta value identifier is different from the main value identifier; wherein generating the partition value identifier comprises: setting the partition value identifier based at least in part on the first value and the second value; maintaining a mapping between the generated partition value identifier and a corresponding one of the main value identifier and the delta value identifier; receiving a query including a request to group data stored in the partitioned table across the main fragment and the delta fragment; and executing the query by at least using the mapping. Example 18: The non-transitory computer-readable medium of example 17, wherein the generating the partition value identifier further comprises at least one of setting the main value identifier as the partition value identifier based on a determination that the first value is the same as the second value or the first value is stored only in the main fragment; and setting the partition value identifier as a summation of a size of the main dictionary and an additional value based on a determination that the second value is stored only on the delta fragment. Example 19: The non-transitory computer-readable medium of any one of examples 17 to 18, wherein the partitioned table comprises a single partition. Example 20: The non-transitory computer-readable medium of any one of examples 17 to 19, wherein the operations further comprise: combining the delta fragment and the main fragment during a delta merge; and resetting, based on the delta merge, the partition value identifier. FIG.6depicts a block diagram illustrating a computing system600consistent with implementations of the current subject matter. Referring toFIGS.1and6, the computing system600can be used to implement the attribute engine110, the query processing system100, and/or any components therein. As shown inFIG.6, the computing system600can include a processor610, a memory620, a storage device630, and input/output devices640. The processor610, the memory620, the storage device630, and the input/output devices640can be interconnected via a system bus650. The processor610is capable of processing instructions for execution within the computing system600. Such executed instructions can implement one or more components of, for example, the attribute engine110, the query processing system100, and/or any components therein. In some example embodiments, the processor610can be a single-threaded processor. Alternately, the processor610can be a multi-threaded processor. The processor610is capable of processing instructions stored in the memory620and/or on the storage device630to display graphical information for a user interface provided via the input/output device640. The memory620is a computer readable medium, such as volatile or non-volatile memory, that stores information within the computing system600. The memory620can store data structures representing configuration object databases, for example.
The storage device630is capable of providing persistent storage for the computing system600. The storage device630can be a floppy disk device, a hard disk device, an optical disk device, a tape device, a solid state device, and/or other suitable persistent storage means. The input/output device640provides input/output operations for the computing system600. In some example embodiments, the input/output device640includes a keyboard and/or pointing device. In various implementations, the input/output device640includes a display unit for displaying graphical user interfaces. According to some example embodiments, the input/output device640can provide input/output operations for a network device. For example, the input/output device640can include Ethernet ports or other networking ports to communicate with one or more wired and/or wireless networks (e.g., a local area network (LAN), a wide area network (WAN), the Internet). In some example embodiments, the computing system600can be used to execute various interactive computer software applications that can be used for organization, analysis and/or storage of data in various formats. Alternatively, the computing system600can be used to execute any type of software applications. These applications can be used to perform various functionalities, e.g., planning functionalities (e.g., generating, managing, editing of spreadsheet documents, word processing documents, and/or any other objects, etc.), computing functionalities, communications functionalities, etc. The applications can include various add-in functionalities or can be standalone computing products and/or functionalities. Upon activation within the applications, the functionalities can be used to generate the user interface provided via the input/output device640. The user interface can be generated and presented to a user by the computing system600(e.g., on a computer screen monitor, etc.). One or more aspects or features of the subject matter described herein can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs, field programmable gate arrays (FPGAs), computer hardware, firmware, software, and/or combinations thereof. These various aspects or features can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. The programmable system or computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. These computer programs, which can also be referred to as programs, software, software applications, applications, components, or code, include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language.
As used herein, the term “machine-readable medium” refers to any computer program product, apparatus and/or device, such as for example magnetic discs, optical disks, memory, and Programmable Logic Devices (PLDs), used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. The machine-readable medium can store such machine instructions non-transitorily, such as for example as would a non-transient solid-state memory or a magnetic hard drive or any equivalent storage medium. The machine-readable medium can alternatively or additionally store such machine instructions in a transient manner, such as for example, as would a processor cache or other random access memory associated with one or more physical processor cores. To provide for interaction with a user, one or more aspects or features of the subject matter described herein can be implemented on a computer having a display device, such as for example a cathode ray tube (CRT) or a liquid crystal display (LCD) or a light emitting diode (LED) monitor for displaying information to the user, and a keyboard and a pointing device, such as for example a mouse or a trackball, by which the user may provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well. For example, feedback provided to the user can be any form of sensory feedback, such as for example visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including acoustic, speech, or tactile input. Other possible input devices include touch screens or other touch-sensitive devices such as single or multi-point resistive or capacitive track pads, voice recognition hardware and software, optical scanners, optical pointers, digital image capture devices and associated interpretation software, and the like. In the descriptions above and in the claims, phrases such as “at least one of” or “one or more of” may occur followed by a conjunctive list of elements or features. The term “and/or” may also occur in a list of two or more elements or features. Unless otherwise implicitly or explicitly contradicted by the context in which it is used, such a phrase is intended to mean any of the listed elements or features individually or any of the recited elements or features in combination with any of the other recited elements or features. For example, the phrases “at least one of A and B;” “one or more of A and B;” and “A and/or B” are each intended to mean “A alone, B alone, or A and B together.” A similar interpretation is also intended for lists including three or more items. For example, the phrases “at least one of A, B, and C;” “one or more of A, B, and C;” and “A, B, and/or C” are each intended to mean “A alone, B alone, C alone, A and B together, A and C together, B and C together, or A and B and C together.” Use of the term “based on,” above and in the claims, is intended to mean “based at least in part on,” such that an unrecited feature or element is also permissible. The subject matter described herein can be embodied in systems, apparatus, methods, and/or articles depending on the desired configuration.
The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter. Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. For example, the implementations described above can be directed to various combinations and subcombinations of the disclosed features and/or combinations and subcombinations of several further features disclosed above. In addition, the logic flows depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results. Other implementations may be within the scope of the following claims.
11860907
Like reference symbols in the various drawings indicate like elements. DETAILED DESCRIPTION Distributed storage (i.e., cloud storage) has been increasingly used to store tables of massive size. It is not uncommon for a table to have a size of multiple terabytes or even petabytes and to include millions of entries (i.e., data blocks). Clustered data structures (e.g., a column data store) are increasingly being used to reduce query cost and improve query performance by clustering data into non-overlapping data blocks. With clusters of data blocks, data blocks are typically sorted by a clustering key, with each data block including a range of clustering key values. Typically, the range of clustering key values associated with each data block does not overlap the range of any other data block within the clustered data blocks. When new data is appended to the clustered data blocks, the ranges of the clustering key values of the new data blocks will often have some overlap with the original data blocks, and to maintain an optimal clustering state, the data blocks must be reclustered. This is normally accomplished by shuffling the data, which involves writing some or all of the data out to a new location, an operation that is computationally expensive and slow. Implementations herein are directed toward a data block reclusterer that reclusters data without requiring shuffling. The data block reclusterer receives a first and second group of clustered data blocks sorted by a clustering key value. The data block reclusterer generates one or more split points for partitioning the first and second groups of clustered data blocks into a third group of clustered data blocks. The data block reclusterer partitions, using the one or more split points, the first and second groups of clustered data blocks into the third group of clustered data blocks. Referring now toFIG.1, in some implementations, an example system100includes a remote system140. The remote system140may be a single computer, multiple computers, or a distributed system (e.g., a cloud environment) having scalable/elastic computing resources144(e.g., data processing hardware) and/or storage resources142(e.g., memory hardware). A data store146(i.e., a remote storage device146) may be overlain on the storage resources142to allow scalable use of the storage resources142by one or more of the client or computing resources144. The data store146includes a data block data store150configured to store a plurality of data blocks152,152a-nwithin a group158,158a-nof clustered data blocks152. The data store150may store any number of groups158of clustered data blocks152at any point in time. In some examples, the clustered data blocks are stored within a columnar database table or clustered table159. Each group of clustered data blocks is sorted by a clustering key value154,154a-n. For example, in the clustered table159(i.e., one or more groups158of clustered data blocks152), one or more columns of the table159are selected to represent the clustering key, with each row of the table159having a corresponding clustering key value154. The data of the clustered table159is organized around the clustering key to, for example, co-locate related data, as large tables159are typically split into multiple data blocks152stored on multiple different servers. Each data block152in the group158of clustered data blocks152includes a range of clustering key values154that does not overlap with any of the ranges of clustering key values154of the other data blocks152in the same group158.
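The non-overlap invariant just described can be made concrete with a short Python sketch. The representation (integer key bounds held in memory) is hypothetical and meant only to illustrate what a valid group of clustered data blocks looks like.

```python
# Illustrative model of clustered data blocks: each block covers a range of
# clustering key values, and ranges within one group must not overlap.
from dataclasses import dataclass
from typing import List

@dataclass
class DataBlock:
    min_key: int  # lower bound of the block's clustering key range
    max_key: int  # upper bound (inclusive)

def is_valid_group(blocks: List[DataBlock]) -> bool:
    """True if no two blocks in the group have overlapping key ranges."""
    ordered = sorted(blocks, key=lambda b: b.min_key)
    return all(a.max_key < b.min_key for a, b in zip(ordered, ordered[1:]))

group_a = [DataBlock(0, 9), DataBlock(10, 19)]  # valid on its own
group_b = [DataBlock(5, 14)]                    # valid on its own, but its
assert is_valid_group(group_a)                  # range overlaps group_a, which
assert is_valid_group(group_b)                  # is what reclustering resolves
```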
The remote system140is configured to receive tabled data14. For example, the remote system140receives the tabled data14from a user device10associated with a respective user12in communication with the remote system140via a network112. The user device10may correspond to any computing device, such as a desktop workstation, a laptop workstation, or a mobile device (e.g., a smart phone). The user device10includes computing resources18(e.g., data processing hardware) and/or storage resources16(e.g., memory hardware). In another example, the remote system140receives the tabled data14from a different table stored on the data store150or from another remote system140. In some implementations, the remote system140generates a first group158aof clustered data blocks152from the tabled data14to form a clustered table159. The remote system140organizes the tabled data14based on a clustering key15and splits the tabled data14into a plurality of clustered data blocks152, with each clustered data block152including a respective range of the clustering key values154that does not overlap with any of the ranges of clustering key values154of the other clustered data blocks152in the first group158aof clustered data blocks152. That is, each clustered data block152stores a portion of the tabled data14within the clustered table159. The first group158ais stored at the data block data store150. In some examples, the remote system140receives (e.g., from the user device10) additional tabled data14to add to the clustered table159. The remote system140generates a second group158bof clustered data blocks152from the additional tabled data14. Each clustered data block152in the second group158bincludes a respective range of clustering key values154that does not overlap with any of the ranges of clustering key values154of the other clustered data blocks152in the second group158b. However, the respective range of clustering key values154of one or more of the clustered data blocks152in the second group158bmay overlap with the respective range of clustering key values154of at least one of the clustered data blocks152in the first group158aof clustered data blocks152. That is, at least one data block152of the second group158bmay have a range of clustering key values154that overlaps with a range of clustering key values154of a data block152of the first group158aof the clustered table159. The remote system140executes a data block reclusterer160to recluster the first group158aand second group158bof data blocks152. As discussed in more detail below, a split point generator170of the data block reclusterer160receives the first and second groups158a,158band generates one or more split points310,310a-n(FIG.2B) for partitioning the first and second groups158a,158bof clustered data blocks152into a third group158cof clustered data blocks152. Each split point310defines an upper limit or a lower limit for the respective range of clustering key values154of one of the clustered data blocks152in the third group158cof clustered data blocks152. The split point generator170passes the first and second groups158a,158band the one or more split points310to a data block partitioner180. The data block partitioner180partitions, using the one or more generated split points310, the first and second groups158a,158bof clustered data blocks152into the third group158cof clustered data blocks152.
Each clustered data block152in the third group158cincludes a respective range of clustering key values154that does not overlap with any of the ranges of clustering key values154of the other clustered data blocks152in the third group158c. That is, the data block partitioner180reclusters the data blocks152of the first and second groups158a,158b(using the split points310) such that there is no longer overlap in the ranges of clustering key values154among any of the data blocks152. The data block partitioner180partitions the first and second groups158a,158bof clustered data blocks152into the third group158cof clustered data blocks152without performing any shuffling operation on the data blocks152in the first and second groups158a,158b, so that the performance characteristics of clustered tables are maintained without the associated cost of shuffling data. The data block partitioner180stores the data blocks152of the third group158cinto the data store150. Referring now toFIG.2A, a graph200ashows an exemplary first group158aand second group158bof clustered data blocks152plotted along an x-axis of clustering key values154. The first group158aconsists of data blocks152a-dwhile the second group158bconsists of data blocks152e-g. Each data block152a-gincludes a range210,210a-gof clustering key values154. While none of the ranges210within each respective group158a,158boverlap, there is overlap between ranges210of data blocks152across the groups158a,158b. For example, the range210eof data block152eoverlaps the ranges210a,210bof data blocks152a,152b. Thus, simply including all of the data blocks152a-gin a single group158would result in performance loss due to the overlap. Referring now toFIG.2B, a graph200billustrates the exemplary first group158aand second group158bofFIG.2Agraphed by clustering key values154. Here, the split points310generated by the split point generator170partition some of the data blocks152a-g. For example, a split point310apartitions data block152awhile a split point310bpartitions data block152a(in the first group158a) and the data block152e(in the second group158b). Similarly, a split point310cpartitions data block152b, a split point310dpartitions data block152c, and a split point310epartitions data block152dand data block152g. Referring now toFIGS.2C and2D, in some implementations, the data block partitioner180identifies which clustering key values154in the first group158aof clustered data blocks152and the second group158bof clustered data blocks152fall between adjacent split points310. For each clustered data block152in the third group158cof clustered data blocks152, the data block partitioner180merges the identified clustering key values154that fall within the corresponding adjacent split points310. For example, a graph200cindicates the portions of the data blocks152a-gthat fall within adjacent split points310a-e. Here, the graph200cillustrates that adjacent split points310a,310bpartition data block152ainto portions152aa,152ab,152acand data block152einto portions152ea,152eb(FIG.2C). Similarly: data block152bis split into portions152ba,152bb; data block152cis split into portions152ca,152cb; data block152dis split into portions152da,152db; and data block152gis split into portions152ga,152gb. Note that data block152fis not split into any portions, as no split points310pass through the data block152f. Each pair of adjacent split points310, together with the end split points310a,310e, forms a range410,410a-fof clustering key values154.
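A minimal Python sketch of this partitioning step follows. Blocks are modeled as sorted lists of clustering key values (a real system would carry full rows), and the split points bin every key into the range it falls in; these names and the binning convention are illustrative assumptions, not the patented implementation.

```python
# Bin rows from both groups into the key ranges formed by adjacent split
# points; each non-empty bin becomes one block of the third group.
import bisect

def partition(groups, split_points):
    """groups: list of groups, each a list of sorted key lists (blocks)."""
    bins = [[] for _ in range(len(split_points) + 1)]
    for group in groups:
        for block in group:
            for key in block:
                # A key equal to a split point falls into the range above it,
                # so each split point acts as an upper/lower limit.
                bins[bisect.bisect_right(split_points, key)].append(key)
    third_group = [sorted(b) for b in bins if b]
    # No rows may be lost or duplicated by the partitioning.
    assert sum(len(b) for g in groups for b in g) == \
           sum(len(b) for b in third_group)
    return third_group

first = [[1, 2, 3], [7, 8]]
second = [[2, 4, 9]]
print(partition([first, second], split_points=[3, 7]))
# [[1, 2, 2], [3, 4], [7, 8, 9]] -- non-overlapping ranges, no shuffle needed
```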
In some examples, the data block partitioner180only reads the rows within each partition410and writes each data block152of the third group158c(based on the read partition410) to the data block data store150. In some examples, the data block partitioner180only reads the column(s) that include the clustering key value154instead of the entire clustered table159to greatly reduce the total amount of data read. Optionally, only portions of data blocks152that overlap are read by the data block partitioner180. For example, the data block partitioner180does not read data block152fwhen generating the data blocks152of the third group158cas no split points310intersect with the data block152f. As shown by graph200dofFIG.2D, each data block152h-nof the third group158cis formed from the merged partitions of groups158a,158bwithin the same adjacent pair of split points310. Here, because split point310ais the left-most split point310, the split point310adoes not have an adjacent split point310to the left, and therefore portion152aaof data block152aforms data block152hof group158calone. Adjacent split points310a,310bbracket portions152aband152ea, which are merged to form data block152i. Similarly, adjacent split points310b,310cbound portions152ac,152ba,152eband merge to form data block152j. Likewise, adjacent split points310c,310dbound portions152bb,152caand data block152fand merge to form data block152k. Adjacent split points310d,310ebound portions152cb,152da,152gaand merge to form data block152m. Because split point310eis the right-most split point310, data block portions152db,152gbmerge to form data block152n. Thus, in some examples, at least one clustered data block152in the third group158cof clustered data blocks152(e.g., data block152h) includes a portion (e.g., portion152aa) of the respective range210from one of the data blocks152of the first or second groups158a,158bof clustered data blocks152that does not overlap with any of the respective ranges of the other clustered data blocks of the other one of the first or second groups158a,158bof clustered data blocks152. In some implementations, at least one clustered data block152in the third group158cof clustered data blocks152includes a portion (e.g., portion152ab) of the respective range210from one of the data blocks152of the first or second groups158a,158bof clustered data blocks152and one of the data blocks152(e.g., portion152ea) from the other one of the first or second groups158a,158bof clustered data blocks152. Optionally, at least one clustered data block152in the third group158cof clustered data blocks152includes a portion (e.g., portions152ac,152ba) of the respective range210from two of the data blocks152of the first or second groups158a,158bof clustered data blocks152and one of the data blocks152(e.g., portion152eb) from the other one of the first or second groups158a,158bof clustered data blocks152. That is, the split points310may partition the data blocks152into any number of portions, and the data block partitioner180may merge any number of portions or data blocks152from the first group158aor the second group158binto data blocks152of the third group158c. Referring now toFIG.2E, in some implementations, the split point generator170generates the one or more split points310by determining a plurality of quantiles610,610a-nfor the first and second groups158a,158bof clustered data blocks152. A quantile is a cut point that divides the range of a distribution into intervals, with each interval having an equal or approximately equal distribution.
For example, as illustrated by graph200e, given a range of clustering key values154defined by a minimum clustering key value154MIN and a maximum clustering key value154MAX (determined, in this example, by the minimum and maximum clustering key values154of the data blocks152a-gof groups158a,158b), a first, second, and third quantile610a-cdivide the range620of the clustering key values154into four sub-ranges612a-d. The first range612arepresents 25 percent (i.e., one fourth) of the distribution of clustering key values154, and each of the other ranges612b-dalso represents 25 percent of the distribution of clustering key values154. Each quantile610may represent a location for a split point310, and thus the number of quantiles610is equivalent to the number of split points310. That is, each split point310of the one or more split points310corresponds to a different quantile610of the plurality of quantiles610. The split point generator170may determine any number of quantiles610(and thus split points310). The split point generator170may determine a number of the one or more split points310generated based on a number of data blocks152in the first and second groups158a,158bof clustered data blocks152and a size of each of the data blocks152. In some examples, each data block152is a configurable size (e.g., 32 MB to 256 MB) and the split point generator170determines the number of quantiles610by determining a total size of the first group158aand the second group158bdivided by the configured data block size. In the example shown, the split point generator170determines three quantiles610a-610c(corresponding to three split points310f-h) to divide the range620of clustering key values154into four sub-ranges612a-d, which each correspond to a data block152h-kof the third group158cof clustered data blocks152. In some examples, the split point generator170determines one or more quantiles610of the data blocks152of the first and second groups158a,158bbased on sampling the data of the data blocks152. That is, due to the potentially enormous size of the clustered table159, sampling the data allows the split point generator170to determine the quantiles610in a more efficient and scalable manner. In some implementations, the split point generator170uses weighted sampling to approximate one or more quantiles of the data blocks152of the first group158aand the second group158bof clustered data blocks152. Alternatively, the split point generator170may generate the split points310using other means, such as ordered code. Ordered code provides a byte encoding of a sequence of typed items. The resulting bytes may be lexicographically compared to yield the same ordering as item-wise comparison on the original sequences. That is, ordered code has the property that comparing the ordered code yields the same result value as comparing the values one by one.
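The quantile-based generation of split points can be sketched as follows; the sampling strategy and the 128 MB default block size are stand-ins for whatever a deployment configures, and the helper names are hypothetical.

```python
# The number of split points follows from total data size divided by the
# configured block size; split point locations are quantiles of a sample of
# the clustering key values.
import math
import random

def generate_split_points(all_keys, total_bytes,
                          block_bytes=128 * 2**20, sample_size=10_000):
    num_blocks = max(1, math.ceil(total_bytes / block_bytes))
    if num_blocks == 1:
        return []  # everything fits in one block; no split points needed
    sample = sorted(random.sample(all_keys, min(sample_size, len(all_keys))))
    # The i-th split point is approximately the i/num_blocks quantile.
    return [sample[(i * len(sample)) // num_blocks]
            for i in range(1, num_blocks)]

keys = list(range(1000))
print(generate_split_points(keys, total_bytes=4 * 128 * 2**20))
# [250, 500, 750]: three quantiles dividing the keys into four sub-ranges
```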
Optionally, after partitioning the data blocks152into the third group158c, the data block partitioner180determines a first sum of data values associated with the first and second groups158a,158bof clustered data blocks152and determines a second sum of data values associated with the third group158cof clustered data blocks152. The data block partitioner180verifies that the first sum is equivalent to the second sum. That is, to ensure that there was no data corruption during the partitioning process, the data block partitioner180verifies that a value associated with the first and second groups158a,158b(e.g., the summed number of rows of the clustered table159) is the same as the corresponding value of the third group158c. These values will match when no data has been corrupted or misplaced. The total number of rows in the third group158cshould be equivalent to the total number of rows in the first group158asummed with the total number of rows in the second group158b. Examples herein illustrate the data block reclusterer160performing shuffle-less reclustering of two groups158of clustered data blocks152. However, this is exemplary only and any number of groups may be reclustered simultaneously. In some examples, the respective ranges of clustering key values154of the clustered data blocks152in the second group158bdo not overlap with the respective ranges of clustering key values154of the clustered data blocks152in the first group158aof clustered data blocks152. In this scenario, the data block reclusterer160may merge the data blocks without generating split points310. FIG.3is a flowchart of an exemplary arrangement of operations for a method300of shuffle-less reclustering of clustered tables. The method300includes, at operation302, receiving, at data processing hardware144, a first group158aof clustered data blocks152sorted by a clustering key value154. Each clustered data block152in the first group158aof clustered data blocks152includes a respective range210of the clustering key values154that does not overlap with any of the ranges210of clustering key values154of the other clustered data blocks152in the first group158aof clustered data blocks152. At operation304, the method300includes receiving, at the data processing hardware144, a second group158bof clustered data blocks152sorted by the clustering key value154. Each clustered data block152in the second group158bof clustered data blocks152includes a respective range210of clustering key values154that does not overlap with any of the ranges210of clustering key values154of the other clustered data blocks152in the second group158bof clustered data blocks152. The respective range210of clustering key values154of one or more of the clustered data blocks152in the second group158bof clustered data blocks152overlaps with the respective range210of clustering key values154of at least one of the clustered data blocks152in the first group158aof clustered data blocks152. The method300, at operation306, includes generating, by the data processing hardware144, one or more split points310for partitioning the first and second groups158a,158bof clustered data blocks152into a third group158cof clustered data blocks152. At operation308, the method300includes partitioning, by the data processing hardware144, using the one or more generated split points310, the first and second groups158a,158bof clustered data blocks152into the third group158cof clustered data blocks152. Each clustered data block152in the third group158cof clustered data blocks152includes a respective range210of clustering key values154that does not overlap with any of the ranges210of clustering key values154of the other clustered data blocks152in the third group158cof clustered data blocks152. Each split point310of the one or more generated split points310defines an upper limit or a lower limit for the respective range210of clustering key values154of one of the clustered data blocks152in the third group158cof clustered data blocks152.
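Tying the sketches above together, a hypothetical driver for the method300flow might read as follows; it reuses the illustrative generate_split_points and partition helpers and performs the row-count verification described earlier.

```python
# End-to-end sketch: generate split points, partition both groups into a
# third group, and verify that the first sum (rows in groups one and two)
# equals the second sum (rows in the third group).
def recluster(first_group, second_group, total_bytes, block_bytes=128 * 2**20):
    all_keys = [k for g in (first_group, second_group) for b in g for k in b]
    splits = generate_split_points(all_keys, total_bytes, block_bytes)
    third_group = partition([first_group, second_group], splits)
    first_sum = sum(len(b) for b in first_group) + \
                sum(len(b) for b in second_group)
    assert first_sum == sum(len(b) for b in third_group)
    return third_group
```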
FIG.4is a schematic view of an example computing device400that may be used to implement the systems and methods described in this document. The computing device400is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document. The computing device400includes a processor410, memory420, a storage device430, a high-speed interface/controller440connecting to the memory420and high-speed expansion ports450, and a low speed interface/controller460connecting to a low speed bus470and a storage device430. Each of the components410,420,430,440,450, and460is interconnected using various buses, and the components may be mounted on a common motherboard or in other manners as appropriate. The processor410can process instructions for execution within the computing device400, including instructions stored in the memory420or on the storage device430to display graphical information for a graphical user interface (GUI) on an external input/output device, such as display480coupled to high speed interface440. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices400may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system). The memory420stores information non-transitorily within the computing device400. The memory420may be a computer-readable medium, a volatile memory unit(s), or non-volatile memory unit(s). The non-transitory memory420may be physical devices used to store programs (e.g., sequences of instructions) or data (e.g., program state information) on a temporary or permanent basis for use by the computing device400. Examples of non-volatile memory include, but are not limited to, flash memory and read-only memory (ROM)/programmable read-only memory (PROM)/erasable programmable read-only memory (EPROM)/electronically erasable programmable read-only memory (EEPROM) (e.g., typically used for firmware, such as boot programs). Examples of volatile memory include, but are not limited to, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), and phase change memory (PCM), as well as disks or tapes. The storage device430is capable of providing mass storage for the computing device400. In some implementations, the storage device430is a computer-readable medium. In various different implementations, the storage device430may be a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. In additional implementations, a computer program product is tangibly embodied in an information carrier.
The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory420, the storage device430, or memory on processor410. The high speed controller440manages bandwidth-intensive operations for the computing device400, while the low speed controller460manages lower bandwidth-intensive operations. Such allocation of duties is exemplary only. In some implementations, the high-speed controller440is coupled to the memory420, the display480(e.g., through a graphics processor or accelerator), and to the high-speed expansion ports450, which may accept various expansion cards (not shown). In some implementations, the low-speed controller460is coupled to the storage device430and a low-speed expansion port490. The low-speed expansion port490, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter. The computing device400may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server400aor multiple times in a group of such servers400a, as a laptop computer400b, or as part of a rack server system400c. Various implementations of the systems and techniques described herein can be realized in digital electronic and/or optical circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. A software application (i.e., a software resource) may refer to computer software that causes a computing device to perform a task. In some examples, a software application may be referred to as an “application,” an “app,” or a “program.” Example applications include, but are not limited to, system diagnostic applications, system management applications, system maintenance applications, word processing applications, spreadsheet applications, messaging applications, media streaming applications, social networking applications, and gaming applications. These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, non-transitory computer readable medium, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. 
The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. The processes and logic flows described in this specification can be performed by one or more programmable processors, also referred to as data processing hardware, executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. To provide for interaction with a user, one or more aspects of the disclosure can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube), LCD (liquid crystal display) monitor, or touch screen for displaying information to the user and optionally a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser. A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, other implementations are within the scope of the following claims.
11860908
DETAILED DESCRIPTION It is to be understood that at least some of the figures and descriptions of the invention have been simplified to illustrate elements that are relevant for a clear understanding of the invention, while eliminating, for purposes of clarity, other elements that those of ordinary skill in the art will appreciate also comprise a portion of the invention. However, because such elements do not facilitate a better understanding of the invention, a description of such elements is not provided herein. Applicant has discovered a method, apparatus, and computer-readable medium for quantitatively grouping a set of persons into a plurality of groups of three or more persons using computational clustering that improves computer technology in the field of computational matching and ranking. Unlike existing matching and ranking technologies, which are some form of a matching process that matches each user with another user or each user with a particular interest or activity, the presently disclosed process is configured to optimize the grouping of persons within a population. This process is far more computationally complex than simple matching, due to the exponentially increasing number of permutations as the population size increases. For example, given a population of 100 people having certain attributes and requirements, and possible group sizes of 3-10 people, there are a large number of possible groupings of people that would meet the requirements of the population. Simple matching approaches which are designed to optimize one-to-one matches between users or between users and activities or advertisements cannot be applied to population level grouping problems. This is because these approaches consider each match in isolation, as each match does not have an effect on the overall population of users. These approaches are not applicable to population grouping because they optimize only individual matches for users and do not optimize results for the entire group. By contrast, in population grouping, even if a set of users that have been grouped together are optimally matched, the solution is not an optimal solution if it does not maximize results across all groupings of users from the population. In an exemplary embodiment, the disclosed method, apparatus, and computer-readable medium for quantitatively grouping a set of persons into a plurality of groups of three or more persons using computational clustering is used to group a population of caregivers or a subset of caregivers (e.g., mothers) into groups or “circles” based on attributes of the population of caregivers. These attributes can be derived based upon, for example, characteristics, values and/or interests of the population and individuals within it. This can aid mothers (or fathers or other caregivers) in finding inclusive, like-minded, and accessible groups of individuals to form support networks and relationships and exchange advice. FIG.1illustrates a flowchart for quantitatively grouping a set of persons into a plurality of groups of three or more persons using computational clustering according to an exemplary embodiment. At step101a set of data objects corresponding to a set of persons are stored. The data objects can take any suitable form. For example, the data objects can be rows of a person database or table. The data objects can also be objects in an object oriented language, such as a user-defined class, a structure, an array, or some other suitable structure. 
These examples are provided for illustration only and are not intended to be limiting. At step102a plurality of attributes corresponding to each data object in the set of data objects are stored. Once again, the storage of the attributes can take any suitable form. For example, the attributes can be stored in individual columns of a table or a database, or be stored as variables within a specialized person data object, such as a person struct object. Many variations are possible. FIG.2illustrates two examples of data objects corresponding to persons and associated attributes according to an exemplary embodiment. Data object201B corresponds to person201A and data object202B corresponds to person202A. As shown inFIG.2, each of the data objects includes a plurality of attributes. The attributes can be stored as variables within the data objects. For example, the Name attribute and the Address attribute can be stored as string variables, the Number of Children attribute can be stored as an integer, and the Host attribute and VirtualMeetings attribute (explained below) can be stored as Boolean values. In another example, the Interests attribute can be stored as an array of strings or as a single string. Many variations are possible and these examples are not intended to be limiting. The values of the attributes can be provided directly by users through a user interface. For example, the Name, Address, Zip Code, and Gender attributes can be entered by a user through a user interface when the user signs up for the group matching service. The values of the attributes can also be automatically generated or derived from information provided by the user through a user interface. For example, a user may be asked to answer a set of personality questions, resulting in a Myers-Briggs personality type being assigned to the user in a MeyersBriggs attribute. In another example, the attributes can be generated or derived based on answers to an assessment, such as the well-being assessment shown below. Answers are offered on a sliding 5-point scale, and the questions are asked in the Circle registration form and in the post-program survey; each behavior change/outcome is listed with its associated survey questions.

Take better care of themselves mentally & physically:
1. I take care of my mental health, including paying attention to how I am feeling and processing my emotions. [Never → Always]
2. I nurture my physical body, including getting adequate sleep, healthy food & body movement. [Never → Always]
3. I devote meaningful time to my personal interests and hobbies. [Never → Always]

Act with more confidence, presence and intention (home & office):
4. I am a good mom. [Never → Always]
5. I am a good employee. [Never → Always]
6. I take my own values, goals and intuition into account when making decisions. [Never → Always]

Ask for help (home & office):
7. I ask for help when I need it. At work: [Never → Always] At home: [Never → Always]

Cope with stress better:
8. My coping strategies in stressful times are: [Terrible → Excellent]

Feel more in control:
9. I feel stuck or unable to progress. At work: [Never → Always] At home: [Never → Always]

Feel more belonging, connection & community (home & office):
10. I feel connected to myself; I know who I am. [Not at all → Completely]
11. I feel like I can bring my whole self (all aspects of me, including my Motherhood) to work. [Not at all → Completely]
12. I feel lonely in my daily life. [Never → Always]
13. I have a community that provides emotional support and motivation. [Not at all → Completely]

Loyal to employer (retention):
14. I consider exiting the workforce or reducing my hours. [Never - Monthly - Daily]

Communicate more effectively:
15. I supportively listen to others. [Never → Always]
16. I clearly communicate with others. [Never → Always]

More productive:
17. I work smart: I use systems to evaluate and prioritize my responsibilities and keep clear boundaries to reinforce my efforts. At work: [Never → Always] At home: [Never → Always]

Other questions that can be used to derive attributes can include questions about group dynamics (e.g., “how comfortable are you sharing with a group?”), questions about readiness for change/growth (e.g., “are you open to trying new things?”), questions about how participants feel at work, and/or questions about inclusivity and belonging. Other examples of generated or derived attributes include distance-based metrics. For example, a user can provide an address and the system can utilize a Global Positioning System and/or map databases to determine a distance from the user's residence to the nearest playground. This may be a relevant attribute when forming groups of mothers, as discussed earlier. Other attributes can be Boolean (true/false) attributes indicating a user's preferences or answers to questions. For example, in the case of forming groups of mothers, each mother can be asked whether she can act as a host for the other mothers and their children. If the mother can act as a host, then the Host attribute can be set to true, otherwise it is set to false. Each mother can also be asked whether she is available for virtual meetings (e.g., via videoconferencing/teleconferencing software). This attribute can also be stored as a true/false value. The attributes corresponding to each data object can include, for example, a name, an address, a zip code, a quantity of children, a gender, a gender of a child, an interest, a hobby, a preferred group size, an indicator of hosting ability, an indicator of virtual communications ability, a personality indicator, a maximum travel distance, a transportation means, an availability attribute, an age of a child or children, an experience type (e.g., blended families, raising special needs kids, first-time mom over 40, etc.), race, a well-being indicator (e.g., resulting from an assessment both pre-grouping and post-grouping), a goal (e.g., caregivers can be asked why they are joining a Circle/group, with multiple choice answers including community, friendship, accountability, personal growth, empathy, parenting resources, etc.), household income, employment status, profession, role and/or function (e.g., entry-level employees, mid-level, executives), caregiving responsibilities (e.g., who the caregiver cares for, that person's condition, etc.), and matching preferences for whether a participant prefers to match with moms, dads, any parents (mother or father), or any caregivers (e.g., grandparents, nannies, or other caregivers). The maximum travel distance attribute can indicate the maximum distance the user is willing to travel to meet with the group. The transportation means attribute can indicate, for example, different means of transportation available to the user (e.g., bus, car, train, etc.). As shown inFIG.2, the availability attribute can be a listing of available days for each person. A second availability attribute can also be utilized to indicate available time slots or time periods within each day. A PreferredGroupSize attribute can be used to store the preferred group size for each person.
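For illustration, the person data object and the attributes just described might be modeled as follows. The field names and defaults are hypothetical, and a real deployment could equally use database rows; this sketch simply grounds the later encoding and clustering examples.

```python
# Hypothetical person data object carrying the attributes discussed above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PersonRecord:
    name: str
    address: str
    zip_code: str
    gender: str
    num_children: int
    interests: List[str] = field(default_factory=list)
    host: bool = False               # willing to host the group
    virtual_meetings: bool = False   # available for video/teleconference
    preferred_group_size: int = 5
    max_travel_distance_km: float = 10.0
    availability: List[str] = field(default_factory=list)  # e.g., ["Mon"]

alice = PersonRecord(
    name="Alice", address="1 Main St", zip_code="02139", gender="F",
    num_children=2, interests=["hiking"], host=True, virtual_meetings=True,
    preferred_group_size=6, availability=["Sat"])
```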
The preferred group size value can be entered by the user during registration and/or as part of the process of being assigned to a group. For example, some mothers may wish to have large groups of people and other mothers may be more introverted and wish to have more intimate groups. In addition to all the attributes shown inFIG.2and discussed above, a second set of attributes can be stored, corresponding to at least some of the above-mentioned attributes, that indicates a weight or rank of the corresponding attribute. This weight or rank or importance can be entered by a user in a user interface when providing the attribute information or information that is used to derive attributes. Using the above example of preferred group size, a user can enter a preferred group size and also enter a weight, rank, or importance to be associated with their selected preferred group size. For example, one or more attributes or input fields within the interface can have an associated “importance” indicator or input option. The importance indicator can allow a user to select an importance level of the particular attribute (e.g., from 1-10). In another example, a user can rank one or more attributes in order of importance. For example, a user may rank the #Children attribute 1st, indicating that it is most important to them to be grouped with others who have the same number of children, the maximum distance attribute 2nd, and the preferred group size attribute 3rd. Returning toFIG.1, at step103a plurality of multidimensional objects are generated by encoding each data object in the set of data objects as a multidimensional object based at least in part on two or more attributes corresponding to that data object, with each multidimensional object corresponding to a data object in the set of data objects. As explained further below, the multidimensional objects are utilized to generate multiple solution sets with different groupings (clusters) of users/data objects. FIG.3illustrates an example of encoding data objects as multidimensional objects according to an exemplary embodiment. As shown inFIG.3, three attributes of each data object201B and202B are used to generate multidimensional objects201C and202C. Note that the quantity and selection of attributes used to generate the multidimensional objects can be user-configured and/or set to some default value, and more, fewer, or different attributes can be utilized. Each multidimensional object, in this case, is a three-dimensional value with one dimension corresponding to gender (which is represented as a Boolean in this example, with female corresponding to “1”), one dimension corresponding to the #Children attribute, and one dimension corresponding to the PreferredGroupSize attribute. Of course, other attributes can be represented in the multidimensional object in addition to or in place of these attributes. As shown inFIG.3, the step of generating a plurality of multidimensional objects by encoding each data object in the set of data objects as a multidimensional object based at least in part on two or more attributes corresponding to that data object includes mapping a value of each attribute in the two or more attributes to a different dimension of the multidimensional object, each dimension corresponding to a range of possible values of the mapped attribute. Returning toFIG.1, at step104a plurality of groups of three or more data objects are generated based at least in part on applying a clustering algorithm to the multidimensional objects.
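A brief sketch of the step103encoding, reusing the hypothetical PersonRecord above, is shown below; the choice of the three mapped attributes mirrors theFIG.3example, while the exact numeric representation is an illustrative assumption.

```python
# Encode a data object as a multidimensional (MD) object by mapping selected
# attributes to dimensions.
def encode(person: PersonRecord) -> tuple:
    return (
        1.0 if person.gender == "F" else 0.0,  # gender as a Boolean dimension
        float(person.num_children),            # #Children dimension
        float(person.preferred_group_size),    # PreferredGroupSize dimension
    )

people = [alice]
md_objects = [encode(p) for p in people]
# Each tuple is a point in three-dimensional space; the clustering step then
# operates on distances between these points.
```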
The clustering algorithm can be any suitable clustering algorithm, such as k-means clustering. FIG.4illustrates a flowchart for generating a plurality of groups of three or more data objects based at least in part on applying a clustering algorithm to the multidimensional objects according to an exemplary embodiment. Optionally, prior to performing the steps shown inFIG.4, a pre-filtering or pre-sorting step can be applied to all multidimensional objects. For example, the MD objects can be sorted into groups based upon zip code and then the steps shown inFIG.4can be performed for each group. In this case, the step of generating the plurality of groups of three or more data objects based at least in part on applying a clustering algorithm to the plurality of multidimensional objects includes grouping the plurality of multidimensional objects into a plurality of multidimensional object groups based on one or more values of one or more dimensions of each multidimensional object. Steps401-404ofFIG.4(described below) would then be performed for each multidimensional object group in the plurality of multidimensional object groups. The dimension(s) used for grouping can include a zip code dimension, as indicated above, and/or other dimensions, such as a caregiver type matching preference (e.g., moms only, dads only, both, or other). At step401the clustering algorithm is applied to the plurality of multidimensional objects to generate a plurality of solution sets, each solution set comprising a plurality of clusters of multidimensional objects, each cluster of multidimensional objects comprising at least three multidimensional objects. Of course, the clustering algorithm can be configured such that each cluster of multidimensional objects comprises at least two multidimensional objects or such that each cluster of multidimensional objects comprises at least four or more multidimensional objects. A user-configured parameter can be used to adjust the minimum quantity of multidimensional objects per cluster. FIGS.5A-5Cillustrate an example of the clustering process according to an exemplary embodiment. As shown inFIG.5A, a set of data objects501are used to generate a plurality of multidimensional objects (“MD objects”)502, with each MD object corresponding to a particular data object. This process is described earlier. A clustering algorithm is then applied to the plurality of MD objects502to generate a plurality of solution sets503, with each solution set comprising a plurality of clusters of multidimensional objects, each cluster of multidimensional objects comprising at least three multidimensional objects. The clustering step clusters the MD objects into clusters based upon the distance between each MD object and all other MD objects in the multidimensional space of the MD objects. For example, if the MD objects are two-dimensional, then the clustering algorithm groups MD objects into clusters based on an assessment of distances between each MD object and other MD objects in two-dimensional space. As discussed earlier, clustering can be performed using K-means clustering. Clustering of MD objects can also be performed using the Balanced Iterative Reducing and Clustering using Hierarchies (“BIRCH”) method to cluster the input data objects. BIRCH is a robust clustering algorithm developed for analyzing large volumes of multivariate data. The algorithm is capable of ingesting input data in a continuous fashion. The clustering step when using BIRCH includes four steps, described below.
The first step is building a Clustering Feature (“CF”) tree—during this stage input data is loaded into a B-tree-like structure and data objects are agglomerated in the leaf nodes based on the relative Euclidean distance between the data objects. The data-object merging threshold is an input parameter of the BIRCH algorithm and is set initially to a small value. When the input data is normalized to the [0, 1] interval, a relatively small merging threshold value, such as 0.0001, can be used. Additionally, as discussed below, the threshold value can be automatically corrected during a subsequent intermediate step. The second step is CF tree condensing—this operation can be triggered when the CF tree exceeds a preset size. At this time the merging threshold can be recomputed and the CF tree can be rebuilt. A new value of the merging threshold can then be derived from the distance between entries in the existing CF tree. The third step is global clustering—at this step the BIRCH clustering algorithm applies a regular clustering algorithm to information collected in the CF tree. For example, the BIRCH algorithm implementation can utilize two global clustering options: CF tree refinement and Hierarchical Clustering (“HC”). While HC is capable of producing finer granularity clusters, its run time is significantly longer and its memory consumption is significantly higher than that of the CF tree refinement procedure. The fourth step is cluster matching—during this step input data objects are matched with the clusters produced after the refinement step. As explained previously, while the BIRCH algorithm is described above, clustering methods other than BIRCH can be used during the clustering step. For example, clustering algorithms such as DBSCAN or K-means can be used to group the MD objects into clusters.
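As a concrete (and deliberately simplified) stand-in for step401, the sketch below produces several candidate solution sets by running scikit-learn's KMeans with different cluster counts; the patent equally permits BIRCH or DBSCAN, and the parameters here are illustrative assumptions.

```python
# Generate candidate solution sets, each a list of clusters of MD object
# indices, keeping only solutions whose clusters meet the minimum size.
from sklearn.cluster import KMeans  # Birch or DBSCAN could be swapped in

def candidate_solution_sets(md_objects, ks=(2, 3, 4), min_cluster_size=3):
    solution_sets = []
    for k in ks:
        if k > len(md_objects):
            continue  # cannot form more clusters than there are objects
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(md_objects)
        clusters = {}
        for idx, label in enumerate(labels):
            clusters.setdefault(label, []).append(idx)  # indices, not values
        if all(len(c) >= min_cluster_size for c in clusters.values()):
            solution_sets.append(list(clusters.values()))
    return solution_sets
```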
FIG.5Billustrates one solution set, Solution Set 2, and the plurality of clusters504within that solution set according to an exemplary embodiment. As shown inFIG.5B, the total number of clusters in this particular solution is three, with Cluster 1 having four MD objects, Cluster 2 having four MD objects, and Cluster 3 having three MD objects. FIG.5Cillustrates a visualization of the clusters of Solution Set 2 in multidimensional space according to an exemplary embodiment. For the purposes of explanation and visualization, the multidimensional space is shown as a two-dimensional space, with a first dimension corresponding to a #Children attribute and a second dimension corresponding to a Preferred Group Size attribute. Of course, the actual dimensionality of each MD object and the number of dimensions utilized when the clustering is performed can be greater than two. Returning toFIG.4, at step402one or more solution sets are removed from the plurality of solution sets to generate a filtered plurality of solution sets based at least in part on one or more attributes of one or more data objects corresponding to one or more multidimensional objects in one or more clusters of each removed solution set. FIG.6illustrates a process for removing solution sets from the plurality of solution sets to generate a filtered plurality of solution sets according to an exemplary embodiment. Filter rules602and data objects603(including the associated attributes) are provided as input to a step of applying filter rules to each cluster within each solution set. The filter rules can include, for example, filtering out a solution set if a cluster size within that solution set deviates too greatly from a preferred group size of the persons within that cluster (i.e., the persons corresponding to the data objects that correspond to the MD objects), filtering out a solution set if a cluster size within that solution set exceeds a predefined threshold or is less than a predefined threshold, filtering out a solution set if a cluster within that solution set does not have any persons that are willing to host, filtering out a solution set if a cluster within that solution set has an incompatibility between attributes of persons in that cluster (e.g., an availability attribute, an ability to have virtual meetings, an available days attribute, etc.), and/or filtering out a solution set if a cluster within that solution set has too great of a distance between persons in that cluster or between one or more persons and a host. The filtering step604is applied to the solution sets601to identify solution sets for removal, which are then removed at step605. As shown in box606, this process results in the removal of solution set 3. Returning toFIG.4, at step403a solution set is selected from the filtered plurality of solution sets based at least in part on one or more weights associated with one or more attributes in the plurality of attributes. FIG.7illustrates an example of the process for selecting a solution set from the filtered plurality of solution sets according to an exemplary embodiment. As shown inFIG.7, a step of applying ranking rules704to each remaining solution set to rank the solution sets is applied to the remaining solution sets701. This step takes as input ranking rules702and data objects703(including the plurality of attributes). In step704, the ranking rules are used to assign weights to particular attributes of clusters, of MD objects within clusters, or to solution sets. For example, a solution set that has clusters in which persons are, on average, closer to one another than clusters in other solution sets may be assigned a higher weighting and consequently a higher ranking. Weights and ranks can also be assigned based on how well the characteristics of each cluster match the desired attributes of the persons within each cluster. For example, weights can reflect whether a cluster size is equal to the desired group size of all persons within that cluster or whether the number of children of all persons within a cluster matches. Many variations are possible, and these examples are not intended to be limiting. The ultimate solution set weight and rank can be computed based on some aggregation of the weights and ranks of clusters within each solution set, for example an average, a median, or a sum over the clusters within each solution set. Many variations are possible, and these examples are not intended to be limiting.
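The filtering and ranking of steps402-403can be sketched as follows, again using the hypothetical PersonRecord objects and index-based clusters from the earlier examples; the two rules and the single weight shown are invented placeholders for whatever rules a deployment configures.

```python
# Filter out infeasible solution sets, score the survivors, and pick the best.
def passes_filters(solution_set, people):
    for cluster in solution_set:
        members = [people[i] for i in cluster]
        if not any(m.host for m in members):  # rule: each group needs a host
            return False
        avg_pref = sum(m.preferred_group_size for m in members) / len(members)
        if abs(len(members) - avg_pref) > 3:  # rule: size near preference
            return False
    return True

def rank(solution_set, people, weight_size=1.0):
    score = 0.0
    for cluster in solution_set:
        members = [people[i] for i in cluster]
        avg_pref = sum(m.preferred_group_size for m in members) / len(members)
        score -= weight_size * abs(len(members) - avg_pref)
    return score / len(solution_set)

def select_solution(solution_sets, people):
    survivors = [s for s in solution_sets if passes_filters(s, people)]
    return max(survivors, key=lambda s: rank(s, people), default=None)
```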
Returning toFIG.4, at step403a solution set is selected from the filtered plurality of solution sets based at least in part on one or more weights associated with one or more attributes in the plurality of attributes. FIG.7illustrates an example of the process for selecting a solution set from the filtered plurality of solution sets according to an exemplary embodiment. As shown inFIG.7, a step704of applying ranking rules to each remaining solution set to rank the solution sets is applied to the remaining solution sets701. This step takes as input ranking rules702and data objects703(including the plurality of attributes). In step704, the ranking rules are used to assign weights to particular attributes of clusters, of MD objects within clusters, or of solution sets. For example, a solution set that has clusters in which persons are, on average, closer to one another than clusters in other solution sets may be assigned a higher weighting and consequently a higher ranking. Weights and ranks can also be assigned based on how well the characteristics of each cluster match the desired attributes of the persons within each cluster. For example, a weight can reflect whether a cluster size is equal to the desired group size of all persons within that cluster or whether the number of children of all persons within a cluster matches. Many variations are possible, and these examples are not intended to be limiting. The ultimate solution set weight and rank can be computed based on some aggregation of the weights and ranks of clusters within each solution set, for example, an average, a median, or a sum over the clusters within each solution set. Many variations are possible, and these examples are not intended to be limiting. The ranking rules and associated weights can be dynamic, changing in response to user feedback, user assessments, or other feedback from the system. For example, a post-grouping/post-circle assessment can be performed for all individuals that have been grouped and used to adjust, remove, or add new ranking rules. An assessment, such as the well-being assessment discussed earlier, can serve as a benchmark used to evaluate the effectiveness and benefit of the groupings. By measuring well-being prior to and after grouping, the system can determine a “return on investment” or improvement to each member's well-being. The groups which result in the greatest improvement to well-being (e.g., improvement above a predetermined threshold) can then be analyzed to determine commonalities between the members, attributes of the individual members and of the group as a whole, and statistical characteristics of the clusters and clustering process used to create the groups. The result of this analysis can then be used to adjust the ranking rules and weightings to select for groups which have the desired attributes and create groups in the future with a higher likelihood of improving well-being. The ranking rules and weightings can therefore be based upon a feedback loop from previous successful groups, as well as each user's own weightings. The system can identify criteria for optimal group selection using the above-described feedback loop to measure characteristics and attributes (individual, interpersonal, and/or aggregate across members of the group) of successful groups. For example, the system can determine that optimal success of a group (as measured, for example, by improvement in well-being scores) occurs when one or more of the following criteria are met: (1) the average well-being score of the group is above some threshold value, (2) the members of the group have a certain mix or distribution of attributes or demographics, (3) the age gap or age range between children of participating moms does not exceed a maximum number of months, and/or (4) the ratio of certain Myers-Briggs personality types to other Myers-Briggs personality types matches a predetermined ratio known to produce optimal results or the distribution of Myers-Briggs personality types matches an ideal distribution for optimal results. After applying704the ranking rules702, a top-ranking solution set is selected at step705. In this example, the top-ranking solution is Solution Set 2, as shown in box706. Returning toFIG.4, at step404the plurality of data objects are grouped according to the plurality of clusters defined in the selected solution set. This step includes identifying the data objects corresponding to MD objects within each cluster of the selected solution set and grouping the identified data objects together as a group. FIG.8illustrates an example of grouping a plurality of data objects according to the plurality of clusters defined in the selected solution set according to an exemplary embodiment. As shown inFIG.8, the selected solution set is Solution Set 2, which includes the clusters shown in box801. The data objects corresponding to the MD objects in each cluster are grouped together in groups that correspond to the clusters, as shown in box802.
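The ranking, selection, and grouping steps ofFIGS.7and8can be sketched as follows. The scoring rule, weight names, and data shapes are illustrative assumptions, and the aggregation shown uses an average, which is one of the options mentioned above:

```python
# An illustrative sketch of ranking (704), selection (705), and
# grouping (404); attribute and weight names are assumptions.
from statistics import mean

def cluster_score(cluster, weights):
    """Example ranking rule: reward clusters whose size matches the
    members' preferred group size."""
    size = len(cluster)
    preferred = mean(md["preferred_group_size"] for md in cluster)
    return -weights["size_match"] * abs(size - preferred)

def select_top_solution_set(solution_sets, weights):
    """Steps 704-705: aggregate per-cluster scores (here, an average)
    into a solution-set rank and select the top-ranking set."""
    return max(solution_sets,
               key=lambda clusters: mean(cluster_score(c, weights)
                                         for c in clusters))

def group_data_objects(solution_set, md_to_data_object):
    """Step 404: map each cluster's MD objects back to their data
    objects to form the final groups."""
    return [[md_to_data_object[md["id"]] for md in cluster]
            for cluster in solution_set]
```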
FIG.9illustrates the components of the specialized computing environment900configured to perform the specialized processes described herein. Specialized computing environment900is a computing device that includes a memory901that is a non-transitory computer-readable medium and can be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two. As shown inFIG.9, memory901can include user interface software901A, a database901B that stores the data objects, plurality of attributes, MD objects, generated clusters, solution sets, and all other intermediate data generated by the processes described herein, MD object generation software901C, clustering software901D, solution set filtering software901E, solution set ranking software901F, filter rules901G, ranking rules901H, and any additional software901I required to implement the specialized processes described herein. Each of the software components in memory901stores specialized instructions and data structures configured to perform the corresponding functionality and techniques described herein. All of the software stored within memory901can be stored as computer-readable instructions that, when executed by one or more processors902, cause the processors to perform the functionality described with respect toFIGS.1-8. Processor(s)902execute computer-executable instructions and can be real or virtual processors. In a multi-processing system, multiple processors or multicore processors can be used to execute computer-executable instructions to increase processing power and/or to execute certain software in parallel. Specialized computing environment900additionally includes a communication interface903, such as a network interface, which is used to communicate with devices, applications, or processes on a computer network or computing system, collect data from devices on a network, and implement encryption/decryption actions on network communications within the computer network or on data stored in databases of the computer network. The communication interface conveys information such as computer-executable instructions, audio or video information, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier. Specialized computing environment900further includes input and output interfaces904that allow users (such as system administrators) to provide input to the system to set parameters, to edit data stored in memory901, or to perform other administrative functions. An interconnection mechanism (shown as a solid line inFIG.9), such as a bus, controller, or network interconnects the components of the specialized computing environment900. Input and output interfaces904can be coupled to input and output devices. For example, Universal Serial Bus (USB) ports can allow for the connection of a keyboard, mouse, pen, trackball, touch screen, or game controller, a voice input device, a scanning device, a digital camera, remote control, or another device that provides input to the specialized computing environment900. Specialized computing environment900can additionally utilize a removable or non-removable storage, such as magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, USB drives, or any other medium which can be used to store information and which can be accessed within the specialized computing environment900. Having described and illustrated the principles of our invention with reference to the described embodiment, it will be recognized that the described embodiment can be modified in arrangement and detail without departing from such principles.
It should be understood that the programs, processes, or methods described herein are not related or limited to any particular type of computing environment, unless indicated otherwise. Elements of the described embodiment shown in software may be implemented in hardware and vice versa. It will be appreciated by those skilled in the art that changes could be made to the embodiments described above without departing from the broad inventive concept thereof. For example, the steps or order of operation of one of the above-described methods could be rearranged or occur in a different series, as understood by those skilled in the art. It is understood, therefore, that this disclosure is not limited to the particular embodiments disclosed, but it is intended to cover modifications within the spirit and scope of the present disclosure as defined by the appended claims.
31,989
11860909
DETAILED DESCRIPTION In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are explicitly contemplated and form part of this disclosure. As previously mentioned, computers are liable to misclassify data into different groupings when filtering the data, especially when such classifications rely on data that is manually input and likely to include typographical errors. When computers are performing these classifications based on different relationships between the computer data, such as identifying profiles that are owned by different entities that are a part of the same household, this problem can become even more pronounced given the amount of data that is stored in each profile (e.g., a computer may inadvertently match different entity profiles into the same grouping if they have one value in common (such as a common first name), despite the profiles not being related). Computers will often rely only on pre-stored data when creating different entity profile groupings without the capability to identify groupings of entity profiles that do not share a common attribute. For instance, a computer may easily group two profiles together that share a matching grouping attribute, but the computer may not be able to group the profiles together if the value of the grouping attribute is missing from one or both of the profiles. Accordingly, when an operator attempts to filter different entity profiles based on different groupings to do an analysis, the computer may not include every entity profile that should be included in the grouping for the analysis. Thus, the analysis may be incomplete or incorrect. In one example, a user may wish to determine which entity profiles in a database maintained by a banking system correspond to or represent individuals that reside in the same household. A computer not using the methods described herein may simply match the addresses between different entity profiles and determine that matching addresses mean the entities belong to the same household. Other computers may store identifiers of the households in the profiles themselves. However, implementing each of these methods may result in an incorrect household identification because individuals are liable to move households (e.g., children may move off to college or leave the house after starting a new job) without updating their profile information. Further, because these attributes are manually input by users, the attributes may not be correct because of a typo or failure to add the household attribute to the profiles. Implementations of the systems and methods discussed herein overcome these technical deficiencies because they provide a method for automatically updating entity profiles with group entity labels.
The group entity labels may be values that indicate that entity profiles are profiles for individuals that are associated with or a part of the same group entity (e.g., part of the same household or organization). A computer may evaluate attribute-value pairs of individual entity profiles by comparing the attribute-value pairs against each other. The computer may use a specific set of rules that include combinations of different attribute-value pairs that, upon being determined to be a match, cause the computer to determine the individuals are a part of the same group. The computer may update the individuals' profiles with a value that indicates the group entity. Thus, the profiles may be filtered and/or analyzed as being associated with the same group entity. In some instances, there may be attribute-value pairs that have different real-world limitations that may cause the pairs to be considered differently when determining whether entity profiles are associated with the same group entity. Continuing with the banking system example, there may be instances in which individuals share the same bank account (e.g., a joint account). Accordingly, the individuals' entity profiles may each have the same account number in their corresponding account number attribute-value pair. Such attribute-value pairs may be assigned to a different set of rules that are separate from other sets of rules because such attribute-value pairs may have real-world characteristics that indicate whether individuals are associated with the same group entity (e.g., a matching joint account number and last name combination may not be an accurate indicator of whether a father and daughter are a part of the same household because the daughter may have kept the same joint account after moving out of the house). These unique sets of rules may later be synthesized together based on a common entity profile to build out the entity profiles that are associated with the same group entity, thereby improving the accuracy of the entity profiles that are associated with the group entity. Another advantage to splitting the rules into different sets based on the attributes of the rules is to avoid misidentifying a group of entity profiles that is too large. For instance, if a computer is attempting to identify entity profiles of individuals that reside in the same household, the computer may seek to avoid identifying large public group entities that may share common characteristics such as addresses, but may not be considered a household (e.g., a jail or a college dormitory). The computer may set thresholds for the different sets of rules, determine the number of entity profiles that match each other based on each set of rules, and determine whether the number exceeds either threshold. If the computer identifies a set of rules with a number of entity profiles that exceeds a threshold, the computer may determine that it is unclear whether the entity profiles are associated with the same household and may not group them together (e.g., not update the respective entity profiles with a group entity label). The computer may compare the number of entity profiles that were identified as matching based on both sets of rules to the thresholds and either only group the entity profiles together that match based on a set of rules that did not exceed a threshold or not group any group of entity profiles together for a group entity if one of the sets exceeds a threshold.
Thus, the computer may accurately group the entity profiles together for a user seeking to filter entity profiles by group entity and avoid inaccurate groupings. Yet another advantage to splitting the rules into different sets based on the attributes of the rules is to allow the system to accurately identify large households. For example, as described above, the system may be configured to avoid grouping entity profiles into group entities that are too large to be considered a household by setting a maximum entity profile threshold for each set of rules. While this may help avoid misclassifying entity profiles into improper households, it may also cause the system to fail to identify large households that happen to have a number of entity profiles that are a part of the household that would be filtered out based on the thresholds. To overcome this issue, a data processing system may split the rules into different sets to allow for the system to apply the thresholds to the different sets to filter out improper households, but not set any thresholds for the sets of entity profiles that have been synthesized together (e.g., sets of entity profiles from the different sets of rules that were synthesized based on the sets having a common entity profile). In this way, the system may accurately identify households that would otherwise generally be considered to be too large. Furthermore, one advantage of avoiding inaccurate groupings is it improves processing efficiency and lessens the data storage requirements for processing and storing entity profiles. For instance, when attempting to analyze entity profiles based on their groupings (whether it is just counting the number of entity profiles that are in particular groups, filtering the entity profiles based on the grouping, or identifying the online actions of entity profiles of the groups), a computer may avoid expending processing resources on the data of improper group entities. Further, different group entities may have profiles that are stored in memory and have storage requirements. Avoiding improper entity profile groupings may lessen the number of group entity profiles a computer needs to store in memory. Thus, by implementing the systems and methods described herein, a computer may have more memory and processing resources available for the applications that the computer may execute. FIG.1illustrates an example system100for household entity generation, in some embodiments. In brief overview, system100can include two client devices102and104that communicate with a household entity generator106over a network108. These components may operate together to identify different entity profiles that can be grouped together and update the entity profiles based on the groupings. System100may include more, fewer, or different components than shown inFIG.1. For example, there may be any number of client devices, computers that make up or are a part of household entity generator106, or networks in system100. Client devices102and104and/or household entity generator106can include or execute on one or more processors or computing devices and/or communicate via network108. Network108can include computer networks such as the Internet, local, wide, metro, or other area networks, intranets, satellite networks, and other communication networks such as voice or data mobile telephone networks.
Network108can be used to access information resources such as web pages, websites, domain names, or uniform resource locators that can be presented, output, rendered, or displayed on at least one computing device (e.g., client device102or104), such as a laptop, desktop, tablet, personal digital assistant, smartphone, portable computer, or speaker. For example, via network108, client devices102and104can access entity profiles that are stored on household entity generator106. Each of client devices102and104and/or household entity generator106can include or utilize at least one processing unit or other logic devices such as a programmable logic array engine or a module configured to communicate with one another or other resources or databases. The components of client devices102and104and/or household entity generator106can be separate components or a single component. System100and its components can include hardware elements, such as one or more processors, logic devices, or circuits. Household entity generator106may comprise one or more processors that are configured to analyze and update entity profiles based on the entity profiles corresponding to individuals that are a part of the same group (e.g., reside in the same household). Household entity generator106may comprise a network interface110, a processor112, and/or memory114. Household entity generator106may communicate with client devices102and104via network interface110. Processor112may be or include an ASIC, one or more FPGAs, a DSP, circuits containing one or more processing components, circuitry for supporting a microprocessor, a group of processing components, or other suitable electronic processing components. In some embodiments, processor112may execute computer code or modules (e.g., executable code, object code, source code, script code, machine code, etc.) stored in memory114to facilitate the activities described herein. Memory114may be any volatile or non-volatile computer-readable storage medium capable of storing data or computer code. Memory114may include an event identifier116, a profile matcher118, a profile parser120, a profile updater122, a rule database124, and/or a profile database126, in some embodiments. In brief overview, components116-126may cooperate to store entity profiles in profile database126that are accessible to different individuals, detect the occurrence of an event that causes components118-126to identify groupings of entity profiles, identify entity profiles that match and are associated with the same group entity based on one or more sets of rules, filter out any matched entity profiles that may not be associated with a group entity, and update the profiles according to the matched group entity. In some embodiments, the components116-126may generate new group profiles based on new matches and store the group profiles in profile database126such that the group profiles may be analyzed based on the entity profiles that are associated with the respective group profiles. Household entity generator106may store entity profiles in profile database126. Entity profiles may be accounts for different individuals such as bank accounts, credit card accounts, user profiles with different websites, etc. The entity profiles may include attribute-value pairs that each include a different attribute and a value for the attribute.
For example, the entity profiles may include attribute-value pairs for first name, last name, full name, address, phone number, cell phone number, home phone number, tax identification number, street name, zip code, city, state, account number, household identification number, group entity number, company identification number, etc. Household entity generator106may initially store the entity profiles with attribute-value pairs with blank values in the data structures for the respective entity profiles. Household entity generator106may then add values to the attribute-value pairs as household entity generator106receives inputs (e.g., user inputs) indicating the values. Household entity generator106may add the values to the attribute-value pairs by updating the respective data structures with the values. Profile database126may include one or more databases (e.g., databases in a distributed system). Profile database126may store profiles for different individual entities (e.g., individual people) and/or group entities (e.g., companies, households, organizations, etc.). The individual entity profiles may be referred to hereafter as entity profiles and the group entity profiles as group entity profiles. The entity profiles may be stored as data structures with one or more attribute-value pairs as described above. The group entity profiles may similarly be stored as data structures with attribute-value pairs (e.g., name, address, phone number, number of employees, names of the employees, number of people in the household, type of business, etc.). The different profiles that are stored in profile database126may be updated over time as household entity generator106either receives new values for the profiles or determines values for profiles. Event identifier116may comprise programmable instructions that, upon execution, cause processor112to identify or detect that a profile refresh event occurred that causes components118-122to analyze and update entity profiles in profile database126. A profile refresh event may be any event that causes event identifier116to evaluate the entity profiles that are stored in memory114of household entity generator106to determine whether they are associated with a particular group entity. Profile refresh events may be stored rules in memory114that cause household entity generator106to evaluate the entity profiles. Examples of profile refresh events may include detecting a user input at a user interface (e.g., detecting a selection of a profile refresh button), detecting a time period (e.g., a predetermined time period) has passed since the last instance household entity generator106evaluated the entity profiles, detecting the addition of a new entity profile to memory114, detecting the number of entity profiles stored in memory114exceeds a threshold, detecting a number of entity profiles have been added to memory114since the last instance household entity generator106evaluated the entity profiles, etc. In some embodiments, household entity generator106may evaluate the different rules of the profile refresh events over time and/or each time household entity generator106adds an entity profile to memory114.
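As one illustration, the entity profile data structure and the profile refresh events described above might be sketched as follows. The attribute names, thresholds, and time period are assumptions made for illustration:

```python
import time

def new_entity_profile():
    """An entity profile as attribute-value pairs, initially blank;
    values are added as user inputs arrive."""
    return {"first_name": None, "last_name": None, "address": None,
            "zip_code": None, "phone_number": None, "email": None,
            "customer_id": None, "group_entity": None}

class EventIdentifier:
    """A sketch of event identifier 116: stored rules that, when
    satisfied, trigger re-evaluation of the entity profiles."""

    def __init__(self, max_profiles=100_000, refresh_period_s=86_400):
        self.max_profiles = max_profiles          # example threshold
        self.refresh_period_s = refresh_period_s  # example period
        self.last_refresh = time.monotonic()

    def refresh_event_detected(self, profiles, button_pressed=False):
        elapsed = time.monotonic() - self.last_refresh
        return (button_pressed                    # user interface input
                or elapsed > self.refresh_period_s
                or len(profiles) > self.max_profiles)
```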
Profile matcher118may comprise programmable instructions that, upon execution, cause processor112to apply sets of rules from rule database124to the profiles of profile database126after event identifier116detects a profile refresh event. Rule database124may be a relational database that stores any number of rules that can be used to match entity profiles together. For instance, profile matcher118may apply a first set of rules from rule database124to the attribute-value pairs of the entity profiles to identify entity profiles with specific sets of matching attribute-value pairs. The first set of rules may include matching rules for one or more attribute-value pairs (e.g., entity profiles may be a match between each other for a rule if a specific set of attribute-value pairs match). For instance, one rule may indicate two profiles are a match between each other if they have a match between the last name, street number, street name, and zip code attribute-value pairs. Another rule may indicate two profiles are a match if the profiles have matching last name, street name, city, and state attribute-value pairs. Yet another rule may account for last names with more than one token (e.g., a multi-part last name) and may indicate a match if they have at least one matching token in the last name attribute-value pair and a match between the street number, street name, and zip code attribute-value pairs. The first set of rules may include any number of rules for any attributes. In some embodiments, the rules may specify that the attribute-value pairs are a match if the values of the attribute-value pairs do not exactly match, but instead only approximately match. For instance, some rules may specify that last names may be a fuzzy match if they match according to a phonetic function (e.g., a phonetic hashing function). Profile matcher118may apply such rules by applying the phonetic hashing function to each last name attribute-value pair of the entity profiles and determining matches between the resulting hashes to be fuzzy matches. Some rules may specify that attribute-value pairs are a fuzzy match if they are a match within a threshold as determined using an edit distance function (e.g., Levenshtein distance or cosine similarity). Profile matcher118may apply the edit distance function between corresponding attribute-value pairs and determine how many changes would need to occur before the values match. Profile matcher118may compare the number of changes to a threshold (e.g., a predetermined threshold). Profile matcher118may determine any values with a number of changes below the threshold are a match and values with a number of changes above the threshold are not a match. In some embodiments, the value of the threshold may differ depending on the length of the value (e.g., the number of characters of the value). For instance, profile matcher118may determine longer street names correspond to higher thresholds. This may be beneficial because the longer the value, the higher the chance of a typographical error in the value. Rules may use phonetic functions and/or edit distance functions to determine matches for any attributes. In some embodiments, profile matcher118may use phonetic functions and/or edit distance functions to determine exact matches. In some embodiments, individual rules may include a combination of exact matches and fuzzy matches for different attributes. For instance, one rule may indicate two entity profiles are a match if the last name and street name attribute-value pairs are fuzzy matches (e.g., a fuzzy match based on a phonetic function and/or an edit distance function) and the street number, city, and state attribute-value pairs are exact matches. Another rule may indicate two profiles are a match if the same attributes are all fuzzy matches. Yet another rule may indicate two profiles are a match if the same attributes are all exact matches.
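A sketch of one possible fuzzy match follows, using a pure-Python Levenshtein edit distance with a length-dependent threshold as described above. The threshold formula is an assumption, and a phonetic hashing function (e.g., Soundex) could be used in place of, or alongside, the simple lowercasing shown here:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance: insertions, deletions, substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def fuzzy_match(a: str, b: str) -> bool:
    """Fuzzy match with a length-dependent threshold: longer values
    tolerate more edits (the divisor 8 is an illustrative choice)."""
    threshold = max(1, min(len(a), len(b)) // 8)
    return levenshtein(a.lower(), b.lower()) <= threshold
```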
The rules may have any combination of exact and/or fuzzy matches for any combination of attribute-value pairs. In some embodiments, the first set of rules may be divided into different subsets of rules. The first set of rules may be divided based on the attribute-value pairs that are included in the rules. For instance, one subset of rules may include different variations of attribute-value pairs that each include a last name attribute-value pair (e.g., an exact matching last name, a fuzzy matching last name, or a last name inclusion). Another subset of rules may each include a phone number attribute-value pair (e.g., an exact or fuzzy home and/or cell phone number). Yet another subset of rules may each include an email attribute-value pair. Thus, profile matcher118may be able to divide the matching sets of entity profiles based on the types of matching attribute-value pairs, each of which may contribute differently to indicating whether the entity profiles are associated with the same group entity. By applying the first set of rules to the attribute-value pairs of the entity profiles, profile matcher118may determine if the entity profiles are associated with a common group entity, such as a household (e.g., the individuals that correspond to the entity profiles are members of the same household). Profile matcher118may identify any entity profiles that profile matcher118determines to be a match as candidate profiles for being associated with the same group entity. In some embodiments, profile matcher118may erase or delete any pre-stored group entity labels that were previously stored in the entity profiles before applying the first set of rules to the entity profiles. For instance, the households of different individuals may change over time as people move from and/or are added to different households. Upon moving households, the individuals may not update their household information in their entity profiles. Instead, the individuals may update their other information, such as their new address and/or phone number. Accordingly, profile matcher118may need to cleanse and refresh the household labels for all entity profiles that are stored in memory114upon detecting each profile refresh event (e.g., using a batch processing technique). By doing so, profile matcher118may ensure the entity profiles stay up-to-date and that the entity profiles do not contain stale or incorrect data. In some embodiments, before applying the first set of rules to the entity profiles, profile matcher118may use a normalization technique on their attribute-value pairs. For example, profile matcher118may normalize five attributes: last name, address, phone number, email, and/or joint account number. An example of such normalization may include normalizing last names that have composite tokens that are connected by a hyphen (e.g., Aranda-Gonzalez). Profile matcher118may replace the hyphen with a white space. Another example of such normalization may include unifying all zip codes to five digits by converting any zip code strings with nine digits to five-digit strings with the remaining digits replaced with null values. Normalization techniques may help avoid variations with the attributes (e.g., avoid instances where one entity profile includes a composite last name and another entity profile has the otherwise same last name but without the hyphen). Any normalization technique may be used to normalize any attribute-value pairs.
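The normalization examples above might be sketched as follows. This is a minimal illustration; real address and phone normalization would involve more cases:

```python
import re

def normalize_last_name(last_name: str) -> str:
    """Replace the hyphen in composite last names with white space,
    e.g., 'Aranda-Gonzalez' -> 'Aranda Gonzalez'."""
    return last_name.replace("-", " ").strip()

def normalize_zip_code(zip_code: str):
    """Unify ZIP codes to five digits: a nine-digit string is reduced
    to its first five digits; shorter strings yield a null value."""
    digits = re.sub(r"\D", "", zip_code)
    return digits[:5] if len(digits) >= 5 else None
```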
Profile matcher118may label the entity profiles according to the rules that caused the profiles to be a match. Profile matcher118may do so in response to profile matcher118identifying at least one pair of entity profiles that match according to the first set of rules. For instance, each rule may correspond to a different label that may have a stored association (e.g., a relationship in rule database124) with the rule. The first set of rules may have labels such as 1, 1.1, 1.2, 1.3, 1.4, etc. Upon identifying entity profiles that match, profile matcher118may identify the labels of the rules that caused the entity profiles to match. Profile matcher118may retrieve the labels and update the entity profiles with the retrieved labels to indicate the match (e.g., update the entity profiles by adding the labels to one or more intermediary label attribute-value pairs of the profiles). Thus, profile matcher118may identify the candidate entities that may be associated with a household by labeling the entity profiles with the matching labels. In some embodiments, profile matcher118may apply labels to the matching entity profiles based on the different subsets of rules. For instance, each subset of rules may have a different initial value associated with the subset (e.g., one subset may include the labels 1, 1.1, 1.2, 1.3, etc., and another subset may include the labels 2, 2.1, 2.2, 2.3, 2.4, etc.). Profile matcher118may retrieve the labels associated with the different subsets and update the matching entity profiles based on the labels to indicate the subsets of the first set of rules that caused the entity profiles to match between each other. Profile parser120may then determine if the different candidate entity profiles may be updated with a group entity label. Profile parser120may comprise programmable instructions that, upon execution, cause processor112to determine if the candidate profiles that profile matcher118identifies can be updated with a group entity label. Profile parser120may identify the number of profiles that are associated with the same group entity. To do so, profile parser120may synthesize the entity profiles based on the matching entity profiles sharing a common entity profile. For instance, profile parser120may determine entity profile A matches entity profile B and entity profile B matches entity profile C. Because the matches between entity profiles A and B and entity profiles B and C share a common entity profile (e.g., entity profile B), profile parser120may determine entity profiles A, B, and C are candidate entity profiles for the same group entity. Continuing with this example, entity profiles C and D may be a match, entity profiles E and F may be a match, and entity profiles F and G may be a match. Because entity profiles A, B, and C share a common entity profile with the match between entity profiles C and D, entity profile D may be added to the group of A, B, and C as a candidate entity profile for the same group entity. Meanwhile, entity profiles E, F, and G may be candidate entity profiles for another group entity because entity profile F is a common entity profile between the two matches and the entity profiles do not share a match with entity profiles A, B, C, and D. By synthesizing entity profiles in this way, profile parser120may identify sets of entity profiles that satisfy the first set of rules for different group entities based on the sets of entity profiles having matching attribute-value pairs.
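One way to implement the synthesis described above is a union-find structure over the pairwise matches, with the group-size check (discussed further below) applied to the resulting candidate groups. The threshold value here is an illustrative assumption:

```python
class UnionFind:
    """Profiles that share a common match end up in the same group."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def synthesize(matches):
    """E.g., matches (A,B), (B,C), (C,D) yield one group {A,B,C,D}."""
    uf = UnionFind()
    for a, b in matches:
        uf.union(a, b)
    groups = {}
    for profile_id in list(uf.parent):
        groups.setdefault(uf.find(profile_id), set()).add(profile_id)
    return list(groups.values())

MAX_GROUP_SIZE = 8   # illustrative threshold

def keep_plausible_groups(groups):
    """Discard candidate groups whose size exceeds the threshold
    (e.g., a jail or dormitory sharing one address)."""
    return [g for g in groups if len(g) <= MAX_GROUP_SIZE]
```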
Profile parser120may generate a list for each group entity that includes the candidate entity profiles for the group entity. Profile parser120may maintain a counter for each list and increment the counters for each entity profile on the list. Profile parser120may determine the number of candidate entity profiles that are associated with each group entity as the count of the counter. Profile parser120may determine if the number of candidate entity profiles of a set of entity profiles exceeds a threshold. The threshold may be a defined threshold (e.g., 4, 5, 6, 7, 8, 9, etc.). Profile parser120may compare the number of candidate entity profiles to the threshold and determine if the number exceeds the threshold. Advantageously, by doing so, profile parser120may avoid incorrectly assigning a group entity label to entity profiles. For instance, if profile parser120is grouping the entity profiles into individual households, profile parser120would not include public areas (e.g., jails or college dormitories) as households, which may result in a number of entity profiles that exceeds the threshold. If profile parser120identifies a set of entity profiles for a group entity that exceeds the threshold, profile parser120may discard the labels on the set of entity profiles. Profile parser120may discard the labels by removing the intermediary labels from the entity profiles that indicate the entity profiles match, leaving the intermediary label and group entity attribute-value pairs for the entity profiles blank. In some embodiments, in instances in which the entity profiles previously had a group entity label in the group entity attribute-value pair, profile parser120may remove the group entity label from the attribute-value pair. In embodiments in which profile parser120cleanses group entity labels prior to applying the first set of rules, profile parser120may discard a previous group entity label from an entity profile during the cleansing and leave the group entity attribute-value pair blank upon refreshing the group entity attribute-value pair labels for the entity profiles. This may occur when profile parser120has more data for other entity profiles that causes another subset of entity profiles to match the first set of entity profiles and the match causes the first set of entity profiles to exceed the threshold after synthesis. Profile matcher118may apply a second set of rules to the entity profiles. The second set of rules may be similar to the first set of rules, but include a distinct set of attribute-value pairs in the set. For example, each rule in the second set of rules may at least include an account number attribute-value pair that must exactly match between the different entity profiles for the rule to be satisfied. The rules may otherwise have other attribute-value pairs that may either be a fuzzy match or an exact match. In some embodiments, the second set of rules may be separate from the first set of rules because the attribute-value pair that is common to the second set of rules may be retrieved from another data source (e.g., another computer or database). For instance, while the first set of rules may only include or reference attribute-value pairs that are stored in the data structures of the entity profiles, the second set of rules may include an attribute-value pair that is common across the second set of rules that is stored in another data structure, such as a database or table that stores values for the attribute-value pair.
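The separate data structure described above might look like the following sketch, with customer identifiers acting as look-up keys into an account table. The table shape and the example second-set rule are assumptions made for illustration:

```python
# An illustrative account table stored separately from the profiles.
account_table = [
    {"customer_id": "C-1", "account_number": "555-01"},
    {"customer_id": "C-1", "account_number": "555-02"},  # one-to-many
    {"customer_id": "C-2", "account_number": "555-01"},  # joint account
]

def account_numbers_for(profile):
    """Resolve a profile's account numbers via its customer identifier."""
    return {row["account_number"] for row in account_table
            if row["customer_id"] == profile["customer_id"]}

def second_set_rule_example(p1, p2):
    """Example rule: exact account number match plus exact email match."""
    shared = account_numbers_for(p1) & account_numbers_for(p2)
    return bool(shared) and p1["email"] == p2["email"]
```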
Continuing with the example above, the account number attribute-value pair for the entity profiles may be aggregated and stored in an account table. The account table may be logically and/or physically separate from the database that stores the entity profiles. The account table may include the account numbers and customer identifiers. The customer identifiers may link the entity profiles to the account numbers as a key. Such a storage configuration may be advantageous given a single customer may have multiple account numbers or customer numbers in a one-to-many relationship (e.g., in instances in which a customer has a savings account and a checking account or a child and a parent have different accounts but the same customer number). For instance, entity profiles may have a customer identifier attribute-value pair but not an account attribute-value pair. Upon applying the second set of rules to the entity profiles, profile matcher118may identify the account numbers for the different entity profiles responsive to the account numbers having stored relationships with the same customer numbers as are stored in the entity profiles. Profile matcher118may identify the attribute-value pairs of the entity profiles that correspond to the account numbers (e.g., that have a stored relationship with the same customer number) and apply the second set of rules to the entity profiles similar to the first set of rules by identifying entity profiles with attribute-value pairs including the account number attribute-value pair that match. Examples of rules of the second set of rules include exact account number, street number, and zip code matches and a fuzzy street name match; exact account number and home phone number matches; exact account number and exact cell phone number matches; and exact account number and email matches. The second set of rules may include account number and any combination or variation of attribute-value pairs that match for the rules to be satisfied. Profile matcher118may label the candidate entity profiles of the second set of rules according to the rules that caused the entity profiles to match. Profile matcher118may do so in response to profile matcher118identifying at least one set of entity profiles that match according to the second set of rules. For instance, each rule of the second set of rules may correspond to a different label that has a stored association with the rule. The second set of rules may have labels such as 3, 3.1, 3.2, 3.3, 3.4, etc. Such labels may be sequential to the last label of the first set of rules or subset of rules (e.g., if the first set of rules includes the labels 1, 1.1, and 1.2, the second set of labels may include the labels 2, 2.1, and 2.2). Upon identifying entity profiles that match according to the second set of rules, profile matcher118may identify the labels of the rules that caused the entity profiles to match. Profile matcher118may retrieve the labels and update the entity profiles with the retrieved labels to indicate the match. Thus, profile matcher118may identify the candidate entities that may be associated with a group entity by labeling the entity profiles with the matching labels. In some embodiments, similar to the first set of rules, profile matcher118may apply labels to the matching entity profiles based on the different subsets of the second set of rules. For instance, each subset of rules may have a different initial value associated with the subset.
The labels may be sequential to the last label of the first set of rules or subset of rules (e.g., if the last subset of labels of the first set of rules includes the labels 2, 2.1, and 2.2, the first subset of the second set of rules may include the labels 3, 3.1, and 3.2). Profile matcher118may retrieve the labels associated with the different subsets and update the matching entity profiles based on the labels to indicate the subsets of the second set of rules that caused the entity profiles to match between each other. Profile parser120may identify the number of profiles that are associated with the same group entity. Profile parser120may do so in a similar manner to how profile parser120identified the number of profiles that are associated with the same group entity for the first set of rules. For instance, profile parser120may identify groups of matching entity profiles that share a common entity profile. One such grouping may be a second set of entity profiles. Profile parser120may identify any number of such groups. For each group, profile parser120may maintain a counter that indicates the number of entity profiles in the group. The count of the counter may indicate the number of entity profiles of the group. For the second set of entity profiles, profile parser120may determine whether the number of entity profiles for a group entity exceeds a threshold. The threshold may be a defined threshold (e.g., 4, 5, 6, 7, 8, 9, etc.). Profile parser120may compare the number of candidate entity profiles to the threshold and determine if the number exceeds the threshold. Advantageously, by doing so, profile parser120may avoid incorrectly assigning a group entity label to entity profiles, similar to how profile parser120avoided incorrectly assigning group entity labels to entity profiles for the first set of rules. Profile parser120may similarly compare any number of groups or sets of entity profiles to a threshold. If profile parser120identifies a set of candidate entity profiles for a group entity that exceeds the threshold, profile parser120may discard the labels on the candidate entity profiles for the group entity. Profile parser120may discard the labels similar to how profile parser120discards labels for the first set of rules as described above (e.g., fail to update the group entity attribute-value pair of an entity profile, remove a value for a group entity attribute-value pair from the entity profiles, remove the intermediary labels from the entity profiles, etc.). Profile updater122may comprise programmable instructions that, upon execution, cause processor112to update the entity profiles that have been matched with a group entity with a group entity label. To do so, profile updater122may determine if there are any entity profiles that are associated with a group entity for which their labels have not been discarded. For instance, profile updater122may determine if there are any matching entity profiles that were identified using the first set of rules or the second set of rules that were not labeled with a group entity that exceeds a threshold. Responsive to profile updater122determining no such entity profiles exist (e.g., all matching entity profiles were discarded or no matching entity profiles were found), profile updater122may generate a record (e.g., a file, document, table, listing, message, notification, etc.) indicating no group entities could be identified. Profile updater122may transmit the record to a client device to be displayed on a user interface.
By doing so, profile updater122may inform an administrator that the entity profiles could not be grouped into different group entities. However, responsive to determining there is at least one matching profile that has not been discarded, profile updater122may update the entity profiles according to the group entities with which they are associated. Profile updater122may update the entity profiles of the different sets of entity profiles with group entity labels that indicate the group entities with which they are associated. For example, upon determining there is a set of entity profiles that matches and/or has been synthesized together, profile updater122may generate a numeric, alphabetic, or alphanumeric string as a group entity label and add the group entity label to each of the entity profiles of the set (e.g., add a group entity label to the group entity attribute-value pair of each of the entity profiles). Profile updater122may generate the group entity label using a random number generator or using data that matches one or a portion of the entity profiles, such as a common last name or a common phone number. Profile updater122may generate the group entity labels using any method. In some embodiments, profile updater122may generate group entity labels for the entity profiles after synthesizing entity profiles that were identified using the first set of rules and entity profiles that were identified using the second set of rules (e.g., the first and second sets of entity profiles). Profile updater122may synthesize the two sets of entity profiles based on the two sets of entity profiles sharing a common entity profile (e.g., the same entity profile is in each set). Profile updater122may compare the entity profiles in each set of entity profiles and identify sets with a common entity profile as being associated with the same group entity. Accordingly, when profile updater122updates the different sets of entity profiles with group entity labels, profile updater122may update entity profiles that were identified using both sets of rules with the same group entity label. In this way, profile updater122may link entity profiles that were identified using only one set of the rules and entity profiles that were identified using only the other set of rules to the same group entity (and therefore to each other). FIG.2is an illustration of sets of rules200for household entity generation, in accordance with an implementation. Sets of rules200may be or represent rules that are stored in rule database124, shown and described with reference toFIG.1. Sets of rules200may include a first set of rules202and a second set of rules204. In some embodiments, first set of rules202may have a first matching or identical identifier or label (e.g., 1) and second set of rules204may have a second matching or identical identifier or label (e.g., 2) such that the rules may be identified as being a part of different sets when being retrieved. First set of rules202may include rules that include a common attribute-value pair between each other or that each exclude a specific attribute-value pair. For example, each rule of first set of rules202may include a requirement to have at least the exact same last name or a fuzzy matching last name. In another example, by definition, each rule in first set of rules202may not include an account number attribute-value pair.
This may be particularly advantageous if the excluded attribute-value pair is stored in a data structure separate from the entity profiles, such as account numbers being stored in a global table or database with common customer identification numbers that act as look-up keys that link the account numbers to different entity profiles. In some cases, the different customer identifiers of different entity profiles may correspond to the same account number or vice versa. Accordingly, using the customer identifiers as look-up keys may enable a data processing system (e.g., household entity generator106) to identify attribute-value pairs for different entity profiles that correspond to the same account number. First set of rules202may include subsets of rules206,208, and210. Each subset of rules206,208, and210may have a common attribute-value pair within the rules of the respective subset (e.g., each rule of subset of rules206may include a last name attribute-value pair, each rule of subset of rules208may include a phone number attribute-value pair, and each rule of subset of rules210may include a street address attribute-value pair). Such rules may be designed to determine the group entities with which the individual entity profiles are associated. Furthermore, each subset of rules206,208, and210may include rules that have a stored relationship with different labels. For instance, the rules of subset of rules206may respectively have stored relationships with the labels 1.1, 1.2, and 1.3; the rules of subset of rules208may respectively have stored relationships with the labels 2.1, 2.2, and 2.3; and the rules of subset of rules210may respectively have stored relationships with the labels 3.1, 3.2, and 3.3. Upon determining two entity profiles match according to one of these rules, the data processing system may retrieve the label for the rule and update the two matching entity profiles with the label. Thus, the data processing system may update the entity profiles to indicate that they have a match and the rule that caused the match. Second set of rules204may include rules that include a common attribute-value pair between each other. In some embodiments, each of second set of rules204may include an attribute-value pair that was excluded from first set of rules202. For instance, each rule of second set of rules204may include an account number attribute-value pair. The account number attribute-value pair may have been explicitly excluded from first set of rules202because the account number attribute may be likely to cause false matches in combination with one or more of the attributes of first set of rules202. Second set of rules204may include subsets of rules212and214that identify different types of attribute-value pairs in addition to the account number. For example, the rules of subset of rules212may each include an account number attribute-value pair and a phone number attribute-value pair. The rules of subset of rules214may each include an account number attribute-value pair and an email attribute-value pair. Each subset may have stored relationships with labels for each rule of the subset similar to first set of rules202. The data processing system may apply such labels to entity profiles upon determining the entity profiles match according to the rules of the respective labels. Thus, the data processing system may store indications of the rules that caused the entity profiles to match.
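The sets of rules ofFIG.2can be represented as data, as in the following sketch. The attribute lists mirror the examples above, while the specific label values and the exact/fuzzy vocabulary are illustrative assumptions (fuzzy_match is the edit-distance helper sketched earlier):

```python
# Rules stored as data with their labels, echoing FIG. 2.
FIRST_SET_OF_RULES = [
    {"label": "1.1", "exact": ["street_number", "zip_code"],
     "fuzzy": ["last_name", "street_name"]},
    {"label": "2.1", "exact": ["phone_number"], "fuzzy": ["last_name"]},
]
SECOND_SET_OF_RULES = [
    {"label": "4.1", "exact": ["account_number", "phone_number"]},
    {"label": "4.2", "exact": ["account_number", "email"]},
]

def matching_labels(p1, p2, rules, fuzzy_match):
    """Return the label of every rule that the two profiles satisfy."""
    labels = []
    for rule in rules:
        exact_ok = all(p1[a] == p2[a] for a in rule.get("exact", []))
        fuzzy_ok = all(fuzzy_match(p1[a], p2[a])
                       for a in rule.get("fuzzy", []))
        if exact_ok and fuzzy_ok:
            labels.append(rule["label"])
    return labels
```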
FIG.3is an illustration of an example entity profile300, in accordance with an implementation. Entity profile300may be or include a data structure that contains or stores attribute-value pairs for the individual entity profile300. As illustrated, entity profile300may include a name attribute-value pair302, an account number attribute-value pair304, a phone number attribute-value pair306, an email address attribute-value pair308, an address attribute-value pair310, and a group entity attribute-value pair312. However, entity profiles such as entity profile300may include any combination of more or fewer attribute-value pairs. Each attribute-value pair may include a string identifying the attribute and a value (e.g., a numerical, alphabetical, or alphanumerical value) for the attribute-value pair. Such attribute-value pairs may be stored in a database (e.g., profile database126) with attribute-value pairs of other entity profiles such that a data processing system (e.g., household entity generator106) may retrieve the attribute-value pairs to identify group entities that are associated with the individual entity profiles. Entity profile300may not include a value for group entity attribute-value pair312. This may be the case if a user has not input a value for the group entity attribute-value pair or the data processing system has not assigned entity profile300to a specific group entity. Upon determining entity profile300is associated with a group entity, the data processing system may update group entity attribute-value pair312with a value indicating the group entity. The data processing system may refresh (e.g., replace with a new value, remove the value, or keep the value the same) this value at each instance the data processing system determines the group entity with which each entity profile is associated. FIG.4is an illustration of a comparison400between two entity profiles402and404, in accordance with an implementation. A data processing system (e.g., household entity generator106) may compare entity profiles402and404by applying one or more sets of rules (e.g., sets of rules202and/or204) to entity profiles402and404. The data processing system may do so by comparing the corresponding attribute-value pairs (e.g., attribute-value pairs of the same type) of entity profiles402and404between each other and determining whether any of the rules are satisfied based on the comparison. For example, the data processing system may apply a rule that indicates that two entity profiles match if they have the exact same last name, phone number, and address. The data processing system may apply the rule to entity profiles402and404and determine the rule is satisfied. After determining the rule is satisfied, the data processing system may determine the entity profiles are associated with the same group entity and are not a part of a set of entity profiles that exceeds a threshold. Accordingly, the data processing system may update each of entity profiles402and404with a “Doe153” group entity label406. In some embodiments, the data processing system may determine group entity labels, such as the “Doe153” label mentioned above, based on the matching attributes of the matching profiles being labeled. For instance, the data processing system may determine entity profiles402and404are a match based at least on the last name attribute-value pair “Doe.” Accordingly, the data processing system may generate a group entity label of Doe to add to each of entity profiles402and404.
The data processing system may select one set of matching attribute-value pairs between matching entity profiles when multiple sets of entity profiles with different matching attribute-value pairs have been synthesized together (e.g., if entity profiles of set A and B and set B and C have been synthesized, the data processing system may select a matching attribute-value pair from set A and B or from B and C to use to create the group entity label). The data processing system may determine group entity labels based on any matching attribute-value pair. In some instances, when generating group entity labels using the above technique, the data processing system may identify multiple group entities that correspond to the same label (e.g., multiple sets of matching entity profiles with the same matching attribute, such as the last name of Doe). In such instances, the data processing system may increment a counter for each label with the same value and concatenate the count of the counter to the end of the label. For example, if the data processing system had previously identified 152 sets of matching entity profiles that have the value Doe, the data processing system may determine the label for the 153rd group entity to be Doe153. Accordingly, the data processing system may use the data of the matching attribute-value pairs to generate unique group entity labels for each set of matching entity profiles. In some embodiments, the data processing system may generate the group labels using a random number generator.
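A sketch of the label-generation technique described above follows. The counter-per-value approach mirrors the “Doe153” example; the data shapes are assumptions:

```python
from collections import defaultdict

_label_counters = defaultdict(int)   # one counter per label value

def generate_group_entity_label(common_value: str) -> str:
    """Concatenate the matching attribute value with a running count,
    so the 153rd group sharing 'Doe' is labeled 'Doe153'."""
    _label_counters[common_value] += 1
    return f"{common_value}{_label_counters[common_value]}"

# Usage: the first "Doe" household is labeled "Doe1"; after 152 prior
# "Doe" groups, the next label generated is "Doe153".
```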
The data processing system may initially store the entity profiles with attribute-value pairs with blank values in the data structures for the respective entity profiles. The data processing system may then add values to the attribute-value pairs as the data processing system receives inputs (e.g., user inputs) indicating the values. The data processing system may add the values to the attribute-value pairs by updating the respective data structures with the values. The data processing system may store the entity profiles in memory, in some cases in a single database or in multiple databases. The data processing system may store the entity profiles in a database that may include profiles for different individual entities and/or group entities. The entity profiles may be stored as data structures with one or more attribute-value pairs as described above. The group entity profiles may similarly be stored as data structures with attribute-value pairs. The different profiles that are stored in the database may be updated over time as the data processing system either receives new values for the profiles or determines values for the profiles. At operation504, the data processing system may detect a profile refresh event. A profile refresh event may be any event that causes the data processing system to evaluate the entity profiles that are stored in memory of the data processing system to determine whether they are associated with a particular group entity. Profile refresh events may be stored rules in the data processing system that, upon being satisfied, cause the data processing system to evaluate the entity profiles. In some embodiments, the data processing system may evaluate the different rules of the profile refresh events over time and/or each time the data processing system adds an entity profile to memory. Upon detecting that any of these events has occurred, the data processing system may perform operations506-534to determine whether the entity profiles are associated with group entities and update the entity profiles accordingly. After detecting a profile refresh event, at operation506, the data processing system may apply a first set of rules to the entity profiles. The data processing system may apply the first set of rules to the attribute-value pairs of the entity profiles to identify entity profiles with specific sets of matching attribute-value pairs. The first set of rules may include matching rules for one or more attribute-value pairs (e.g., entity profiles may be a match between each other for a rule if a specific set of attribute-value pairs match). For instance, one rule may indicate two profiles are a match between each other if they have a match between the last name, street number, street name, and zip code attribute-value pairs. Another rule may indicate two profiles are a match if the profiles have matching last name, street name, city, and state attribute-value pairs. The first set of rules may include any number of rules for any attributes. In some embodiments, the rules may specify that the attribute-value pairs are a match if the values of the attribute-value pairs do not exactly match, but instead only approximately match. For instance, some rules may specify that last names may be a fuzzy match if they match according to a phonetic function (e.g., a phonetic hashing function).
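As an illustration only (the patent names no specific phonetic or edit distance function), a simplified Soundex-style hash and a classic Levenshtein distance might be combined as follows; the length-scaled threshold anticipates the street-name discussion that follows, and its divisor is an assumption:

```python
import re

def soundex(name: str) -> str:
    """Simplified Soundex-style phonetic hash (illustrative only)."""
    name = re.sub(r"[^A-Z]", "", name.upper())
    if not name:
        return ""
    groups = {"BFPV": "1", "CGJKQSXZ": "2", "DT": "3", "L": "4", "MN": "5", "R": "6"}
    def code(c):
        return next((d for letters, d in groups.items() if c in letters), "")
    out, prev = name[0], code(name[0])
    for c in name[1:]:
        d = code(c)
        if d and d != prev:
            out += d
        prev = d
    return (out + "000")[:4]

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def fuzzy_match(a: str, b: str, per_chars: int = 8) -> bool:
    """Fuzzy match: the phonetic hashes agree, or the edit distance falls
    within a threshold that grows with the length of the value."""
    threshold = max(1, max(len(a), len(b)) // per_chars)
    return soundex(a) == soundex(b) or edit_distance(a, b) <= threshold

print(fuzzy_match("Commerce Dr", "Commerce Drive"))  # True
```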
The data processing system may apply such rules by applying the phonetic hashing function to each last name attribute-value pair of the entity profiles and determining matches between the resulting hashes to be a fuzzy match. Some rules may specify that attribute-value pairs are a fuzzy match if they are a match within a threshold as determined using an edit distance function. The data processing system may apply the edit distance function between corresponding attribute-value pairs and determine how many changes would need to occur before the values match. The data processing system may compare the number of changes to a threshold (e.g., a predetermined threshold). The data processing system may determine any values with a number of changes below the threshold are a match and values with a number of changes above the threshold are not a match. In some embodiments, the value of the threshold may differ depending on the length of the value (e.g., the number of characters of the value). For instance, the data processing system may determine longer street names correspond to higher thresholds. This may be beneficial because the longer the value, the higher the chance of typographical errors in the name. Rules may use phonetic functions and/or edit distance functions to determine matches for any attributes. In some embodiments, the data processing system may use phonetic functions and/or edit distance functions to determine exact matches. In some embodiments, individual rules may include a combination of exact matches and fuzzy matches for different attributes. For instance, one rule may indicate two entity profiles are a match if the last name and street name attribute-value pairs are fuzzy matches (e.g., a fuzzy match based on a phonetic function and/or an edit distance function) and the street number and city and state attribute-value pairs are exact matches. Another rule may indicate two profiles are a match if the same attributes are all fuzzy matches. Yet another rule may indicate two profiles are a match if the same attributes are all exact matches. The rules may have any combination of exact and/or fuzzy matches for any combination of attribute-value pairs. In some embodiments, the first set of rules may be divided into different subsets of rules. The first set of rules may be divided based on the attribute-value pairs that are included in the rules. For instance, one subset of rules may include different variations of attribute-value pairs that each include a last name attribute-value pair (e.g., an exact matching last name, a fuzzy matching last name, or a last name inclusion). Another subset of rules may each include a phone number attribute-value pair (e.g., an exact or fuzzy home and/or cell phone number). By applying the first set of rules to the attribute-value pairs of the entity profiles, the data processing system may determine if the entity profiles are associated with a common group entity, such as a household (e.g., the individuals that correspond to the entity profiles are members of the same household). The data processing system may identify any entity profiles that the data processing system determines to be a match as candidate profiles for being associated with the same group entity. In some embodiments, the data processing system may erase or delete any pre-stored group entity labels that were previously stored in the entity profiles before applying the first set of rules to the entity profiles.
For instance, the households of different individuals may change over time as people move from and/or are added to different households. Upon moving households, the individuals may not update their household information in their entity profiles. Instead, the individuals may update their other information, such as their new address and/or phone number. Accordingly, the data processing system may need to cleanse and refresh the household labels for all entity profiles that are stored in memory upon detecting each profile refresh event (e.g., using a batch processing technique). By doing so, the data processing system may ensure the entity profiles stay up-to-date and that the entity profiles do not contain stale or incorrect data. In some embodiments, before applying the first set of rules to the entity profiles, the data processing system may use a normalization technique on the attribute-value pairs of the entity profiles. For example, the data processing system may normalize five attributes: last name, address, phone number, email, and/or joint account number. An example of such normalization may include normalizing last names that have composite tokens that are connected by a hyphen (e.g., Aranda-Gonzalez). The data processing system may replace the hyphen with a white space. Another example of such normalization may include unifying all zip codes to five digits by converting any zip code strings with nine digits to five-digit strings with the remaining digits replaced with null values. Normalization techniques may help avoid variations with the attributes (e.g., avoid instances where one entity profile includes a composite last name and another entity profile has the otherwise same last name but without the hyphen). Any normalization technique may be used to normalize any attribute-value pairs.
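A minimal sketch of the two normalizations just described, assuming flat field names (one reading of the nine-to-five-digit zip code conversion is simple truncation of the extra digits):

```python
import re

def normalize(profile: dict) -> dict:
    """Normalize attribute-value pairs before rule application (sketch):
    hyphenated composite last names become space-separated tokens, and
    nine-digit zip codes are unified to their five-digit prefix."""
    p = dict(profile)
    if p.get("last_name"):
        p["last_name"] = p["last_name"].replace("-", " ")  # Aranda-Gonzalez -> Aranda Gonzalez
    digits = re.sub(r"\D", "", p.get("zip_code", ""))
    if len(digits) == 9:
        p["zip_code"] = digits[:5]                         # 551250123 -> 55125
    return p
```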
At operation508, the data processing system may determine if any of the entity profiles satisfy any of the first set of rules. For instance, the data processing system may compare the attribute-value pairs of the different entity profiles against each other according to the first set of rules and determine if there are any matching entity profiles that satisfy at least one of the first set of rules. The data processing system may identify any matching entity profiles as candidate profiles that may be associated with the same group entity. At operation510, the data processing system may label the entity profiles according to the rules that caused the profiles to be a match. The data processing system may do so in response to the data processing system identifying at least one pair of entity profiles that match according to the first set of rules. For instance, each rule may correspond to a different label that may have a stored association (e.g., a relationship in a relational database) with the rule. The first set of rules may have labels such as 1, 1.1, 1.2, 1.3, 1.4, etc. Upon identifying entity profiles that match, the data processing system may identify the labels of the rules that caused the entity profiles to match. The data processing system may retrieve the labels and update the entity profiles with the retrieved labels to indicate the match. Thus, the data processing system may identify the candidate entities that may be associated with a household by labeling the entity profiles with the matching labels. In some embodiments, the data processing system may apply labels to the matching entity profiles based on the different subsets of rules. For instance, each subset of rules may have a different initial value associated with the subset (e.g., one subset may include the labels 1, 1.1, 1.2, 1.3, etc., and another subset may include the labels 2, 2.1, 2.2, 2.3, 2.4, etc.). The data processing system may retrieve the labels associated with the different subsets and update the matching entity profiles based on the labels to indicate the subsets of the first set of rules that caused the entity profiles to match between each other. At operation512, the data processing system may identify the number of profiles that are associated with the same group entity. To do so, the data processing system may synthesize the entity profiles based on the matching entity profiles sharing a common entity profile. For instance, the data processing system may determine entity profile A matches entity profile B and entity profile B matches entity profile C. Because the matches between entity profiles A and B and entity profiles B and C share a common entity profile (e.g., entity profile B), the data processing system may determine entity profiles A, B, and C are candidate entity profiles for the same group entity. Continuing with this example, entity profiles C and D may be a match, entity profiles E and F may be a match, and entity profiles F and G may be a match. Because entity profiles A, B, and C share a common entity profile with the match between entity profiles C and D, entity profile D may be added to the group of A, B, and C as a candidate entity profile for the same group entity. Meanwhile, entity profiles E, F, and G may be candidate entity profiles for another group entity because entity profile F is a common entity profile between the two matches and the entity profiles do not share a match with entity profiles A, B, C, and D. By synthesizing entity profiles in this way, the data processing system may identify sets of entity profiles that satisfy the first set of rules for different group entities based on the sets of entity profiles having matching attribute-value pairs. The data processing system may generate a list for each group entity that includes the candidate entity profiles for the group entity. The data processing system may maintain a counter for each list and increment the counters for each entity profile on the list. The data processing system may determine the number of candidate entity profiles that are associated with each group entity as the count of the counter. At operation514, the data processing system may determine if the number of candidate entity profiles of a set of entity profiles exceeds a threshold. The threshold may be a defined threshold (e.g., 4, 5, 6, 7, 8, 9, etc.). The data processing system may compare the number of candidate entity profiles to the threshold and determine if the number exceeds the threshold. Advantageously, by doing so, the data processing system may avoid incorrectly assigning a group entity label to entity profiles. For instance, if the data processing system is grouping the entity profiles into individual households, the data processing system would not include public areas (e.g., jails or college dormitories) as households, which may be associated with a number of entity profiles that exceeds the threshold. If the data processing system identifies a set of entity profiles for a group entity that exceeds the threshold, at operation516, the data processing system may discard the labels on the set of entity profiles.
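The synthesis and threshold check of operations512-516 amount to a connected-components pass over the pairwise matches. A minimal union-find sketch (the profile identifiers and threshold value are illustrative):

```python
from collections import defaultdict

def synthesize(matches, threshold=7):
    """Union profiles that share a match into candidate group entities,
    then void any candidate set whose size exceeds the threshold."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in matches:  # each pair satisfied at least one rule
        union(a, b)

    groups = defaultdict(list)
    for profile in parent:
        groups[find(profile)].append(profile)

    # Discard the intermediary labels of oversized candidate sets.
    return [g for g in groups.values() if len(g) <= threshold]

# A-B, B-C, C-D collapse into one candidate set; E-F, F-G into another.
print(synthesize([("A", "B"), ("B", "C"), ("C", "D"), ("E", "F"), ("F", "G")]))
```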
The data processing system may discard the labels by removing the intermediary labels from the entity profiles that indicate the entity profiles match, leaving the intermediary label and group entity attribute-value pairs for the entity profiles blank. In some embodiments, in instances in which the entity profiles previously had a group entity label in the group entity attribute-value pair, the data processing system may remove the group entity label from the attribute-value pair. In embodiments in which the data processing system cleanses group entity labels prior to applying the first set of rules, the data processing system may discard a previous group entity label from an entity profile during the cleansing and leave the group entity attribute-value pair blank upon refreshing the group entity attribute-value pair labels for the entity profiles. This may occur when the data processing system has more data for other entity profiles that causes another subset of entity profiles to match the first set of entity profiles and the match causes the first set of entity profiles to exceed the threshold after synthesis. At operation518, the data processing system may apply a second set of rules to the entity profiles. The second set of rules may be similar to the first set of rules, but include a distinct set of attribute-value pairs. For example, each rule in the second set of rules may at least include an account number attribute-value pair that must exactly match between the different entity profiles for the rule to be satisfied. The rules may otherwise have other attribute-value pairs that may either be a fuzzy match or an exact match. In some embodiments, the second set of rules may be separate from the first set of rules because the attribute-value pair that is common to the second set of rules may be retrieved from another data source (e.g., another data processing system or another database). For instance, while the first set of rules may only include or reference attribute-value pairs that are stored in the data structures of the entity profiles, the second set of rules may include an attribute-value pair that is common across the second set of rules and that is stored in another data structure, such as a database or table that stores values for the attribute-value pair. Continuing with the example above, the account number attribute-value pairs for the entity profiles may be aggregated and stored in an account table. The account table may be logically and/or physically separate from the database that stores the entity profiles. The account table may include the account numbers and customer identifiers. The customer identifiers may link the entity profiles to the account numbers as a key. Such a storage configuration may be advantageous given that a single customer or entity profile may have multiple account numbers or customer numbers in a one-to-many relationship (e.g., in instances in which a customer has a savings account and a checking account or a child and a parent have different accounts but the same customer number). For instance, entity profiles may have a customer identifier attribute-value pair but not an account attribute-value pair. Upon applying the second set of rules to the entity profiles, the data processing system may identify the account numbers for the different entity profiles responsive to the account numbers having stored relationships with the same customer numbers as are stored in the entity profiles.
The data processing system may identify the attribute-value pairs of the entity profiles that correspond to the account numbers (e.g., that have a stored relationship with the same customer number) and apply the second set of rules to the entity profiles similar to the first set of rules by identifying entity profiles with attribute-value pairs, including the account number attribute-value pair, that match. Examples of rules of the second set of rules include exact account number, street number, and zip code matches and a fuzzy street name match; exact account number and home phone number matches; exact account number and exact cell phone number matches; and exact account number and email matches. The second set of rules may include account number and any combination or variation of attribute-value pairs that match for the rules to be satisfied. At operation520, the data processing system may determine if any of the second set of rules were satisfied. For instance, the data processing system may retrieve the account numbers for the entity profiles based on the entity profiles and account numbers each having a stored relationship with the same customer number. The data processing system may apply the second set of rules to the account numbers and attribute-value pairs of the entity profiles and determine if there are any matching entity profiles that satisfy at least one of the second set of rules. The data processing system may identify any matching entity profiles as candidate profiles that may be associated with the same group entity. At operation522, the data processing system may label the candidate entity profiles of the second set of rules according to the rules that caused the entity profiles to match. The data processing system may do so in response to the data processing system identifying at least one set of entity profiles that match according to the second set of rules. For instance, each rule of the second set of rules may correspond to a different label that has a stored association with the rule. The second set of rules may have labels such as 3, 3.1, 3.2, 3.3, 3.4, etc. Such labels may be sequential to the last label of the first set of rules or subset of rules (e.g., if the first set of rules includes the labels 1, 1.1, and 1.2, the second set of labels may include the labels 2, 2.1, and 2.2). Upon identifying entity profiles that match according to the second set of rules, the data processing system may identify the labels of the rules that caused the entity profiles to match. The data processing system may retrieve the labels and update the entity profiles with the retrieved labels to indicate the match. Thus, the data processing system may identify the candidate entities that may be associated with a group entity by labeling the entity profiles with the matching labels. In some embodiments, similar to the first set of rules, the data processing system may apply labels to the matching entity profiles based on the different subsets of the second set of rules. For instance, each subset of rules may have a different initial value associated with the subset. The labels may be sequential to the last label of the first set of rules or subset of rules (e.g., if the last subset of labels of the first set of rules includes the labels 2, 2.1, and 2.2, the first subset of the second set of rules may include the labels 3, 3.1, and 3.2).
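This label bookkeeping can be sketched as a mapping from rules to hierarchical labels, with numbering continuing across sets so labels never collide (the rule names are placeholders):

```python
def make_labels(rule_subsets, start=1):
    """Assign intermediate labels to rule subsets: the first subset gets
    1, 1.1, 1.2, ..., the next gets 2, 2.1, ..., and so on from `start`."""
    labels, base = {}, start
    for subset in rule_subsets:
        for k, rule in enumerate(subset):
            labels[rule] = str(base) if k == 0 else f"{base}.{k}"
        base += 1
    return base, labels

# The first set of rules occupies bases 1-2; the second set continues at 3.
next_base, first_set = make_labels([["name_addr", "name_addr_v2"],
                                    ["name_phone", "name_phone_v2", "name_phone_v3"]])
_, second_set = make_labels([["acct_addr", "acct_addr_v2"]], start=next_base)
```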
The data processing system may retrieve the labels associated with the different subsets and update the matching entity profiles based on the labels to indicate the subsets of the second set of rules that caused the entity profiles to match between each other. At operation524, the data processing system may identify the number of profiles that are associated with the same group entity. The data processing system may do so in a similar manner to how the data processing system identified the number of profiles that are associated with the same group entity for the first set of rules. For instance, the data processing system may identify groups of matching entity profiles that share a common entity profile. One such grouping may be a second set of entity profiles. The data processing system may identify any number of such groups. For each group, the data processing system may maintain a counter whose count indicates the number of entity profiles in the group. At operation526, for the second set of entity profiles, the data processing system may determine whether the number of entity profiles for a group entity exceeds a threshold. The threshold may be a defined threshold (e.g., 4, 5, 6, 7, 8, 9, etc.). The data processing system may compare the number of candidate entity profiles to the threshold and determine if the number exceeds the threshold. Advantageously, by doing so, the data processing system may avoid incorrectly assigning a group entity label to an entity profile. The data processing system may similarly compare any number of groups or sets of entity profiles to a threshold. If the data processing system identifies a set of candidate entity profiles for a group entity that exceeds the threshold, at operation528, the data processing system may discard the labels on the candidate entity profiles for the group entity. The data processing system may discard the labels similar to how the data processing system discards labels for the first set of rules as described above (e.g., fail to update the entity profile attribute-value pair of an entity profile, remove a value for a group entity attribute-value pair from the entity profiles, remove the intermediary labels from the entity profiles, etc.). At operation530, the data processing system may determine if there are any entity profiles that are associated with a group entity for which their labels have not been discarded. For instance, the data processing system may determine if there are any matching entity profiles that were identified using the first set of rules or the second set of rules that were not labeled with a group entity that exceeds a threshold. Responsive to the data processing system determining no such entity profiles exist (e.g., all matching entity profiles were discarded or no matching entity profiles were found), at operation532, the data processing system may generate a record (e.g., a file, document, table, listing, message, notification, etc.) indicating no group entities could be identified. The data processing system may transmit the record to a client device to be displayed on a user interface. By doing so, the data processing system may inform an administrator that the entity profiles could not be grouped into different group entities.
However, responsive to determining there is at least one matching profile that has not been discarded, at operation534, the data processing system may update the entity profiles according to the group entities with which they are associated. The data processing system may update the entity profiles of the different sets of entity profiles with group entity labels that indicate the group entities with which they are associated. For example, upon determining there is a set of entity profiles that matches and/or has been synthesized together, the data processing system may generate a numeric, alphabetic, or alphanumeric string as a group entity label and add the group entity label to each of the entity profiles of the set (e.g., add a group entity label to the group entity attribute-value pair of each of the entity profiles). The data processing system may generate the group entity label using a random number generator or using data that matches one or a portion of the entity profiles such as a common last name or a common phone number. The data processing system may generate the group entity labels using any method. In some embodiments, the data processing system may generate group entity labels for the entity profiles after synthesizing entity profiles that were identified using the first set of rules and entity profiles that were identified using the second set of rules (e.g., the first and second sets of entity profiles). The data processing system may synthesize the two sets of entity profiles based on the two sets of entity profiles sharing a common entity profile (e.g., the same entity profile is in each set). The data processing system may compare the entity profiles in each set of entity profiles and identify sets with a common entity profile as being associated with the same group entity. Accordingly, when the data processing system updates the different sets of entity profiles with group entity labels, the data processing system may update entity profiles that were identified using both sets of rules with the same group entity label. In this way, the data processing system may link entity profiles that were identified by only one set of the rules and entity profiles that were identified by only the other set of rules to the same group entity (and therefore to each other) and enable the entity profiles to be filtered (e.g., the data processing system may filter the entity profiles based on the group entity labels in response to receiving a filtering request from a computing device). In some embodiments, the first or second set of entity profiles exceeding its respective threshold may cause the data processing system to discard the group entity labels for the other set of entity profiles. For example, for a group entity, the data processing system may determine a first set of entity profiles for the group entity exceeds a first threshold and the second set of entity profiles with a common entity profile with the first set of entity profiles does not exceed a second threshold. Responsive to determining the first set of entity profiles exceeds the first threshold, the data processing system may remove the group entity labels from both the first and the second sets of entity profiles or fail to add a group entity label to either set. By doing so, the data processing system may avoid inaccurately labeling entity profiles into households (e.g., private households) when they are instead associated with a public group.
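One way to derive unique group entity labels from a shared attribute value is a per-value counter, so that the 153rd group sharing the value Doe becomes Doe153. A sketch (whether the first occurrence stays bare or also gets a count is left open by the text):

```python
from collections import defaultdict

class GroupLabeler:
    """Builds unique group entity labels by concatenating a running count
    to a base value taken from a matching attribute-value pair."""
    def __init__(self):
        self.counts = defaultdict(int)

    def label(self, base_value: str) -> str:
        self.counts[base_value] += 1
        n = self.counts[base_value]
        return base_value if n == 1 else f"{base_value}{n}"

labeler = GroupLabeler()
# After 152 earlier groups with the value Doe, the next label is "Doe153".
```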
In some embodiments, upon generating a group entity label for a set of entity profiles, the data processing system may generate a group entity profile that corresponds to the group entity label. The group entity profile may include group attribute-value pairs that include information about the group, such as the name of the group entity and a list of the entity profiles that have been labeled with the group entity label of the group entity. The data processing system may maintain such group entity profiles and update the group entity profiles upon each refresh of the database that stores the entity profiles to indicate any entity profiles that have been removed from or added to the group entity. In some embodiments, the data processing system determines the group entity no longer satisfies a predetermined criterion (e.g., after receiving more data, the data processing system may determine there is a number of entity profiles with the group entity label that exceeds a threshold and the group entity is a public group entity rather than a household group entity). In such embodiments, the data processing system may delete the group entity from memory. In this way, the data processing system may preserve memory and processing resources.

Solution Design

In an example implementation, a data processing system (e.g., household entity generator106) may use two attribute-value pairs together to define matches between two entity profiles for a group entity label. For instance, the data processing system may use the last name and address attribute-value pairs to define a match. In some embodiments, neither last name nor address may be used alone to define a match. In some embodiments, the data processing system may not use more than two attributes at the same time, as doing so may generate fewer matches. A full address may include street number, street name, city, state, and zip code. The address may be an important attribute-value pair, and it may have the highest number of variations compared to other attribute-value pairs. To apply a first set of rules to entity profiles stored in a database, first, the data processing system may separate the zip code from the city and state in an address attribute-value pair because the same address and zip code may come with different city names. For instance, the two strings, 449 Commerce Drive, Woodbury, MN 55125, and 449 Commerce Drive, Saint Paul, MN 55125, may both be valid address values and be directed to the same location. In some embodiments, the data processing system may only apply fuzzy comparisons to street names, where the variations may be the most likely to take place. For instance, there may be abbreviations in the street names, like ‘Commerce Dr’ vs ‘Commerce Drive’, or the street direction may appear at the front or at the end of the street name, like ‘Lake Road S’ vs ‘S Lake Road’. A rule may have a first group entity label and be defined as follows:

‘fuzzy last name+exact street number+fuzzy street name+exact zip code’  (Label 1).

This rule can put the above two address examples together by skipping the city and state information. The data processing system may use phonetic functions to define fuzzy last names, while for fuzzy street names, the data processing system may apply edit distance measures. The threshold value of the edit distance measures could be adjustable based on the length of the street names.
A similar rule and label could be defined as follows, replacing the exact zip code with exact city & state:

‘fuzzy last name+exact street number+fuzzy street name+exact city & state’  (Label 1.1).

The rule associated with Label 1 may be useful when the same location has different city names, while Label 1.1 may be useful when the zip code value has typos or is missing. Label 1.1 and Label 1 may have many overlaps. The data processing system may unify or synthesize the entity profiles that were identified based on the rules associated with Labels 1 and 1.1 in a later stage, thus removing all the overlaps. In some embodiments, when the last name has more than one token, like ‘ARANDA GONZALEZ’, the data processing system may consider the following two rules and labels:

‘last name inclusion+exact street number+fuzzy street name+exact zip code’  (Label 1.2), and
‘last name inclusion+exact street number+fuzzy street name+exact city & state’  (Label 1.3).

With the name inclusion rule, by leveraging the similarity of addresses, the last name ‘ARANDA GONZALEZ’ could be matched with ‘ARANDA’ or ‘GONZALEZ’. Note that in the customer profile table, the same record may have more than one address field, like address1 and address2, each of which may contain valid address information. The rules associated with Label 1 and its variations (Labels 1.1-1.3) may be applied to address1 and/or address2, respectively. In some embodiments, as discussed above, two household members may have different address values or different last names in a customer profile table. To address this issue, the data processing system may store rules that include combinations of attribute-value pairs such as last name+phone, last name+email, phone+address, email+address, and email+phone. Such rules may be labeled with Label 2 and its variations as follows:

‘fuzzy last name+exact home phone’  (Label 2),
‘fuzzy last name+exact cell phone’  (Label 2.1), and
‘fuzzy last name+exact home phone vs exact cell phone’  (Label 2.2).

In some embodiments, entity profiles may have two phone fields, home phone and cell phone. The data processing system may store rules to account for three different scenarios: home phone vs home phone, cell phone vs cell phone, and home phone vs cell phone. For the rules associated with Labels 2-2.2, the data processing system may apply fuzzy comparisons on the last names and exact matches on the phones. The remaining variations of Label 2 may be defined as follows:

‘last name inclusion+exact home phone’  (Label 2.3),
‘last name inclusion+exact cell phone’  (Label 2.4),
‘last name inclusion+exact home phone vs cell phone’  (Label 2.5), and
‘last name inclusion+exact cell phone vs home phone’  (Label 2.6).

Because last name inclusion works in one direction only, Labels 2.5 and 2.6 may not be interchangeable, so both are listed here. The different combinations of phones and addresses are defined in the rules with Labels 3-3.5 as follows:

‘exact home phone+exact street number+fuzzy street name+exact zip code’  (Label 3),
‘exact cell phone+exact street number+fuzzy street name+exact zip code’  (Label 3.1),
‘exact home phone vs cell phone+exact street number+fuzzy street name+exact zip code’  (Label 3.2),
‘exact home phone+exact street number+fuzzy street name+exact city & state’  (Label 3.3),
‘exact cell phone+exact street number+fuzzy street name+exact city & state’  (Label 3.4), and
‘exact home phone vs cell phone+exact street number+fuzzy street name+exact city & state’  (Label 3.5).
To apply the rules with Labels 3-3.5, the data processing system may apply fuzzy comparisons on the addresses and exact matches on the phones. In some embodiments, the data processing system may apply rules that include different combinations of last name and email attribute-value pairs. Email may be a special attribute-value pair because a small variation could lead to two different people. For instance, the emails [email protected] and [email protected] could belong to two different people. Fuzzy matches on the emails could relatively easily trigger false alarms. Therefore, to apply the rules that correspond to Labels 4 and 4.1, the data processing system may apply fuzzy comparisons on the last names and exact matches on the emails:

‘fuzzy last name+exact email’  (Label 4), and
‘last name inclusion+exact email’  (Label 4.1).

In some embodiments, the data processing system may apply rules that include different combinations of exact email and fuzzy address attribute-value pairs. Such rules may be associated with Label 5 and its variations as shown below:

‘exact email+exact street number+fuzzy street name+exact zip code’  (Label 5), and
‘exact email+exact street number+fuzzy street name+exact city & state’  (Label 5.1).

In some embodiments, the data processing system may apply rules that include different combinations of exact phone numbers and exact emails. Such rules may be associated with Label 6 and its variations as shown below:

‘exact home phone+exact email’  (Label 6),
‘exact cell phone+exact email’  (Label 6.1), and
‘exact home phone vs cell phone+exact email’  (Label 6.2).

Each label in the six sets may make its own contribution to defining proper households, but there may be many overlaps among these labels. The data processing system may use a Graph Theory application (e.g., a Connected Component Algorithm) to integrate all the labels together to define an intermediate label. For example, Label 1 may define a group of {A, B} and Label 2 may define a group of {B, C}. The data processing system may synthesize the two groups together to obtain a group of {A, B, C}, which means A, B, and C belong to the same group entity (e.g., the same household). At this stage, the data processing system may be configured to set the maximum number of members of any intermediate label to 7 (or any other value) because it may be unlikely that any group entity, such as a household, includes more than 7 members. Accordingly, any intermediate labels whose size is greater than 7 will become void (e.g., the data processing system may remove the intermediary labels from any group that exceeds 7 members). Furthermore, the data processing system may also use a second set of rules that include the joint account information, which may be stored in a separate data structure or database from the entity profiles. The data processing system may have such a storage configuration because an account product could be owned by multiple customers, who may accordingly own the same account number. For example, a customer may have multiple account products with the same financial institution, so there would be a one-to-many relationship between the customer ID and account numbers. Using the customer ID as a key, the data processing system joins the account numbers with the attribute-value pairs of the entity profiles (e.g., identifies the attribute-value pairs of entity profiles that correspond to different account numbers).
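The customer-ID join can be sketched as follows; the field names and the layout of the separate account table are assumptions consistent with the one-to-many relationship just described:

```python
def join_accounts(profiles, account_table):
    """Attach account numbers to entity profiles through the customer ID,
    honoring the one-to-many customer-to-account relationship kept in a
    separate account table."""
    by_customer = {}
    for row in account_table:  # each row holds a customer_id and an account_number
        by_customer.setdefault(row["customer_id"], []).append(row["account_number"])
    for p in profiles:
        p["account_numbers"] = by_customer.get(p["customer_id"], [])
    return profiles

profiles = [{"customer_id": "C1", "last_name": "Doe"}]
accounts = [{"customer_id": "C1", "account_number": "111"},
            {"customer_id": "C1", "account_number": "222"}]
join_accounts(profiles, accounts)  # C1 now carries both account numbers
```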
The data processing system may store three different subsets of rules that include the account numbers; such subsets may include: account number and address, account number and phone, and account number and email. In some embodiments, there might be a case in which an adult child and his/her father share the same account but live in different dwelling units and file tax returns separately. Therefore, the data processing system may not include a rule that includes the combination of account number+last name attribute-value pairs. A subset of rules that includes account number and address may be associated with Label 7 and its variations, as shown below:

‘exact account number+exact street number+fuzzy street name+exact zip code’  (Label 7), and
‘exact account number+exact street number+fuzzy street name+exact city & state’  (Label 7.1).

A subset of rules that includes account number and phone may be associated with Label 8 and its variations, as shown below:

‘exact account number+exact home phone’  (Label 8),
‘exact account number+exact cell phone’  (Label 8.1), and
‘exact account number+exact home phone vs cell phone’  (Label 8.2).

The subset of rules that includes account number and email may be associated with Label 9:

‘exact account number+exact email’  (Label 9).

The data processing system may synthesize the entity profiles that were identified as matching to integrate the three sets of labels (Labels 7-9) to generate another group of profiles with intermediate labels. The maximum number of members of the new intermediate labels may be set to 7 or another threshold (e.g., the same or a different threshold from the first set of rules) to avoid using this set of rules to identify a household that is too large. The data processing system may finally apply the Connected Component Algorithm to unify the two sets of intermediate labels, and then generate a final household label for each entity profile. For the final labels, the data processing system may not set a limit on the number of entity profiles. Therefore, the size of the final synthesized group could be greater than 7 or another threshold. However, because of how the rules for each intermediate set of entity profiles are divided, the percentage of such final labels would be small, and these labels may accurately predict households that exceed 7 or a threshold number of members. In summary, a data processing system may split the sets of rules used to identify entity profiles for individual group entities into one set that corresponds to Labels 1-6 and another set that corresponds to Labels 7-9. The data processing system may generate two groups of intermediate labels based on the two sets of rules. In each group, the data processing system may set a maximum size for the intermediate labels, which may differ or be the same between the two sets. The data processing system may then generate the final labels based on the two groups of intermediate labels without placing an upper limit on the size of the final group. Accordingly, the data processing system may still allow households to have more members than the maximum allowed for each intermediary grouping.

Testing Results

The proposed solution was tested with a sample file of 12 million entity profiles, each of which had a known household ID. Using a previous method to identify different households, among the 12 million entity profiles, 5.38 million would belong to multi-people households, the largest household having 125 entity profiles.
However, using the systems and methods described herein, 6.64 million were found to belong to multi-people households, the largest household having 16 entity profiles. There were 1.26 million extra entity profiles that could belong to multi-people households, which was a 23.3% increase. About 55% of the 1.26 million entity profiles came from the first set of labels (e.g., matching entity profiles identified using the first set of rules without using joint account information), while 45% came from the second set of labels (e.g., matching entity profiles identified using the second set of rules that take into account joint account information), showing that incorporating the joint account information had a significant impact on the performance of the proposed algorithm. 99.4% of the 6.64 million entity profiles would belong to households with no more than 7 people. Only a small portion (0.6%) would belong to households with more than 7 people. The table below shows the distributions of the household sizes of the two solutions:

Size of Household | Prior Solution | Proposed Solution | Number Increase
2 | 2,049,550 | 2,373,140 | 323,590
3 | 250,120 | 337,984 | 87,864
4 | 91,348 | 137,051 | 45,703
5 | 23,655 | 42,852 | 19,197
6 | 5,467 | 13,038 | 7,571
7 | 1,367 | 4,590 | 3,223

Among the 5.38 million entity profiles classified by the previous solution, there were 118 thousand entity profiles that could not be found in the 6.64 million entity profiles classified by the proposed algorithm. This means that 98% of the results of the previous solution were included in the new results. The following method was used to verify the results:

Step 1. Select a random set of 20 households from the 118 thousand entity profiles that were only available in the previous solution and select a random set of 20 households from the 1.26 million entity profiles that were only available in the new solution.
Step 2. Manually check the false alarms of the two random sets.
Step 3. Repeat Steps 1 and 2 a number of times.

In the data, the false alarm rate in the first random set was high, while the false alarm rate in the second random set was low. The repeated ‘cross validations’ showed that the proposed solution could significantly increase the household detection rate while keeping a low false alarm rate. The subject matter and the operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. The subject matter described in this specification can be implemented as one or more computer programs, e.g., one or more circuits of computer program instructions, encoded on one or more computer storage media for execution by, or to control the operation of, data processing apparatuses. A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. While a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially generated propagated signal. The computer storage medium can also be, or be included in, one or more separate components or media (e.g., multiple CDs, disks, or other storage devices).
The operations described in this specification can be implemented as operations performed by a data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources. The terms “computing device” or “component” encompass various apparatuses, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations of the foregoing. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures. A computer program (also known as a program, software, software application, app, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program can correspond to a file in a file system. A computer program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs (e.g., components of the client devices102and/or104or household entity generator106) to perform actions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatuses can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; and magneto-optical disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. While operations are depicted in the drawings in a particular order, such operations are not required to be performed in the particular order shown or in sequential order, and not all illustrated operations are required to be performed. Actions described herein can be performed in a different order.
The separation of various system components does not require separation in all implementations, and the described program components can be included in a single hardware or software product. The phraseology and terminology used herein are for the purpose of description and should not be regarded as limiting. Any references to implementations or elements or acts of the systems and methods herein referred to in the singular may also embrace implementations including a plurality of these elements, and any references in plural to any implementation or element or act herein may also embrace implementations including only a single element. Any implementation disclosed herein may be combined with any other implementation or embodiment. References to “or” may be construed as inclusive so that any terms described using “or” may indicate any of a single, more than one, and all of the described terms. References to at least one of a conjunctive list of terms may be construed as an inclusive OR to indicate any of a single, more than one, and all of the described terms. For example, a reference to “at least one of ‘A’ and ‘B’” can include only ‘A’, only ‘B’, as well as both ‘A’ and ‘B’. Such references used in conjunction with “comprising” or other open terminology can include additional items. The foregoing implementations are illustrative rather than limiting of the described systems and methods. The scope of the systems and methods described herein is thus indicated by the appended claims, rather than the foregoing description, and changes that come within the meaning and range of equivalency of the claims are embraced therein.
102,179
11860910
DESCRIPTION OF EMBODIMENTS

Hereinafter, an example embodiment of the present invention will be described with reference to the drawings.

Example Embodiment 1

FIG.1is a block diagram of an example of an information provision system of the first example embodiment of the present invention. The information provision system1of the present invention comprises an input unit2, an identification unit3, a storage unit4, a display device5, and a display control unit6. The input unit2is an input device to which multiple tables are input. For example, the input unit2may be a data reading device that reads multiple tables from a data recording medium, such as a magneto-optical disk, on which the multiple tables are recorded. In the present example embodiment, it is assumed that each column of each table input into the input unit2is assigned a column type (the meaning of the column) in advance. The column type is defined separately from a column name. The table may not include a column name. The column type can be determined before each table is input into the information provision system1by a worker (user) or an external information processing device, for example. It is assumed that there are at least three column types: “Entity-Identifier”, “Time”, and “Location”. In the present example embodiment, the four column types are “Entity-Identifier”, “Time”, “Location”, and “None”. Each column in each table has one of the types “Entity-Identifier”, “Time”, “Location”, and “None”. However, there may be other types than the above four types. The type “Entity-Identifier” represents a column whose attribute values identify corresponding rows in an arbitrary table and that has the property of being a primary key. The type “Entity-Identifier” is hereinafter referred to as “Entity-ID”. The type “Time” represents a column whose individual attribute value is a date, time, or date and time. The type “Location” represents a column whose individual attribute value is a location or position. Hereinafter, the type “Location” is referred to as “Space”. The type “None” represents a column that does not correspond to any of “Entity-ID”, “Time”, or “Space”. The identification unit3refers to the multiple input tables, identifies pairs of columns that are in a combinable relationship, identifies the pair of tables to which the individual columns that make up such a pair belong as a pair of tables to be combined, and further identifies a combine method of the tables to be combined. The identification unit3may identify not just one but multiple combinations of a pair of tables to be combined, a pair of columns in a combinable relationship, and a combine method of the tables. “Similarity-Join”, “Temporal-Join”, “Spatial-Join”, etc. are some of the combine methods that combine paired tables based on the pairs of columns that are in a combinable relationship. Examples of these combine methods are described below. The storage unit4is a storage device that stores the combination of the pair of tables to be combined, the pair of columns in a combinable relationship, and the combine method of the tables identified by the identification unit3. The display control unit6displays on the display device5the pair of tables to be combined, the pair of columns in a combinable relationship, and the combine method of the tables identified by the identification unit3.
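The column typing that drives the identification unit3can be modeled with a small typed-table structure. The sketch below is illustrative only; apart from the “Store name” column of Table21, the column names and values are assumptions:

```python
from dataclasses import dataclass, field
from enum import Enum

class ColumnType(Enum):
    ENTITY_ID = "Entity-Identifier"
    TIME = "Time"
    SPACE = "Location"   # the type "Location" is referred to as "Space"
    NONE = "None"

@dataclass
class Column:
    name: str            # a table may omit column names; only the type is required
    ctype: ColumnType
    values: list = field(default_factory=list)

@dataclass
class Table:
    name: str
    columns: list        # column types are assigned before input (step S1)

# In the spirit of Table 21 of FIG. 6: two Entity-ID columns, one Time column,
# and one None column (names other than "Store name" are placeholders).
table21 = Table("Table 21", [
    Column("Store name", ColumnType.ENTITY_ID),
    Column("Item", ColumnType.ENTITY_ID),
    Column("Date", ColumnType.TIME),
    Column("Amount", ColumnType.NONE),
])
```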
The identification unit3and the display control unit6are realized, for example, by a CPU (Central Processing Unit) of a computer that operates according to an information provision program. For example, the CPU may read the information provision program from a program storage medium such as a program storage device of the computer, and operate as the identification unit3and the display control unit6according to the information provision program. Next, the processing of the present example embodiment will be explained. FIG.2,FIG.3,FIG.4andFIG.5are flowcharts showing an example of the processing of the information provision system1of the present example embodiment. In the following, for ease of explanation, the case where there is at most one column with the type “Time” in one table and, similarly, at most one column with the type “Space” in one table is supposed as an example. The number of columns with the type “Entity-ID” in a table is not limited. First, the input unit2receives input of multiple tables (step S1). Each column of the individual tables to be input is assigned a column type in advance. In this example, the case where each of the tables shown inFIG.6,FIG.7,FIG.8, andFIG.9is input in step S1is supposed as an example. Table21shown inFIG.6includes two columns with the type “Entity-ID”, one column with the type “Time”, and one column with the type “None”. Table22shown inFIG.7includes one column with the type “Entity-ID” and one column with the type “None”. Table23shown inFIG.8includes one column with the type “Entity-ID”, one column with the type “Space”, and one column with the type “None”. Table24shown inFIG.9includes one column with the type “Space”, one column with the type “Time”, and two columns with the type “None”. Following step S1, the identification unit3selects one unselected table out of the multiple tables input in step S1(step S2). The table that has been selected is hereinafter referred to as the selected table. Here, the case where the identification unit3selects the table21(refer toFIG.6) in step S2is supposed as an example. In other words, the case where the selected table is the table21is supposed as an example. Following step S2, the identification unit3determines whether or not there is a column whose type is “Entity-ID” in the selected table (step S3). When there is no column in the selected table whose type is “Entity-ID” (No in step S3), the process proceeds to step S11(refer toFIG.3) described below. When there is a column in the selected table whose type is “Entity-ID”, the process proceeds to step S4. In this example, the selected table (Table21shown inFIG.6) includes a column whose type is “Entity-ID”. Therefore, the process proceeds to step S4. In step S4, the identification unit3selects one column whose type is “Entity-ID” from the selected table. At this time, the identification unit3excludes columns that have already been selected in step S4from the selection target. Here, it is assumed that the identification unit3selects the column whose column name is “Store name” from Table21shown inFIG.6. Next, the identification unit3identifies columns whose types are “Entity-ID” from among the columns of each table other than the selected table (step S5). When there are multiple columns whose type is “Entity-ID” among the columns of each table other than the selected table, the identification unit3identifies all of the multiple columns.
In this example, the identification unit 3 identifies, in step S5, the column whose column name in Table 22 (refer to FIG. 7) is "Product Name" and the column whose column name in Table 23 (refer to FIG. 8) is "Store Name".

Next, the identification unit 3 selects one unselected column from among the columns identified in step S5 (step S6). Here, assume as an example that the column whose column name in Table 23 is "Store Name" is selected.

Next, the identification unit 3 determines whether the column selected in step S4 and the column selected in step S6 are in a combinable relationship (step S7). In step S7, the identification unit 3 calculates, for example, an edit distance between attribute values for each combination of an attribute value included in the column selected in step S4 and an attribute value included in the column selected in step S6. Then, if the number of combinations of attribute values for which the edit distance is less than or equal to a threshold value is greater than or equal to a predetermined number, the identification unit 3 can determine that the two columns are in a combinable relationship. If the number of such combinations is less than the predetermined number, the identification unit 3 can determine that the two columns are not in a combinable relationship. The threshold and the predetermined number can be set in advance.

The method of determining in step S7 whether or not two columns whose type is "Entity-ID" are in a combinable relationship (in other words, the condition for determining that two such columns are in a combinable relationship) is not limited to the above example. In step S7, the identification unit 3 may use other methods to determine whether or not two columns are in a combinable relationship.

When it is determined that the two columns are in a combinable relationship (Yes in step S7), the process proceeds to step S8. When it is determined that the two columns are not in a combinable relationship (No in step S7), the process proceeds to step S9 (refer to FIG. 3).

In this example, the column selected in step S4 (the column whose column name in Table 21 (refer to FIG. 6) is "Store Name") and the column selected in step S6 (the column whose column name in Table 23 (refer to FIG. 8) is "Store Name") both have store names as attribute values. Therefore, assume as an example that the number of combinations of attribute values for which the edit distance is less than or equal to the threshold is greater than or equal to the predetermined number, and the identification unit 3 determines that the two columns are in a combinable relationship (Yes in step S7). In this case, the process proceeds to step S8, and the identification unit 3 determines to combine the selected table (in this example, Table 21 shown in FIG. 6) and the table including the column selected in step S6 (in this example, Table 23 shown in FIG. 8) by "Similarity-Join" (step S8). The pair of tables identified in the process of steps S7 and S8 is a pair of tables to be combined. In step S8, the identification unit 3 stores in the storage unit 4 a combination of the pair of tables to be combined (in this example, the pair of Tables 21 and 23), the pair of columns in a combinable relationship (in this example, the pair of the column whose column name is "Store Name" in Table 21 and the column whose column name is "Store Name" in Table 23), and the combine method (in this example, "Similarity-Join").
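The step S7 criterion lends itself to a compact sketch. The following Python fragment illustrates the edit-distance rule described above; the Levenshtein implementation, the function names, the sample store names, and the thresholds (dist_threshold, min_pairs) are illustrative assumptions, not values fixed by the embodiment.

```python
from itertools import product

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def combinable_entity_id(col_a, col_b, dist_threshold=2, min_pairs=2):
    """Step S7 sketch: count attribute-value pairs whose edit distance
    is at or below dist_threshold; the two columns are judged to be in
    a combinable relationship when that count reaches min_pairs."""
    close_pairs = sum(
        1 for va, vb in product(col_a, col_b)
        if edit_distance(va, vb) <= dist_threshold
    )
    return close_pairs >= min_pairs

# Hypothetical "Store Name" columns standing in for Tables 21 and 23.
stores_21 = ["North Store", "South Store", "East Store"]
stores_23 = ["North  Store", "South Store", "West Store"]
print(combinable_entity_id(stores_21, stores_23))  # True
```

With these toy inputs, "South Store" matches exactly and "North Store" is one edit away from "North  Store", so the pair count reaches the assumed min_pairs and the columns are judged combinable.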
After step S8, the process proceeds to step S9 (refer to FIG. 3). In step S9, the identification unit 3 determines whether or not all the columns identified in step S5 have already been selected. When all the columns identified in step S5 have been selected in step S6 (Yes in step S9), the process proceeds to step S10. When there are columns identified in step S5 that have not yet been selected in step S6 (No in step S9), the identification unit 3 repeats the process of step S6 and the subsequent processes.

In this example, the column whose column name is "Product Name" in Table 22 (refer to FIG. 7) has not yet been selected in step S6. Therefore, the process returns to step S6, and the identification unit 3 selects the column whose column name in Table 22 is "Product Name". Then, the identification unit 3 determines whether the column selected in step S4 and the column selected in step S6 are in a combinable relationship (step S7). The column selected in step S4 (the column whose column name in Table 21 (refer to FIG. 6) is "Store Name") is a column whose attribute values are store names. On the other hand, the column selected in step S6 (the column whose column name in Table 22 is "Product Name") is a column whose attribute values are product names. Therefore, assume as an example that the number of combinations of attribute values for which the edit distance is less than or equal to the threshold is less than the predetermined number, and the identification unit 3 determines that the two columns are not in a combinable relationship (No in step S7). In this case, step S8 is not executed and the process proceeds to step S9.

Here, both of the two columns identified in step S5 have already been selected in step S6. Therefore, the identification unit 3 determines that all the columns identified in step S5 have already been selected (Yes in step S9), and the process proceeds to step S10.

In step S10, the identification unit 3 determines whether or not all the columns whose type is "Entity-ID" in the selected table have already been selected. When all such columns have already been selected in step S4 (Yes in step S10), the process proceeds to step S11. When there are columns whose type is "Entity-ID" in the selected table that have not yet been selected in step S4 (No in step S10), the identification unit 3 repeats the process of step S4 and the subsequent processes.

In this example, the column whose column name is "Product Name" in Table 21, which corresponds to the selected table, has not yet been selected in step S4. Therefore, the process returns to step S4, and the identification unit 3 selects the column whose column name is "Product Name" in Table 21. Since the process of steps S4 to S10 has already been described, a detailed explanation is omitted here. Here, if the column whose column name in Table 22 (refer to FIG. 7) is "Product Name" is selected in step S6, the identification unit 3 executes steps S7 and S8 sequentially. Then, in step S8, the identification unit 3 stores in the storage unit 4 a combination of the pair of tables to be combined (in this example, the pair of Tables 21 and 22), the pair of columns in a combinable relationship (in this example, the pair of the column whose column name is "Product Name" in Table 21 and the column whose column name is "Product Name" in Table 22), and the combine method (in this example, "Similarity-Join").
At the time of proceeding to step S10 again, all the columns in Table 21 whose type is "Entity-ID" have already been selected (Yes in step S10). Therefore, the process proceeds to step S11.

In step S11, the identification unit 3 determines whether or not there is a column whose type is "Time" in the selected table. When no column whose type is "Time" exists in the selected table (No in step S11), the process proceeds to step S17 (refer to FIG. 4) described below. When there is a column whose type is "Time" in the selected table (Yes in step S11), the process proceeds to step S12. In this example, the selected table (Table 21 shown in FIG. 6) includes a column whose type is "Time". Therefore, the process proceeds to step S12.

In step S12, the identification unit 3 identifies the columns whose type is "Time" from among the columns of each table other than the selected table. When there are multiple columns whose type is "Time" among the columns of each table other than the selected table, the identification unit 3 identifies all of them. In this example, the identification unit 3 identifies, in step S12, the column whose column name is "Date and Time" in Table 24 (refer to FIG. 9). Therefore, in this example, one column is identified in step S12.

Next, the identification unit 3 selects one unselected column from among the columns identified in step S12 (step S13). In this example, the identification unit 3 selects the column whose column name in Table 24 is "Date and Time".

Next, the identification unit 3 determines whether the column whose type is "Time" in the selected table and the column selected in step S13 are in a combinable relationship (step S14). In step S14, the identification unit 3 determines whether or not the two columns whose type is "Time" are in a combinable relationship. An example of this determination is shown below. For example, when the two columns whose type is "Time" both have times (not including dates) as attribute values, or both have dates (which may also include times) as attribute values, the identification unit 3 may determine that the two columns are in a combinable relationship (Yes in step S14). In other cases, the identification unit 3 may determine that the two columns are not in a combinable relationship (No in step S14). For example, when one of the two columns whose type is "Time" has only times (not including dates) as its attribute values and the other has only dates as its attribute values, the identification unit 3 determines that the two columns are not in a combinable relationship.

In this example, the column whose type is "Time" in the selected table (the column whose column name in Table 21 is "Date and Time") and the column selected in step S13 (the column whose column name in Table 24 is "Date and Time") both have dates as their attribute values (refer to FIG. 6 and FIG. 9). Therefore, in this example, the identification unit 3 determines in step S14 that the two columns whose type is "Time" are in a combinable relationship (Yes in step S14).

The method of determining in step S14 whether or not two columns whose type is "Time" are in a combinable relationship (in other words, the condition for determining that two such columns are in a combinable relationship) is not limited to the above example. In step S14, the identification unit 3 may use other methods to determine whether or not the two columns are in a combinable relationship.
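One possible reading of the step S14 criterion is that both columns must share the same temporal granularity (time-only versus date-bearing). The sketch below illustrates that reading only; the accepted formats, the helper names, and the sample values are assumptions made for this example, not part of the embodiment.

```python
from datetime import datetime

# Hypothetical formats: time-only values versus date-bearing values.
TIME_ONLY_FORMATS = ("%H:%M", "%H:%M:%S")
DATE_FORMATS = ("%Y-%m-%d", "%Y-%m-%d %H:%M", "%Y/%m/%d")

def _parses_as(value: str, formats) -> bool:
    for fmt in formats:
        try:
            datetime.strptime(value, fmt)
            return True
        except ValueError:
            pass
    return False

def granularity(column) -> str:
    """Classify a "Time" column as 'time' (no date part) or 'date'."""
    if all(_parses_as(v, TIME_ONLY_FORMATS) for v in column):
        return "time"
    if all(_parses_as(v, DATE_FORMATS) for v in column):
        return "date"
    return "unknown"

def combinable_time(col_a, col_b) -> bool:
    """Step S14 sketch: combinable only when both columns share a
    recognized temporal granularity."""
    ga, gb = granularity(col_a), granularity(col_b)
    return ga != "unknown" and ga == gb

print(combinable_time(["2023-04-01", "2023-04-02"],
                      ["2023-04-01 09:00", "2023-04-03 18:30"]))  # True
print(combinable_time(["09:00", "18:30"], ["2023-04-01"]))        # False
```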
When it is determined in step S14 that the two columns are not in a combinable relationship (No in step S14), the process proceeds to step S16 (refer to FIG. 4) described below. When it is determined in step S14 that the two columns are in a combinable relationship (Yes in step S14), the process proceeds to step S15 (refer to FIG. 4). In this example, the process proceeds to step S15.

In step S15, the identification unit 3 determines to combine the selected table (in this example, Table 21) and the table including the column selected in step S13 (in this example, Table 24 shown in FIG. 9) by "Temporal-Join". The pair of tables identified in the process of steps S14 and S15 is a pair of tables to be combined. In step S15, the identification unit 3 stores in the storage unit 4 a combination of the pair of tables to be combined (in this example, the pair of Tables 21 and 24), the pair of columns in a combinable relationship (in this example, the pair of the column whose column name is "Date and Time" in Table 21 and the column whose column name is "Date and Time" in Table 24), and the combine method (in this example, "Temporal-Join"). After step S15, the process proceeds to step S16.

In step S16, the identification unit 3 determines whether or not all the columns identified in step S12 have already been selected. When all the columns identified in step S12 have already been selected in step S13 (Yes in step S16), the process proceeds to step S17. When there are columns identified in step S12 that have not yet been selected in step S13 (No in step S16), the identification unit 3 repeats the process of step S13 and the subsequent processes. In this example, only one column (the column whose column name in Table 24 is "Date and Time") is identified in step S12, and that column has been selected in step S13 (Yes in step S16). Therefore, the process proceeds to step S17.

Here, for ease of explanation, the case where there is at most one column with the type "Time" in one table is assumed as an example. If there are two or more columns with the type "Time" in the selected table, the identification unit 3 may execute the process of steps S12 to S16 for each of those columns.

In step S17, the identification unit 3 determines whether or not there is a column whose type is "Space" in the selected table. When no column whose type is "Space" exists in the selected table (No in step S17), the process proceeds to step S23 (refer to FIG. 5). When there is a column whose type is "Space" in the selected table (Yes in step S17), the process proceeds to step S18 (refer to FIG. 4). In this example, since there is no column whose type is "Space" in Table 21, which corresponds to the selected table (No in step S17), the process proceeds to step S23. The process for proceeding to step S18 will be described below.

In step S23, the identification unit 3 determines whether or not all the tables input in step S1 have already been selected. When all the input tables have been selected in step S2 (Yes in step S23), the process proceeds to step S24. When any of the input tables have not yet been selected in step S2 (No in step S23), the identification unit 3 repeats the process of step S2 and the subsequent processes. In this example, the identification unit 3 has not yet selected Tables 22, 23, and 24. Accordingly, the identification unit 3 repeats the process of step S2 and the subsequent processes. The following describes the case where the process proceeds from step S23 to step S2 and the identification unit 3 selects Table 23 (refer to FIG. 8) in step S2.
In this step S2 and thereafter, Table 23 corresponds to the selected table. After step S2, in step S3, the identification unit 3 determines that there is a column whose type is "Entity-ID" in the selected table (Table 23) (Yes in step S3). Therefore, the identification unit 3 executes the process of step S4 and the subsequent processes. Since the loop processing of steps S4 to S10 has already been explained, the explanation is omitted here. In step S10 (refer to FIG. 3), when it is determined that all the columns whose type is "Entity-ID" in the selected table have been selected (Yes in step S10), the process proceeds to step S11.

In step S11, the identification unit 3 determines whether or not there is a column whose type is "Time" in the selected table. In this example, since there is no column whose type is "Time" in the selected table (Table 23) (No in step S11), the process proceeds to step S17 (refer to FIG. 4).

In step S17, the identification unit 3 determines whether or not there is a column whose type is "Space" in the selected table (Table 23). In this example, there is a column whose type is "Space" in Table 23 (Yes in step S17). Therefore, the process proceeds to step S18.

In step S18, the identification unit 3 identifies the columns whose type is "Space" from among the columns of each table other than the selected table. When there are multiple columns whose type is "Space" among the columns of each table other than the selected table, the identification unit 3 identifies all of them. In this example, the identification unit 3 identifies the column whose column name is "Prefectures" in Table 24 (refer to FIG. 9) in step S18. Therefore, in this example, one column is identified in step S18.

Next, the identification unit 3 selects one unselected column from among the columns identified in step S18 (step S19). In this example, the identification unit 3 selects the column whose column name in Table 24 is "Prefectures".

Next, the identification unit 3 determines that the column whose type is "Space" in the selected table (in this example, the column whose column name is "Address" in Table 23) and the column selected in step S19 (in this example, the column whose column name is "Prefectures" in Table 24) are in a combinable relationship (step S20). Next, the identification unit 3 determines to combine the selected table (in this example, Table 23) and the table including the column selected in step S19 (in this example, Table 24) by "Spatial-Join" (step S21). The pair of tables identified in the process of steps S20 and S21 is a pair of tables to be combined. In step S21, the identification unit 3 stores in the storage unit 4 a combination of the pair of tables to be combined (in this example, the pair of Tables 23 and 24), the pair of columns in a combinable relationship (in this example, the pair of the column whose column name is "Address" in Table 23 and the column whose column name is "Prefectures" in Table 24), and the combine method (in this example, "Spatial-Join").

After step S21, the process proceeds to step S22. In step S22, the identification unit 3 determines whether or not all the columns identified in step S18 have already been selected. When all the columns identified in step S18 have already been selected in step S19 (Yes in step S22), the process proceeds to step S23 (refer to FIG. 5). When there are columns identified in step S18 that have not yet been selected in step S19 (No in step S22), the identification unit 3 repeats the process of step S19 and the subsequent processes.
In this example, only one column (the column whose column name in Table 24 is "Prefectures") is identified in step S18, and that column has been selected in step S19 (Yes in step S22). Therefore, the process proceeds to step S23.

Here, for ease of explanation, this example assumes that there is at most one column with the type "Space" in one table. When there are two or more columns whose type is "Space" in the selected table, the identification unit 3 may execute the processing of steps S18 to S22 for each column.

As already explained, in step S23, the identification unit 3 determines whether or not all the tables input in step S1 have already been selected. When there are tables among the input tables that have not yet been selected in step S2 (No in step S23), the identification unit 3 repeats the process of step S2 and the subsequent processes. In this example, Tables 22 and 24 have not yet been selected. Therefore, the identification unit 3 selects Table 22 in step S2 and repeats the process of step S3 and the subsequent processes. When the process proceeds to step S2 again, the identification unit 3 selects Table 24 and repeats the process of step S3 and the subsequent processes. In step S23, when the identification unit 3 determines that all the tables input in step S1 have already been selected (Yes in step S23), the process proceeds to step S24.

In step S24, the display control unit 6 reads each combination of a pair of tables to be combined, a pair of columns in a combinable relationship, and a combine method from the storage unit 4. Then, based on each combination read from the storage unit 4, the display control unit 6 displays on the display device 5 the pair of tables to be combined, the pair of columns in a combinable relationship, and the combine method.

FIG. 10 is a schematic diagram showing an example of the information that the display control unit 6 displays on the display device 5 in step S24. The display control unit 6, for example, displays each input table on the display device 5. Furthermore, for each combination of a pair of tables to be combined, a pair of columns in a combinable relationship, and a combine method, the display control unit 6 displays on the display device 5 a line connecting the columns in a combinable relationship, and displays the combine method included in the combination near the line (refer to FIG. 10). When the columns in a combinable relationship are connected by a line, the tables to which the columns belong are also connected by that line. Therefore, in the example shown in FIG. 10, by displaying the lines connecting the columns in a combinable relationship, the display control unit 6 displays each pair of columns in a combinable relationship and, at the same time, the pair of tables to be combined based on that pair of columns. In the example shown in FIG. 10, the combine method is displayed near the line. Accordingly, in the display form illustrated in FIG. 10, the display control unit 6 can display the pairs of tables to be combined, the pairs of columns in a combinable relationship, and the combine methods of the tables, as identified by the identification unit 3. In the example shown in FIG. 10, for example, Tables 21 and 22 are a pair of tables to be combined, and the combine method when combining Tables 21 and 22 based on the "Product Name" column in Table 21 and the "Product Name" column in Table 22 is "Similarity-Join". However, the display form of information by the display control unit 6 is not limited to the example shown in FIG. 10.
As a result of the process illustrated in the flowchart, it may be determined that one column is in a combinable relationship with multiple columns. In this case, lines extending from the one column to the multiple columns will be displayed.

According to the present example embodiment, the display control unit 6 displays on the display device 5 each pair of tables to be combined, each pair of columns in a combinable relationship, and each combine method of the tables. Therefore, the information provision system 1 of the present example embodiment can show a worker (a user of the information provision system 1) which tables should be combined, based on which column of which table and which column of which table, and by which method. Accordingly, even a worker with little specialized knowledge can smoothly proceed with a task of combining multiple tables. In other words, according to the present example embodiment, information useful for the task of combining tables for data analysis can be provided to the worker.

The following are examples of table combine processes using "Similarity-Join", "Temporal-Join", and "Spatial-Join". However, the combine processes shown below are examples, and each combine process is not limited to them.

The information provision system 1 may or may not comprise a combine unit (not shown) that executes the combine process of tables according to the contents presented to the worker by the display control unit 6. When the information provision system 1 comprises such a combine unit, the combine unit is realized, for example, by a CPU of a computer operating according to an information provision program. In this case, the CPU can read the information provision program from a program recording medium such as a program storage device in the computer, and operate as the identification unit 3, the display control unit 6, and the combine unit according to the information provision program. If the information provision system 1 does not comprise such a combine unit, an external system other than the information provision system 1 may, for example, combine the tables according to the instructions of the worker. In this case, the worker may give instructions to the external system regarding table combines based on the information provided by the information provision system 1 of the present invention (the information shown in FIG. 10, which is displayed on the display device 5 by the display control unit 6).

The case where the combine method "Similarity-Join" is defined along with two columns that are in a combinable relationship will be explained. It is assumed that a pair of an arbitrary attribute value (referred to as attribute value a) in one column (referred to as column A) and an arbitrary attribute value (referred to as attribute value b) in the other column (referred to as column B) is specified, satisfying the condition that the edit distance between the attribute values is equal to or less than a threshold value. In this case, the record including the attribute value b in the table including column B may be added to the record including the attribute value a in the table including column A. Here, the edit distance between attribute values has been used as an example, but word embeddings may also be used to identify pairs of attribute values. For example, suppose that the distance between the vectors obtained by word2vec from attribute values a and b respectively is calculated, and pairs whose distance is less than a threshold are identified.
In this case, as described above, the record including the attribute value b in the table including column B may be added to the record including the attribute value a in the table including column A.

The case where the combine method "Temporal-Join" is defined along with two columns that are in a combinable relationship will be explained. It is assumed that a pair of an arbitrary attribute value (referred to as attribute value a) in one column (referred to as column A) and an arbitrary attribute value (referred to as attribute value b) in the other column (referred to as column B) is specified, under the condition that a time period within a predetermined range centered on the attribute value a overlaps a time period within a predetermined range centered on the attribute value b. In this case, the record including the attribute value b in the table including column B may be added to the record including the attribute value a in the table including column A.

The case where the combine method "Spatial-Join" is defined along with two columns that are in a combinable relationship will be explained. It is assumed that a pair of an arbitrary attribute value (referred to as attribute value a) in one column (referred to as column A) and an arbitrary attribute value (referred to as attribute value b) in the other column (referred to as column B) is specified, under the condition that the distance between the coordinates obtained from attribute value a (for example, latitude and longitude) and the coordinates obtained from attribute value b is equal to or less than a threshold value. In this case, the record including the attribute value b in the table including column B may be added to the record including the attribute value a in the table including column A. As the distance between the two coordinates, for example, the Euclidean distance or the Manhattan distance can be used.

These combine processes are examples, and the combine processes of tables by "Similarity-Join", "Temporal-Join", and "Spatial-Join" are not limited to the above examples. FIG. 11 shows the result of combining each of the aforementioned Tables 21-24 according to the information shown in FIG. 10.
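The Temporal-Join and Spatial-Join conditions above can be summarized as pair predicates feeding a generic record-append loop. The following Python sketch is one assumed realization; the half-window width, the coordinate representation, the record layout, and all names are invented for illustration and are not fixed by the embodiment.

```python
from datetime import datetime, timedelta
from math import hypot

def temporal_match(a: datetime, b: datetime,
                   half_window: timedelta = timedelta(minutes=30)) -> bool:
    """Temporal-Join condition sketch: the windows
    [a - half_window, a + half_window] and [b - half_window, b + half_window]
    overlap."""
    return (a - half_window <= b + half_window
            and b - half_window <= a + half_window)

def spatial_match(a: tuple, b: tuple, threshold: float = 0.1) -> bool:
    """Spatial-Join condition sketch: Euclidean distance between
    (latitude, longitude) pairs is at or below the threshold."""
    return hypot(a[0] - b[0], a[1] - b[1]) <= threshold

def join_records(table_a, table_b, key_a, key_b, match):
    """Combine sketch: for each record of table A, append the fields of
    each matching table-B record, prefixing keys to avoid collisions."""
    joined = []
    for rec_a in table_a:
        for rec_b in table_b:
            if match(rec_a[key_a], rec_b[key_b]):
                joined.append({**rec_a,
                               **{f"B.{k}": v for k, v in rec_b.items()}})
    return joined

# Hypothetical records standing in for two input tables.
sales = [{"store": "North", "at": datetime(2023, 4, 1, 9, 10)}]
weather = [{"at": datetime(2023, 4, 1, 9, 30), "rain": False}]
print(join_records(sales, weather, "at", "at", temporal_match))
```

The same join_records loop serves both methods: passing spatial_match with coordinate-valued keys yields a Spatial-Join in the same style.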
Next, modifications of the present example embodiment will be explained. The various modifications shown below can also be applied to the following second example embodiment.

In step S6 (refer to FIG. 2), step S13 (refer to FIG. 3), and step S19 (refer to FIG. 4) of the flowchart illustrated in the first example embodiment, the identification unit 3 may exclude from the selection target a column that has already been determined to be in a combinable relationship with another column. In this case, in step S9 (refer to FIG. 3), the identification unit 3 treats a column excluded from the selection target in step S6, because it is already defined as being in a combinable relationship with another column, as a column already selected in step S6. Similarly, in step S16 (refer to FIG. 4), the identification unit 3 treats a column excluded from the selection target in step S13 as a column already selected in step S13, and in step S22 (refer to FIG. 4), it treats a column excluded from the selection target in step S19 as a column already selected in step S19. In this way, the processing time can be shortened by excluding from the selection target, in steps S6, S13, and S19, the columns that have already been determined to be in a combinable relationship with other columns.

In step S2 (refer to FIG. 2) of the flowchart illustrated in the first example embodiment, the identification unit 3 may exclude from the selection target a table that has already been defined to be combined with another table. In this case, in step S23 (refer to FIG. 5), the identification unit 3 treats a table excluded from selection in step S2, because it has already been defined to be combined with another table, as a table that has already been selected in step S2. In this way, the processing time can be shortened by excluding, in step S2, the tables that have already been defined to be combined with other tables from the selection target.

In the multiple tables to be input, there may be a pair of columns, belonging to different tables, that are predetermined to be in a combinable relationship, and the combine method for those different tables may be predetermined. In other words, in the multiple tables to be input, there may be a combination of a pair of tables to be combined, a pair of columns in a combinable relationship, and a combine method that has already been defined. The worker may not be able to determine all the combinations of the pairs of tables to be combined, the pairs of columns in a combinable relationship, and the combine methods, but may be able to determine some of the combinations based on the knowledge the worker has. In such a case, the worker can input the multiple tables into the input unit 2 along with information indicating the combinations that the worker has been able to determine. In this case, as explained in the previous modification, in step S6 (refer to FIG. 2), step S13 (refer to FIG. 3), and step S19 (refer to FIG. 4), the identification unit 3 may exclude from the selection target the columns that have already been determined to be in a combinable relationship with other columns. Then, in step S9 (refer to FIG. 3), the identification unit 3 may treat a column excluded from the selection target in step S6 as a column already selected in step S6. Similarly, in step S16 (refer to FIG. 4), the identification unit 3 can treat a column excluded from the selection target in step S13 as a column already selected in step S13, and in step S22 (refer to FIG. 4), it can treat a column excluded from the selection target in step S19 as a column already selected in step S19.

FIG. 12 shows another modification of the first example embodiment. Elements similar to those shown in FIG. 1 are marked with the same signs as in FIG. 1, and their explanation is omitted. In the modification shown in FIG. 12, the information provision system 1 has a column type estimation unit 7 in addition to the elements shown in FIG. 1. In the first example embodiment described above, the case in which a column type (column meaning) is assigned in advance to the individual columns of the individual tables input to the input unit 2 is assumed as an example. In this modification, the column types need not be assigned to the individual columns of the individual tables that are input to the input unit 2. For each individual column of the individual tables input to the input unit 2, the column type estimation unit 7 estimates the type of the column based on the attribute values included in the column, and adds (assigns) the estimated type to the column.
In this modification, when multiple tables are input to the input unit 2 in step S1 (refer to FIG. 2), for example before the first execution of step S2, the column type estimation unit 7 may estimate the column type for each individual column of the individual tables input to the input unit 2, based on the attribute values included in the column, and add the estimated type to the column. Then, the identification unit 3 may execute the process of step S2 and the subsequent processes by referring to the column type added to each individual column of each table by the column type estimation unit 7.

The method by which the column type estimation unit 7 estimates the type of an individual column based on the attribute values included in the column can be a known method. For example, the column type estimation unit 7 may estimate the type of an individual column by the method of estimating the meaning of a column described in non-patent literature 1 or the method described in patent literature 1. At this time, it is assumed that there are at least "Entity-ID", "Time", and "Space" as column types. If the column type estimation unit 7 obtains a type other than these three types as an estimation result, it may replace that type with "None".

The column type estimation unit 7 is realized, for example, by a CPU of a computer that operates according to the information provision program. In this case, the CPU can read the information provision program from a program storage medium such as a program storage device in the computer, and operate as the column type estimation unit 7, the identification unit 3, and the display control unit 6 according to the information provision program.
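As a rough intuition for such estimation (the cited literature describes more principled methods), one can inspect the attribute values against simple patterns. The fragment below is only a toy heuristic devised for this description; its regular expressions, place-name suffixes, thresholds, and majority-vote rule are illustrative assumptions and not part of the embodiment.

```python
import re

DATE_RE = re.compile(r"^\d{4}[-/]\d{1,2}[-/]\d{1,2}")
TIME_RE = re.compile(r"^\d{1,2}:\d{2}(:\d{2})?$")
# Toy location patterns: "lat, lon" pairs or strings with a place suffix.
LATLON_RE = re.compile(r"^-?\d{1,2}\.\d+,\s*-?\d{1,3}\.\d+$")
PLACE_SUFFIXES = ("Prefecture", "City", "Street", "Station")

def estimate_column_type(values) -> str:
    """Toy estimator: classify each attribute value, take the majority
    vote; unrecognized columns with all-unique values are treated as
    "Entity-ID" candidates, everything else becomes "None"."""
    def vote(v: str) -> str:
        if DATE_RE.match(v) or TIME_RE.match(v):
            return "Time"
        if LATLON_RE.match(v) or v.endswith(PLACE_SUFFIXES):
            return "Space"
        return "Other"
    votes = [vote(v) for v in values]
    majority = max(set(votes), key=votes.count)
    if majority == "Other":
        return "Entity-ID" if len(set(values)) == len(values) else "None"
    return majority

print(estimate_column_type(["2023-04-01", "2023/4/2"]))          # Time
print(estimate_column_type(["35.68, 139.76", "Tokyo Station"]))  # Space
print(estimate_column_type(["Shop A", "Shop B", "Shop C"]))      # Entity-ID
```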
Example Embodiment 2

As one of the modifications of the first example embodiment, it was explained that, in the multiple tables to be input, there may be a combination of a pair of tables to be combined, a pair of columns that are in a combinable relationship, and a combine method that has already been defined. The information provision system of the second example embodiment presents combinations of pairs of tables to be combined, pairs of columns in a combinable relationship, and combine methods to a worker, and adds such combinations in response to an operation by the worker.

FIG. 13 is a block diagram of an example of an information provision system of the second example embodiment. Elements similar to those shown in FIG. 1 are marked with the same signs as in FIG. 1, and their explanation is omitted. The information provision system 1 of the second example embodiment includes an information adding unit 9 in addition to the elements shown in FIG. 1.

The operations from step S1 (refer to FIG. 2) to step S24 (refer to FIG. 5) described in the first example embodiment are the same in the second example embodiment. However, in the present example embodiment, the display control unit 6 displays in step S24 a GUI (Graphical User Interface) for a worker to add combinations of pairs of tables to be combined, pairs of columns in a combinable relationship, and combine methods, together with the individual combinations identified by the identification unit 3.

The information adding unit 9 receives a combination of a pair of tables to be combined, a pair of columns in a combinable relationship, and a combine method according to the worker's operation on the GUI, and stores the combination in the storage unit 4. When the information adding unit 9 stores a new combination in the storage unit 4, the display control unit 6 reads that combination as well, and additionally displays on the display device 5 the pair of tables to be combined, the pair of columns in a combinable relationship, and the combine method included in the combination.

FIG. 14 is a schematic diagram of an example of a screen including a GUI displayed in step S24. In the second example embodiment, the display control unit 6 displays the screen illustrated in FIG. 14 on the display device 5 in step S24. The screen shown in FIG. 14 includes a pull-down menu 51 and an enter button 52. The display contents other than the pull-down menu 51 and the enter button 52 are the same as the display contents illustrated in FIG. 10. However, each column of each table shown in FIG. 14 can be specified by a mouse click or other operation. The pull-down menu 51 is used by the worker to specify the combine method of tables, such as "Similarity-Join", "Temporal-Join", or "Spatial-Join".

An example of the operation by which the information adding unit 9 receives additional information from a worker is explained with reference to FIG. 14. Two columns (a pair of columns) belonging to different tables are specified by the worker using mouse clicks or other operations. In addition, the combine method between the table to which one of the two columns belongs and the table to which the other column belongs is specified via the pull-down menu 51. Then, the enter button 52 is clicked by the worker. The information adding unit 9 then regards the table to which one of the two specified columns belongs and the table to which the other column belongs as a pair of tables to be combined. Furthermore, the information adding unit 9 defines the two specified columns as a pair of columns in a combinable relationship. Then, the information adding unit 9 adds the combination of the pair of tables to be combined, the pair of columns in a combinable relationship, and the combine method specified by the pull-down menu 51 to the storage unit 4. As already explained, when the information adding unit 9 stores a new combination in the storage unit 4, the display control unit 6 reads that combination as well, and additionally displays on the display device 5 the pair of tables to be combined, the pair of columns in a combinable relationship, and the combine method included in the combination.

The information adding unit 9 is realized, for example, by a CPU of a computer that operates according to an information provision program. In this case, the CPU can read the information provision program from a program recording medium such as a program storage device in the computer, and operate as the identification unit 3, the display control unit 6, and the information adding unit 9 according to the information provision program.

According to the second example embodiment, the same effect as the first example embodiment can be obtained. Furthermore, the second example embodiment allows a worker to have the information provision system 1 add a combination of a pair of tables to be combined, a pair of columns in a combinable relationship, and a combine method at the worker's own decision.
As mentioned above, the various modifications of the first example embodiment can also be applied to the second example embodiment.

FIG. 15 shows a schematic block diagram of a computer for the information provision system 1 of each example embodiment of the present invention. The computer 1000 has a CPU 1001, a main memory 1002, an auxiliary memory 1003, an interface 1004, a display device 1005, and an input device 1006. The information provision system 1 of each example embodiment of the present invention and the modifications thereof is realized by the computer 1000. The operation of the information provision system 1 is stored in the auxiliary memory 1003 in the form of an information provision program. The CPU 1001 reads the information provision program from the auxiliary memory 1003, deploys the information provision program in the main memory 1002, and executes the operations described in each of the above example embodiments and the various modifications according to the information provision program.

The auxiliary memory 1003 is an example of a non-transitory tangible medium. Other examples of non-transitory tangible media are a magnetic disk, a magneto-optical disk, a CD-ROM (Compact Disk Read Only Memory), a DVD-ROM (Digital Versatile Disk Read Only Memory), a semiconductor memory, and the like, which are connected through the interface 1004. When the program is delivered to the computer 1000 through a communication line, the computer 1000 that receives the delivery may deploy the program into the main memory 1002 and operate according to the program. The program may also be a program for realizing part of the aforementioned processing. Further, the program may be a difference program that realizes the aforementioned processing in combination with other programs already stored in the auxiliary memory 1003.

Some or all of the components may be realized by general-purpose or dedicated circuitry, processors, or a combination of these. They may be configured by a single chip or by multiple chips connected through a bus. Some or all of the components may be realized by a combination of the above-mentioned circuits, etc. and a program. When some or all of the components are realized by multiple information processing devices, circuits, etc., the multiple information processing devices, circuits, etc. may be centrally located or distributed. For example, the information processing devices, circuits, etc. may be implemented as a client-and-server system, a cloud computing system, etc., each of which is connected through a communication network.

Next, a summary of the present invention will be described. FIG. 16 is a block diagram showing an example of a summarized information provision system of the present invention. The information provision system of the present invention comprises an input unit 81, an identification unit 82, and an output unit 83. The input unit 81 (for example, the input unit 2 in the example embodiments) receives input of multiple tables. The identification unit 82 (for example, the identification unit 3 in the example embodiments) identifies a pair of columns that are in a combinable relationship, identifies the pair of tables to which the columns forming the pair belong as the pair of tables to be combined, and identifies a combine method of the tables to be combined. The output unit 83 (for example, the display control unit 6 in the example embodiments) outputs the pair of tables to be combined, the pair of columns in a combinable relationship, and the combine method of the tables to be combined.
Such a configuration can provide workers with useful information for combining tables, so that even workers with little specialized knowledge can smoothly proceed with the task of combining multiple tables.

It may also be configured that the identification unit 82 identifies the pair of columns in a combinable relationship based on the types of the individual columns in the individual tables, identifies the pair of tables to which the columns forming the pair belong as the pair of tables to be combined, and identifies the combine method of the tables to be combined.

It may also be configured that, when a pair of columns belonging to different tables and having a predetermined type, meaning that the columns consist of attribute values that identify a row of an arbitrary table and have the property of being a primary key, satisfies a first condition, the identification unit 82 identifies the pair of columns as the pair of columns in a combinable relationship, identifies the pair of tables to which those columns belong as the pair of tables to be combined, and identifies the combine method of the tables to be combined as Similarity-Join; that, when a pair of columns belonging to different tables and having the type "Time" satisfies a second condition, the identification unit 82 identifies the pair of columns as the pair of columns in a combinable relationship, identifies the pair of tables to which those columns belong as the pair of tables to be combined, and identifies the combine method of the tables to be combined as Temporal-Join; and that the identification unit 82 identifies a pair of columns belonging to different tables and having the type "Location" as the pair of columns in a combinable relationship, identifies the pair of tables to which those columns belong as the pair of tables to be combined, and identifies the combine method of the tables to be combined as Spatial-Join.

The multiple tables with column types assigned to the individual columns in advance may be input to the input unit 81. The system may also be configured with a column type estimation unit (for example, the column type estimation unit 7) that estimates a column type for each individual column of each table input to the input unit 81. In the multiple tables to be input, there may exist a pair of columns belonging to different tables that are predetermined to be in a combinable relationship, with the combine method of those different tables also predetermined. The system may also be configured with an information adding unit (for example, the information adding unit 9) which adds a pair of tables to be combined, a pair of columns in a combinable relationship, and a combine method of the tables to be combined in response to a user operation after the pair of tables to be combined, the pair of columns in a combinable relationship, and the combine method of the tables to be combined have been output.

While the present invention has been described with reference to the example embodiments, the present invention is not limited to the aforementioned example embodiments. Various changes understandable to those skilled in the art within the scope of the present invention can be made to the structures and details of the present invention.

INDUSTRIAL APPLICABILITY

This invention is suitably applied to an information provision system that provides workers with information about the task of combining tables.
REFERENCE SIGNS LIST

1 Information provision system
2 Input unit
3 Identification unit
4 Storage unit
5 Display device
6 Display control unit
7 Column type estimation unit
9 Information adding unit
DETAILED DESCRIPTION

Principles of the present disclosure will now be described with reference to some example embodiments. It is to be understood that these embodiments are described only for the purpose of illustration and to help those skilled in the art understand and implement the present disclosure, without suggesting any limitations as to the scope of the disclosure. The disclosure described herein can be implemented in various manners other than the ones described below.

As used herein, the term "includes" and its variants are to be read as open terms that mean "includes, but is not limited to." The term "based on" is to be read as "based at least in part on." The terms "one embodiment" and "an embodiment" are to be read as "at least one embodiment." The term "another embodiment" is to be read as "at least one other embodiment." Other definitions, explicit and implicit, may be included below.

Reference is first made to FIG. 1, in which an exemplary electronic device or computer system/server 12 which is applicable to implement the embodiments of the present disclosure is shown. Computer system/server 12 is only illustrative and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the disclosure described herein.

As shown in FIG. 1, computer system/server 12 is shown in the form of a general-purpose computing device. The components of computer system/server 12 may include, but are not limited to, one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including system memory 28 to processor 16.

Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.

Computer system/server 12 typically includes a variety of computer system readable media. Such media may be any available media that are accessible by computer system/server 12, and they include both volatile and non-volatile media, removable and non-removable media.

System memory 28 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. Computer system/server 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system 34 can be provided for reading from and writing to a non-removable, non-volatile magnetic medium (not shown and typically called a "hard drive"). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus 18 by one or more data media interfaces. As will be further depicted and described below, memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the disclosure.
Program/utility 40, having a set (at least one) of program modules 42, may be stored in memory 28 by way of example, and not limitation, as may an operating system, one or more application programs, other program modules, and program data. Each of the operating system, the one or more application programs, the other program modules, and the program data, or some combination thereof, may include an implementation of a networking environment. Program modules 42 generally carry out the functions and/or methodologies of embodiments of the disclosure as described herein.

Computer system/server 12 may also communicate with one or more external devices 14 such as a keyboard, a pointing device, a display 24, and the like; with one or more devices that enable a user to interact with computer system/server 12; and/or with any devices (e.g., network card, modem, etc.) that enable computer system/server 12 to communicate with one or more other computing devices. Such communication can occur via input/output (I/O) interfaces 22. Still yet, computer system/server 12 can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter 20. As depicted, network adapter 20 communicates with the other components of computer system/server 12 via bus 18. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system/server 12. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data archival storage systems, and the like.

In computer system/server 12, I/O interfaces 22 may support one or more of various different input devices that can be used to provide input to computer system/server 12. For example, the input device(s) may include a user device such as a keyboard, keypad, touch pad, trackball, and the like. The input device(s) may implement one or more natural user interface techniques, such as speech recognition, touch and stylus recognition, recognition of gestures in contact with the input device(s) and adjacent to the input device(s), recognition of air gestures, head and eye tracking, voice and speech recognition, sensing user brain activity, and machine intelligence.

As described above, in trajectory tracking, the fundamental task is to correctly and accurately identify the transition points. The transition points correspond to one or more transitions between the routes. As used herein, a transition refers to a location or a travel distance where an entity transits between routes having different route directions on the map. That is, a transition point is located between the end of one route and the beginning of another route.

In order to obtain updating information on the routes on the map to achieve effective and efficient fleet management, the conventional approaches depend upon manual labor for observations and measurements by human users. For example, a vehicle provided with the positioning device may periodically travel along roads to detect changes associated with the roads. If a change of a road is found, the corresponding route on the map may be adjusted accordingly. Then, the points on the map may be matched with the changed route.
However, such a manual update and maintenance of the routes on the map is inefficient in terms of both time and cost, especially when the roads change frequently, for example, due to the needs of city development, city construction, and the like. In order to address the above and other potential problems, embodiments of the present disclosure provide an effective and efficient solution for identifying transition points on the digital map. Generally speaking, the proposed solution works on the basis of the categories of points on the map, which will be explained with reference to FIG. 2.

As described above, positioning data sensed by the positioning device carried on the object can be mapped into points on the digital map. Then the mapped points of the object can be matched to one or more routes on the map. Each point has a traveling direction, which can be indicated by the positioning data, for example. The traveling directions for the points on the map can be saved as metadata. The points on the map can be matched to the routes based on the distances between the points and the routes as well as the traveling directions of the points and the route directions. More particularly, if the distance between a point and a route is short enough and the traveling direction of the point is consistent with the direction of the route, then the point is matched to that route. In the context of the present disclosure, such a point is referred to as a "matched point" or a "traveling point", which indicates that the respective object is moving along the route. In FIG. 2, the points shown by solid circles such as the point 206 are traveling points that are matched to the route 210 or 220. In this example, the routes 210 and 220 have different directions 203 and 204, respectively. It can be seen that in the map matching, a certain degree of tolerance is given. More particularly, although the point 206 is separated from the route 220, it can be considered as a traveling point because its distance to the route 220 is below the threshold distance (referred to as the "first threshold distance").

The points on the map other than the matched points are referred to as "unmatched points." It would be appreciated that the unmatched points may include points that are separated from a route by distances exceeding a predefined threshold distance, such as the points 208 shown in FIG. 2. Alternatively, or in addition, the unmatched points may include points with directions inconsistent with the route directions, such as the points 207.

According to embodiments of the present disclosure, potential or candidate transition points are identified from those unmatched points. Then the confidence of these candidate transition points will be verified to check whether they actually represent transition points on the map. If one or more candidate transition points are determined to have low confidence, it means that these candidate transition points, such as the point 208 in FIG. 2, are not real transition points. In some embodiments, such points can be classified as traveling points. However, as mentioned above, these points are unmatched with any routes on the map. As a result, it can be determined that some problem occurs, such as the map data not being up-to-date and/or the respective entities that are sensing the positioning data not traveling along the predetermined routes. On the other hand, the candidate transition points with high confidence, such as the point 207 in FIG. 2, are classified as real transition points, as will be described below.
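The matching rule just described, distance below the first threshold and direction consistent with the route, can be expressed compactly. The Python fragment below is a simplified sketch resting on assumptions of our own: a route is approximated by a polyline of planar (x, y) points plus a single bearing, distance is point-to-segment Euclidean distance, and direction consistency is an angular tolerance. None of these specifics are mandated by the disclosure.

```python
from math import hypot

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return hypot(px - (ax + t * dx), py - (ay + t * dy))

def angle_diff(a, b):
    """Smallest absolute difference between two bearings in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def is_traveling_point(point_xy, heading, route_polyline, route_bearing,
                       first_threshold=10.0, max_angle=30.0):
    """A point is matched (a traveling point) when it lies within the
    first threshold distance of the route and its traveling direction
    is consistent with the route direction."""
    dist = min(point_segment_distance(point_xy, a, b)
               for a, b in zip(route_polyline, route_polyline[1:]))
    return dist <= first_threshold and angle_diff(heading, route_bearing) <= max_angle

route = [(0.0, 0.0), (100.0, 0.0)]  # a straight route, bearing 90 degrees
print(is_traveling_point((50.0, 4.0), 85.0, route, 90.0))   # True: matched
print(is_traveling_point((50.0, 40.0), 90.0, route, 90.0))  # False: too far
```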
To this end, according to embodiments of the present disclosure, the unmatched points are categorized into the following three types of points: (1) points that are separated from the routes by distances exceeding the first threshold distance, such as the point 208; (2) points that are located between the routes but inconsistent with the route directions, such as the point 207; and (3) points that are located on one of the routes but have directions inconsistent with the respective route directions. Among these types of points, the points of the first type indicate that the object is leaving the routes. The points of the second type are referred to as "transition points", which represent a transition between the routes. The points of the third type indicate that the object stops moving along the routes; they are referred to as "stopping points" representing a stop along the routes.

According to embodiments, the unmatched points of the first or second type will be selected as candidate transition points, as sketched in the classification example below. By verifying the confidence, these two types of points can be separated from each other, such that the unmatched points having high confidence, such as the clusters of points 201 and 202, are classified as the real transition points. The existence of unmatched points like the point 208 (if any) may indicate a problem to be handled. For example, the map data may be out of date, or the entity may be traveling out of the predefined routes. Additionally, in some embodiments, the points of the third type may be assigned a direction and classified as special traveling points.

In the example shown in FIG. 2, it is assumed that the routes 210 and 220 together form a bus line. Accordingly, the points on the map 200 can be generated based on positioning data acquired by one or more buses equipped with positioning devices. As shown, the points 205 and 206 are traveling points whose directions are consistent with the route directions 203 and 204, respectively. The unmatched points such as the points 207 and 208 are separated from the routes 210 and 220 by distances exceeding the first threshold distance and/or are inconsistent with the route directions 203 and 204. The unmatched points 207 may be caused by the following events: the bus stops moving due to traffic lights, traffic congestion, and the like; the bus leaves the bus line for reasons such as fuel-up, relaxation, and the like; or the bus line has been changed.

In the shown example, a route change from a part 221 of the route 220 to the route 222 occurs. That is, in practice the bus has changed to travel along the route 222 instead of the part 221 of the route 220. However, the map 200 has not yet been updated accordingly. In this case, the trajectory of the bus is still traced by matching the points on the map with the part 221 of the route 220. As a result, the point 208 is unmatched with any routes. This kind of problem may significantly degrade the accuracy of the trajectory trace of the bus. By detecting transition points according to embodiments of the present disclosure, the above problems can be identified and handled.

It is to be understood that the example scenario of the bus trace is described above only for the purpose of illustration, without suggesting any limitations as to the scope of the disclosure. Embodiments of the present disclosure may be implemented in any other suitable application where a trajectory of an object needs to be traced.
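To make the three-way categorization concrete, the following self-contained Python sketch classifies unmatched points and performs the step-302-style selection of candidate transition points. The field names, the thresholds, and the sample point 209 are invented for illustration; the sketch also assumes its inputs are already known to be unmatched.

```python
def classify_unmatched(dist_to_nearest_route, on_route, direction_consistent,
                       first_threshold=10.0):
    """Categorize an unmatched point into one of the three types:
    'leaving'    - type (1): beyond the first threshold from all routes;
    'transition' - type (2): between routes, direction inconsistent;
    'stopping'   - type (3): on a route, direction inconsistent."""
    if dist_to_nearest_route > first_threshold:
        return "leaving"
    if on_route and not direction_consistent:
        return "stopping"
    return "transition"

def candidate_transition_points(unmatched):
    """Step-302-style selection: keep types (1) and (2), drop the
    stopping points of type (3)."""
    return [p for p in unmatched
            if classify_unmatched(p["dist"], p["on_route"], p["dir_ok"]) != "stopping"]

points = [
    {"id": 208, "dist": 40.0, "on_route": False, "dir_ok": True},   # leaving
    {"id": 207, "dist": 6.0,  "on_route": False, "dir_ok": False},  # transition
    {"id": 209, "dist": 0.5,  "on_route": True,  "dir_ok": False},  # stopping
]
print([p["id"] for p in candidate_transition_points(points)])  # [208, 207]
```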
The method300may be used to process the points on the map200illustrated inFIG.2. As described above, the points on the map are obtained by mapping the positioning data acquired by one or more entities. As used herein, the term "entity" refers to any object or device whose location can be sensed. Examples of an entity include, but are not limited to, a motor vehicle, rickshaw, pedestrian, and the like. In particular, in the context of the present disclosure, when the same vehicle or pedestrian travels along the same route at different times, the vehicle or pedestrian will be regarded as two different entities. An entity may be equipped with a location sensing device, such as a Global Navigation Satellite System (GNSS) device. Examples of GNSS include, but are not limited to, one or more of the following: Global Positioning System (GPS), Galileo positioning system, Beidou navigation positioning system, and the like. With the location sensing device, each entity may obtain positioning data of a plurality of locations during the traveling process. In one embodiment, the positioning data may be, for example, longitude and latitude data. Optionally, the positioning data may also comprise velocity data, direction data, and the like of the entity at a corresponding location. In step302, candidate transition points on the map200are obtained. The candidate transition points include points that are separated from the routes210and220by distances exceeding the first threshold distance, and/or points that are located between the routes210and220but inconsistent with the route directions203and204. As described above, when a change to the actual road occurs but the corresponding route on the map is not updated in time, unmatched points occur. The candidate transition points can be determined from the unmatched points. For example, in some embodiments, the following unmatched points can be selected as the candidate transition points: points that are separated from the routes210and220by distances exceeding the first threshold distance, and/or points that are inconsistent with the route directions203and204. To this end, in some embodiments, the candidate transition points can be obtained in step302by first determining unmatched points on the map and then excluding those unmatched points located on the routes but having directions inconsistent with the respective route directions. That is, the unmatched points of the third type (stopping points) as described above are excluded, and the remaining unmatched points of the first and second types are used as candidate transition points. Then, in step304, the candidate transition points are aggregated to obtain a first cluster of points and a plurality of second clusters of points. The candidate transition points may be aggregated according to many factors. The aggregation may be implemented by clustering algorithms, either currently known or to be developed in the future, or by any other approaches. In some embodiments, distances between the candidate transition points may be used for point aggregation. For example, the candidate transition points having short distances therebetween may be aggregated together. In such embodiments, if a distance between two candidate transition points is below a threshold distance, the two candidate transition points may be aggregated into one cluster of points. The threshold distance for the aggregation may be set to any suitable distance according to practical requirements.
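As one possible realization of the distance-based aggregation in step304, the following Python sketch groups candidate transition points with a simple greedy single-linkage pass; the threshold value and the (x, y) point representation are assumptions for illustration, and any clustering algorithm, currently known or developed in the future, could be substituted.

    import math

    AGGREGATION_THRESHOLD = 20.0   # assumed units; set according to requirements

    def aggregate_candidates(points):
        """Greedy single-linkage aggregation of candidate transition points.

        points: list of (x, y) tuples.
        Returns a list of clusters, each a list of (x, y) tuples.
        """
        clusters = []
        for p in points:
            placed = False
            for cluster in clusters:
                if any(math.hypot(p[0] - q[0], p[1] - q[1]) <= AGGREGATION_THRESHOLD
                       for q in cluster):
                    cluster.append(p)
                    placed = True
                    break
            if not placed:
                clusters.append([p])
        return clusters

    def centroid(cluster):
        # Geometric center, used later when measuring inter-cluster distances.
        n = len(cluster)
        return (sum(x for x, _ in cluster) / n, sum(y for _, y in cluster) / n)

The time-based refinement described next can be layered on top by additionally requiring that the merged points were acquired within a predefined time interval.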
Alternatively, or in addition, in some embodiments, the aggregation of the candidate transition points may be further based on the acquisition time of the associated positioning data. For example, if a sequence of positioning data, which corresponds to some candidate transition points, is acquired sequentially within a predefined time interval, these candidate transition points may be aggregated together. After the candidate transition points are aggregated into a plurality of clusters of points, the first and second clusters of points may be selected from these clusters of points. For example, the first cluster of points may be generated based on positioning data acquired by a first entity, and the second clusters of points may be generated based on positioning data acquired by one or more second entities different from the first entity. As an alternative example, the first and second clusters of points may be generated based on the positioning data acquired by one entity traveling along the routes many times. Next, the method300proceeds to step306, where a confidence of the first cluster of points is verified. As used herein, the term "confidence" refers to the likelihood that a cluster of points indeed represents a transition, indicating that the entity is transitioning from one route to another route. A high confidence indicates that the cluster of points is more likely to be transition points, and vice versa. According to embodiments, the confidence is verified at least in part based on distances between the first cluster of points and the second clusters of points. To this end, various approaches can be used to determine the distance between the first and second clusters of points. For example, in some embodiments, the Euclidean distance between the geometric centers of the first and second clusters of points may be calculated as the distance. The geometric center of a cluster of points may be determined in a variety of ways. For example, in one embodiment, it is possible to determine a bounding box of the respective cluster of points and then use the center of the bounding box as the geometric center. Alternatively, a shape defined by a boundary of the cluster of points can be determined and the symmetric center of the shape can be used as the geometric center. As another example, the distance between the first and second clusters of points may be calculated based on a distance between two reference points in the two clusters of points. The distances between the first cluster of points and the second clusters of points may be used in a variety of manners to verify the confidence of the first cluster of points. In some embodiments, the confidence of the first cluster of points may be verified based on a number of proximate clusters of points with respect to the first cluster of points from among the second clusters of points. For example, a proximate cluster of points will be selected from among the plurality of second clusters of points if the distance between the first cluster of points and the proximate cluster of points is below a threshold distance (referred to as the "second threshold distance"). The second threshold distance for the verification may be set according to practical requirements or any other relevant factors. Then, the selected proximate clusters of points are counted. If the number of the proximate clusters of points exceeds a threshold number, the confidence of the first cluster of points may be determined to be high.
This indicates that it is quite possible that the first cluster of points indeed represents a transition between the routes. Otherwise, if the number of the proximate clusters of points is below the threshold number, the confidence of the first cluster of points is determined to be low. This indicates that the first cluster of points does not represent a transition between the routes. In addition to the distances between the first and second clusters of points, the confidence of the first cluster of points may be verified further based on a comparison of travel distances between two positions along different routes. More particularly, as shown inFIG.2, the routes210and220are paired to form a bus line and have opposite directions. A predetermined reference transition may be used as a landmark. The predetermined reference transition represents the transition between the routes with opposite directions. Then a first travel distance from the first cluster of points to the predetermined reference transition along the route210may be determined. Additionally, a second travel distance from the predetermined reference transition to the first cluster of points along the route220may be determined. The first and second travel distances are compared to one another. If their difference is below a threshold difference, then it can be considered that the confidence for the first cluster of points is high enough, that is, above the threshold confidence. Otherwise, the confidence may be determined to be low. Such embodiments work on the basis of the observation that the travel distances along two paired routes with opposite directions are often similar. Accordingly, the reference transition may be any suitable transition between routes having nearly equal distances to and from the first cluster of points. Alternatively, or in addition, the confidence of the first cluster of points may be verified based on reference information obtained from a user input. For example, a user may specify the confidence of the first cluster of points based on known information on associated route changes. As an alternative example, the routes may be investigated or measured on the spot to determine whether the first cluster of points indeed represents a transition. The results of such investigations may then be input by the user. The verification of the confidence based on the user input may further increase the accuracy of the verification. In the case that the confidence is determined to be below a threshold confidence, the method300proceeds to step308, where the first cluster of points is classified as traveling points. For the purpose of discussion, the classified traveling points will be referred to as first traveling points having a first direction. If the confidence is determined to exceed the threshold confidence, the first cluster of points is classified as transition points. In some embodiments, the first direction may be determined based on the directions of proximate traveling points with respect to the first cluster of points. These proximate traveling points can be referred to as a second plurality of traveling points. For example, the first direction of the first cluster of points may be determined to be a dominant direction of the proximate traveling points. In the context of the present disclosure, a dominant direction refers to the direction of the majority of points in a set of points.
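Both confidence checks, together with the dominant-direction rule just defined, can be sketched as follows. This is an illustrative Python sketch only; the threshold constants, the data shapes, and the helper names are assumptions introduced here, and the travel distances would in practice be measured along the route polylines.

    import math
    from collections import Counter

    SECOND_THRESHOLD_DISTANCE = 50.0   # assumed units
    PROXIMATE_CLUSTER_MIN_COUNT = 3    # assumed threshold number of clusters
    TRAVEL_DISTANCE_MAX_DIFF = 100.0   # assumed threshold difference

    def euclidean(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def confidence_is_high(first_centroid, second_centroids,
                           dist_to_reference=None, dist_from_reference=None):
        """Verify the confidence of the first cluster of points."""
        # Check 1: count the second clusters whose centroids lie within the
        # second threshold distance of the first cluster's centroid.
        proximate = sum(1 for c in second_centroids
                        if euclidean(first_centroid, c) <= SECOND_THRESHOLD_DISTANCE)
        if proximate < PROXIMATE_CLUSTER_MIN_COUNT:
            return False
        # Check 2 (optional): travel distances to and from a reference transition
        # along the two paired routes should be roughly balanced.
        if dist_to_reference is not None and dist_from_reference is not None:
            if abs(dist_to_reference - dist_from_reference) > TRAVEL_DISTANCE_MAX_DIFF:
                return False
        return True

    def dominant_direction(proximate_traveling_points, min_count=5):
        """Direction held by the majority of a set of proximate traveling points.

        Directions are assumed to be discrete labels (e.g., route direction ids).
        """
        if not proximate_traveling_points:
            return None
        counts = Counter(p["direction"] for p in proximate_traveling_points)
        direction, count = counts.most_common(1)[0]
        return direction if count >= min_count else None

A first cluster failing these checks would be classified as traveling points and assigned the dominant direction, as elaborated next.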
In one embodiment, a set of proximate traveling points is obtained based on the distances between these traveling points and the first cluster of points. The distances between the proximate traveling points and the first cluster of points are below a threshold distance (referred to as the "third threshold distance"). If the number of the traveling points having a specific direction from among the proximate traveling points exceeds a threshold number, the specific direction may be determined as the dominant direction of the proximate traveling points. Alternatively, in some other embodiments, the first direction of the first cluster of points may be determined based on a user input in order to ensure the accuracy of the classification. For example, the user may specify the direction of the first cluster of points based on known information and on-the-spot investigations or measurements of associated route changes. Through the above discussions, it would be appreciated that in accordance with embodiments of the present disclosure, a confidence of a cluster of points from among candidate transition points may be verified. In response to a low confidence, the cluster of points may be classified as traveling points. In this way, the accuracy of the trajectory tracking may be improved in a more effective and efficient way while reducing manpower and material costs. As mentioned above, if the first cluster of points is classified as traveling points, it means that the route information of the map might have some errors or be out of date. In this situation, in one embodiment, the route on the map may be adjusted at least in part based on the classified first cluster of points. For example, it may be necessary to add a new route(s) and/or modify an existing route(s). Any route adjustment approaches can be used in connection with embodiments of the present disclosure. It would be appreciated that such a route adjusting approach is more effective and efficient compared with the conventional manual way. In addition to the classification of the candidate transition points, other points such as stopping points and traveling points may also be classified, as will be discussed in detail in the following paragraphs. For example, as described above, if an unmatched point is located on a route but has a traveling direction inconsistent with the route direction, that point is considered a stopping point. In some embodiments, depending on the requirements, it is possible to classify the stopping point as a traveling point and assign it the route direction. Example processes of classifying the points on the map will now be described with reference toFIG.4. FIG.4shows an example map400to which embodiments of the present disclosure are applicable. The map400can be considered a further example implementation of the map200. The map400differs from the map200as described above in that some unmatched points207, as indicated by triangles, on the map400have been aggregated into clusters of points401and402. Similar to the aggregation of the candidate transition points as described above, the unmatched points may be aggregated according to many factors. In this example, the unmatched points207are aggregated based on the distances between the unmatched points207and the acquisition time of the associated positioning data.
As shown, the cluster of points401is obtained by aggregating the unmatched points that are separated from the routes210and220by distances exceeding the first threshold distance, that is, the candidate transition points. Accordingly, the confidence of the cluster of points401is verified. It is to be understood that the verification processes described above with reference toFIG.3can also be applied to the cluster of points401, and the details thereof are omitted. In this example, the cluster of points401is determined to have a low confidence, which is caused by the route change from the part221of the route220to the route222. Then, the cluster of points401may be classified as traveling points. As shown, most of the traveling points proximate to the cluster of points401have the route direction204. That is, the dominant direction of the traveling points proximate to the cluster of points401is the route direction204. As a result, the cluster of points401is classified as traveling points having the route direction204. After the cluster of points401is classified, from among the traveling points proximate to the cluster of points401, a traveling point having a direction other than the dominant direction may also be classified based on the dominant direction. More particularly, if the direction of a traveling point is different from the dominant direction of the proximate traveling points, it is possible to directly assign the dominant direction to this traveling point. In this example, a set of traveling points403have the route direction203. However, the dominant direction of the traveling points proximate to the set of traveling points403is consistent with the route direction204. As a result, in some embodiments, the directions of the set of traveling points403are adjusted to be the route direction204. As shown, the cluster of points402is obtained by aggregating the unmatched points that are located on the route210but inconsistent with the route direction203, namely, the stopping points representing a stop along the route210. Due to the sensing characteristics of a location sensing device, the points proximate to the stopping points typically have directions that change frequently. Accordingly, in this example, in addition to the stopping points, the cluster of points402might also include some nearby traveling points. That is, in this example, in aggregating the stopping points into the cluster of points402, some traveling points are also aggregated together due to these traveling points being proximate to the stopping points in terms of both location and time. Then, the cluster of points402may also be classified as traveling points. The direction of the stopping points may be determined as the direction203of the route210on which they are located. Automatically maintaining accurate digitalized routes is critical for applications in fleet management, such as bus arrival prediction, trip planning in online maps, vehicle monitoring, etc. Automatically updating digitalized routes is the task of extracting n routes from raw GPS data (usually n=2). The data of different driving directions must be separated from each other before extraction. However, raw GPS data of different routes and directions are usually mixed together in the absence of an advanced fleet management sensor system. Digitalized route data is typically a sequence of GPS points representing the routes that the vehicles are actually driving on.
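Because a digitalized route is a sequence of GPS points, matching a vehicle's current location to the route, as described next, amounts to projecting the location onto the route polyline and reading off the distance traveled along it. The following is a minimal Python sketch under stated assumptions, not the disclosed implementation: it assumes planar (x, y) coordinates rather than raw latitude/longitude, and the function name is introduced here for illustration.

    import math

    def project_to_route(route_points, x, y):
        """Match a GPS location to the nearest position along a digitalized route.

        route_points: sequence of (x, y) vertices of the route polyline.
        Returns (distance_to_route, driving_distance_along_route).
        """
        best = (float("inf"), 0.0)
        travelled = 0.0
        for (x1, y1), (x2, y2) in zip(route_points, route_points[1:]):
            seg_len = math.hypot(x2 - x1, y2 - y1)
            if seg_len == 0.0:
                continue
            # Parameter t of the perpendicular projection, clamped to the segment.
            t = ((x - x1) * (x2 - x1) + (y - y1) * (y2 - y1)) / (seg_len ** 2)
            t = max(0.0, min(1.0, t))
            px, py = x1 + t * (x2 - x1), y1 + t * (y2 - y1)
            d = math.hypot(x - px, y - py)
            if d < best[0]:
                best = (d, travelled + t * seg_len)
            travelled += seg_len
        return best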
A vehicle's accurate location and driving distance can be calculated by matching the vehicle's current GPS location to a location along the route. Manually collecting and maintaining this information can be a time-consuming and costly task. For example, there are usually thousands of routes for a city, and each route contains hundreds to thousands of GPS points (500,000 to 1,000,000 points for a city). In addition, these routes are always changing due to the needs of city development, construction, new metro stations, etc.; 7%-20% of the routes change per month. As a result, 12%-39% of the route information (GPS locations) is in error or out of date. Digitalized route data is critical to many applications of fleet management, especially in the age of the connected vehicle. The driving direction is critical information for automatic route extraction from raw GPS data. Thus, GPS data classification, which infers the driving directions, is a prerequisite. Vehicles in a fleet usually have N operational routes. In most cases N=2, based on driving from location A to location B, and then back. Ideally, vehicles may have the following statuses during operation hours: 1. driving on route1; 2. driving on route2; 3. stopped at location A or location B (transition status). The goal is for the GPS data points logged from vehicles to be classified into three classes, which can be labeled with 0 (on route1), 1 (on route2), and −1 (not on route1or2). However, in practical operation conditions, there are many exceptions:
a) GPS/3G/4G signals are unstable; they are easily affected by many factors, e.g., weather conditions, high buildings, elevated roads, tunnels, and bridges, so the GPS data contain many errors.
b) Due to other operational needs, such as maintenance, refueling, and dynamic dispatching, there are usually large numbers of noisy trips among the GPS trace data, which are very different from normal traces (driving on route1or2).
c) Existing digitalized route data may be out of date, so GPS data points cannot be matched to any of the routes.
d) Traffic congestion can cause GPS data points to stay at one location for a long time (so it may look like the vehicle is in transition status).
These are the problems the disclosed invention solves. With the disclosed invention, digitalized routes can be fixed automatically based only on raw GPS data (when they are out of date). With the disclosed invention, thousands of routes for a city can be auto-generated based on the latest 3-6 days of historical GPS data, within 10 hours on a normal PC. FIG.5is an example of one embodiment of an apparatus500for GPS data classification for digitalized routes. One or more features of the apparatus500could (without limitation) be implemented as a computer program product embodying one or more program modules42(FIG.1) stored in memory28(FIG.1). As shown inFIG.5, raw data502(vehicle id, locations, timestamps, speeds, driving directions, etc.) is obtained from GPS satellites504via GPS devices on a large number of vehicles506of a fleet. The raw GPS data is cleaned for processing by the data cleaning device508using well-known signal processing techniques. The cleaned GPS data510is input to the Initial Classifier processing module512. Digitalized routes data514, which can be inaccurate or out of date, is also input to the classifier module512. The output is classified GPS data for the different routes, in which each data point is labeled with a route id or with −1 (no label). In fleet management, there are usually preliminary classification results within the raw GPS data.
This is because the task of vehicle monitoring locates each vehicle, i.e., determines which route the vehicle is driving on. Many existing methods, such as hidden Markov models, dynamic programming, or curve matching, can serve this purpose. The initially classified GPS data is input to the Micro-cluster Sequence Builder module516. Module516identifies the unmatched points and obtains the candidate transition points as described in connection with step302ofFIG.3. Module516converts point sequences in the GPS data into micro-cluster sequences. In one embodiment, adjacent GPS points with the same label can be merged. A micro-cluster has only one label: −1, 0, 1, . . . , n−1 (n classes, usually n=2); the −1 micro-clusters fall into two categories: stopping and transition. Module516also identifies the stopping points. Module516determines transition micro-clusters. Module516outputs the micro-cluster data518to the Homogeneous Merge module520. Module520merges three micro-clusters: two same-label micro-clusters with one stopping micro-cluster in the middle. Transition micro-clusters in the middle cannot be merged. Transition micro-clusters are then input to the Transition micro-clusters clustering module522. Module522performs a clustering method, such as hierarchical clustering, on all transition micro-clusters from different vehicles. As noted above, in one embodiment, the distance between two micro-clusters is the Euclidean distance between their centroids. Module522aggregates candidate transition points as described in connection with step304. The Real transition micro-clusters detector module524identifies n real transition micro-clusters as described in connection with step306. In one embodiment, module524considers the transition frequency and the balance of distances in the two half portions of the routes. The n real transition micro-clusters are then input to the Label calibration and merge module526. Optionally, other source reference data528, such as user data, are included to enhance the confidence in the identification of the real transition micro-clusters. Module526calibrates the labels for all micro-clusters and classifies the micro-clusters as described in connection with step308. Module526merges adjacent micro-clusters of the same direction. In one embodiment, the labels are generated based on the fact that the label can only change at real transition points in the sequence. The present disclosure may be a system, an apparatus, a device, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure. Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure.
It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, snippet, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reversed order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. The descriptions of the various embodiments of the present disclosure have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
DETAILED DESCRIPTION In a closed-domain question-answering process, a closed corpus comprising a specific context or set of contexts is searched for an answer to a question (e.g., a natural language question). A question-answering model is applied in order to identify a candidate answer within a context of the closed corpus. In order to avoid returning erroneous or nonsensical answers in cases in which correct/suitable answers are not present within the closed corpus, the process can be designed to be answerability-aware. To this end, in conjunction with searching a context for an answer to a question, the answerability of the question via that context can be assessed. This assessment can involve, for example, determining an unanswerability score for the question with respect to the context, and concluding that the question is unanswerable by the context if the unanswerability score exceeds a threshold. If the assessment indicates that the question is unanswerable by the closed corpus, a value indicating unanswerability can be returned as a result, rather than a purported answer. The use of a closed corpus as the search domain can impose constraints upon the breadth and depth of coverage provided with respect to the variety of topics and associated questions for which answers can be identified. If the set of contexts to be searched in a closed-domain question-answering process includes more than a relatively small number of contexts, the process can be too computationally expensive to be suitable for use in many potential usage scenarios. In order to increase the likelihood of finding suitable answers, as well as the quality of such answers, it may be desirable to implement an open-domain question-answering process. In an open-domain question answering process, the search is conducted over an open corpus comprising thousands or millions of contexts (or more). An information retrieval (IR) model can be used to retrieve candidate contexts from the open corpus, and a question-answering model can be run on the candidate contexts. The search space is open-ended, in that no specific limitations are placed upon the number or identity of contexts that can be considered for retrieval and evaluation. Implementing an open-domain question answering process can present an issue with respect to question answerability. The question-answering model may be relatively accurate in detecting unanswerability with respect to any given context. Thus, in the case of fundamentally unanswerable questions—questions that are unanswerable regardless of context (e.g., nonsensical questions, incomprehensible questions, etc.)—a closed-domain process may be unlikely to return an erroneous answer (rather than a result indicating unanswerability). However, as the number of different contexts for which answerability is assessed increases, the likelihood may increase that the model "misses" the unanswerability of the question by one or more of those contexts. With respect to an open-domain search space, due to the sheer number of contexts considered, the likelihood that the model erroneously identifies a context as containing an answer to an in-fact unanswerable question may be undesirably high.
Disclosed herein are techniques for answerability-aware open-domain question answering that can be implemented in order to realize the benefits (e.g., improvements in answer quality and/or likelihood of identifying suitable answers) of searching an open corpus, in a manner that enables the accurate detection of unanswerable questions. According to such techniques, the answerability of questions can be evaluated over the open-domain context search space. This evaluation can take the relevance of the various contexts into account in conjunction with assessing the open-domain answerability of the question based on the answerability of the question by the various contexts. Implementing such answerability-aware open-domain question answering techniques can improve answer quality and increase the likelihood of finding suitable answers to received questions (including questions that are locally unanswerable in many or most contexts, but globally answerable in that an answer exists in at least one context), while allowing detection of questions that are unanswerable. Disclosed herein is a system, comprising an input interface to receive input indicating a question, a communication module to establish a communication link with an access network, wherein the communication link provides connectivity to one or more packet data networks (PDNs) via the access network, and a computer coupled to the input interface and the communication module, the computer including a processor and a memory, the memory storing instructions executable by the processor to execute an information retrieval procedure including accessing an open-domain context search space of the one or more PDNs and retrieving, from among a plurality of contexts of the open-domain context search space, a plurality of candidate contexts for answering the question using a question-answering model for open-domain question answering, identify a set of non-answering contexts among the plurality of candidate contexts, wherein each of the set of non-answering contexts is a respective context for which the question-answering model predicts the question to be unanswerable, determine an open-domain unanswerability score for the question based on respective relevance scores for the set of non-answering contexts, determine an open-domain result for the question based on the open-domain unanswerability score for the question, and output result information indicating the open-domain result for the question. The memory can store instructions executable by the processor to determine a set of adjusted unanswerability scores including a respective adjusted unanswerability score for each of the set of non-answering contexts, and determine the open-domain unanswerability score for the question based on the set of adjusted unanswerability scores. The memory can store instructions executable by the processor to determine each of the set of adjusted unanswerability scores based on a normalized unanswerability score for a respective non-answering context among the set of non-answering contexts and a relevance score for the respective non-answering context. The memory can store instructions executable by the processor to determine each of the set of adjusted unanswerability scores as a weighted average of the normalized unanswerability score for the respective non-answering context and the relevance score for the respective non-answering context. 
The memory can store instructions executable by the processor to identify a smallest adjusted unanswerability score among the set of unanswerability scores as the open-domain unanswerability score for the question. The memory can store instructions executable by the processor to determine the open-domain result for the question based on a comparison of the open-domain unanswerability score with a threshold value. The memory can store instructions executable by the processor to, in response to a determination that the open-domain unanswerability score exceeds the threshold value, identify an unanswerability result as the open-domain result for the question, and arrange the result information to indicate the unanswerability result. The memory can store instructions executable by the processor to, in response to a determination that the open-domain unanswerability score does not exceed the threshold value, identify an answer span as the open-domain result for the question, wherein the answer span is contained in an answering context among the set of candidate contexts, and arrange the result information to indicate the answer span. The memory can store instructions executable by the processor to select the plurality of candidate contexts from among the plurality of contexts of the open-domain context search space based on respective relevance scores for the plurality of candidate contexts. The memory can store instructions executable by the processor to output the result information to a human-machine interface (HMI). Further disclosed herein is a method, comprising receiving input indicating a question, establishing a communication link with an access network, wherein the communication link provides connectivity to one or more packet data networks (PDNs) via the access network, executing an information retrieval procedure including accessing an open-domain context search space of the one or more PDNs and retrieving, from among a plurality of contexts of the open-domain context search space, a plurality of candidate contexts for answering the question using a question-answering model for open-domain question answering, identifying a set of non-answering contexts among the plurality of candidate contexts, wherein each of the set of non-answering contexts is a respective context for which the question-answering model predicts the question to be unanswerable, determining an open-domain unanswerability score for the question based on respective relevance scores for the set of non-answering contexts, determining an open-domain result for the question based on the open-domain unanswerability score for the question, and outputting result information indicating the open-domain result for the question. The method can comprise determining a set of adjusted unanswerability scores including a respective adjusted unanswerability score for each of the set of non-answering contexts, and determining the open-domain unanswerability score for the question based on the set of adjusted unanswerability scores. The method can comprise determining each of the set of adjusted unanswerability scores based on a normalized unanswerability score for a respective non-answering context among the set of non-answering contexts and a relevance score for the respective non-answering context. 
The method can comprise determining each of the set of adjusted unanswerability scores as a weighted average of the normalized unanswerability score for the respective non-answering context and the relevance score for the respective non-answering context. The method can comprise identifying a smallest adjusted unanswerability score among the set of unanswerability scores as the open-domain unanswerability score for the question. The method can comprise determining the open-domain result for the question based on a comparison of the open-domain unanswerability score with a threshold value. The method can comprise, in response to a determination that the open-domain unanswerability score exceeds the threshold value, identifying an unanswerability result as the open-domain result for the question, and arranging the result information to indicate the unanswerability result. The method can comprise, in response to a determination that the open-domain unanswerability score does not exceed the threshold value, identifying an answer span as the open-domain result for the question, wherein the answer span is contained in an answering context among the set of candidate contexts, and arranging the result information to indicate the answer span. The method can comprise selecting the plurality of candidate contexts from among the plurality of contexts of the open-domain context search space based on respective relevance scores for the plurality of candidate contexts. The method can comprise outputting the result information to a human-machine interface (HMI). FIG.1is a block diagram of an example question-answering process100. In question-answering process100, a question-answering (QA) model104is used to attempt to provide an answer to a question102. According to the QA model104, a context106is searched in order to look for an answer to question102. A result108is generated based on the outcome of the search of context106. The search of context106for an answer to question102produces an answer span A and an associated answer confidence score SAfor that answer span A. The answer span A comprises a span of text from context106, and represents a “best guess” as to an answer to question102given the information in context106. The answer confidence score SArepresents a relative measure of confidence that the answer span A constitutes a valid/correct answer to question102. Depending on the nature of question102and context106, there may or may not be an answer span in context106that constitutes a suitable answer to question102. If question102pertains to a topic outside the scope of context106, then the answer span A identified via the search of context106may amount to a nonsensical answer to question102. QA model104can be designed to avoid returning such nonsensical answers by assessing the answerability of question102via context106. To this end, in conjunction with the search of context106, QA model104can determine an unanswerability score SUin addition to answer span A and answer confidence score SA. Unanswerability score SUcan represent a relative measure of estimated likelihood that question102is unanswerable using context106. The unanswerability score SUcan be compared to a threshold τ. If the unanswerability score SUexceeds the threshold τ, then an unanswerable value “(unanswerable)” can be returned as result108. Otherwise, answer span A can be returned as result108. Question-answering process100represents an example of a closed-domain question-answering process. 
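The closed-domain decision rule of process100can be captured in a few lines. The following Python sketch is illustrative only; the qa_model interface and the threshold value are assumptions introduced here, standing in for any question-answering model that returns an answer span A, an answer confidence score SA, and an unanswerability score SU.

    UNANSWERABILITY_THRESHOLD = 0.5   # the threshold tau; assumed value

    def closed_domain_result(question, context, qa_model):
        """Return an answer span from the context, or "(unanswerable)".

        qa_model(question, context) is assumed to return a tuple
        (answer_span, answer_confidence, unanswerability_score).
        """
        answer_span, answer_confidence, unanswerability = qa_model(question, context)
        if unanswerability > UNANSWERABILITY_THRESHOLD:
            return "(unanswerable)"
        return answer_span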
According to a closed-domain question-answering process, a single context (e.g., context106ofFIG.1) is searched for an answer to the question. In order to increase the likelihood of finding answers to received questions, it may be desirable to implement an open-domain question-answering process. According to an open-domain question-answering process, many contexts may be searched for an answer to the question. For example, a given open-domain question-answering process may involve searching a knowledgebase consisting of thousands or even millions of contexts. Even given such large numbers of contexts, it may still be the case, for some questions, that the QA model cannot identify an answer span that constitutes a suitable answer. In order to avoid returning nonsensical answers, it may therefore be desirable that the open-domain question-answering process be answerability-aware, such that it assesses the answerability of the questions it evaluates given the contexts that are searched. FIG.2is a block diagram of an example device200that may implement answerability-aware open-domain question answering. As shown inFIG.2, device200can include a processor210, memory212, and communication elements214. In some implementations, processor210can be a general-purpose processor. In some implementations, device200can be (or include) a microcontroller, and processor210can represent processing circuitry of that microcontroller. In some implementations, processor210can be a graphics processing unit (GPU). In some implementations, processor210can be (or include) a dedicated electronic circuit including an ASIC that is manufactured for a particular operation, e.g., an ASIC for processing sensor data and/or communicating the sensor data. In another example, processor210can be (or include) an FPGA (Field-Programmable Gate Array), which is an integrated circuit manufactured to be configurable by a user. Typically, a hardware description language such as VHDL (Very High Speed Integrated Circuit Hardware Description Language) is used in electronic design automation to describe digital and mixed-signal systems such as FPGAs and ASICs. For example, an ASIC is manufactured based on VHDL programming provided pre-manufacturing, whereas logical components inside an FPGA may be configured based on VHDL programming, e.g., stored in a memory electrically connected to the FPGA circuit. In some examples, a combination of processor(s), ASIC(s), and/or FPGA circuits may be included in processor210. Memory212can include one or more forms of computer-readable media, and can store instructions executable by processor210for performing various operations, including as disclosed herein. Communication elements214are elements operable to send and/or receive communications over one or more communication links in accordance with one or more associated communication protocols. These can include wireless links/protocols, such as cellular, Wi-Fi, and/or Bluetooth® links/protocols, and can additionally or alternatively include wired links/protocols (e.g., Ethernet). Communication elements214can establish a connection with an access network250in order to provide device200with connectivity to one or more packet data networks (PDNs), such as the Internet. As shown inFIG.2, device200can be communicatively coupled to a human-machine interface (HMI)205. HMI205is equipment configured to accept user inputs for one or more computing devices, such as device200, and/or to display/provide outputs from such devices.
HMI205can include, for example, one or more of a display configured to provide a graphical user interface (GUI) or the like, an interactive voice response (IVR) system, audio output devices, mechanisms for providing haptic output, etc. In some implementations, HMI205can be included in device200. For instance, in some implementations, device200can be a portable computing device such as a tablet computer, a smart phone, or the like, and HMI205can correspond to input and output functions/capabilities of that portable computing device. Processor210can execute question answering (QA) programming220. QA programming220is programming, e.g., software, that performs operations associated with question answering at device200. These operations can include, for example, identifying questions to be answered, identifying contexts to be searched, determining results for such questions by searching such contexts, and outputting those results. In the example depicted inFIG.2, QA programming220identifies a question221for which to search for an answer using open-domain question answering. In some implementations, QA programming220can identify question221based on question information received from HMI205. In some implementations, such question information can reflect user input provided to/via HMI205. In order to search for an answer to question221using open-domain question answering, QA programming220can access an open-domain context search space260comprising a plurality of contexts262. In some implementations, device200may have access to the open-domain context search space260via PDN connectivity provided by a connection with access network250, such as may be established by communication elements214. In conjunction with searching for an answer to question221using an answerability-aware open-domain question answering process, QA programming220can narrow the open-domain context search space260down into a smaller search space comprising a plurality of candidate contexts264-1to264-C, where C is an integer greater than 1. In order to narrow the open-domain context search space260, QA programming220can determine context relevance scores222for the contexts262of the open-domain context search space260based on question221. Context relevance scores222can be relative indicators of the relevance of the various contexts262for the purpose of answering question221. In some implementations, context relevance scores222can be information retrieval (IR) distance scores. Candidate contexts264-1to264-C can be contexts among contexts262that are of relatively greater relevance for the purpose of answering question221. QA programming220can identify/select candidate contexts264-1to264-C based on candidate context relevance scores224-1to224-C that comprise respective context relevance scores for candidate contexts264-1to264-C. QA programming220can identify, among candidate contexts264-1to264-C, a set of non-answering contexts266-1to266-N, where N is a positive integer. Each of non-answering contexts266-1to266-N may be a respective context for which a QA model, such as QA model104, predicts question221to be unanswerable (e.g., returns an "(unanswerable)" value as a result). QA programming220can identify non-answering contexts266-1to266-N based on respective unanswerability scores230-1to230-N for those contexts, as can be determined using the QA model.
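A minimal Python sketch of this retrieval-and-partition step follows. The relevance_score and qa_model functions, the top-C selection, and the record layout are assumptions for illustration; in practice, an IR model supplies the relevance scores (treated here as distance scores, where smaller means more relevant) and a QA model such as QA model104supplies the per-context predictions.

    def retrieve_and_partition(question, contexts, relevance_score, qa_model,
                               top_c=10, unanswerable_threshold=0.5):
        """Select top-C candidate contexts by relevance, then split them into
        answering and non-answering contexts using the QA model.

        relevance_score(question, context) -> float (an IR distance score)
        qa_model(question, context) -> (answer_span, answer_conf, unanswerability)
        """
        # Narrow the open-domain search space to the most relevant contexts
        # (smaller distance scores are assumed to mean greater relevance).
        scored = sorted(((relevance_score(question, c), c) for c in contexts),
                        key=lambda pair: pair[0])
        answering, non_answering = [], []
        for rel, context in scored[:top_c]:
            answer_span, answer_conf, unanswerability = qa_model(question, context)
            record = {"context": context, "relevance": rel,
                      "answer_span": answer_span, "answer_conf": answer_conf,
                      "unanswerability": unanswerability}
            # A non-answering context is one for which the model predicts
            # the question to be unanswerable (per-context decision).
            if unanswerability > unanswerable_threshold:
                non_answering.append(record)
            else:
                answering.append(record)
        return answering, non_answering

The min-max normalization and the adjusted scores of Equations 1 to 3 below then operate on these two partitions.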
QA programming220can normalize the set of unanswerability scores230-1to230-N by applying min-max normalization, and can use the normalized unanswerability scores to determine a set of adjusted unanswerability scores232-1to232-N. Each of adjusted unanswerability scores232-1to232-N can be determined based on a normalized unanswerability score for a respective one of non-answering contexts266-1to266-N and a relevance score (i.e., one of non-answering context relevance scores226-1to226-N) for that non-answering context. In some implementations, each of adjusted unanswerability scores232-1to232-N can be determined as a weighted average of the normalized unanswerability score for the respective non-answering context and the relevance score for that non-answering context. In some implementations, each of adjusted unanswerability scores232-1to232-N can be determined according to Equation 1 as follows:

    S'_{U,i} = \mu_U \cdot \mathrm{Nm}(\bar{S}_U)_i + (1 - \mu_U) \cdot S_{IR,i}    (1)

where \bar{S}_U represents a vector including unanswerability scores230-1to230-N, \mathrm{Nm}(\bar{S}_U)_i represents the min-max normalized unanswerability score for non-answering context i, S_{IR,i} represents the relevance score for non-answering context i, \mu_U represents a hyperparameter specifying the weighting between unanswerability scores and context relevance scores, and S'_{U,i} represents the adjusted unanswerability score for the non-answering context i. QA programming220can determine an open-domain unanswerability score \hat{S}_U for question221based on adjusted unanswerability scores232-1to232-N. In some implementations, QA programming220can determine the open-domain unanswerability score according to Equation 2 as follows:

    \hat{S}_U = \min_{i \in \{1, \ldots, N\}} S'_{U,i}    (2)

QA programming220can compare the open-domain unanswerability score \hat{S}_U with a threshold value \hat{\tau} in order to determine whether question221is unanswerable. If the open-domain unanswerability score \hat{S}_U exceeds the threshold value \hat{\tau}, QA programming220can determine that question221is unanswerable, and can identify an unanswerability result (e.g., "(unanswerable)") as an open-domain result240for question221. QA programming220can then output result information to HMI205, and can arrange the result information to indicate the unanswerability result. In addition to non-answering contexts266-1to266-N, QA programming220can identify a set of answering contexts268-1to268-R among candidate contexts264-1to264-C, where R is a positive integer. Each of answering contexts268-1to268-R may be a respective context for which the QA model returns an answer to question221(as opposed to an unanswerability result). If the open-domain unanswerability score \hat{S}_U does not exceed the threshold value \hat{\tau}, QA programming220can determine that question221is answerable, and can identify one of answer spans234-1to234-R as the open-domain result240for question221, where answer spans234-1to234-R represent the respective answers returned for answering contexts268-1to268-R. QA programming220can determine a set of answer confidence scores236-1to236-R, where each of answer confidence scores236-1to236-R is an answer confidence score associated with one of answer spans234-1to234-R corresponding to a respective one of answering contexts268-1to268-R. QA programming220can normalize the set of answer confidence scores236-1to236-R by applying min-max normalization.
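The score adjustments of Equations 1 and 2, together with the analogous answer-side adjustment of Equation 3 described next, can be sketched as follows. This Python sketch is illustrative only; the hyperparameter and threshold values and the record layout are assumptions, and the input lists are of the kind produced by a retrieval-and-partition step such as the one sketched above.

    MU_U = 0.5     # hyperparameter mu_U of Equation 1; assumed value
    MU_A = 0.5     # hyperparameter mu_A of Equation 3; assumed value
    TAU_HAT = 0.5  # open-domain threshold tau-hat; assumed value

    def min_max_normalize(values):
        lo, hi = min(values), max(values)
        if hi == lo:
            return [0.0 for _ in values]   # degenerate case: all scores equal
        return [(v - lo) / (hi - lo) for v in values]

    def open_domain_result(answering, non_answering):
        """Decide between an unanswerability result and a best answer span.

        answering / non_answering: lists of per-context records with keys
        "relevance", "unanswerability", "answer_conf", and "answer_span".
        """
        if non_answering:
            # Equation 1: adjusted unanswerability scores.
            norm_u = min_max_normalize([r["unanswerability"] for r in non_answering])
            adjusted_u = [MU_U * nu + (1.0 - MU_U) * r["relevance"]
                          for nu, r in zip(norm_u, non_answering)]
            # Equation 2: the open-domain unanswerability score is the minimum.
            s_u_hat = min(adjusted_u)
        else:
            s_u_hat = 0.0   # assumption: no non-answering context, treat as answerable
        if s_u_hat > TAU_HAT or not answering:
            return "(unanswerable)"
        # Equation 3: adjusted answer confidence scores for answering contexts.
        norm_a = min_max_normalize([r["answer_conf"] for r in answering])
        norm_ir = min_max_normalize([r["relevance"] for r in answering])
        adjusted_a = [MU_A * na + (1.0 - MU_A) * (1.0 - nr)
                      for na, nr in zip(norm_a, norm_ir)]
        # The answer span with the largest adjusted confidence is returned.
        best = max(range(len(answering)), key=lambda i: adjusted_a[i])
        return answering[best]["answer_span"]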
QA programming220can also apply min-max normalization to normalize a set of answering context relevance scores228-1to228-R that includes a respective context relevance score for each of answering contexts268-1to268-R. QA programming220can use the normalized answer confidence scores and normalized context relevance scores to determine a set of adjusted answer confidence scores238-1to238-R. Each of adjusted answer confidence scores238-1to238-R can be determined based on a normalized answer confidence score for a respective one of answering contexts268-1to268-R and a normalized context relevance score for that answering context. In some implementations, each of adjusted answer confidence scores238-1to238-R can be determined as a weighted average of the normalized answer confidence score for the respective answering context and the normalized context relevance score for that answering context. In some implementations, each of adjusted answer confidence scores238-1to238-R can be determined according to Equation 3 as follows:

    S'_{A,i} = \mu_A \cdot \mathrm{Nm}(\bar{S}_A)_i + (1 - \mu_A) \cdot (1 - \mathrm{Nm}(\bar{S}_{IR})_i)    (3)

where \bar{S}_A represents a vector including answer confidence scores236-1to236-R, \mathrm{Nm}(\bar{S}_A)_i represents the min-max normalized answer confidence score for answering context i, \bar{S}_{IR} represents a vector including answering context relevance scores228-1to228-R, \mathrm{Nm}(\bar{S}_{IR})_i represents the min-max normalized context relevance score for answering context i, \mu_A represents a hyperparameter specifying the weighting between answer confidence scores and context relevance scores, and S'_{A,i} represents the adjusted answer confidence score for the answering context i. Based on adjusted answer confidence scores238-1to238-R, QA programming220can identify one of answer spans234-1to234-R as the open-domain result240for question221. In some implementations, QA programming220can identify a largest adjusted answer confidence score (among adjusted answer confidence scores238-1to238-R), and can identify an answer span235corresponding to that largest adjusted answer confidence score as the open-domain result240for question221. QA programming220can then output result information to HMI205, and can arrange the result information to indicate answer span235as the answer to question221. FIG.3is a block diagram of a process flow300, which may be representative of operations executed in various implementations. As shown in process flow300, a plurality of candidate contexts may be selected at302for answering a question using a question-answering model for open-domain question answering. For example, QA programming220ofFIG.2may select candidate contexts264-1to264-C. At304, a set of non-answering contexts may be identified among the set of candidate contexts. For example, QA programming220ofFIG.2may identify non-answering contexts266-1to266-N among candidate contexts264-1to264-C. At306, an open-domain unanswerability score may be determined based on respective relevance scores for each of the set of non-answering contexts. For example, QA programming220ofFIG.2may determine an open-domain unanswerability score \hat{S}_U for question221based in part on non-answering context relevance scores226-1to226-N. At308, an open-domain result may be determined for the question based on the open-domain unanswerability score determined at306. For example, QA programming220ofFIG.2may determine open-domain result240based in part on the open-domain unanswerability score \hat{S}_U for question221. At310, result information that indicates the open-domain result may be outputted to a human-machine interface.
FIG. 3 is a block diagram of a process flow 300, which may be representative of operations executed in various implementations. As shown in process flow 300, a plurality of candidate contexts may be selected at 302 for answering a question using a question-answering model for open-domain question answering. For example, QA programming 220 of FIG. 2 may select candidate contexts 264-1 to 264-C. At 304, a set of non-answering contexts may be identified among the set of candidate contexts. For example, QA programming 220 of FIG. 2 may identify non-answering contexts 266-1 to 266-N among candidate contexts 264-1 to 264-C. At 306, an open-domain unanswerability score may be determined based on respective relevance scores for each of the set of non-answering contexts. For example, QA programming 220 of FIG. 2 may determine an open-domain unanswerability score $\hat{S}_U$ for question 221 based in part on non-answering context relevance scores 226-1 to 226-N. At 308, an open-domain result may be determined for the question based on the open-domain unanswerability score determined at 306. For example, QA programming 220 of FIG. 2 may determine open-domain result 240 based in part on the open-domain unanswerability score $\hat{S}_U$ for question 221. At 310, result information that indicates the open-domain result may be outputted to a human-machine interface. For example, device 200 of FIG. 2 may output result information indicating open-domain result 240 to HMI 205. FIG. 4 illustrates an example storage medium 400. Storage medium 400 may be any non-transitory computer-readable storage medium or machine-readable storage medium, such as an optical, magnetic, or semiconductor storage medium. In various implementations, storage medium 400 may be an article of manufacture. In some implementations, storage medium 400 may store computer-executable instructions, such as computer-executable instructions to implement process flow 300. Examples of a computer-readable storage medium or machine-readable storage medium may include any tangible media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of computer-executable instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like. FIG. 5 is a block diagram of an example vehicle system 500. The system 500 includes a vehicle 505, which is a land vehicle such as a car, truck, etc. The vehicle 505 includes device 200 of FIG. 2. Vehicle 505 also includes a computer 510, electronic control units (ECUs) 512, vehicle sensors 515, actuators 520 to actuate various vehicle components 525, a communications module 530, and a vehicle network 532. Communications module 530 allows vehicle 505 to communicate with a server 545 via a network 535. The computer 510 includes a processor and a memory. The memory includes one or more forms of computer-readable media, and stores instructions executable by the computer 510 for performing various operations, including those disclosed herein. The computer 510 may operate vehicle 505 in an autonomous mode, a semi-autonomous mode, or a non-autonomous (manual) mode, i.e., it can control and/or monitor operation of the vehicle 505, including controlling and/or monitoring components 525. For purposes of this disclosure, an autonomous mode is defined as one in which each of vehicle propulsion, braking, and steering is controlled by the computer 510; in a semi-autonomous mode the computer 510 controls one or two of vehicle propulsion, braking, and steering; in a non-autonomous mode a human operator controls each of vehicle propulsion, braking, and steering. The computer 510 may include programming to operate one or more of vehicle brakes, propulsion (e.g., control of acceleration in the vehicle by controlling one or more of an internal combustion engine, electric motor, hybrid engine, etc.), steering, climate control, interior and/or exterior lights, etc., as well as to determine whether and when the computer 510, as opposed to a human operator, is to control such operations. Additionally, the computer 510 may be programmed to determine whether and when a human operator is to control such operations. The computer 510 may include or be communicatively coupled to, e.g., via vehicle network 532 as described further below, more than one processor, e.g., included in ECUs 512 or the like included in the vehicle 505 for monitoring and/or controlling various vehicle components 525, e.g., a powertrain controller, a brake controller, a steering controller, etc. Further, the computer 510 may communicate, via communications module 530, with a navigation system that uses the Global Positioning System (GPS). As an example, the computer 510 may request and receive location data of the vehicle 505.
The location data may be in a conventional format, e.g., geo-coordinates (latitudinal and longitudinal coordinates). Vehicle network 532 is a network via which messages can be exchanged between various devices in vehicle 505. Computer 510 can be generally programmed to send and/or receive, via vehicle network 532, messages to and/or from other devices in vehicle 505 (e.g., any or all of ECUs 512, sensors 515, actuators 520, components 525, communications module 530, a human machine interface (HMI), etc.). Additionally or alternatively, messages can be exchanged among various such other devices in vehicle 505 via vehicle network 532. In cases in which computer 510 actually comprises a plurality of devices, vehicle network 532 may be used for communications between devices represented as computer 510 in this disclosure. Further, as mentioned below, various controllers and/or vehicle sensors 515 may provide data to the computer 510. In some implementations, vehicle network 532 can be a network in which messages are conveyed via a vehicle communications bus. For example, vehicle network 532 can be a controller area network (CAN) in which messages are conveyed via a CAN bus, or a local interconnect network (LIN) in which messages are conveyed via a LIN bus. In some implementations, vehicle network 532 can be a network in which messages are conveyed using other wired communication technologies and/or wireless communication technologies (e.g., Ethernet, WiFi, Bluetooth, etc.). Additional examples of protocols that may be used for communications over vehicle network 532 in some implementations include, without limitation, Media Oriented System Transport (MOST), Time-Triggered Protocol (TTP), and FlexRay. In some implementations, vehicle network 532 can represent a combination of multiple networks, possibly of different types, that support communications among devices in vehicle 505. For example, vehicle network 532 can include a CAN in which some devices in vehicle 505 communicate via a CAN bus, and a wired or wireless local area network in which some devices in vehicle 505 communicate according to Ethernet or Wi-Fi communication protocols.
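As a concrete illustration of exchanging frames over a CAN-based vehicle network such as vehicle network 532, the sketch below uses the third-party python-can library. The channel name, arbitration ID, and payload are assumptions chosen for demonstration; this is not code from the disclosure.

import can  # third-party python-can package

# Open a SocketCAN interface; a Linux virtual channel "vcan0" is assumed.
bus = can.interface.Bus(channel="vcan0", bustype="socketcan")

# Broadcast a hypothetical 8-byte frame, e.g., a controller status report.
msg = can.Message(arbitration_id=0x1A0,
                  data=[0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08],
                  is_extended_id=False)
bus.send(msg)

# Read the next frame on the bus, waiting up to one second.
reply = bus.recv(timeout=1.0)
if reply is not None:
    print(hex(reply.arbitration_id), reply.data.hex())
bus.shutdown()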
Vehicle sensors 515 may include a variety of devices such as are known to provide data to the computer 510. For example, the vehicle sensors 515 may include Light Detection and Ranging (lidar) sensor(s) 515, etc., disposed on a top of the vehicle 505, behind a vehicle 505 front windshield, around the vehicle 505, etc., that provide relative locations, sizes, and shapes of objects and/or conditions surrounding the vehicle 505. As another example, one or more radar sensors 515 fixed to vehicle 505 bumpers may provide data indicating the relative locations and range velocities of objects (possibly including second vehicles), etc., relative to the location of the vehicle 505. The vehicle sensors 515 may further include camera sensor(s) 515, e.g., front view, side view, rear view, etc., providing images from a field of view inside and/or outside the vehicle 505. Actuators 520 are implemented via circuitry, chips, motors, or other electronic and/or mechanical components that can actuate various vehicle subsystems in accordance with appropriate control signals, as is known. The actuators 520 may be used to control components 525, including braking, acceleration, and steering of a vehicle 505. In the context of the present disclosure, a vehicle component 525 is one or more hardware components adapted to perform a mechanical or electro-mechanical function or operation, such as moving the vehicle 505, slowing or stopping the vehicle 505, steering the vehicle 505, etc. Non-limiting examples of components 525 include a propulsion component (that includes, e.g., an internal combustion engine and/or an electric motor, etc.), a transmission component, a steering component (e.g., that may include one or more of a steering wheel, a steering rack, etc.), a brake component (as described below), a park assist component, an adaptive cruise control component, an adaptive steering component, a movable seat, etc. In addition, the computer 510 may be configured for communicating via communication module 530 with devices outside of the vehicle 505, e.g., through vehicle-to-vehicle (V2V) or vehicle-to-infrastructure (V2X) wireless communications to another vehicle or, typically via the network 535, to a remote server 545. The communications module 530 could include one or more mechanisms by which the computer 510 may communicate, including any desired combination of wireless (e.g., cellular, Wi-Fi, satellite, microwave, and radio frequency) communication mechanisms and any desired network topology (or topologies when a plurality of communication mechanisms are utilized). Exemplary communications provided via the communications module 530 include cellular, Bluetooth®, IEEE 802.11, dedicated short range communications (DSRC), and/or wide area networks (WAN), including the Internet, providing data communication services. The network 535 can be one or more of various wired or wireless communication mechanisms, including any desired combination of wired (e.g., cable and fiber) and/or wireless (e.g., cellular, Wi-Fi, satellite, microwave, and radio frequency) communication mechanisms and any desired network topology (or topologies when multiple communication mechanisms are utilized). Exemplary communication networks include wireless communication networks (e.g., using Bluetooth, Bluetooth Low Energy (BLE), IEEE 802.11, vehicle-to-vehicle (V2V) such as Dedicated Short-Range Communications (DSRC) and cellular V2V (CV2V), cellular V2X (CV2X), etc.), local area networks (LAN), and/or wide area networks (WAN), including the Internet, providing data communication services. Computer 510 can receive and analyze data from sensors 515 substantially continuously, periodically, and/or when instructed by a server 545, etc. Further, object classification or identification techniques can be used, e.g., in a computer 510 based on data from lidar sensors 515, camera sensors 515, etc., to identify a type of object, e.g., vehicle, person, rock, pothole, bicycle, motorcycle, etc., as well as physical features of objects. As used herein, the term "circuitry" may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group), and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable hardware components that provide the described functionality. In some implementations, the circuitry may be implemented in, or functions associated with the circuitry may be implemented by, one or more software or firmware modules. In some implementations, circuitry may include logic, at least partially operable in hardware. In the drawings, the same reference numbers indicate the same elements. Further, some or all of these elements could be changed. With regard to the media, processes, systems, methods, etc. described herein, it should be understood that, although the steps of such processes, etc.
have been described as occurring according to a certain ordered sequence, such processes could be practiced with the described steps performed in an order other than the order described herein. It further should be understood that certain steps could be performed simultaneously, that other steps could be added, or that certain steps described herein could be omitted. In other words, the descriptions of processes herein are provided for the purpose of illustrating certain embodiments, and should in no way be construed so as to limit the claimed invention. The disclosure has been described in an illustrative manner, and it is to be understood that the terminology which has been used is intended to be in the nature of words of description rather than of limitation. Many modifications and variations of the present disclosure are possible in light of the above teachings, and the disclosure may be practiced otherwise than as specifically described. The present invention is intended to be limited only by the following claims.
Like reference symbols in the various drawings indicate like elements. DETAILED DESCRIPTION Implementations include systems and methods that read real-time streaming input, e.g., in chunks, maintain a list of response candidates for the input, and decide when to provide one of the response candidates back to the user. The list of response candidates is a dynamic list in that it is continually updated by adding one or more new response candidates and/or removing (or "pruning") one or more response candidates from the list, and is referred to as a "rotating" list. A dialog host calls a dialog mixer upon a triggering event, which may be return of a back-end request, receipt of new streaming input, or expiration of a window of time (in case there has been no other triggering event within the window). The dialog host maintains one or more paths in a dialog beam, managing diverging paths, pruning paths with low posterior probabilities, and backtracking to start a new path when needed. Streaming input is input that is received in real time and may include an incomplete request. In other words, implementations begin generating response candidates even before the user has finished speaking. Because the dialog host begins formulating an answer before the user has finished speaking, the dialog host increases the speed at which the electronic assistant can respond to the user. The dialog host includes ranking and triggering capabilities to decide which, if any, dialog responses to provide to the user as part of a conversation. Deciding when to respond, i.e., deciding not to respond to a particular triggering event, is an important function so that the electronic assistant does not interrupt inappropriately or provide premature suggestions. Implementations track a dialog state for each path of a dialog beam and are able to backtrack or start a new path for the dialog as additional input changes the context of the dialog. FIG. 1 is a block diagram of a real-time dialog management system in accordance with an example implementation. The system 100 may be used to more accurately simulate a natural conversation with a user, to provide more helpful responses, and to provide responses more quickly than conventional turn-taking dialog managers. The system 100 may also be configured to provide candidate responses from multiple dialog schemas, combining schemas when appropriate. The system 100 is able to process real-time streaming input from the user rather than waiting to process input after the user has completed a command or query. The depiction of system 100 in FIG. 1 is a single computing device, but implementations may also move some of the components to a server, making system 100 a client-server system, as illustrated in more detail in FIG. 2. In addition, one or more components may be combined into a single module or engine, and some capabilities of the illustrated components may be performed by separate engines. In some implementations, a user of the computing device may indicate that portions of the processing be performed at a server. Thus, implementations are not limited to the exact configurations illustrated. The real-time dialog management system 100 includes a computing device 105. The computing device may be implemented in a personal computer, for example a laptop computer, a smartphone, a wearable device (smart watch, smart glasses, etc.), a game console, a home appliance, etc. The computing device 105 may be an example of computer device 500, as depicted in FIG. 5.
The computing device 105 may include one or more processors formed in a substrate (not illustrated) configured to execute one or more machine executable instructions or pieces of software, firmware, or a combination thereof. The processors can be semiconductor-based; that is, the processors can include semiconductor material that can perform digital logic. The computing device 105 can also include one or more computer memories. The memories, for example, a main memory, may be configured to store one or more pieces of data, either temporarily, permanently, semi-permanently, or a combination thereof. The memories may include any type of storage device that stores information in a format that can be read and/or executed by the one or more processors. The memories may include volatile memory, nonvolatile memory, or a combination thereof, and store modules or engines that, when executed by the one or more processors, perform certain operations. In some implementations, the modules may be stored in an external storage device and loaded into the memory of computing device 105. The computing device 105 may include dialog input/output devices 110. The dialog input/output devices 110 may include hardware that enables the dialog host 120 to receive input from the user 180 or provide a response to the user 180. Input from the user may be vocal, e.g., in the form of speech. Speech may be provided as streaming input using conventional techniques such as chunking. Input from the user may also be non-vocal, e.g., text, taps, etc., provided by the user. The output can, similarly, be speech-based or text-based. An example of the input/output devices 110 may include a microphone and a speaker. Another example of the input/output devices 110 may be a keyboard (virtual or physical) and a display. The input/output devices 110 may also include modules to convert sounds captured by the microphone to streaming input. The real-time dialog management system 100 is discussed primarily in the context of a spoken conversation using a microphone and speaker, but implementations include other conversational modes, such as those held in a messaging application. The modules of the real-time dialog management system 100 may include a dialog host 120. The dialog host 120 may be configured to obtain or receive input from input/output devices 110. Input can include streaming input. Streaming input captures the user's voice (speech) as a series of chunks, e.g., a few seconds long, and provides the chunks as a file to the dialog host 120. Streaming input is considered verbal input. The dialog host 120 considers each new file as a triggering event and invokes a dialog mixer 130 for each new input. The input may include a sliding window of chunks. For example, the window can include the newly received file and some quantity of previously received files, if they exist. The window may represent the duration of input for which the system has not yet committed to a semantic understanding or a response. In other words, the window may represent the "unstable" portion of the input that the system is using to determine different paths, and therefore the system could still backtrack or begin a new path, etc. Once the system provides a response, the system has committed to the input provided and that input becomes "stable". In some implementations, the window may be defined as any input chunks received after providing a most recent response. In some implementations, the window can be defined in terms of a time period, e.g., seconds, fractions of a second, etc. Thus, older files become too old to be included in the window.
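A time-based sliding window over streaming chunks can be sketched as follows. The class and field names are assumptions; a real implementation would also carry audio payloads and chunk metadata.

import time
from collections import deque

class SlidingInputWindow:
    """Holds the 'unstable' chunks of streaming input. Chunks expire once
    they are older than max_age seconds, and the window is cleared when a
    response is provided (the input becomes 'stable')."""

    def __init__(self, max_age: float = 3.0):
        self.max_age = max_age
        self._chunks = deque()              # (timestamp, text) pairs

    def add_chunk(self, text: str) -> None:
        self._chunks.append((time.monotonic(), text))
        self._expire()

    def commit(self) -> None:
        """Called after the system responds; committed input is dropped."""
        self._chunks.clear()

    def text(self) -> str:
        self._expire()
        return " ".join(t for _, t in self._chunks)

    def _expire(self) -> None:
        cutoff = time.monotonic() - self.max_age
        while self._chunks and self._chunks[0][0] < cutoff:
            self._chunks.popleft()

window = SlidingInputWindow()
window.add_chunk("play cry")
window.add_chunk("me a river")
print(window.text())   # "play cry me a river"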
The dialog host 120 may be configured to recognize non-verbal input as a triggering event. Non-verbal input may include a text string, tap inputs, or selections obtained from the user using the input/output devices 110. The dialog host 120 considers such non-verbal input as a triggering event and is configured to invoke the dialog mixer 130 for each new non-verbal input. The dialog host 120 may also consider a rewrite candidate to be a triggering event. In some implementations, the system may provide the current input context to an engine that performs various types of resolution, e.g., coreference, ellipsis, etc., on the input. This engine may be a function provided by the dialog host 120 or one of the dialog managers 170. The engine may provide a rewrite candidate, which the dialog host 120 may treat like a backend response. The dialog host 120 is configured to call the dialog mixer 130 with the rewrite candidate as new input. The dialog host 120 also recognizes receipt of a backend response as a triggering event. The dialog host 120 is configured to call the dialog mixer 130 for each backend response received. A "backend response" represents data generated using a dialog manager 170, which may be based on one or more searchable data repositories, e.g., backend systems 190. The data is intended for output by the input/output devices 110. A backend response may be provided by a dialog manager 170 in response to a request sent to the dialog manager 170. The backend response thus represents a search result provided by the schema that the particular dialog manager 170 operates on. In other words, in this embodiment a "backend request" to a dialog manager 170 initiates a search of the schema managed by the dialog manager 170 using the input. The "backend response" returned by the dialog manager 170 includes the results of the search. The backend response may be for a request solicited by the dialog host 120. The backend response may also be for a request not solicited by the dialog host 120. For example, in some implementations, a dialog manager 170a may provide one or more other dialog managers (e.g., 170b and/or 170n) with resources, e.g., information or data obtained in response to a request, and the other dialog managers may use some or all of the resources to provide an additional backend response. The backend response includes a proposed system response to the backend request. The system response can be verbal output to be provided by the input/output devices 110 to the user. The system response can alternatively or also be associated with an action that the computing device will perform if the response is provided. For example, the system response may cause the computing device to open an application and perform some function in the application, e.g., adding a new calendar event. The dialog host 120 may be configured to call the dialog mixer 130 periodically in the absence of other triggering events. For example, if no new input and no backend responses are received within a period of time, e.g., 100 milliseconds, the dialog host 120 may consider this passage of time to be a triggering event and call the dialog mixer 130. This enables the dialog host 120 to update the rotating list of candidates and to make a new decision about whether to provide one of the candidates as a response to the user via the dialog input/output devices 110.
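The three kinds of triggering events the host reacts to can be modeled as a small enumeration; this typing is an assumption for illustration only.

from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class TriggerType(Enum):
    BACKEND_RESPONSE = auto()   # a dialog manager returned a result
    NEW_INPUT = auto()          # a new streaming chunk, text, tap, or rewrite
    PASSAGE_OF_TIME = auto()    # e.g., 100 ms elapsed with no other event

@dataclass
class TriggeringEvent:
    kind: TriggerType
    payload: Optional[object] = None   # backend response, input window, or timestamp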
The dialog host 120 manages a rotating list of candidate responses 150. Each candidate response may be referred to as a dialog. In a real-time streaming dialog environment, a dialog may be represented as a path in a dialog beam. A dialog beam is a beam search in which dialog responses are mapped to dialog states. A path in a dialog beam represents the dialog states generated for the same input (e.g., query) from the same base state. Because the system monitors input in real time, the user's intended dialog is not always known. Therefore, the dialog host 120 manages several possible dialogs at once, which are represented in the candidate list 150. The dialog host 120 prunes paths in the dialog beam that become irrelevant or outdated and adds new paths as needed. Each candidate is associated with a dialog state. The state may be represented by a data structure. The state data structure may include the question being answered, e.g., taken from the input (e.g., input window). The state data structure may include current conversational context, a history of the user inputs/requests, system interpretations of the inputs, a history of responses provided to the user, other relevant events, such as incoming notifications, data relevant to task prediction, e.g., data that helps the computing device determine or predict a task the user desires to accomplish (such as booking a restaurant table), the attentional state of the user (such as a person or place that the current dialog relates to), etc. The state data structure may also include information on the type of information being requested for the dialog. For example, a calendar dialog may need a date, a time, an event name, etc. The state data structure may keep track of the types of values needed and whether the values have been provided. The dialog state may also include indications of previously accepted system responses (e.g., responses provided to the user). The candidate list 150 is stored in memory and maintained by the dialog host 120. The candidate list 150 represents candidate responses and their corresponding states received from the dialog mixer 130. A candidate response in the candidate list 150 may be a system response that provides an action to be taken and/or a response to be provided to the user. A candidate response may also be a back-end request to be executed. The back-end request may be associated with a dialog schema, or in other words a particular dialog manager 170. For example, there may be a dialog manager 170a for cooking, a dialog manager 170b for local directions, a dialog manager 170c for music, a dialog manager 170d for time, etc. Dialog manager 170 can thus include any number of different dialog managers (e.g., 170a to 170n). The dialog host 120 may use the ranking engine 122 and/or the triggering engine 124 to determine whether or not to execute the backend request. For example, if the request is to search "cry" in the music schema, this may represent a search unlikely to provide a single response and, thus, represents a waste of resources, because the goal of a dialog host is to provide a single relevant response. Alternatively, if the request is to search "cry me a river" in music, the dialog host 120 may decide to execute the request, which will result in a back-end response provided to the dialog host 120. The state data structure may track whether the candidate is a request or a response, enabling the dialog host 120 to determine whether requests are outstanding or not.
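One plausible shape for the state data structure is sketched below; the field names are assumptions derived from the description above, not the disclosed layout.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class DialogState:
    path_id: int                                   # dialog beam path this state belongs to
    question: str                                  # question taken from the input window
    input_history: List[str] = field(default_factory=list)
    response_history: List[str] = field(default_factory=list)
    required_values: Dict[str, Optional[str]] = field(default_factory=dict)
    attentional_entity: Optional[str] = None       # person/place the dialog relates to
    is_backend_request: bool = False               # request vs. response candidate
    accepted: bool = False                         # committed by triggering or execution

# A calendar dialog, for example, tracks which required values are still missing:
state = DialogState(path_id=1, question="set up lunch tomorrow",
                    required_values={"date": "tomorrow", "time": None, "event_name": "lunch"})
missing = [k for k, v in state.required_values.items() if v is None]
print(missing)   # ['time']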
The dialog host 120 includes a ranking engine 122 and a triggering engine 124. The ranking engine 122 may rank candidate responses provided by the dialog mixer 130. The ranking engine 122 may prune candidate responses with a low posterior probability. In other words, the ranking engine 122 can determine whether a particular candidate response is unlikely to be selected as a good response and be provided to the user. For example, in some implementations, the dialog mixer 130 provides a failure candidate and a backend request candidate for the same dialog manager, e.g., dialog manager 170a, and the ranking engine 122 may rank the failure candidate low and prune the candidate because the backend request has not yet been executed, so the failure candidate is premature. Pruning a candidate means removing the candidate from the list of candidates. In some implementations, the ranking engine 122 preserves the failure candidate until the corresponding backend response is received, but the failure candidate is given a low rank at each ranking event before the corresponding backend response is received. The ranking engine 122 may include a machine-learned model that takes as input the candidate responses in the candidate list 150 and annotations about the candidate responses and that provides, as output, a rank for each of the candidate responses. For example, the ranking model may be a long short-term memory (LSTM) neural network, a feed-forward neural network, a support vector machine (SVM) classifier, etc., that can predict whether a candidate is likely to be selected for presentation to the user given a set of ranking signals in the form of annotations about the candidates. In some implementations, the ranking model can be trained at a server and provided to the computing device 105. In some implementations, the dialog host 120 may be configured to further train the ranking model from user responses to candidates provided to the user. For example, if a candidate is selected and presented to the user, but the user indicates disapproval, the candidate (and its corresponding state, including annotations) may be marked as a negative training example for the model. Likewise, the system may use responses for which the user indicates approval as positive training examples. The ranking score can be considered a confidence score indicating how confident the model is that the response candidate is a high-quality, relevant response. The annotations can include characteristics of the real-time streaming chunk that are obtained through speech analysis. For example, the annotations may indicate whether the chunk includes upward inflection. As another example, the annotations may indicate whether the speaker has finished speaking, and if so for how long. As another example, the annotations may indicate whether the chunk includes a filler or how much of the chunk is a filler. A filler is a sound that signals the speaker is pausing rather than finished. For example, [uhhh] is a verbal filler. As another example, an annotation may indicate the power of the speech, e.g., an indication of whether the speaker is yelling or otherwise conveying frustration. The system may use conventional speech analysis of the chunk to provide the annotations.
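A minimal ranking-and-pruning pass might look like the sketch below; the stub model, annotation keys, and threshold are assumptions standing in for the learned ranking model.

PRUNE_THRESHOLD = 0.2   # hypothetical; a real threshold would be tuned

class StubRankingModel:
    """Stand-in for the learned model (e.g., an LSTM or SVM classifier):
    scores a candidate from its annotations."""
    def score(self, candidate: dict) -> float:
        ann = candidate.get("annotations", {})
        s = 0.4
        s += 0.3 if ann.get("upward_inflection") else 0.0
        s += 0.3 if ann.get("speaker_finished") else 0.0
        s -= 0.4 if ann.get("is_premature_failure") else 0.0
        return max(0.0, min(1.0, s))

def rank_and_prune(candidates: list, model: StubRankingModel) -> list:
    """Score candidates across all paths, drop those whose score fails the
    ranking threshold, and return the survivors best-first."""
    scored = [(model.score(c), c) for c in candidates]
    kept = sorted((sc for sc in scored if sc[0] >= PRUNE_THRESHOLD),
                  key=lambda sc: sc[0], reverse=True)
    return [c for _, c in kept]

candidates = [
    {"id": "Media1", "annotations": {"speaker_finished": True}},
    {"id": "Media2", "annotations": {"is_premature_failure": True}},
]
print([c["id"] for c in rank_and_prune(candidates, StubRankingModel())])  # ['Media1']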
The ranking engine 122 may also prune response candidates from the candidate list. The ranking engine 122 may prune a candidate that is too old. The ranking engine 122 may also prune a backend request candidate that is expensive to compute but has little chance of success, e.g., because the search is too broad. The ranking engine 122 may prune a candidate that does not match, e.g., a failure candidate. A failure candidate is a candidate response provided as a default response and indicates the particular dialog manager was unable to understand the request or unable to provide a better response. In general, the system may prune any response candidates that the system is confident will be outranked. The ranking engine 122 may also prune any candidates unlikely to be correct based on new information (e.g., additional input). In other words, once the system is confident in one interpretation, the ranking engine 122 may prune candidate responses relating to the other interpretations. The dialog host 120 may also include a triggering engine 124. The triggering engine 124 may decide whether to actually provide one of the top candidates as a response to the user, e.g., via the input/output devices 110. When the triggering engine 124 provides a response, it may update a base state for the dialog. The base state represents a state the system has committed to, e.g., a state the system has provided a response for. Thus, once the triggering engine 124 provides a response, it may move or promote the provisional state of the candidate provided to the user as a response to the base state. In some implementations the triggering engine 124 may be a machine-learned model. For example, the triggering engine 124 may be a long short-term memory (LSTM) neural network, a feed-forward neural network, a support vector machine (SVM) classifier, etc., that either takes no action or selects a response from among the candidate responses. The triggering engine 124 can select no action, or in other words no response, as a valid response to a triggering event. Whether the triggering engine 124 selects no action depends on the context of the triggering event and the candidate responses in the candidate list. The triggering engine 124 may also select one of the system response candidates in the candidate list in response to a triggering event. If the model selects a candidate, the triggering engine 124 may provide the selected system response to the input/output devices 110 for presentation to the user. Presentation to the user can involve an action performed by the computing device 105, such as playing audio files, playing video files, providing text on a display, and/or invoking an application. As one example, providing a candidate with a system response of [playing Cry Me a River] may cause the computing device 105 to provide audio output of [playing cry me a river] and to open a media application and begin playing a song titled "Cry Me a River". Depending on the response, providing the candidate as a response may include other actions, such as adding a calendar event, setting a timer, adding a contact, setting an alarm, playing a movie, playing an audio book, etc. The real-time dialog management system 100 includes a dialog mixer 130. The dialog mixer 130 is configured to take as input a base state and information about a triggering event (e.g., a backend response, new input, or passage of time). The base state includes the current conversational context, including dialog states (e.g., from the state data structure) for all most recently accepted candidates in the path of the dialog beam. The information about the triggering event can include text from the user, e.g., from an input stream window or via a text box, etc. The information about the triggering event can also include the response from a backend request. The information about the triggering event can also include a timestamp for the event.
The dialog mixer 130 provides as output one or more candidate responses. A candidate response can be a system response. A system response is text to be provided as part of the conversation and any actions the system 100 should take. A system response is optional and is not always included in the candidates provided by the dialog mixer 130. A candidate response can also be a backend request the dialog mixer 130 would like the host to execute. The backend request identifies the schema or the dialog manager to which the request is directed, as well as the query to be executed. In some implementations the query is processed as a beam search. A backend request is also optional and is not always included in the candidates provided by the dialog mixer 130. However, the dialog mixer 130 provides at least one system response or one backend request for each triggering event. For each candidate response, the dialog mixer 130 also provides a provisional dialog state. The provisional state may use the state data structure discussed herein. The provisional state can be used as part of a base state provided to the dialog mixer 130 in a subsequent call to the dialog mixer 130 if the candidate is accepted. For example, the provisional state provided with a backend request is provided as the base state for a backend response to the backend request. Finally, the dialog mixer 130 also provides, for each candidate response, annotations about the candidate. The annotations are used as signals for ranking and may also be used in logging. When the dialog mixer 130 is called, it accepts the base dialog states provided in the input. When the triggering event is new input, the dialog mixer 130 determines if the user is triggering a new dialog. A new dialog corresponds to a new dialog manager, e.g., a new schema or a new search in a dialog schema. If the user is triggering a new dialog, the dialog mixer 130 fetches the corresponding schema and initializes the dialog manager for the schema. The dialog mixer 130 then distributes the output of the natural language parser, also referred to as an analyzer, to all dialog managers. When the triggering event is a backend response, the dialog mixer 130 loads the dialog manager that corresponds with the backend response and applies the backend response to the dialog managers that requested it. The dialog mixer 130 may solicit the dialog managers for backend requests and new state tokens. Each dialog manager solicited generates some kind of response, even if it is an error or failure response. In some implementations, the dialog mixer 130 may also issue a backend request. The dialog mixer 130 rolls up each dialog manager's output, whether a system response or a backend request, into a response candidate. Each candidate has some combination of a system response(s) and/or a backend request(s), and a provisional dialog state. In some implementations, the dialog mixer 130 may perform second-phase candidate generation. In second-phase candidate generation the dialog mixer 130 may derive a composite candidate response from two or more individual schemas. The dialog mixer 130 provides the candidate response(s), a respective dialog state for each candidate response, and annotations for each candidate response back to the dialog host 120, where the responses are ranked, pruned, and potentially a response is triggered and provided to the input/output devices 110.
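The mixer's per-candidate output can be pictured as the record below; the field names are illustrative assumptions consistent with the description, not the disclosed interface.

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class MixerCandidate:
    system_response: Optional[str] = None   # text/action to present, if any
    backend_request: Optional[str] = None   # schema query for the host to execute
    schema: Optional[str] = None            # dialog manager the request targets
    provisional_state: Dict = field(default_factory=dict)
    annotations: Dict = field(default_factory=dict)   # ranking signals, logging

# For new input, the mixer might roll up one candidate per dialog manager,
# mirroring the Local entries of Table 1 in the worked example below:
candidates = [
    MixerCandidate(backend_request='Search("take me to church")',
                   schema="Local", provisional_state={"state": "L1"}),
    MixerCandidate(system_response="[Sorry, I can't look up directions]",
                   schema="Local", provisional_state={"state": "L2"}),
]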
The real-time dialog management system 100 may also include a plurality of dialog managers 170a-170n. Each dialog manager is responsible for a single thread of dialog and represents a searchable schema. For example, dialog manager 170a may be a music dialog for searching a digital library of music. Dialog manager 170b may be a local dialog for searching local areas of interest, e.g., "restaurants near me", and for providing directions to a specific area of interest. Dialog manager 170c may be a calendar dialog capable of finding appointments, setting new appointments, setting reminders for an appointment, etc. Each dialog manager is configured to look at the input provided and determine whether the input matches the schema. For example, the input [take me to] may not be similar enough for a food dialog manager to trigger a search in that schema, but may be similar enough for a local dialog manager and a music dialog manager to trigger and issue backend requests. The real-time dialog management system 100 may include backend systems 190. The backend systems 190 represent searchable data repositories that provide responses for a particular dialog manager. For example, the music dialog manager 170a may call a music server to search for titles, artists, albums, etc., and can play music from the repository. In some implementations, the repositories are local to the computing device, as illustrated in FIG. 1. In some implementations, the repositories are remote, e.g., located at one or more servers, as illustrated in FIG. 2. FIG. 2 is a block diagram illustrating another example system 100 in accordance with the disclosed subject matter. In the example of FIG. 2, the real-time dialog management system 100 includes a server 210, which may be a computing device or devices that take the form of a number of different devices, for example a standard server, a group of such servers, or a rack server system. For example, server 210 may be implemented in a distributed manner across multiple computing devices. In addition, server 210 may be implemented in a personal computer, for example a laptop computer. The server 210 may be an example of computer device 500, as depicted in FIG. 5, or system 600, as depicted in FIG. 6. The real-time dialog management system may include client device 205. Client device 205 is similar to computing device 105 described with regard to FIG. 1. Thus, client device 205 includes dialog input/output devices 110, dialog host 120, dialog mixer 130, and candidate list 150. In the example of FIG. 2, the server 210 includes the dialog managers 170 and backend systems 190. In the example of FIG. 2 the client device 205 communicates with the server 210 and with other client devices over network 140. Network 140 may be, for example, the Internet, or the network 140 can be a wired or wireless local area network (LAN), wide area network (WAN), etc., implemented using, for example, gateway devices, bridges, switches, and/or so forth. Network 140 may also represent a cellular communications network. Via the network 140, the server 210 may communicate with and transmit data to/from client device 205. The real-time dialog management system 100 of FIG. 1 and of FIG. 2 represents example configurations, but implementations may incorporate other configurations. For example, some implementations may have only the backend systems 190 on the server 210, or may have some backend systems 190 on the server 210 and some on the client device 205. Some implementations may have some dialog managers 170 on the client device 205 and some on the server 210. Some implementations may move the dialog mixer 130, or some functionalities of the dialog mixer 130, to the server 210.
Some implementations may move the dialog host 120 to the server 210. Some implementations may combine one or more of the dialog input/output devices 110, dialog host 120, dialog mixer 130, and dialog managers 170 into a single module or application. To the extent that the real-time dialog management system 100 collects and stores user-specific data or may make use of personal information, the users may be provided with an opportunity to control whether programs or features collect the user information or to control whether and/or how to receive content that may be more relevant to the user. In addition, certain data may be treated in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, search records may be treated so that no personally identifiable information can be determined, and/or a user's geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. Thus, the user may have control over how information is collected about the user and used by a real-time dialog management system 100. FIG. 3 illustrates a flow diagram of an example process 300 for managing a real-time dialog, in accordance with disclosed implementations. Process 300 may be performed by a real-time dialog management system, such as system 100 of FIG. 1 or of FIG. 2. In some implementations, process 300 is run by a dialog host, such as dialog host 120. Process 300 may be used to call a dialog mixer in response to a triggering event, determine what input to provide to the dialog mixer, manage a list of candidates from candidates provided by the dialog mixer, and decide whether to provide a candidate response to the user or to stay silent and keep waiting for further input. Process 300 may represent a main loop for a real-time dialog management system. Thus process 300 may be continually running while the dialog system is active. Process 300 may include a wait mode, where the system waits for a triggering event (305). The wait mode may be interrupted by a triggering event (310-320). One triggering event is receipt of a backend response (310). The backend response is a system response generated by a backend request. The backend response includes the system response and identifies a dialog manager that handled the request. Another triggering event is receipt of new input (315). The input may be speech captured from the user in a sliding window. The input may be text entered by the user. The input may be a selection made by the user. While the user is speaking, the system may provide a new input periodically, e.g., every 100 milliseconds. The sliding window may encompass up to a predetermined number of previous inputs. Thus, for example, an initial input for the sliding window may be "play cry" and a next input for the sliding window may be "me a river," making the input for the sliding window "play cry me a river." Another triggering event is passage of time (320). The system may trigger this event when no backend response and no new input has been received within some predefined period of time. This triggering event enables the system to advance the dialog in the absence of other triggering events. In response to a triggering event, the system may determine the base state for the triggering event (330). The base state describes the current conversational context for a triggering event. The base state may be a single dialog state or multiple dialog states.
The base state includes the dialog states of any accepted candidates in the candidate list for a particular dialog path. A system response candidate is accepted when it is triggered, or in other words provided as a response to the user. A backend request candidate is accepted when the backend request is executed. A dialog path starts with a root state and includes any candidates accepted or pending until the system backtracks. Once the system backtracks to an ancestor node in the path, which represents the base state for the new path, the new dialog path diverges from the current path at the ancestor node. The ancestor node may be the root node in the dialog beam but does not need to be the root node. As part of determining the base state of the triggering event, the system must determine which dialog path corresponds with the triggering event. This may be a current path or may be a new path started because the system decides to backtrack. For example, when additional input changes the query provided to one or more dialog managers (e.g., the beam search string is updated), the system starts a second dialog path. The dialog path forks, or diverges, from the current path at an ancestor node that the system backtracks to. The system can thus manage multiple paths diverging from any base state and can make decisions (e.g., ranking and triggering decisions) between the paths. The system may also prune a path when the candidates in that path become outdated or low-ranked. The dialog states may include an indication of which path the state belongs in. The dialog path can include competing candidates from different dialog managers, so the base state can include more than one dialog state, e.g., a different dialog state for different dialog managers. The dialog state may be stored in a state data structure, which was described above with regard to FIG. 1. The dialog host may send the base state and the triggering event information to the dialog mixer (335). The triggering event information depends on the type of triggering event. For example, if the triggering event is a backend response, the triggering event information includes the backend response received. If the triggering event is receipt of new input, the triggering event information is the received input, input in a sliding window (the window including the received input), text received, or other input received. If the triggering event is passage of time, the input may be a current timestamp. The system may then receive potential candidates from the dialog mixer. A potential candidate may be a system response. A system response is something that the system says (e.g., provided via an output device) and/or does (e.g., play a song, set a timer, purchase an item, etc.). A potential candidate may be a backend request. A backend request may represent a query in a particular dialog schema. Thus, a backend request may be provided to a dialog manager for the schema. The dialog manager may submit the query to a backend system and formulate a response. Receipt of the response by the dialog host is a triggering event. Thus, a backend request candidate includes an identifier used to match the response to the respective candidate. Each potential candidate has a corresponding provisional dialog state. Each potential candidate may also have respective annotations or metadata that may be used by the system for ranking and pruning potential candidates. The annotations or metadata may also be used in logging.
The system ranks the potential candidates, pruning poor candidates (345). The ranking takes place across all branches, not just the branch that was selected in step 330. The ranking may include a machine-learned model that takes as input the annotations and metadata about the potential candidates and returns a score for each potential candidate. The model may be trained to use as input the list of potential candidates, in all branches, their states, and the annotations. The ranking results in some candidates being pruned. A pruned candidate may be removed from the candidate list. A pruned candidate may also be marked as pruned or not active. A candidate may be pruned because it is too old, because it is a duplicate of another candidate, or because it is too expensive (e.g., the query is too broad and the user is still speaking). All of these may result in a poor ranking score, e.g., one that fails to satisfy (e.g., meet or exceed) a ranking threshold. A pruned candidate is no longer considered in the list of candidates, i.e., it is not considered a response candidate. The system then decides whether to trigger any of the candidates in the list of candidates (350). The triggering decision may also use a machine-learned model that assigns a confidence score to each of the candidates in the list. In some implementations, the confidence score may be the rank assigned to the candidate. The confidence score may represent how certain the system is that the candidate is appropriate at that time. In other words, the system has uncertainty about whether to provide a candidate response at all. This differs from a turn-taking dialog system, where the system always provides one of the candidate responses for a triggering event. In the real-time dialog system, the system is continuously determining whether to respond, with the option not to respond at all being a valid determination. The system may use a variety of input signals to calculate a confidence score for each candidate. The input signals can include whether the last verbal input from the user had an upward intonation. An upward intonation is a factor for indicating the user finished a question. The input signals can include how long the user has been silent. A short silence may mean the user is thinking. A longer silence may indicate the user is awaiting some response or could use help. For example, if the input sliding window is [play the 1978 song by Boston named], the system may have already generated a candidate system response of [playing more than a feeling]. If the user trails off, e.g., is trying to think of the title, the system may trigger the candidate. The input signals may include the length of the sliding window, e.g., how long the user has been speaking without triggering a response. If the user has been speaking a while without triggering a response, the system may trigger a back-channel candidate. In some implementations, the list of candidates may include a back-channel feedback candidate as a default candidate. The back-channel candidate represents some feedback by the system that indicates the system is listening but the dialog is primarily one-way, i.e., the user speaking. For example, a back-channel feedback candidate may be [uh-huh], [hmm], or [right], or some other expression that indicates attention or comprehension. The system may trigger a system response candidate when the system response candidate has a confidence score that satisfies (meets or exceeds) a triggering threshold.
The system may also trigger the system response candidate when the system response candidate has a rank that satisfies the triggering threshold. If the system decides not to trigger any response candidate (350, No), the system may initiate, e.g., execute, any backend requests that are candidates and have not already been accepted (355). Any backend requests that are still in the candidate list at this point are accepted. In some implementations, the system may track (e.g., via a flag in the candidate list) which backend requests are outstanding. The system may then return to the wait state (305). If the system decides to trigger a candidate (350, Yes), the system may perform the system response (360). Only a candidate that is a system response can be triggered, because only the system responses have an output to provide to the user. The output may be something provided to an output device, e.g., text spoken or displayed. The output may be an action the computing device performs, e.g., playing a media file. A system response candidate that is triggered is an accepted candidate. If the triggered candidate is a back-channel candidate (365, Yes), the system may initiate any accepted backend requests (355), as explained above, so that the system can wait for the user to keep talking and decide whether to provide a more concrete response later. If the triggered candidate is not a back-channel candidate (365, No), the system may clean up any non-triggered branches (370). This may include setting a new root state or new base state and clearing the list of candidates. The system may then enter the wait state (305) for the next triggering event.
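Process 300 can be summarized as the event loop sketched below. The helper objects and method names are assumptions tying together the earlier sketches, not the disclosed API; the numbered comments map to the reference numerals in FIG. 3.

def process_300(host, mixer):
    """Illustrative main loop; `host` and `mixer` are hypothetical objects
    exposing the operations described in the text."""
    while host.active:
        event = host.wait_for_triggering_event()        # 305-320
        base_state = host.determine_base_state(event)   # 330: pick a beam path
        new_candidates = mixer.mix(base_state, event)   # 335-340
        host.candidates = host.rank_and_prune(          # 345: across all branches
            host.candidates + new_candidates)
        chosen = host.triggering_engine.select(host.candidates)  # 350
        if chosen is None:                              # 350, No: stay silent
            host.execute_pending_backend_requests()     # 355
            continue
        host.perform(chosen)                            # 360: speak and/or act
        if chosen.is_back_channel:                      # 365, Yes
            host.execute_pending_backend_requests()     # 355
        else:                                           # 365, No
            host.cleanup_non_triggered_branches()       # 370: new base/root state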
The following is an example real-time dialog to illustrate process 300. In the example, input provided by the user (e.g., via a microphone or keyboard) is illustrated in brackets [ ], as is audio output provided by the system. Actions taken by the system are illustrated in curly braces { }. This example is provided for illustrative purposes only. In the present example, the dialog host starts with an empty candidate list, so the root state is null or empty. To begin, the dialog host receives a streaming chunk of [take me to church] as current input, e.g., at 315. Because there are no candidates in the list, the base state is null. Thus, the dialog host sends an empty or null base state and the input "take me to church" to the dialog mixer. The dialog mixer determines that the input applies to two dialog managers: a media dialog manager and a local dialog manager. The dialog mixer provides four potential candidates, as illustrated in Table 1. All candidates in Table 1 are in path 1 because they originated from the same base state (e.g., the null state) and search the same input (e.g., "take me to church").

TABLE 1
Path | Candidate | Dialog State | Dialog Manager | Identifier
1 | LocalSearch("take me to church") | L1 | Local | Local1
1 | MediaSearch("take me to church") | M1 | Media | Media1
1 | [Sorry, I can't look up directions] | L2 | Local | Local2
1 | [Sorry, I can't look up your media.] | M2 | Media | Media2

The dialog host ranks the four potential candidates: Local1, Local2, Media1, and Media2. The ranking may occur via a machine-learned model that looks at the four candidates and the attributes of each. The model decides that the Local2 and Media2 candidates, which represent failure candidates for the respective dialog managers, are poor candidates because the other two candidates represent backend requests not yet submitted or executed. These two candidates have poor rankings, and the dialog host prunes the Local2 and Media2 candidates. Thus the candidate list now includes only the two backend request candidates, i.e., Local1 and Media1. The dialog host determines that neither candidate is eligible for triggering because they are backend requests and not system responses. If the backend requests have a high enough rank, the dialog host begins executing the Local1 backend request and the Media1 backend request. Beginning execution of a backend request is acceptance of the candidate. Thus the L1 dialog state and the M1 dialog state are accepted states. The Local1 backend request corresponds to the Local dialog manager, which provides directions and points of interest. The Local1 candidate represents a search for the input (e.g., for "take me to church") in the Local schema. Similarly, the Media1 candidate corresponds to a Media dialog manager, which searches a media library. The Media1 candidate represents a search for the input in the Media schema. Once the dialog host begins execution of the two backend requests, the dialog host waits for another triggering event. The next triggering event is the response for the Media1 candidate. In other words, the Media dialog manager returns a result that corresponds to the Media1 request. The dialog host determines that the response corresponds to the Media1 candidate, which is part of path 1, and determines that the base state includes the L1 dialog state and the M1 dialog state. The L1 state is included because the Local search is pending, so the L1 dialog state is still active. Thus, the dialog host provides the backend response (a backend response corresponding to the Media1 candidate) and the base state of L1, M1 to the dialog mixer. In response, the dialog mixer provides three potential candidates, as illustrated in Table 2:

TABLE 2
Path | Candidate | Dialog State | Dialog Manager | Identifier
1 | LocalSearch("take me to church") | L3 | Local | Local3
1 | [playing take me to church] {play "Take Me To Church"} | M3 | Media | Media3
1 | [Sorry, I can't look up directions] | L4 | Local | Local4

The Media3 candidate is a system response that provides the output [playing take me to church] to the user and initiates an action that causes the media player to begin playing a corresponding media file, audio or video, which is identified in the response. In some implementations, the dialog host replaces the Media1 candidate in the candidate list with the Media3 candidate because the Media3 candidate is the response received by executing the request represented by the Media1 candidate. In some implementations, the Media1 candidate is marked as completed but remains active. The dialog host prunes the Local3 candidate because it is a duplicate of the Local1 candidate, which is still executing. In some implementations, the dialog mixer may recognize that the Local3 candidate is a duplicate and may not provide Local3 as a potential candidate. The dialog host ranks the Local4 candidate poorly because the Local1 request is still executing. Thus, the dialog host prunes the Local4 candidate. This leaves Local1 and Media3 in the candidate list. Media3 is a system response eligible for triggering, but the Media3 candidate has a low rank because the user is still speaking, the user did not have an explicit play intent, i.e., the input was not [play take me to church], and there is an outstanding request. The dialog host therefore decides not to respond and does not trigger the Media3 response.
This means the Media3 candidate is not accepted; rather, the Media3 candidate is pending. There are no backend requests to execute, so the dialog host waits for another triggering event. The next triggering event is the arrival of another streaming chunk. The next input is a streaming chunk of [take me to church by bicycle]. This streaming chunk represents a sliding window that includes the previous input. The dialog host determines that the new input should be a new beam search. In other words, the dialog host determines that the query is more specific and starts a second path in the dialog beam. The base state for the new path is empty, i.e., the system backtracks to the root state and begins a new path from the root with the new search criteria of “take me to church by bicycle”. Thus, the dialog host sends an empty or null base state and the input “take me to church by bicycle” to the dialog mixer. The dialog mixer determines that the input applies to the Local dialog manager. The dialog mixer does not trigger the Media dialog manager because the input does not sound like a media request. Thus, the dialog mixer provides two potential candidates, as illustrated in Table 3. These candidates are included in the candidate list with the still active and pending candidates from the first path:

TABLE 3

Path  Candidate                                               Dialog State  Dialog Manager  Identifier
2     LocalSearch(“take me to church by bicycle”)             LB1           Local           LocalB1
2     [Sorry, I can't look up directions]                     LB2           Local           LocalB2
1     [playing take me to church] {play “Take Me To Church”}  M3            Media           Media3
1     LocalSearch(“take me to church”)                        L1            Local           Local1

The dialog host ranks the four candidates: Local1, LocalB1, Media3, and LocalB2. The rank of the LocalB2 candidate is poor and the dialog host prunes the candidate because the LocalB1 search has not yet provided a response or timed out. The Media3 candidate does not trigger because it is not responsive to the input, e.g., it is for path 1 and not path 2. The dialog host therefore does not have any system response to trigger and begins executing the request for the LocalB1 candidate. Thus, the LB1 dialog state is an accepted state in path 2 and the dialog host waits for the next triggering event. The next triggering event is the response that corresponds to the Local1 backend request. The dialog host may determine that this response corresponds to the Local1 candidate and is in path 1 and not path 2. Thus, the dialog host determines that the base state includes the L1 dialog state and the M1 dialog state, which are the most recent accepted states in path 1. The M3 dialog state is not an accepted state because the candidate has not been triggered. This base state is provided with the backend response to the dialog mixer. The dialog mixer provides three candidates in response. The three candidates are added to the candidate list, which is illustrated in Table 4:

TABLE 4

Path  Candidate                                               Dialog State  Dialog Manager  Identifier
2     LocalSearch(“take me to church by bicycle”)             LB1           Local           LocalB1
1     [Sorry, I can't look up your media]                     M5            Media           Media5
1     [playing take me to church] {play “Take Me To Church”}  M3            Media           Media3
1     MediaSearch(“take me to church”)                        M4            Media           Media4
1     [here are directions by car to Church of Turning]       L5            Local           Local5

The dialog host ranks the Media4 candidate low and prunes the candidate because it is a duplicate. In some implementations, the dialog mixer may recognize that this candidate is a duplicate of the accepted candidate Media1 and may not even provide Media4 as a candidate. 
The dialog host also ranks the Media5 candidate low and prunes that candidate. The Local5 and Media3 candidates are system responses, but may have low ranks because there is still a pending backend request (e.g., LocalB1). Thus, the L5 dialog state is not yet an accepted state. The dialog host thus chooses to do nothing in response to the triggering event and waits for a next triggering event. The next triggering event is the response that corresponds to the LocalB1 backend request. The dialog host may determine that this response corresponds to the LocalB1 candidate and is in path 2 and not path 1. Thus, the dialog host determines that the base state includes the LB1 dialog state, which is the most recent accepted state in path 2. The L1 and M1 states are not associated with path 2 and are therefore not included in the base state provided to the dialog mixer. This base state is provided with the backend response to the dialog mixer. The dialog mixer provides one candidate in response. The candidate is added to the candidate list, which is illustrated in Table 5:

TABLE 5

Path  Candidate                                               Dialog State  Dialog Manager  Identifier
2     [here are directions by bike to Church of Turning]      LB3           Local           LocalB3
1     [playing take me to church] {play “Take Me To Church”}  M3            Media           Media3
1     [here are directions by car to Church of Turning]       L5            Local           Local5

The dialog host may rank the LocalB3 candidate highly because it is responsive to the whole query and the system may have metadata that indicates the user has finished speaking, etc. The Local5 candidate is lower ranked because it does not take into account the entire query, and the Media3 candidate is poorly ranked. The dialog host decides to trigger the LocalB3 candidate. Triggering the LocalB3 candidate causes the system to update the base state for the dialog beam to the LB3 dialog state, e.g., making the LB3 dialog state a root state, and to output the response and execute its corresponding action.

FIG.4is an example block diagram illustrating the dialog beam400for the example presented above. The tree starts with a root dialog state405that is empty. In other words, there are no pending requests or responses and the candidate list is empty. The first triggering event, DM trigger 1, results in the four dialog states illustrated from Table 1. Two of the dialog states (L2 and M2) are pruned and the other two (L1 and M1) are accepted. All four states are part of path 1, which is illustrated inFIG.4as solid lines410. The second triggering event, DM trigger 2, results in three more dialog states, two of which (L3 and L4) are pruned and one of which (M3) is kept, but not accepted. Thus M3 is a pending dialog state. The next triggering event, DM trigger 3, causes the system to backtrack and start a new path, which is illustrated with the dotted and dashed line450inFIG.4. DM trigger 3 results in two new dialog states, one of which is pruned (LB2) and one of which is accepted (LB1). The next triggering event, DM trigger 4, applies to the first path and results in a new dialog state L5 that is kept but not yet accepted. The L5 dialog state is pending. The next triggering event, DM trigger 5, applies to the second path and results in a new dialog state, LB3, that is accepted. The acceptance of the LB3 dialog state causes the pending dialog states of the first path, i.e., L5 and M3, to be pruned.
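For illustration only, the dialog beam bookkeeping shown inFIG.4might be represented as in the following Python sketch. The names used here (DialogState, DialogBeam, Status, and so on) are assumptions made for the example and are not part of the described implementation.

    from dataclasses import dataclass, field
    from enum import Enum

    class Status(Enum):
        PENDING = "pending"    # kept in the candidate list, but not accepted or triggered
        ACCEPTED = "accepted"  # e.g., a backend request that has been initiated
        PRUNED = "pruned"      # removed from consideration

    @dataclass
    class DialogState:
        name: str       # e.g., "L1", "M3", "LB1"
        path: int       # which path in the dialog beam the state belongs to
        status: Status

    @dataclass
    class DialogBeam:
        states: list = field(default_factory=list)

        def add(self, name, path, status):
            self.states.append(DialogState(name, path, status))

        def base_state(self, path):
            # The base state for a path is the set of its accepted dialog states.
            return [s.name for s in self.states
                    if s.path == path and s.status is Status.ACCEPTED]

    # Replaying the example: DM trigger 1 yields L1/M1 (accepted) and L2/M2 (pruned).
    beam = DialogBeam()
    beam.add("L1", 1, Status.ACCEPTED)
    beam.add("M1", 1, Status.ACCEPTED)
    beam.add("L2", 1, Status.PRUNED)
    beam.add("M2", 1, Status.PRUNED)
    beam.add("M3", 1, Status.PENDING)    # DM trigger 2
    beam.add("LB1", 2, Status.ACCEPTED)  # DM trigger 3 starts path 2
    print(beam.base_state(1))  # ['L1', 'M1']
    print(beam.base_state(2))  # ['LB1']

In this representation, backtracking to start path 2 simply means generating new candidates from the root rather than from the accepted states of path 1.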
FIG.5shows an example of a generic computer device500, which may be operated as server110and/or client150ofFIG.1, and which may be used with the techniques described here. Computing device500is intended to represent various example forms of computing devices, such as laptops, desktops, workstations, personal digital assistants, cellular telephones, smartphones, tablets, servers, and other computing devices, including wearable devices. The components shown here, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the inventions described and/or claimed in this document. Computing device500includes a processor502, memory504, a storage device506, and expansion ports510connected via an interface508. In some implementations, computing device500may include transceiver546, communication interface544, and a GPS (Global Positioning System) receiver module548, among other components, connected via interface508. Device500may communicate wirelessly through communication interface544, which may include digital signal processing circuitry where necessary. Each of the components502,504,506,508,510,540,544,546, and548may be mounted on a common motherboard or in other manners as appropriate. The processor502can process instructions for execution within the computing device500, including instructions stored in the memory504or on the storage device506to display graphical information for a GUI on an external input/output device, such as display516. Display516may be a monitor or a flat touchscreen display. In some implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices500may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system). The memory504stores information within the computing device500. In one implementation, the memory504is a volatile memory unit or units. In another implementation, the memory504is a non-volatile memory unit or units. The memory504may also be another form of computer-readable medium, such as a magnetic or optical disk. In some implementations, the memory504may include expansion memory provided through an expansion interface. The storage device506is capable of providing mass storage for the computing device500. In one implementation, the storage device506may be or include a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in such a computer-readable medium. The computer program product may also include instructions that, when executed, perform one or more methods, such as those described above. The computer- or machine-readable medium is a storage device such as the memory504, the storage device506, or memory on processor502. The interface508may be a high-speed controller that manages bandwidth-intensive operations for the computing device500or a low-speed controller that manages less bandwidth-intensive operations, or a combination of such controllers. An external interface540may be provided so as to enable near area communication of device500with other devices. In some implementations, controller508may be coupled to storage device506and expansion port514. 
The expansion port, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter. The computing device500may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server530, or multiple times in a group of such servers. It may also be implemented as part of a rack server system. In addition, it may be implemented in a computing device, such as a laptop computer532, personal computer534, or tablet/smart phone536. An entire system may be made up of multiple computing devices500communicating with each other. Other configurations are possible.

FIG.6shows an example of a generic computer device600, which may be server110ofFIG.1, and which may be used with the techniques described here. Computing device600is intended to represent various example forms of large-scale data processing devices, such as servers, blade servers, datacenters, mainframes, and other large-scale computing devices. Computing device600may be a distributed system having multiple processors, possibly including network attached storage nodes, that are interconnected by one or more communication networks. The components shown here, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the inventions described and/or claimed in this document. Distributed computing system600may include any number of computing devices680. Computing devices680may include a server, rack servers, mainframes, etc., communicating over a local or wide-area network, dedicated optical links, modems, bridges, routers, switches, wired or wireless networks, etc. In some implementations, each computing device may include multiple racks. For example, computing device680aincludes multiple racks658a-658n. Each rack may include one or more processors, such as processors652a-652nand662a-662n. The processors may include data processors, network attached storage devices, and other computer controlled devices. In some implementations, one processor may operate as a master processor and control the scheduling and data distribution tasks. Processors may be interconnected through one or more rack switches658, and one or more racks may be connected through switch678. Switch678may handle communications between multiple connected computing devices680. Each rack may include memory, such as memory654and memory664, and storage, such as656and666. Storage656and666may provide mass storage and may include volatile or non-volatile storage, such as network-attached disks, floppy disks, hard disks, optical disks, tapes, flash memory or other similar solid state memory devices, or an array of devices, including devices in a storage area network or other configurations. Storage656or666may be shared between multiple processors, multiple racks, or multiple computing devices and may include a computer-readable medium storing instructions executable by one or more of the processors. Memory654and664may include, e.g., a volatile memory unit or units, a non-volatile memory unit or units, and/or other forms of computer-readable media, such as magnetic or optical disks, flash memory, cache, Random Access Memory (RAM), Read Only Memory (ROM), and combinations thereof. 
Memory, such as memory654, may also be shared between processors652a-652n. Data structures, such as an index, may be stored, for example, across storage656and memory654. Computing device680may include other components not shown, such as controllers, buses, input/output devices, communications modules, etc. An entire system, such as system100, may be made up of multiple computing devices680communicating with each other. For example, device680amay communicate with devices680b,680c, and680d, and these may collectively be known as system100. As another example, system100ofFIG.1may include one or more computing devices680. Some of the computing devices may be located geographically close to each other, and others may be located geographically distant. The layout of system600is an example only and the system may take on other layouts or configurations. According to certain aspects of the disclosure, a mobile device includes at least one processor and memory storing instructions that, when executed by the at least one processor, cause the mobile device to perform operations. The operations include generating first candidate responses to a triggering event. The triggering event may be receipt of a live-stream chunk for the dialog or receipt of a backend response to a previous backend request for a dialog schema. The operations also include updating a list of candidate responses that are accepted or pending with at least one of the first candidate responses, and determining, for the triggering event, whether the list of candidate responses includes a candidate response that has a confidence score that meets a triggering threshold. The operations also include waiting for a next triggering event without providing a candidate response when the list does not include a candidate response that has a confidence score that meets the triggering threshold. These and other aspects can include one or more of the following features. For example, at least one of the first candidate responses may have a highest rank among the first candidate responses. As another example, each candidate in the candidate list may be either a system response or a backend request, and each candidate in the candidate list has a respective dialog state and is associated with a path in a dialog beam. As another example, the pending candidate responses can be system responses that have not been provided in response to a triggering event, and the operations also include determining a path in a dialog beam to which the triggering event corresponds, determining a base state for the triggering event, the base state including dialog states of accepted candidates in the candidate list for the path, and generating the first candidate responses using information from the triggering event and the base state. As another example, one of the candidate responses in the list of candidate responses may represent back-channel feedback. As another example, an accepted response may be a backend request that has been initiated. As another example, a pending response may be a system response not provided to the user. 
As another example, the triggering event is a first triggering event, the candidates in the list of candidates all correspond to a first path in a dialog beam, and the operations also include receiving a second triggering event, determining that the second triggering event requires a second path in a dialog beam, setting a base state for the second path, the base state for the second path being a base state for an ancestor node in the first path of a current base state of the first path, generating second candidate responses using the base state for the second path and information for the second triggering event, and updating the list of candidate responses that are accepted or pending with at least one of the second candidate responses. As another example, updating the list can include pruning candidate responses that fail to satisfy a ranking threshold. In another aspect, a method includes providing, responsive to receiving a chunk from a real-time dialog stream, the chunk to a dialog mixer, receiving response candidates for the chunk from the dialog mixer, each response candidate being a system response for a dialog schema or a backend request for a dialog schema, and updating a rotating list of response candidates using at least one of the response candidates for the chunk. The method further includes ranking the response candidates in the list, each response candidate having a respective confidence score, determining whether the rotating list includes a response candidate with a confidence score that satisfies a triggering threshold, and when the rotating list does not include a response candidate with a confidence score that satisfies the triggering threshold, initiating a backend request represented by a response candidate in the list that has a confidence score that satisfies a ranking threshold and that is not yet an accepted dialog state. These and other aspects can include one or more of the following features. For example, each response candidate in the list may have respective annotations and a respective dialog state, and ranking the response candidates can include providing the annotations with the list to a machine learned model, the machine learned model using the annotations and the response candidates in the list to determine the respective confidence scores. In such implementations, the annotations can include characteristics of the chunk obtained through speech analysis. As another example, each response candidate in the list of response candidates may have a corresponding dialog state. As another example, updating the response candidates in the list may include pruning candidates with a confidence score that fails to satisfy a ranking threshold. As another example, each response candidate in the list of response candidates may have a corresponding dialog state and be assigned to a path in a dialog beam, the dialog beam including at least two paths. In such implementations, when the rotating list does include a response candidate with a confidence score that satisfies the triggering threshold, the method may also include determining a path associated with the response candidate with the confidence score that satisfies the triggering threshold and pruning response candidates from the list that are not associated with the path. 
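A minimal Python sketch of the rotating-list behavior described in this aspect follows, for illustration only; the threshold values, the Candidate class, and initiate_backend_request are assumptions made for the example, not details of the described system.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        identifier: str
        is_system_response: bool  # otherwise the candidate is a backend request
        confidence: float
        accepted: bool = False    # has a backend request already been initiated?

    TRIGGERING_THRESHOLD = 0.9
    RANKING_THRESHOLD = 0.3

    def initiate_backend_request(candidate):
        print(f"executing backend request {candidate.identifier}")

    def handle_triggering_event(candidates):
        # Prune candidates whose confidence fails the ranking threshold.
        candidates[:] = [c for c in candidates if c.confidence >= RANKING_THRESHOLD]
        # Trigger the best system response if it satisfies the triggering threshold.
        responses = [c for c in candidates if c.is_system_response]
        if responses:
            best = max(responses, key=lambda c: c.confidence)
            if best.confidence >= TRIGGERING_THRESHOLD:
                return best  # provide this response to the user
        # Otherwise, initiate unaccepted backend requests and wait for the next event.
        for c in candidates:
            if not c.is_system_response and not c.accepted:
                initiate_backend_request(c)
                c.accepted = True
        return None

    candidates = [Candidate("Local1", False, 0.7), Candidate("Media3", True, 0.5)]
    handle_triggering_event(candidates)  # initiates Local1; Media3 stays pending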
In another aspect, a method includes receiving a triggering event for a real-time dialog, the real-time dialog having an associated dialog beam with a first path, the dialog beam representing dialog states for a real-time dialog with a user, determining that the triggering event starts a new path in the dialog beam, and backtracking in the first path to an ancestor node in the dialog beam. The method also includes starting the new path in the dialog beam from the ancestor node by generating response candidates using a base state represented by the ancestor node and information from the triggering event, where a path in the dialog beam includes one or more accepted or pending response candidates, a response candidate being a system response generated by a dialog schema or a backend request for a dialog schema. These and other aspects can include one or more of the following features. For example, the ancestor node may be a root node that represents a blank base state. As another example, the response candidate may have a respective dialog state and be assigned to one of the dialog paths. As another example, the method might also include determining, responsive to a second triggering event, that a response candidate in the new path is a system response with a confidence score that satisfies a triggering threshold, providing the response candidate to the user, and pruning the first path from the dialog beam. Various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any non-transitory computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory (including Random Access Memory), Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor. The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the Internet. The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. 
The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. A number of implementations have been described. Nevertheless, various modifications may be made without departing from the spirit and scope of the invention. In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. Moreover, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other implementations are within the scope of the following claims.
DETAILED DESCRIPTION
Techniques and mechanisms described herein provide for the generation and querying of a database system based on natural language. A text generation interface system serves as an interface between one or more client machines and a text generation modeling system configured to implement a large language model. The text generation interface system may identify a set of documents including natural language, as well as one or more fields for querying the documents. The text generation interface system may then generate one or more prompts for extracting information related to the fields from the documents. The prompts may then be sent to the text generation modeling system, which may return structured text corresponding to the fields. The text generation interface system may then generate or update a database system based on the structured text. The database system may be queried to identify one or more documents based on search terms included in a search query. Natural language text included in the identified documents may then be evaluated against the search query via one or more additional prompts completed by the text generation modeling system. The completed prompts may be used to prepare a comprehensive response to the search query. Consider the challenge of a transactional attorney who wishes to understand the common formulation of a given deal term in the market for contracts having particular characteristics. Using conventional techniques, the transactional attorney would need to rely on inaccurate and/or incomplete information, such as personal knowledge, simple text searches, surveys, practice guides, manual review of large volumes of documents, and the like. Such processes are slow, expensive, and/or error prone. The same is true for a variety of such complex, text-based inquiries. The following example queries that may be addressed in accordance with some embodiments of techniques and mechanisms described herein are drawn from the analysis of legal contracts. For example, “Show me material adverse effect definitions from public company merger agreements in the last 2 years.” As another example, “Identify all double trigger vesting acceleration clauses.” As yet another example, “What is the typical liquidation preference multiple in Series B rounds in the last 3 years?” As still another example, “Was it typical for force majeure clauses to mention pandemics prior to 2020?” However, techniques and mechanisms described herein are broadly applicable to a range of contexts, and are not limited to the legal context or to the analysis of contracts. In contrast to such conventional techniques, embodiments of techniques and mechanisms described herein may be used to generate answers to complex queries of natural language documents. For instance, keeping to the above example, a set of reference contracts may be parsed to generate or update a database table characterizing the reference contracts along one or more numerical and/or classification dimensions. The database system may then be queried using terms identified based on a search query to identify a set of contracts that exhibit particular characteristics. The identified documents may then be further analyzed using a large language model to determine and quantify the various formulations of the given deal term for those documents. 
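For illustration only, the end-to-end flow just described might be sketched in Python as follows. Every name here (answer_query, derive_terms, send_to_model, and the dictionary standing in for the database) is an assumption made for the example rather than a detail of the described system.

    def answer_query(query, rows, documents, derive_terms, send_to_model):
        # Structured filtering: keep only documents whose extracted fields match.
        terms = derive_terms(query)  # e.g., {"agreement_type": "merger"}
        matching = [doc_id for doc_id, fields in rows.items()
                    if all(fields.get(k) == v for k, v in terms.items())]
        # Per-document evaluation by the large language model.
        answers = [send_to_model(f"Document:\n{documents[d]}\n\nQuestion: {query}")
                   for d in matching]
        # Combine the document-level answers into one comprehensive response.
        return send_to_model("Synthesize one answer from:\n" + "\n".join(answers))

Here, rows would hold the field values previously extracted from each document, and send_to_model would submit a prompt to the large language model.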
According to various embodiments, techniques and mechanisms described herein may be able to review large numbers of documents and to understand them sufficiently well so as to classify them along one or more numerical and/or discrete dimensions. The documents may then be filtered to identify a subset of documents relevant to a particular search query. The text of the filtered documents may then be analyzed against the search query to produce document-level answers to the search query. These document-level answers may then be combined into a single response to the search query. For instance, the system may answer a search query that asks about which features are common in a subset of a corpus of documents that exhibit one or more characteristics. According to various embodiments, techniques and mechanisms described herein provide for novel text generation in domain-specific contexts. A text generation interface system may take as input one or more arbitrary documents, process them via optical character recognition, segment them into portions, and process the segmented text via various tasks based on need. Different workflows are provided for different tasks, and this application describes a number of examples of such workflows. In many workflows, an input document is divided into chunks via a chunking technique. Then, chunks are inserted into prompt templates for processing by a large language model such as the GPT-3 or GPT-4 available from OpenAI. The large language model's response is then parsed and potentially used to trigger additional analysis, such as one or more database searches, one or more additional prompts sent back to the large language model, and/or a response returned to a client machine. According to various embodiments, techniques and mechanisms described herein provide for retrieval augmented generation. A search is conducted based on a search query. Then, the search results are provided to an artificial intelligence system. The artificial intelligence system then further processes the search results to produce an answer based on those search results. In this context, a large language model may be used to determine the search query, apply one or more filters and/or tags, and/or synthesize potentially many different types of search. According to various embodiments, techniques and mechanisms described herein provide for a sophisticated document processing pipeline. The pipeline receives one or more input documents, identifies text that should be kept together, identifies extraneous text such as headers, footers, and line numbers, and segments the text accordingly. In this way, the quality of the text provided to the rest of the system is improved. According to various embodiments, techniques and mechanisms described herein provide for new approaches to text segmentation. Large language models often receive as input a portion of input text and generate in response a portion of output text. In many systems, the large language model imposes a limit on the input text size. Accordingly, in the event that the large language model is asked to summarize a lengthy document, the document may need to be segmented into portions in order to achieve the desired summarization. Conventional text segmentation techniques frequently create divisions in text that negatively affect the performance of the model, particularly in domain-specific contexts such as law. 
For example, consider a caption page of a legal brief, which includes text in a column on the left that encompasses the parties, text in a column on the right that includes the case number, a title that follows lower on the page, and line numbering on the left. In such a configuration, the text in the different columns should not be mixed and should be treated separately from the line numbers, while both columns should precede the document title when converting the document to an input query for a large language model. However, conventional techniques would result in these semantically different elements of text being jumbled together, resulting in an uninformative query provided to the large language model and hence a low-quality response. In contrast to these conventional techniques, techniques and mechanisms described herein provide for a pipeline that cleans such raw text so that it can be provided to a large language model. According to various embodiments, techniques and mechanisms described herein provide for the division of text into chunks, and the incorporation of those chunks into prompts that can be provided to a large language model. For instance, a large language model may impose a limit of 8,193 tokens on a task, including text input, text output, and task instructions. In order to process longer documents, the system may split them. However, splitting a document can easily destroy meaning depending on where and how the document is split. Techniques and mechanisms described herein provide for evenly splitting a document or documents into chunks, and incorporating those chunks into prompts, in ways that retain the semantic content associated with the raw input document or documents. In some embodiments, techniques and mechanisms described herein may be applied to generate novel text in domain-specific contexts, such as legal analysis. Large language models, while powerful, have a number of drawbacks when used for technical, domain-specific tasks. When using conventional techniques, large language models often invent “facts” that are actually not true. For instance, if asked to summarize the law related to non-obviousness in the patent context, a large language model might easily invent a court case, complete with caption and ruling, that in fact did not occur. In contrast to conventional techniques, techniques and mechanisms described herein provide for the generation of novel text in domain-specific contexts while avoiding such drawbacks. According to various embodiments, techniques and mechanisms described herein may be used to automate complex, domain-specific tasks that were previously the sole domain of well-trained humans. Moreover, such tasks may be executed in ways that are significantly faster, less expensive, and more auditable than the equivalent tasks performed by humans. For example, a large language model may be employed to produce accurate summaries of legal texts, to perform legal research tasks, to generate legal documents, to generate questions for legal depositions, and the like. In some embodiments, techniques and mechanisms described herein may be used to divide text into portions while respecting semantic boundaries and simultaneously reducing calls to the large language model. The cost of using many large language models depends on the amount of input and/or output text. 
Accordingly, techniques and mechanisms described herein provide for reduced overhead associated with prompt instructions while at the same time providing for improved model context to yield an improved response. In some embodiments, techniques and mechanisms described herein may be used to process an arbitrary number of unique documents (e.g., legal documents) that cannot be accurately parsed and processed via existing optical character recognition and text segmentation solutions. In some embodiments, techniques and mechanisms described herein may be used to link a large language model with a legal research database, allowing the large language model to automatically determine appropriate searches to perform and then ground its responses to a source of truth (e.g., in actual law) so that it does not “hallucinate” a response that is inaccurate. In some embodiments, techniques and mechanisms described herein provide for specific improvements in the legal domain. For example, tasks that were previously too laborious for attorneys with smaller staffs may now be more easily accomplished. As another example, attorneys may automatically analyze large volumes of documents rather than needing to perform such tasks manually. As another example, text chunking may reduce token overhead and hence cost expended on large language model prompts. As yet another example, text chunking may reduce calls to a large language model, increasing response speed. As still another example, text chunking may increase and preserve context provided to a large language model by dividing text into chunks in semantically meaningful ways. According to various embodiments, techniques and mechanisms described herein may provide for automated solutions for generating text in accordance with a number of specialized applications. Such applications may include, but are not limited to: simplifying language, generating correspondence, generating a timeline, reviewing documents, editing a contract clause, drafting a contract, performing legal research, preparing for a deposition, drafting legal interrogatories, drafting requests for admission, drafting requests for production, briefing a litigation case, responding to requests for admission, responding to interrogatories, responding to requests for production, analyzing cited authorities, and answering a complaint.

FIG.1illustrates a database generation and querying overview method100, performed in accordance with one or more embodiments. In some implementations, the method100may be performed at a text generation interface system such as the system200shown inFIG.2. For instance, the method100may be performed at the text generation interface system210. A database table characterizing a set of documents along one or more dimensions is determined at102. In some embodiments, the database table may be determined by generating a set of prompts provided to a text generation modeling system. A prompt may include a portion of text from one or more of the documents. The prompt may also include an instruction to identify data corresponding to one or more fields for the included portion of text. The text generation modeling system may complete the prompts and provide a response using structured text, such as JSON. The text generation interface system may then create or update a database system based on the structured text. Additional details regarding the creation or updating of a database system are discussed with respect to the method1300shown inFIG.13. 
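By way of illustration, the field-extraction step at102might be sketched in Python as follows. The prompt wording, the field names, and the complete_prompt callable are assumptions made for the example; the stub below merely stands in for a call to the text generation modeling system.

    import json

    FIELDS = ["agreement_type", "effective_date", "liquidation_preference_multiple"]

    def extract_fields(document_text, complete_prompt):
        # Build a prompt instructing the model to reply with structured text (JSON).
        prompt = (
            "Read the contract excerpt below and respond with a JSON object "
            f"containing the fields {FIELDS}. Use null for any missing field.\n\n"
            f"Excerpt:\n{document_text}"
        )
        return json.loads(complete_prompt(prompt))

    # Stub standing in for the text generation modeling system:
    row = extract_fields(
        "This Agreement, effective January 1, 2022, ...",
        complete_prompt=lambda p: '{"agreement_type": "merger", '
                                  '"effective_date": "2022-01-01", '
                                  '"liquidation_preference_multiple": null}',
    )
    print(row)  # a dict whose keys can become columns in a database table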
A subset of the documents is identified at104based on a query of the database table using one or more query terms determined from a search query. In some embodiments, the query may include a natural language element. Alternatively, or additionally, the query may include one or more search terms specified in a different format, such as Boolean logic. The search terms may be generated based on an interaction with the text generation modeling system. For instance, all or a portion of the query may be used to create a query evaluation prompt, which is completed by the text generation modeling system. The query evaluation prompt may instruct the model to identify one or more search terms based on the query and a set of fields associated with the database system. The resulting search terms may be used to search the database and identify the subset of the documents. Additional details regarding the identification of the subset of the documents are discussed with respect to the method1400shown inFIG.14. An answer to the query is determined at106by evaluating the text of the subset of the documents based on the query. In some embodiments, the answer to the query may be determined based at least in part on an interaction with a text generation modeling system. For instance, the identified documents may be used to create one or more prompts to the text generation modeling system. A prompt may include a portion of text from the identified documents and instructions based at least in part on the query. The text generation modeling system may complete the prompt, and the text generation interface system may determine an overall answer to the query based on the response or responses provided by the text generation modeling system. Additional details regarding the answering of a query based on an evaluation of the text of a subset of documents are discussed with respect to the method1500shown inFIG.15.

FIG.2illustrates a text generation system200, configured in accordance with one or more embodiments. The text generation system200includes client machines202through204in communication with a text generation interface system210, which in turn is in communication with a text generation modeling system270. The text generation modeling system270includes a communication interface272, a text generation API274, and a text generation model276. The text generation interface system210includes a communication interface212, a database system214, a testing module220, and an orchestrator230. The testing module220includes a query cache222, a test repository224, and a prompt testing utility226. The orchestrator230includes skills232through234, and prompt templates236through238. The orchestrator also includes a chunker240and a scheduler242. The orchestrator also includes API interfaces250, which include a model interface252, an external search interface254, an internal search interface256, and a chat interface258. According to various embodiments, a client machine may be any suitable computing device or system. For instance, a client machine may be a laptop computer, desktop computer, mobile computing device, or the like. Alternatively, or additionally, a client machine may be an interface through which multiple remote devices communicate with the text generation interface system210. According to various embodiments, a client machine may interact with the text generation interface system in any of various ways. 
For example, a client machine may access the text generation interface system via a text editor plugin, a dedicated application, a web browser, other types of interaction techniques, or combinations thereof. According to various embodiments, the text generation modeling system270may be configured to receive, process, and respond to requests via the communication interface272, which may be configured to facilitate communications via a network such as the internet. In some embodiments, some or all of the communication with the text generation modeling system270may be conducted in accordance with the text generation API274, which may provide remote access to the text generation model276. The text generation API274may provide functionality such as defining standardized message formatting, enforcing maximum input and/or output size for the text generation model, and/or tracking usage of the text generation model. According to various embodiments, the text generation model276may be a large language model. The text generation model276may be trained to predict successive words in a sentence. It may be capable of performing functions such as generating correspondence, summarizing text, and/or evaluating search results. The text generation model276may be pre-trained using many gigabytes of input text and may include billions or trillions of parameters. In some embodiments, large language models impose a tradeoff. A large language model increases in power with the number of parameters and the amount of training data used to train the model. However, as the model parameters and input data increase in magnitude, the model's training cost, storage requirements, and required computing resources increase as well. Accordingly, the large language model may be implemented as a general-purpose model configured to generate arbitrary text. The text generation interface system210may serve as an interface between the client machines and the text generation modeling system270to support the use of the text generation modeling system270for performing complex, domain-specific tasks in fields such as law. That is, the text generation interface system210may be configured to perform one or more methods described herein. According to various embodiments, the orchestrator230facilitates the implementation of one or more skills, such as the skills232through234. A skill may act as a collection of interfaces, prompts, actions, data, and/or metadata that collectively provide a type of functionality to the client machine. For instance, a skill may involve receiving information from a client machine, transmitting one or more requests to the text generation modeling system270, processing one or more responses received from the text generation modeling system270, performing one or more searches, and the like. Skills are also referred to herein as text generation flows. Additional details regarding specific skills are provided with reference toFIGS.8-10. In some embodiments, a skill may be associated with one or more prompts. For instance, the skill234is associated with the prompt templates236and238. A prompt template may include information such as instructions that may be provided to the text generation modeling system270. A prompt template may also include one or more fillable portions that may be filled based on information determined by the orchestrator230. For instance, a prompt template may be filled based on information received from a client machine, information returned by a search query, or another information source. 
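For illustration only, a prompt template with fillable portions might look like the following Python sketch; the template text and names are assumptions made for the example, not the templates236and238themselves.

    from string import Template

    # A template that a correspondence-drafting skill might use.
    CORRESPONDENCE_TEMPLATE = Template(
        "You are drafting professional correspondence.\n"
        "Recipient: $recipient\n"
        "Topic: $topic\n"
        "Draft an email covering the following points:\n$points"
    )

    def fill_template(template, **values):
        # The orchestrator fills the fillable portions with information
        # received from a client machine or returned by a search query.
        return template.substitute(**values)

    prompt = fill_template(
        CORRESPONDENCE_TEMPLATE,
        recipient="Opposing counsel",
        topic="Discovery schedule",
        points="- propose new deposition dates\n- confirm document production",
    )
    print(prompt)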
Additional details regarding prompt templates are provided with reference toFIGS.8-10. In some implementations, the chunker240is configured to divide text into smaller portions. Dividing text into smaller portions may be needed at least in part to comply with one or more size limitations associated with the text. For instance, the text generation API274may impose a maximum size limit on prompts provided to the text generation model276. The chunker may be used to subdivide text included in a request from a client, retrieved from a document, returned in a search result, or received from any other source. According to various embodiments, the API interfaces250include one or more APIs for interacting with internal and/or external services. The model interface252may expose one or more functions for communicating with the text generation modeling system270. For example, the model interface252may provide access to functions such as transmitting requests to the text generation modeling system270, receiving responses from the text generation modeling system270, and the like. In some embodiments, the external search interface254may be used to search one or more external data sources such as information repositories that are generalizable to multiple parties. For instance, the external search interface254may expose an interface for searching legal case law and secondary sources. In some implementations, the internal search interface256may facilitate the searching of private documents. For instance, a client may upload or provide access to a set of private documents, which may then be indexed by the text generation interface system210. According to various embodiments, the chat interface258may facilitate text-based communication with the client machines. For instance, the chat interface258may support operations such as parsing chat messages, formulating responses to chat messages, identifying skills based on chat messages, and the like. In some configurations, the chat interface258may orchestrate text-based chat communication between a user at a client machine and the text generation model276, for instance via web sockets. In some embodiments, the query cache222may store queries such as testing queries sent to the text generation modeling system270. Then, the query cache222may be instructed to return a predetermined result to a query that has already been sent to the text generation modeling system270rather than sending the same query again. In some embodiments, the prompt testing utility226is configured to perform operations such as testing prompts created based on prompt templates against tests stored in the test repository224. In some embodiments, the communication interface212is configured to facilitate communications with the client machines and/or the text generation modeling system270via a network such as the internet. The scheduler242may be responsible for scheduling one or more tasks performed by the text generation interface system210. For instance, the scheduler may schedule requests for transmission to the text generation modeling system270. In some embodiments, the database system214is configured to store information determined based on natural language. For example, the database system214may be configured to store one or more database tables that include fields corresponding with information extracted from natural language documents. 
As another example, the database system214may be configured to store metadata information about documents based on information extracted from those documents. As yet another example, the database system214may be configured to store linkages between documents and document portions. According to various embodiments, the database system214may be configured using any of a variety of suitable database technologies. For instance, the database system214may be configured as a relational database system, a non-relational database system, or any other type of database system capable of supporting the storage and querying of information described herein. Additional details regarding the creation, updating, and querying of database tables associated with the database system214are discussed with respect to the methods shown inFIG.13,FIG.14, andFIG.15.

FIG.3illustrates a document parsing method300, performed in accordance with one or more embodiments. According to various embodiments, the method300may be performed on any suitable computing system. For instance, the method300may be performed on the text generation interface system210shown inFIG.2. The method300may be performed in order to convert a document into usable text while at the same time retaining metadata information about the text, such as the page, section, and/or document at which the text was located. A request to parse a document is received at302. In some embodiments, the request to parse a document may be generated when a document is identified for analysis. For example, as discussed herein, a document may be uploaded or identified by a client machine as part of communication with the text generation interface system210. As another example, a document may be returned as part of a search result. The document is converted to portable document format (PDF) or another suitable document format at304. In some embodiments, the document need only be converted to PDF if the document is not already in the PDF format. Alternatively, PDF conversion may be performed even on PDFs to ensure that PDFs are properly formatted. PDF conversion may be performed, for instance, by a suitable Python library or the like. For example, PDF conversion may be performed with the Hyland library. Multipage pages are split into individual pages at306. In some implementations, multipage pages may be split into individual pages via a machine learning model. The machine learning model may be trained to group together portions of text on a multipage page. For instance, a caption page in a legal decision may include text in a column on the left that encompasses the parties, text in a column on the right that includes the case number, a title that follows lower on the page, and line numbering on the left. In such a configuration, the machine learning model may be trained to treat separately the text in the different columns, and to separate the text from the line numbers. The document title may be identified as a first page, with the left column identified as the second page and the right column identified as the third page. Optical character recognition is performed on individual pages or on the document as a whole at308. In some implementations, optical character recognition may be performed locally via a library. Alternatively, optical character recognition may be performed by an external service. For instance, documents or pages may be sent to a service such as Google Vision. 
Performing optical character recognition on individual pages may provide for increased throughput via parallelization. Individual pages are combined in order at310. In some implementations, combining pages in order may be needed if optical character recognition was applied to individual pages rather than to the document as a whole. Inappropriate text splits are identified and corrected at312. In some embodiments, inappropriate text splits include instances where a paragraph, sentence, word, or other textual unit was split across different pages. Such instances may be identified by, for example, determining whether the first textual unit in a page represents a new paragraph, sentence, word, or other unit, or if instead it represents the continuation of a textual unit from the previous page. When such a split is identified, the continuation of the textual unit may be excised from the page on which it is located and moved to the end of the previous page. Such an operation may be performed by, for instance, the Poppler library available in Python. Segmented JSON text is determined at314. In some embodiments, the segmented JSON text may include the text returned by the optical character recognition performed at operation308. In addition, the segmented JSON text may include additional information, such as one or more identifiers for the page, section, and/or document on which the text resides. The output of the segmented JSON may be further processed, for instance via the text sharding method500shown inFIG.5and/or the text chunking method600shown inFIG.6.

FIG.4illustrates a text generation method400, performed in accordance with one or more embodiments. According to various embodiments, the method400may be performed on any suitable computing system. For instance, the method400may be performed on the text generation interface system210shown inFIG.2. The method400may be performed in order to identify and implement a text generation flow based on input text. A request from a client machine to generate a novel text portion is received at402. In some embodiments, the request may include a query portion. The query portion may include natural language text, one or more instructions in a query language, user input in some other format, or some combination thereof. For instance, the query portion may include an instruction to “write an email”, “summarize documents”, or “research case law”. In some embodiments, the request may include an input text portion. For example, the request may link to, upload, or otherwise identify documents. As another example, the request may characterize the task to be completed. For instance, the request may discuss the content of the desired email or other correspondence. The particular types of input text included in the request may depend in significant part on the type of request. Accordingly, many variations are possible. A text generation flow is determined at404. In some embodiments, the text generation flow may be explicitly indicated as part of the request received from the client machine. For instance, the client machine may select a particular text generation flow from a list. Alternatively, the text generation flow may be determined at least in part by analyzing the request received from the client machine. For example, the request may be analyzed to search for keywords or other indications that a particular text generation flow is desired. 
As another example, all or a portion of the request may be provided to a machine learning model to predict the requested text generation flow. In some configurations, a predicted text generation flow may be provided to the client machine for confirmation before proceeding. Input text is determined at406. In some embodiments, the input text may be determined by applying one or more text processing, search, or other operations based on the request received from the client machine. For example, the input text may be determined at least in part by retrieving one or more documents identified in or included with the request received from the client machine. As another example, the input text may be determined at least in part by applying one or more natural language processing techniques such as cleaning or tokenizing raw text. In some embodiments, determining input text may involve executing a search query. For example, a search of a database, set of documents, or other data source may be executed based at least in part on one or more search parameters determined based on a request received from a client machine. For instance, the request may identify one or more search terms and a set of documents to be searched using the one or more search terms. In some embodiments, determining input text may involve processing responses received from a text generation modeling system. For instance, all or a portion of the results from an initial request to summarize a set of text portions may then be used to create a new set of more compressed input text, which may then be provided to the text generation modeling system for further summarization or other processing. One or more prompt templates are determined at408based on the input text and the text generation flow. As discussed with respect toFIG.2, different text generation flows may be associated with different prompt templates. Prompt templates may be selected from the prompt library based on the particular text generation flow. Additional details regarding the content of particular prompt templates are discussed with respect to the text generation flows illustrated inFIGS.8-10. At410, one or more prompts based on the prompt templates are determined. In some embodiments, a prompt may be determined by supplementing and/or modifying a prompt template based on the input text. For instance, a portion of input text may be added to a prompt template at an appropriate location. As one example, a prompt template may include a set of instructions for causing a large language model to generate a correspondence document. The prompt template may be modified to determine a prompt by adding a portion of input text that characterizes the nature of the correspondence document to be generated. The added input text may identify information such as the correspondence recipient, source, topic, and discussion points. The one or more prompts are transmitted to a text generation modeling system at412. In some embodiments, the text generation modeling system may be implemented at a remote computing system. The text generation modeling system may be configured to implement a text generation model. The text generation modeling system may expose an application programming interface via a communication interface accessible via a network such as the internet. One or more text response messages are received from the remote computing system at414. 
According to various embodiments, the one or more text response messages include one or more novel text portions generated by a text generation model implemented at the remote computing system. The novel text portions may be generated based at least in part on the prompt received at the text generation modeling system, including the instructions and the input text. The one or more responses are parsed at416to produce a parsed response. In some embodiments, parsing the one or more responses may involve performing various types of processing operations. For example, in some systems a large language model may be configured to complete a prompt. Hence, a response message received from the large language model may include the instructions and/or the input text. Accordingly, the response message may be parsed to remove the instructions and/or the input text. In some implementations, parsing the one or more responses may involve combining text from different responses. For instance, a document may be divided into a number of portions, each of which is summarized by the large language model. The resulting summaries may then be combined to produce an overall summary of the document. A determination is made at418as to whether to provide a response to the client machine. In some embodiments, the determination made at418may depend on the process flow. For example, in some process flows, additional user input may be solicited by providing a response message determined based at least in part on one or more responses received from the text generation modeling system. As another example, in some process flows, a parsed response message may be used to produce an output message provided to the client machine. If a response is to be provided to the client machine, then a client response message including a novel text passage is transmitted to the client machine at420. In some embodiments, the client response message may be determined based in part on the text generation flow determined at404and in part based on the one or more text response messages received at414and parsed at416. Additional details regarding the generation of a novel text passage are discussed with respect to the text generation flows illustrated inFIGS.8-10. A determination is made at422as to whether to generate an additional prompt. According to various embodiments, the determination as to whether to generate an additional prompt may be made based in part on the text generation flow determined at404and in part based on the one or more text response messages received at414and parsed at416. As a simple example, a text generation flow may involve an initial set of prompts to summarize a set of portions, and then another round of interaction with the text generation modeling system to produce a more compressed summary. Additional details regarding the generation of a novel text passage are discussed with respect to the text generation flows illustrated inFIGS.8-10. According to various embodiments, the operations shown inFIG.4may be performed in an order different from that shown. Alternatively, or additionally, one or more operations may be omitted, and/or other operations may be performed. For example, a text generation flow may involve one or more search queries executed outside the context of the text generation modeling system. As another example, a text generation flow may involve one or more processes for editing, cleaning, or otherwise altering text in a manner not discussed with respect toFIG.4. Various operations are possible.
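By way of illustration, a minimal Python sketch of the core loop of the method400is provided below. The helper names, the keyword-based flow selection, and the injected fetch_text and call_model callables are hypothetical stand-ins for the operations described above and are not prescribed by the method400.

    from dataclasses import dataclass

    @dataclass
    class Request:
        query: str           # e.g., "summarize documents" (operation 402)
        document_ids: list   # documents linked to or uploaded with the request

    # Hypothetical prompt library keyed by text generation flow (operation 408).
    PROMPT_TEMPLATES = {
        "summarize": "Summarize the following text:\n{input_text}",
        "correspondence": "Draft a letter covering these points:\n{input_text}",
    }

    def determine_flow(request):
        """Operation 404: select a flow, here via simple keyword matching."""
        return "summarize" if "summarize" in request.query.lower() else "correspondence"

    def handle_request(request, fetch_text, call_model):
        flow = determine_flow(request)
        input_text = "\n".join(fetch_text(d) for d in request.document_ids)  # operation 406
        prompt = PROMPT_TEMPLATES[flow].format(input_text=input_text)        # operation 410
        raw = call_model(prompt)                                             # operations 412-414
        # Operation 416: a completion-style model may echo the prompt, so strip it.
        return raw[len(prompt):] if raw.startswith(prompt) else raw

In this sketch, the round trip to the text generation modeling system is abstracted behind the call_model callable, mirroring the separation between the text generation interface system and the remote modeling system described above.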
FIG.5illustrates a method500of sharding text, performed in accordance with one or more embodiments. According to various embodiments, the method500may be performed on any suitable computing system. For instance, the method500may be performed on the text generation interface system230shown inFIG.2. The method500may be performed in order to divide a body of text into potentially smaller units that fall beneath a designated size threshold, such as a size threshold imposed by an interface providing access to a large language model. For instance, a text generation modeling system implementing a large language model may specify a size threshold in terms of a number of tokens (e.g., words). As one example of such a threshold, a text generation modeling system may impose a limit of 8,193 tokens per query. In particular embodiments, a size threshold may be adjusted based on considerations apart from a threshold imposed by an external text generation modeling system. For instance, a text generation interface system may formulate a prompt that includes input text as well as metadata such as one or more instructions for a large language model. In addition, the output of the large language model may be included in the threshold. If the external text generation modeling system imposes a threshold (e.g., 8,193 tokens), the text generation interface system230may need to impose a somewhat lower threshold when dividing input text in order to account for the metadata included in the prompt and/or the response provided by the large language model. A request to divide text into one or more portions is received at502. According to various embodiments, the request may be received as part of the implementation of one or more of the workflows shown herein, for instance in the methods shown inFIGS.8-10. The request may identify a body of text. The body of text may include one or more documents, search queries, instruction sets, search results, and/or any other suitable text. In some configurations, a collection of text elements may be received. For instance, a search query and a set of documents returned by the search query may be included in the text. In some implementations, text may be pre-divided into a number of different portions. Examples of divisions of text into portions may include, but are not limited to: lists of documents, documents, document sections, document pages, document paragraphs, and document sentences. Alternatively, or additionally, text may be divided into portions upon receipt at the text generation interface system230. For instance, text may be divided into a set of portions via a text chunker, document parser, or other natural language processing tool. A maximum text chunk size is identified at504. In some embodiments, the maximum text chunk size may be identified based on one or more configuration parameters. In some configurations, the maximum text chunk size may be imposed by the text generation interface system230. Alternatively, or additionally, a size threshold may be imposed by an interface providing access to a large language model. As one example, a maximum text chunk size may be 100 kilobytes of text, 1 megabyte of text, 10 megabytes of text, or any other suitable chunk size. A portion of the text is selected at506. In some embodiments, as discussed herein, text may be pre-divided into text portions. Alternatively, or additionally, text may be divided into text portions as part of, or prior to, the operation of the method500.
As still another possibility, text may not be divided into portions. In such a configuration, the initial portion of text that is selected may be the entirety of the text. Then, the identification of one or more updated text portions at512may result in the division of the text into one or more portions as part of the operation of the method500. A determination is made at508as to whether the length of the selected text portion exceeds the maximum text chunk size. In some embodiments, the determination may be made by computing a length associated with the selected text portion and then comparing it with the maximum text chunk size. The calculation of the length associated with the selected text portion may be performed in different ways, depending on how the maximum text chunk size is specified. For instance, the maximum text chunk size may be specified as a memory size (e.g., in kilobytes or megabytes), as a number of words, or in some other fashion. If it is determined that the length of the selected text portion exceeds the maximum text chunk size, then at510one or more domain-specific text chunking constraints are identified. In some embodiments, domain-specific text chunking constraints may be identified based on one or more pre-determined configuration parameters. For example, one domain-specific text chunking constraint may discourage division of a question and answer in a deposition transcript or other question/answer context. As another example, a domain-specific text chunking constraint may discourage splitting of a contract clause. As yet another example, a domain-specific text chunking constraint may discourage splitting of a minority and majority opinion in a legal opinion. An updated text portion that does not exceed the maximum text chunk size is identified at512. In some embodiments, the updated text portion may be determined by applying a more granular division of the text portion into smaller portions. For example, a document may be divided into sections, pages, or paragraphs. As another example, a document page or section may be divided into paragraphs. As another example, a paragraph may be divided into sentences. As still another example, a sentence may be divided into words. In particular embodiments, the updated text portion may be the sequentially first portion of the selected text portion that falls below the maximum text chunk size threshold identified at operation504. The text portion is assigned to a text chunk at514. In some embodiments, the text may be associated with a sequence of text chunks. The text portions selected at506and identified at512may be assigned to these text chunks, for instance in a sequential order. That is, text portions near to one another in the text itself may be assigned to the same text chunk where possible to reduce the number of divisions between semantically similar elements of the text. In particular embodiments, some attention may be paid to text divisions such as document, document section, paragraph, and/or sentence borders when assigning text portions to chunks. For instance, text portions belonging to the same document, document section, paragraph, and/or sentence may be grouped together when possible to ensure semantic continuity. In particular embodiments, the method500may be performed in conjunction with the method600shown inFIG.6. In such a configuration, operation514may be omitted.
Alternatively, the assignment of text portions into text chunks in operation514may be treated as provisional, subject to subsequent adjustment via the method600shown inFIG.6. In some implementations, the identification of an updated text portion may result in the creation of two or more new text portions as a consequence of the division. In this case, the updated text portion may be assigned to a text chunk at514, while the remainder portion or portions may be reserved for later selection at506. Alternatively, or additionally, if two or more of the text portions resulting from the division at512each fall below the maximum text chunk size, then each of these may be assigned to a text chunk or chunks at operation514. A determination is made at516as to whether to select an additional portion of the text. According to various embodiments, additional portions of the text may continue to be selected as long as additional portions are available, or until some other triggering condition is met. For example, the system may impose a maximum amount of text for a particular interaction. As another example, the amount of text may exceed a designated threshold, such as a cost threshold. FIG.6illustrates a text chunk determination method600, performed in accordance with one or more embodiments. According to various embodiments, the method600may be performed on any suitable computing system. For instance, the method600may be performed on the text generation interface system230shown inFIG.2. The method600may be performed in order to assign a set of text portions into text chunks. In some embodiments, the method600may be used to compress text portions into text chunks of smaller size. For instance, the method600may receive as an input a set of text portions divided into text chunks of highly variable sizes, and then produce as an output a division of the same text portions into the same number of text chunks, but with the maximum text chunk size being lower due to more even distribution of text portions across text chunks. A request is received at602to divide a set of text portions into one or more chunks. In some embodiments, the request may be automatically generated, for instance upon completion of the method500shown inFIG.5. The request may identify, for instance, a set of text portions to divide into text chunks. An initial maximum text chunk size is identified at604. In some embodiments, the initial maximum text chunk size may be identified in a manner similar to that for operation504shown inFIG.5. A text portion is selected for processing at606. In some embodiments, text portions may be selected sequentially. Sequential or nearly sequential ordering may ensure that semantically contiguous or similar text portions are often included within the same text chunk. A determination is made at608as to whether the text portion fits into the last text chunk. In some embodiments, text portions may be processed via the method500shown inFIG.5to ensure that each text portion is smaller than the maximum chunk size. However, a text chunk may already include one or more text portions added to the text chunk in a previous iteration. In the event that the text portion fits into the last text chunk, the text portion is inserted into the last text chunk at610. If instead the text portion is the first to be processed, or the text portion does not fit into the last text chunk, then the text portion is inserted into a new text chunk at612.
The new chunk may be created with a maximum size in accordance with the maximum text chunk size, which may be the initial maximum text chunk size upon the first iteration or the reduced maximum text chunk size upon subsequent iterations. A determination is made at614as to whether to select an additional text portion for processing. In some embodiments, additional text portions may be selected until all text portions have been added to a respective text chunk. A determination is made at616as to whether the number of text chunks has increased relative to the number produced under the previous maximum text chunk size. If the number of text chunks increases, then a reduced maximum text chunk size is determined at618, and the text portions are again assigned into chunks in operations606through614. According to various embodiments, for the first iteration, the number of chunks will not have increased because there was no previous assignment of text portions into text chunks. However, for the second and subsequent iterations, reducing the maximum text chunk size at618may cause the number of text chunks needed to hold the text portions to increase because the reduced maximum text chunk size may cause a text portion to no longer fit in a chunk and instead to spill over to the next chunk. In some embodiments, the first increase of the number of text chunks may cause the termination of the method at operation620. Alternatively, a different terminating criterion may be met. For instance, an increase in the number of text chunks may be compared with the reduction in text chunk size to produce a ratio, and additional reductions in text chunk size may continue to be imposed so long as the ratio falls below a designated threshold. In some embodiments, the reduced text chunk size may be determined at618in any of various ways. For example, the text chunk size may be reduced by a designated amount (e.g., 10 words, 5 kilobytes, etc.). As another example, the text chunk size may be reduced by a designated percentage (e.g., 1%, 5%, etc.). When it is determined that the number of text chunks has unacceptably increased, then at620the previous maximum text chunk size and assignment of text portions into chunks are returned. In this way, the number of text chunks may be limited while at the same time dividing text portions more equally into text chunks. The number of text chunks may be strictly capped at the input value, or may be allowed to increase to some degree if a sufficiently improved division of text portions into text chunks is achieved. FIG.7illustrates one example of a computing device700, configured in accordance with one or more embodiments. According to various embodiments, a system700suitable for implementing embodiments described herein includes a processor701, a memory module703, a storage device705, an interface711, and a bus715(e.g., a PCI bus or other interconnection fabric.) System700may operate as a variety of devices such as an application server, a database server, or any other device or service described herein. Although a particular configuration is described, a variety of alternative configurations are possible. The processor701may perform operations such as those described herein. Instructions for performing such operations may be embodied in the memory703, on one or more non-transitory computer readable media, or on some other storage device. Various specially configured devices can also be used in place of or in addition to the processor701. The interface711may be configured to send and receive data packets over a network.
Examples of supported interfaces include, but are not limited to: Ethernet, fast Ethernet, Gigabit Ethernet, frame relay, cable, digital subscriber line (DSL), token ring, Asynchronous Transfer Mode (ATM), High-Speed Serial Interface (HSSI), and Fiber Distributed Data Interface (FDDI). These interfaces may include ports appropriate for communication with the appropriate media. They may also include an independent processor and/or volatile RAM. A computer system or computing device may include or communicate with a monitor, printer, or other suitable display for providing any of the results mentioned herein to a user. FIG.8illustrates an example of a method800for conducting a chat session, performed in accordance with one or more embodiments. The method800may be performed at the text generation system200in order to provide one or more responses to one or more chat messages provided by a client machine. For instance, the method800may be performed at the text generation interface system210to provide novel text to the client machine202based on interactions with the text generation modeling system270. User input is received at802. In some embodiments, the user input may be received via a chat interface such as iMessage, Google Chat, or SMS. Alternatively, or additionally, user input may be provided via a different mechanism, such as an uploaded file. The user input is used to generate a chat input message804, which is sent to the text generation interface system210. In some implementations, the chat input message804may be received by the text generation interface system210via a web socket. At806, the text generation interface system210determines a chat prompt808based on the chat input message804. The chat prompt808may include one or more instructions for implementation by the text generation modeling system270. Additionally, the chat prompt808includes a chat message810determined based on the chat input message804. In some implementations, determining the chat prompt808may involve processing the chat input message804. In some embodiments, as discussed with respect to the methods500and600shown inFIG.5andFIG.6, the chat input message804may be processed via text sharding and/or chunking to divide the text into manageable portions. Portions may then be included in the same or separate chat prompts depending on chunk size. For instance, text may be inserted into a template via a tool such as Jinja2. The chat prompt808is then sent to the text generation modeling system270via a chat prompt message812. The text generation modeling system270generates a raw chat response at814, which is then sent back to the text generation interface system210via a chat response message at816. The chat response message is parsed at818to produce a parsed chat response at820. In some embodiments, the chat response message received at816may include ancillary information such as all or a portion of the chat prompt message sent at812. Accordingly, parsing the chat response message may involve performing operations such as separating the newly generated chat response from the ancillary information included in the chat response message. For example, the response generated by the model may include information such as the name of a chat bot, which may be removed during parsing by techniques such as pattern matching. The parsed chat response820is provided to the client machine via the chat output message at822. The parsed chat response message is then presented via user output at824. 
According to various embodiments, the user output may be presented via a chat interface, via a file, or in some other suitable format. In some implementations, the chat interaction may continue with successive iterations of the operations and elements shown at802-824inFIG.8. In order to maintain semantic and logical continuity, all or a portion of previous interactions may be included in successive chat prompts sent to the text generation modeling system270. For instance, at the next iteration, the chat prompt message sent to the text generation modeling system may include all or a portion of the initial user input, the parsed chat message determined based on the response generated by the text generation modeling system270, and/or all or a portion of subsequent user input generated by the client machine in response to receiving the parsed chat message. In some embodiments, the text generation modeling system270may be configured such that the entire state of the text generation model needs to fit in a prompt smaller than a designated threshold. In such a configuration, when the chat history grows too long to include the entire history in a single prompt, then the most recent history may be included in subsequent chat prompts. According to various embodiments, the method800may be performed in such a way as to facilitate more complex text analysis tasks. Examples of such complex text analysis tasks may include, but are not limited to, identifying recommended skills, generating correspondence, and revising correspondence. These tasks are discussed in more detail below. In some embodiments, determining the chat prompt at806may involve selecting a chat prompt template configured to instruct the text generation modeling system270to suggest one or more skills. The text generation modeling system270may indicate the recommended skill or skills via natural language text and/or via one or more skill codes. Then, parsing the chat message at818may involve searching the chat response message816for the natural language text and/or the one or more skill codes. Skill codes identified in this way may be used to influence the generation of the chat output message sent at822. For example, the chat output message sent at822may include instructions for generating one or more user interface elements such as buttons or lists allowing a user to select the recommended skill or skills. As another example, the chat output message sent at822may include text generated by the text generation interface system210that identifies the recommended skill or skills. In some embodiments, implementing the text generation flow800shown inFIG.8may involve determining whether a more complex skill or skills need to be invoked. For instance, straightforward questions from the client machine202may be resolvable via a single back-and-forth interaction with the text generation modeling system270. However, more complex questions may involve deeper interactions, as discussed with respect toFIGS.9-11. Determining whether a more complex skill or skills need to be invoked may involve, for instance, querying the text generation modeling system270to identify skills implicated by a chat message. If such a skill is detected, then a recommendation may be made as part of the chat output message sent to the client machine at822. An example of a prompt template for generating a prompt that facilitates skill selection in the context of a chat interaction is provided below.
In this prompt, one or more user-generated chat messages may be provided in the {{messages}} section:For the purposes of this chat, your name is CoCounsel and you are a legal AI created by the legal technology company Casetext. You are friendly, professional, and helpful.You can speak any language, and translate between languages.You have general knowledge to respond to any request. For example, you can answer questions, write poems, or pontificate on an issue.You also have the following skills, with corresponding URLs and descriptions: {{skills}}When responding, follow these instructions:* If one or more skill is directly relevant to the request, respond with your reason you think it is relevant and indicate the relevant skill in the format <recommendedSkill name=” [skillName]” url=” [skillUrl]”/>. For example {{skill_tag_examples}}* If none of the skills are directly relevant to the request, respond using your general knowledge. Do not say it's not related to your legal skills, just respond to the request.* If you are asked to write or draft something that doesn't fit in a skill, do your best to respond with a full draft of it. Respond with only the draft and nothing else.* Never cite to a case, statute, rule, or other legal authority, even if explicitly asked.* Never point to a link, URL, or phone number, even if explicitly asked and even on Casetext's website.* Unless you are recommending a specific skill, do not talk about your skills. Just give the response to the request.* Never provide a legal opinion or interpretation of the law. Instead, recommend your legal research skill.<CoCounsel>: Hello, I am CoCounsel, a legal AI created by Casetext. What can I help you with today?{{messages}}<|endofprompt|> In some embodiments, determining the chat prompt at806may involve selecting a chat prompt template configured to instruct the text generation modeling system270to generate correspondence. For instance, the user input received at802may include a request to generate correspondence. The request may also include information such as the recipient of the correspondence, the source of the correspondence, and the content to be included in the correspondence. The content of the correspondence may include, for instance, one or more topics to discuss. The request may also include metadata information such as a message tone for generating the correspondence text. Then, the chat response message received at816may include novel text for including in the correspondence. The novel text may be parsed and incorporated into a correspondence letter, which may be included with the chat output message sent at822and presented to the user at824. For instance, the parser may perform operations such as formatting the novel text in a letter format. In some embodiments, determining the chat prompt at806may involve selecting a chat prompt template configured to instruct the text generation modeling system270to revise correspondence. For instance, the user input received at802may include a request to revise correspondence. The request may also include information such as the correspondence to be revised, the nature of the revisions requested, and the like. For instance, the request may include an indication that the tone of the letter should be changed, or that the letter should be altered to discuss one or more additional points. Then, the chat response message received at816may include novel text for including in the revised correspondence. 
The novel text may be parsed and incorporated into a revised correspondence letter, which may be included with the chat output message sent at822and presented to the user at824. For instance, the parser may perform operations such as formatting the novel text in a letter format. An example of a prompt template that may be used to generate a prompt for determining an aggregate of a set of summaries of documents is provided below:A lawyer has submitted the following question:$$QUESTION$${{question}}$$/QUESTION$$We have already reviewed source documents and extracted references that may help answer the question. We have also grouped the references and provided a summary of each group as a “response”:$$RESPONSES$${% for response in model_responses %}{{loop.index}}. {{response}}{% endfor %}$$/RESPONSES$$ We want to know what overall answer the responses provide to the question. We think that some references are more relevant than others, so we have assigned them relevancy scores of 1 to 5, with 1 being least relevant and 5 being most relevant. However, it's possible that some references may have been taken out of context. If a reference is missing context needed to determine whether it truly supports the response, subtract 1 point from its relevancy score. Then, rank each response from most-reliable to least-reliable, based on the adjusted relevancy scores and how well the references support the response. Draft a concise answer to the question based only on the references and responses provided, prioritizing responses that you determined to be more reliable.* If the most-reliable response completely answers the question, use its verbatim text as your answer and don't mention any other responses.* Answer only the question asked and do not include any extraneous information.* Don't let the lawyer know that we are using responses, references, or relevancy scores; instead, phrase the answer as if it is based on your own personal knowledge.* Assume that all the information provided is true, even if you know otherwise. If none of the responses seem relevant to the question, just say “The documents provided do not fully answer this question; however, the following results may be relevant.” and nothing else.<|endofprompt|> Here's the answer and nothing else: FIG.9illustrates an example of a method900for summarizing one or more documents, performed in accordance with one or more embodiments. The method900may be performed at the text generation system200in order to summarize one or more documents provided or identified by a client machine. In some configurations, the method900may be performed to summarize one or more documents returned by a search query. One or more documents are received at902. In some embodiments, a document may be uploaded by the client machine. Alternatively, a document may be identified by the client machine, for instance via a link. As still another possibility, a document may be returned in a search result responsive to a query provided by a client machine. A single summary request may include documents identified and provided in various ways. In some embodiments, the one or more documents may be received along with user input. The user input may be received via a chat interface such as iMessage, Google Chat, or SMS. Alternatively, or additionally, user input may be provided via a different mechanism, such as an uploaded file. The user input may be used to generate a summary input message904, which is sent to the text generation interface system210.
In some implementations, the summary input message904may be received by the text generation interface system210via a web socket. Alternatively, a different form of communication may be used, for instance an asynchronous mode of communication. At906, the text generation interface system210determines one or more summarize prompts908based on the summary request message904. In some embodiments, the determination of the summarize prompt may involve processing one or more input documents via the chunker. As discussed herein, for instance with respect to the methods500and600shown inFIG.5andFIG.6, the chunker may perform one or more operations such as pre-processing, sharding, and/or chunking the documents into manageable text. Then, each chunk may be used to create a respective summarize prompt for summarizing the text in the chunk. For instance, text may be inserted into a template via a tool such as Jinja2. The one or more summarize prompts908may include one or more instructions for implementation by the text generation modeling system270. Additionally, the one or more summarize prompts each includes a respective text chunk910determined based on the summary request message904. The one or more summarize prompts908are then sent to the text generation modeling system270via one or more summarize prompt messages912. The text generation modeling system270generates one or more raw summaries at914, which are then sent back to the text generation interface system210via one or more summarize response messages at916. The one or more summarize response messages are parsed at918to produce one or more parsed summary responses at920. In some embodiments, the one or more summary response messages received at916may include ancillary information such as all or a portion of the summarize prompt messages sent at912. Accordingly, parsing the summarize response messages may involve performing operations such as separating the newly generated summaries from the ancillary information included in the one or more summarize response messages. An example of a prompt template used to instruct a text generation system to summarize a text is shown below:You are a highly sophisticated legal AI. A lawyer has submitted questions that need answers.Below is a portion of a longer document that may be responsive to the questions:$$DOCUMENT$${%- for page in page_list -%}$$PAGE {{page["page"]}}$${{page["text"]}}$$/PAGE$${%- endfor -%}$$/DOCUMENT$$ We would like you to perform two tasks that will help the lawyer answer the questions. Each task should be performed completely independently, so that the lawyer can compare the results. Extractive Task The purpose of this task is not to answer the questions, but to find any passages in the document that will help the lawyer answer them. For each question, perform the following steps:1. Extract verbatim as many passages from the document (sentences, sentence fragments, or phrases) as possible that could be useful in answering the question. There is no limit on the number of passages you can extract, so more is better. Don't worry if the passages are repetitive; we need every single one you can find. If the question asks for a list of things or the number of times something occurred, include a passage for every instance that appears in the document.
2. If you extracted any passages, assign each one a score from 1 to 5, representing how the passage relates to the question:5 (complete answer)4 (one piece of a multipart answer)3 (relevant definition or fact)2 (useful context)1 (marginally related) Abstractive Task The purpose of this task is to compose an answer to each question. Follow these instructions:Base the answer only on the information contained in the document, and no extraneous information. If a direct answer cannot be derived explicitly from the document, do not answer.Answer completely, fully, and precisely.Interpret each question as asking to provide a comprehensive list of every item instead of only a few examples or notable instances. Never summarize or omit information from the document unless the question explicitly asks for that.Answer based on the full text, not just a portion of it.For each and every question, include verbatim quotes from the text (in quotation marks) in the answer. If the quote is altered in any way from the original text, use ellipsis, brackets, or [sic] for minor typos.Be exact in your answer. Check every letter.There is no limit on the length of your answer, and more is better. Compose a full answer to each question; even if the answer is also contained in a response to another question, still include it in each answer. Here are the questions:$$QUESTIONS$${{question_str}}$$/QUESTIONS$$ Return your responses as a well-formed JSON array of objects, with each object having keys of:* ‘id’ (string) The three-digit ID associated with the Question* ‘passages’ (array) a JSON array of the verbatim passages you extracted, or else an empty array. Format each item as a JSON object with keys of:** ‘passage’ (string)** ‘score’ (int) the relevancy score you assigned the passage** ‘page’ (int) the number assigned to the page in which the snippet appears* ‘answer’ (string) the answer you drafted, or else “N/A”Escape any internal quotation marks or newlines using \″ or \n[{“id”: <id>, “passages”: [{“passage”: <passage>, “score”: <score>, “page”: <page>}, . . . ]|[ ], “answer”: <text>|“N/A”}, . . . ]Only valid JSON; check to make sure it parses, and that quotes within quotes are escaped or turned to single quotes, and don't forget the ‘,’ delimiters.<|endofprompt|> Here is the JSON array and nothing else: According to various embodiments, the one or more parsed summary responses920may be processed in any of various ways. In some embodiments, the one or more parsed summary response messages920may be concatenated into a summary and provided to the client machine via a summary message922. The summary may then be presented as output on the client machine at924. Presenting the summary as output may involve, for instance, presenting the summary in a user interface, outputting the summary via a chat interface, and/or storing the summary in a file. In some embodiments, the one or more parsed summary responses920may be used as input to generate a consolidated summary. For example, a consolidated summary may be generated if the aggregate size of the parsed summary responses920exceeds or falls below a designated threshold. As another example, a consolidated summary may be generated if the client machine provides an instruction to generate a consolidated summary, for instance after receiving the summary message at922. In some embodiments, generating a consolidated summary may involve determining a consolidation prompt at926.
The consolidation prompt may be determined by concatenating the parsed summary responses at920and including the concatenation result in a consolidation prompt template. In the event that the concatenated parsed summary responses are too long for a single chunk, then more than one consolidation prompt may be generated, for instance by dividing the parsed summary responses920across different consolidation prompts. In some implementations, one or more consolidation prompt messages including the one or more consolidation prompts are sent to the text generation modeling system270at928. The text generation modeling system270then generates a raw consolidation of the parsed summary responses920and provides the novel text generated as a result via one or more consolidation response messages sent at932. According to various embodiments, the one or more consolidation response messages are parsed at934. For instance, if the one or more consolidation response messages include two or more consolidation response messages, each of the different messages may be separately parsed, and the parsed results concatenated to produce a consolidated summary. The consolidated summary is provided to the client machine at936via a consolidation message. The client machine may then present the consolidated summary as consolidation output at938. In the event that further consolidation is required, operations926-934may be repeated. FIG.10illustrates an example of a method1000for generating a timeline, performed in accordance with one or more embodiments. The method1000may be performed at the text generation system200in order to generate an event timeline based on one or more documents provided or identified by a client machine. In some configurations, the method1000may be performed to generate a timeline based on one or more documents returned by a search query. One or more documents are received at1002. In some embodiments, a document may be uploaded by the client machine. Alternatively, a document may be identified by the client machine, for instance via a link. As still another possibility, a document may be returned in a search result responsive to a query provided by a client machine. A single timeline generation request may include documents identified and provided in various ways. In some embodiments, the one or more documents may be received along with user input. The user input may be received via a chat interface such as iMessage, Google Chat, or SMS. Alternatively, or additionally, user input may be provided via a different mechanism, such as an uploaded file. The user input may be used to generate a timeline generation request message1004, which is sent to the text generation interface system210. In some implementations, the timeline generation request message1004may be received by the text generation interface system210via a web socket. Alternatively, a different form of communication may be used, for instance an asynchronous mode of communication. At1006, the text generation interface system210determines one or more timeline generation prompts1008based on the timeline generation request message1004. In some embodiments, the determination of the one or more timeline prompts may involve processing one or more input documents via the chunker. As discussed herein, for instance with respect to the methods500and600shown inFIG.5andFIG.6, the chunker may perform one or more operations such as pre-processing, sharding, and/or chunking the documents into manageable text.
Then, each chunk may be used to create a respective timeline generation prompt for identifying events described in the text in the chunk. For instance, text may be inserted into a template via a tool such as Jinja2. The one or more timeline generation prompts1008may include one or more instructions for implementation by the text generation modeling system270. Additionally, the one or more timeline generation prompts each includes a respective text chunk1010determined based on the timeline generation request message received at1004. The one or more timeline generation prompts1008are then sent to the text generation modeling system270via one or more timeline generation prompt messages1012. The text generation modeling system270generates one or more input timelines at1014, which are then sent back to the text generation interface system210via one or more timeline generation response messages at1016. An example of a prompt template for generating a timeline generation prompt is provided below:You are a world-class robot associate reviewing the following text. It may be an excerpt from a larger document, an entire document, or encompass multiple documents.$$TEXT$${% for page in page_list %}$$PAGE {{page["page"]}}$${{page["text"]}}$$/PAGE$${% endfor %}$$/TEXT$$Create a list of all events for your managing partner based on what is described in the text.* Draw only from events mentioned in the text; nothing extraneous.* Events include occurrences that are seemingly insignificant to the matter at hand in the document, as well as mundane/pedestrian occurrences. Make sure to include ALL events, leaving nothing out (with a few exceptions listed below).* If the text is a transcript, do not include events that took place during the creation of the transcript itself (like the witness being asked a question or actions by a court reporter); rather, include all the events described therein. Also include a single event for the occurrence during which the transcript is being taken.* Do not include events associated with legal authorities if they are part of a legal citation.* Legal arguments or contentions, e.g. interpretations of case law, are not events, although they may make reference to real events that you should include.* Make sure to include events of legal significance even if they did not necessarily come to pass, such as when something is in effect, potential expirations, statutes of limitations, etc.* Assume that when there is a date associated with a document, that document's creation/execution/delivery/etc. should be considered an event in and of itself.* For each event you identify, determine how notable it is on a scale from 0 to 9, with 0 being utterly mundane to the extent that it is almost unworthy of mention and 9 being an essential fact without which the text is meaningless.* In case it is relevant to your analysis, today's date is {{requested_date}}. Do not consider this one of the events to list.Answer in a JSONL list, with each event as its own JSONL object possessing the following keys:* ‘description’ (string): a fulsome description of the event using language from the text where possible. Use past tense.* ‘page’ (int): page in which the fact is described. If it is described in multiple pages, simply use the first occurrence* ‘notability’ (int): 0 to 9 assessment of the facts' notability* ‘year’ (int): year of the event* ‘month’ (int or null): If discernible* ‘day’ (int or null): If discernible* ‘hour’ Optional(int): If discernible, otherwise do not include.
Use military (24 hour) time* ‘minute’ Optional(int): If discernible, otherwise do not include* ‘second’ Optional(int): If discernible, otherwise do not includeIn creating this JSONL list, make sure to do the following:* If there are no events in the text, respond with a single JSONL object with a key of ‘empty’ and value of True.* Note that some events may be expressed relatively to each other (e.g., “one day later” or “15 years after the accident”); in those circumstances, estimate the date based on the information provided and make a brief note in the description field.* Keys that are marked as optional (hour, minute, second) should not be included in the event objects if that detail is not present in the text.* Keys that are marked as ($type$ or null) should ALWAYS be present in the list, even when the value is null.* If there is an event that took place over a period of time, include one event in the list for the start and one event for the end, noting as much in the description* If there is no datetime information associated with an event, do not include it in your list.Your answer must be thorough and complete, capturing every item of the types described above that appears in the text.Return a JSON Lines (newline-delimited JSON) list of the events.<|endofprompt|>Here's the JSONLines list of events: In some implementations, an input timeline may be specified in a structured format included in the text generated by the text generation modeling system270. For instance, the input timeline may be provided in a JSON format. The one or more timeline generation response messages are parsed at1018to produce one or more parsed timeline events at1020. In some embodiments, the one or more timeline response messages received at1016may include ancillary information such as all or a portion of the timeline generation prompt messages sent at1012. Accordingly, parsing the timeline generation response messages may involve performing operations such as separating the newly generated timelines from the ancillary information included in the one or more timeline response messages. One or more deduplication prompts are created at1022. In some embodiments, a deduplication prompt may be created by inserting events from the parsed timelines at1020into the deduplication prompt, for instance via a tool such as Jinja2. Each timeline event may be specified as, for instance, a JSON portion. The deduplication prompt may include an instruction to the text generation modeling system to deduplicate the events. In some embodiments, in the event that the number of events is sufficiently large that the size of the deduplication prompt would exceed a maximum threshold, then the events may be divided across more than one deduplication prompt. In such a situation, the events may be ordered and/or grouped temporally to facilitate improved deduplication. In some embodiments, the one or more deduplication prompts are sent to the text generation modeling system270via one or more deduplication prompt messages1024. The text generation modeling system270generates a set of consolidated events at1026and provides a response message that includes the consolidated events at1028.
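As one possible illustration of operations1022-1024, the following Python sketch batches parsed timeline events into deduplication prompts rendered via Jinja2, ordering events temporally so that likely duplicates fall within the same prompt. The template wording, the character-based size limit, and the event keys used for sorting are illustrative assumptions rather than elements of the method1000.

    import json
    from jinja2 import Template

    DEDUP_TEMPLATE = Template(
        "Below is a list of timeline events, each formatted as a JSON object:\n"
        "{% for item in events %}{{ item }}\n{% endfor %}"
        "Identify and consolidate any duplicate events."
    )
    MAX_PROMPT_CHARS = 8000  # illustrative stand-in for a token-based threshold

    def build_dedup_prompts(events):
        # Order events temporally so that likely duplicates land in the same prompt.
        events = sorted(events, key=lambda e: (e.get("year", 0),
                                               e.get("month") or 0,
                                               e.get("day") or 0))
        serialized = [json.dumps(e) for e in events]
        prompts, batch = [], []
        for item in serialized:
            candidate = DEDUP_TEMPLATE.render(events=batch + [item])
            if batch and len(candidate) > MAX_PROMPT_CHARS:
                # The current batch is full; emit a prompt and start a new one.
                prompts.append(DEDUP_TEMPLATE.render(events=batch))
                batch = [item]
            else:
                batch.append(item)
        if batch:
            prompts.append(DEDUP_TEMPLATE.render(events=batch))
        return prompts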
An example of a deduplication prompt template that may be used to generate a deduplication prompt is provided below:Below are one or more lists of timeline events, with each event formatted as a JSON object:$$EVENT_LISTS$${% for list in event_lists %}$$LIST$${% for item in list %}{{item}}{% endfor %}$$LIST$${% endfor %}$$EVENT_LISTS$$We think that each list may contain some duplicate events, but we may be wrong. Your task is to identify and consolidate any duplicate events. To do this, please perform the following steps for each list:1. Identify any events in the list that are duplicative.* For our purposes, events are duplicative if their ‘description’ keys appear to describe the same factual occurrence, even if they have different ‘datetime’ keys. For example, one event may say “Bob died” while another may say “the death of Bob.” Those should be considered duplicate events.* Events are not duplicative just because they occurred on the same day. They must also describe the same occurrence to be considered duplicative.2. If there are duplicates, keep the event with the most complete description and discard the other duplicates3. If you discarded any events in step 2, append the items in their ‘references’ arrays to the ‘references’ array of the event you chose to keep. Retain the notability score from the event you chose to keep.4. Re-evaluate the entire list and discard any items from the list that are not valid events, which includes the following:* Legal arguments and contentions, such as allegations that a statute was violated are not valid events.* Actions that took place during a hearing or deposition such as a witness being asked a question or shown a document are not valid events.* The fact that someone testified is not a valid event.* The fact that someone or something was mentioned in the text is not a valid event. For example, “the document mentioned the defense for the first time” is not a valid event.* The occurrence of a date or time reference in the text by itself, or where the event that occurred on that date is unknown is not a valid event. For example, “the mention of October as a month in which something occurred” is not a valid event. “The occurrence of the year 1986” is also not a valid event. “An event occurred at 7:00” is also not a valid event.* Mentions of exhibits are not valid events.Respond with a well-formed JSON Lines (newline-delimited JSON) list with one object for each event from the lists provided that is not a duplicate, along with any events that you chose to keep in step 2.* Aside from any changes you made in step 3, keep all the original keys and values for each event you return. For reference, each event should be in the following format:{‘id’ (string): <id>, ‘description’ (string): <description>, ‘datetime’ (string): <datetime>, ‘references’ (array): [{‘document_id’ (string): <document_id>, ‘page’ (int): <page>} . . . ]}<|endofprompt|>Here's the JSON Lines list and nothing else: The one or more consolidation response messages are parsed at1030to generate a consolidated timeline. Parsing the one or more consolidation response messages may involve, for instance, separating JSON from ancillary elements of the one or more consolidation response messages, joining events from two or more consolidation response messages into a single consolidated timeline, and the like. The consolidated timeline is transmitted to the client machine via a consolidation message at1032, and presented at the client machine at1034. 
Presenting the consolidated timeline may involve, for instance, displaying the timeline in a user interface, including the timeline in a chat message, and/or storing the timeline in a file. FIG.11illustrates a flow diagram1100for generating correspondence, configured in accordance with one or more embodiments. The flow diagram1100provides an example of how techniques and mechanisms described herein may be combined to generate novel text in a manner far more sophisticated than simple back-and-forth interactions with text generation modeling systems. The operations shown in the flow diagram1100may be performed at a text generation interface system, such as the system210shown inFIG.2. A request is received at1102. In some embodiments, the request may be received as part of a chat flow. Alternatively, the request may be received as part of a correspondence generation flow. The request may, for instance, include a natural language instruction to generate a correspondence letter pertaining to a particular topic on behalf of a particular party. At1104, the text generation interface system identifies a skill associated with the request by transmitting a prompt to the text generation modeling system. The text generation modeling system returns a response identifying correspondence generation as the appropriate skill. Additional details regarding skill identification are discussed with respect toFIG.8. At1106, the text generation interface system identifies one or more search terms associated with the correspondence by transmitting a prompt to the text generation modeling system. The text generation modeling system may complete the prompt by identifying, for example, relevant keywords from within the request received at1102. At1108, one or more search queries are executed to determine search results. In some embodiments, one or more search queries may be executed against an external database such as a repository of case law, secondary sources, statutes, and the like. Alternatively, or additionally, one or more search queries may be executed against an internal database such as a repository of documents associated with the party generating the request at1102. At1110-1114, the text generation interface system summarizes the search results and then summarizes the resulting search summaries. According to various embodiments, such operations may be performed by retrieving one or more documents, dividing the one or more documents into chunks, and then transmitting the chunks in one or more requests to the text generation modeling system. Additional details regarding document summarization are discussed throughout the application, for instance with respect toFIG.9. At1116, based at least in part on the search summary, the text generation interface system determines a number of separate correspondence portions to generate. The correspondence portions are then generated at1118and1120and combined into a single correspondence at1122. According to various embodiments, such operations may be performed by transmitting appropriate prompts to the text generation modeling system, and then parsing the corresponding responses. Additional details regarding determining correspondence and combining results are discussed throughout the application, for instance with respect toFIGS.8and9. At1124, one or more factual claims in the generated correspondence are identified. 
According to various embodiments, factual claims may include, for instance, citations to legal case law, statutes, or other domain-specific source documents. Factual claims may also include claims based on other accessible information sources such as privately held documents, information publicly available on the internet, and the like. In some embodiments, the identification of a factual claim may be associated with a respective set of search terms. The search terms may be used to search for evidence for or against the factual claims at1126-1128. The results of these searches may then be included in prompts for evaluating the factual claims, which are sent to the text generation modeling system at1130-1132. The text generation modeling system may complete the prompts by indicating whether the factual claims are accurate given the available search results. At1134, the text generation interface system revises the correspondence by transmitting one or more prompts to the text generation modeling system. The requests may include the correspondence generated at1122as well as one or more results of the analysis of the factual claims. In this way, the text generation modeling system may revise the correspondence for accuracy, for instance by removing factual claims deemed to be inaccurate. It is important to note that the particular flow shown inFIG.11is only one example of ways in which text generation flows discussed herein may be combined to generate novel text. Many combinations are possible and in keeping with techniques and mechanisms described herein. For example, the flow1100may be supplemented with one or more user interactions. FIG.12illustrates a hallucination detection method1200, performed in accordance with one or more embodiments. The method1200may be performed by the text generation interface system210shown inFIG.2. In some embodiments, the method1200may be performed in order to determine whether novel text generated by a text generation modeling system includes one or more hallucinations. Generative text systems sometimes generate text that includes inaccurate claims. For example, in the legal sphere, a request to summarize a set of judicial opinions about a point of law may result in a summary text that includes a citation to a non-existent opinion. A request is received at1202to identify one or more hallucinations in novel text generated by a text generation model. In some embodiments, the request may be received as part of one or more methods shown herein. For example, the method1200may be performed as part of one or more of the methods shown inFIG.4,FIG.8,FIG.9,FIG.10, and/orFIG.11to evaluate a response returned by the text generation modeling system. When employed in this way, the method1200may be used to prompt the system to revise the response, for instance as discussed with respect toFIG.11. Alternatively, or additionally, the method1200may be used to prompt the system to generate a new response, to flag the error to a systems administrator, and/or to inform a response recipient of a potentially inaccurate response. In some implementations, the request may be received as part of a training and/or testing procedure. For instance, one or more prompts may be tested by the prompt testing utility226against one or more tests stored in the test repository224. A test result may be evaluated using the method1200to determine whether a prompt constructed from a prompt template being tested resulted in the generation of a hallucination, which may be treated as a test failure.
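Before turning to the individual operations, the following Python sketch outlines the per-assertion verification loop of the method1200(operations1204-1218, described below). Every prompt string and helper here is a hypothetical stand-in: call_model abstracts a round trip to the text generation modeling system, and run_search abstracts a query against a document database.

    def detect_hallucinations(novel_text, call_model, run_search):
        hallucinations = []
        # Operation 1204: ask the model to list factual assertions, one per line.
        assertions = call_model(
            "List each factual assertion in the following text:\n" + novel_text
        ).splitlines()
        for assertion in filter(None, assertions):
            # Operations 1208-1210: derive search terms and execute the query.
            terms = call_model("Give search terms for checking: " + assertion)
            results = run_search(terms)
            # Operation 1212: summarize the search results.
            summary = call_model("Summarize these search results:\n" + results)
            # Operations 1214-1216: evaluate the assertion against the summary.
            verdict = call_model(
                "Given this evidence:\n" + summary +
                "\nIs the following assertion true, false, or uncertain? " + assertion
            )
            if "false" in verdict.lower():
                hallucinations.append(assertion)  # operation 1218
        return hallucinations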
One or more factual assertions in the novel text are identified at1204. In some embodiments, the one or more factual assertions may be identified by transmitting a prompt to the text generation modeling system. For instance, the novel text may be included in a prompt requesting that the text generation modeling system identify factual claims in the novel text. The resulting completed prompt may be parsed to identify the one or more factual assertions. A factual assertion is selected for analysis at1206. Factual assertions identified at1204may be analyzed in sequence, in parallel, or in any suitable order. One or more search terms associated with the factual assertion are determined at1208. In some embodiments, one or more search terms may be returned by the text generation modeling system at1204. Alternatively, or additionally, one or more search terms may be determined based on a separate request sent to the text generation modeling system for the factual assertion being analyzed. A search query to identify one or more search results based on the one or more search terms is executed at1210. According to various embodiments, one or more searches may be executed against any suitable database. Such databases may include, but are not limited to: public sources such as the internet, internal document databases, and external document databases. The one or more search results are summarized at1212. In some embodiments, summarizing the one or more search results may involve, for instance, dividing documents into chunks and transmitting the one or more chunks to the text generation modeling system within summarization prompts. At1214, the factual assertion is evaluated against the one or more search results. In some embodiments, evaluating the factual assertion may involve transmitting to the text generation modeling system a prompt that includes a request to evaluate the factual assertion, information characterizing the factual assertion, and a summary of the one or more search results determined as discussed at1212. A determination is made at1216as to whether the factual assertion is accurate. In some embodiments, the determination may be made by parsing the response returned by the text generation modeling system at1214. For instance, the text generation modeling system may complete the prompt by indicating whether the factual assertion is true, false, or uncertain based on the provided summary of search results. If it is determined that the factual assertion is inaccurate, then at1218the factual assertion is identified as a hallucination. In some embodiments, identifying the factual assertion as a hallucination may cause one or more consequences in an encompassing process flow. For example, in a testing phase, the detection of a hallucination may cause the test to fail. As another example, in a production phase, the detection of a hallucination may cause the system to initiate a flow to revise the novel text to remove the hallucination. FIG.13illustrates a database system updating method1300, performed in accordance with one or more embodiments. The method1300may be performed at a text generation system such as the system200shown inFIG.2. A request is received at1302to update a database system based on one or more natural language documents. In some embodiments, the request may be received via a chat interface. Alternatively, the request may be received in some other way, such as via an API request.
The request may be generated automatically or based on user input, and may be received from a client machine. According to various embodiments, the natural language documents may be identified in various ways. For example, documents may be uploaded from a client machine, identified based on a search query, retrieved from a repository based on one or more document identifiers, or identified in any other suitable way. Clauses included in the natural language documents are identified at1304. In some embodiments, each clause may include some portion of a natural language document. For instance, a clause may include a single phrase, a collection of phrases, a single sentence, a collection of sentences, a section, a page, one or more pages, or any other unit of analysis. According to various embodiments, clauses may be identified based on one or more natural language processing techniques. For instance, a document may be tokenized into words. Words may then be grouped into phrases and/or sentences based on indicators such as punctuation and semantic content. Sentences may be grouped into sections such as paragraphs or other units. Clauses may then be identified based on the structure. In particular embodiments, the identification of clauses may involve domain-specific logic. For instance, the identification of clauses in a general-purpose non-fiction text may be different from the identification of clauses in a legal contract. Accordingly, the text generation interface system may store domain-specific instructions for identifying clauses in one or more contexts. One or more data fields associated with the one or more natural language documents are identified at1306. In some embodiments, one or more data fields may be identified based on a query. Additional details regarding query parsing are discussed with respect to the method1400shown inFIG.14. In some implementations, one or more data fields may be identified based on the structure of a table in a database system or other such configuration parameters. For instance, if metadata for a set of documents is intended to be combined with metadata for other documents already reflected in one or more database tables, then fields associated with those database tables may be identified so as to identify values corresponding to the existing table structure. One or more clauses are selected for analysis at1308. A text chunk is determined based on the natural language documents. In some embodiments, the one or more text chunks may be determined by dividing the clauses identified at1304into chunks based on a chunk size. Examples of techniques for determining text chunks are discussed with respect to the method600shown inFIG.6. In some contexts, a text chunk may be limited to text from a single document. Alternatively, a single text chunk may include text from more than one document. An input metadata extraction prompt is determined at1310based on the text chunk and a clause splitting prompt template. In some embodiments, the input metadata extraction prompt may be determined by supplementing and/or modifying a prompt template based on the one or more clauses and the one or more data fields. For instance, the one or more clauses and a description of the one or more data fields may be added to a prompt template at an appropriate location.
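Because the templates shown in this document use Jinja-style syntax, the prompt construction step can be illustrated concretely. The miniature template below is hypothetical, written merely in the style of the examples that follow rather than reproducing any template actually described herein:

from jinja2 import Template

# Hypothetical miniature metadata extraction template, in the style of the
# Jinja templates shown in this document.
CLAUSE_TEMPLATE = Template(
    "Identify values for these fields: {{ fields | join(', ') }}\n"
    "{% for clause in clauses %}"
    "<section><id>CC{{ loop.index0 }}</id><text>{{ clause }}</text></section>\n"
    "{% endfor %}"
    "Respond with one JSON object per section.<|endofprompt|>"
)

prompt = CLAUSE_TEMPLATE.render(
    fields=["indemnification", "exchange_value"],    # data fields from 1306
    clauses=[                                        # clauses from 1304/1308
        "Seller shall indemnify Buyer against all claims.",
        "The purchase price is $250,000.",
    ],
)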
As one example, a prompt template may include a set of instructions for causing a large language model to identify values for the one or more data fields based on the one or more clauses. The prompt template may also include one or more additional instructions, such as an instruction to format the text generated by the text generation model as structured text. For instance, the structured text may be implemented as a JSON list. An example of a prompt template for identifying information and clauses relevant for answering a query is as follows:

Purpose: Find information in a contract that is highly relevant to a question.
The following Clauses are from a {{context}}
For each of the Contract Clauses below, decide whether the Contract Clause contains language that is necessary or highly relevant to answer the question. If it does, provide the IDs of the clauses that contain the information necessary or highly relevant to answer the question.
A few guidelines regarding what constitutes relevance:
* It will often be the case that nothing in the Contract Clauses answers the question. This is not a problem. When this happens, simply respond by saying “none” (all lower case)
* Sometimes, multiple clauses will contain information highly relevant or necessary to answer the question. If that happens, please list all such relevant clauses in your answer.
* If there is/are Clause(s) that only partially answer the question, include them in your answer.
* If the answer to a question can be inferred from a Clause, include that Clause in your answer list, even if the Clause does not directly answer the question.
* If a Clause contains information that could potentially help answer the question if it were combined with other information not seen here, include this Clause in your answer list.
* If a question is asking whether something is present or missing, a Clause closely related to the subject of the question that is missing the element is still helpful in answering the question.
* If a header Clause is relevant, then list all the Clauses under that header as relevant as well.
Question: {{query.text}}
Contract Clauses XML:
<contract_clauses>
{% for contract_section in paragraphs %}
<section>
<id>CC{{loop.index0}}</id>
<text>{{contract_section.text}}</text>
</section>
{% endfor %}
</contract_clauses>
Give your answer in the following format:
<question_comprehension>[restate what the Question is trying to ask in clear terms to show that you understood the question]</question_comprehension>
<what_to_look_for>[briefly summarize what sorts of clauses you should be looking for to answer the question, but never refer to a specific clause ID here. It is very important that you not include the clause IDs in this section]</what_to_look_for>
<clauses>[if there are Clauses containing information highly relevant or necessary to answer the question, provide your answer as a pipe-character-separated list of the clause ID's here, for example: CC1|CC2|CC5|CC9]</clauses>
Then give a very brief explanation of your answer.
<|endofprompt|>
{% if question_comprehension %}
<question_comprehension>{{question_comprehension}}</question_comprehension>
<what_to_look_for>{{what_to_look_for}}</what_to_look_for>
<clauses>
{% else %}
<question_comprehension>
{%- endif %}

A completed metadata extraction prompt is determined at1312based on a request sent to a remote text generation modeling system.
In some embodiments, the completed metadata extraction prompt may be determined by sending the input metadata extraction prompt to the remote text generation modeling system via an API request. A text generation model implemented at the remote text generation modeling system may then complete the prompt, after which it may be sent back to the text generation interface system. Clause-level field values corresponding to the identified data fields are determined at1314. In some embodiments, the clause-level field values may be determined by parsing the completed metadata extraction prompt. For instance, structured text such as JSON included in the completed metadata extraction prompt may be parsed to identify data values corresponding with data fields for clauses included in the metadata extraction prompt. A determination is made at1316as to whether to determine an additional one or more clauses for analysis. In some implementations, additional clauses may continue to be selected for analysis until all of the natural language documents have been processed. Document-level field values are determined at1318based on the clause-level field values. In some embodiments, the document-level field values may be determined by first identifying and then aggregating clause-level field values for a given document. For example, in the legal context, a data field may indicate whether a contract includes an indemnification clause. One or more metadata extraction prompts may be used to identify, for each clause in the document, whether that clause is an indemnification clause. Although most clauses in the document will not be an indemnification clause, the data field value for the document as a whole will be true if even one of the clauses for the document is identified as an indemnification clause. As another example, in the legal context, a data field may indicate whether a contract involves an exchange valued at more than a threshold value. In this context, one or more metadata extraction prompts may be used to identify the exchange value, if any, associated with each clause in the document. The data field value for the document may then be determined by identifying the maximum exchange value determined for any of the clauses. In particular embodiments, determining the document-level field values may involve domain-specific logic. This domain-specific logic may be reflected in one or more configuration parameters and/or subroutines included in the text generation system. A database system is updated at1320to include one or more entries identifying the field values. In some embodiments, the database system may maintain one or more tables at the document level, as well as one or more tables at the clause level. The database system may link documents with clauses. The text of the clauses may be included within the database system itself and/or may be identified by location within the text of the associated document. The one or more tables may include the field values to facilitate searching the documents and/or clauses on the basis of the field values. Additional details regarding the searching of natural language documents based on data field values are discussed with respect to the method1500shown inFIG.15. According to various embodiments, the operations discussed inFIG.13may be performed in various orders, and in sequence or in parallel. For instance, a set of prompts may be created in one phase and then sent to the text generation modeling system in a subsequent phase. 
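The aggregation of clause-level field values (operation 1314) into document-level field values (operation 1318) described above lends itself to a compact illustration. The field names below are hypothetical, chosen to mirror the two legal examples just given:

from typing import Dict, List

def document_field_values(clause_values: List[Dict]) -> Dict:
    # A document has an indemnification clause if any of its clauses does.
    has_indemnification = any(v.get("is_indemnification") for v in clause_values)
    # The document-level exchange value is the maximum over all clauses.
    amounts = [v["exchange_value"] for v in clause_values
               if v.get("exchange_value") is not None]
    return {
        "has_indemnification": has_indemnification,
        "max_exchange_value": max(amounts) if amounts else None,
    }

# One record per clause, as parsed from completed prompts at operation 1314.
values = document_field_values([
    {"is_indemnification": True, "exchange_value": None},
    {"is_indemnification": False, "exchange_value": 250000},
])
# values == {"has_indemnification": True, "max_exchange_value": 250000}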
FIG.14illustrates a database system query and filter determination method1400, performed in accordance with one or more embodiments. The method1400may be performed at a text generation system such as the system200shown inFIG.2. For instance, the method1400may be performed at the text generation interface system210. A request to query a database system is received at1402. In some embodiments, the request may be received as part of a chat flow. Alternatively, the request may be received via an API call. In either case, the request may be received from a client machine in communication with the text generation interface system210via the internet. The request may, for instance, include a natural language query to identify, count, summarize, or otherwise interact with documents that meet one or more criteria. For instance, the request may include a natural language query to determine the proportion of contracts for the purchase of goods or services valued over $100,000 signed by parties within California in the last 10 years where the contract includes a mandatory arbitration clause. A query and filter comprehension prompt is determined at1404based on the request. In some embodiments, the query and filter comprehension prompt may be determined by combining some or all of the query received with the request at1402with a query and filter comprehension prompt template. The query and filter comprehension prompt template may include one or more fillable elements that may be filled with text, such as “{{query.text}}”. The query and filter comprehension prompt template may also include an instruction to the text generation modeling system to restate the query and filter request included in the query and filter comprehension prompt template. The prompt template may also include one or more additional instructions, such as an instruction to format the text generated by the text generation model as structured text. For instance, the structured text may be implemented as a JSON list. An example of a template for generating a summary of a query is as follows:

Purpose: Find information in a contract that is highly relevant to a question.
Question: {{query.text}}
Give your answer in the following format:
<question_comprehension>[restate what the Question is trying to ask in clear terms to show that you understood the question]</question_comprehension>
Then give a very brief explanation of your answer.
<|endofprompt|>
<question_comprehension>

A query and filter description is determined at1406based on the prompt. In some embodiments, the query and filter description may be determined by transmitting the query and filter comprehension prompt to a remote text generation modeling system, for instance via an API call. The remote text generation modeling system may then complete the prompt and return it to the text generation interface system. The text generation interface system may extract from the completed prompt a description of the query and filter request included in the prompt. The query and filter description is transmitted for feedback at1408. In some embodiments, the query and filter description may be transmitted to a client machine, such as the client responsible for generating the request received at1402. For instance, the query and filter description may be transmitted for feedback via a chat session or response to an API call. A determination is made at1410as to whether to receive an updated request to query the database system.
In some embodiments, the determination may be made based at least in part on user input. For instance, a user may review the description and provide feedback as to whether the description produced by the text generation modeling system accurately characterizes the user's initial intent when formulating the query. The user may then provide feedback either accepting or updating the query requested. If it is determined to receive an updated request to query the database system, then an updated request to query the database system is received at1402. The updated request may then be re-evaluated. In this way, the text generation system may ensure that the text generation modeling system more accurately interprets the user's intent when formulating the query. If instead it is determined not to receive an updated request to query the database system, then a query generation prompt is determined at1412. In some embodiments, the query generation prompt may be determined by combining some or all of the query received with the request at1402and/or the query and filter description determined at1406with a query generation prompt template. The query generation prompt template may include one or more fillable elements that may be filled with text, such as “{{query_text}}”. The query generation prompt template may also include an instruction to the text generation modeling system to determine one or more query and/or filter parameters based on the query generation prompt. The prompt template may also include one or more additional instructions, such as an instruction to format the text generated by the text generation model as structured text. For instance, the structured text may be implemented as a JSON list. In particular embodiments, a query generation prompt may be used to generate multiple queries, each of which may be executed against a suitable database. An example of a prompt template for generating a query is as follows:

We are generating queries for a search engine given a user's original query. The search engine output must follow a specific output format which we will explain to you soon.
The search engine, called AllSearch, can search with two different modes, “parallel” (aka Parallel Search) and “kw” (aka Keyword Searches).
Parallel Searches are vector-based searches. This means that input queries must resemble full sentences. The full sentences are encoded as dense vectors and used to retrieve the K nearest neighbors in the index's vector space.
For example, if a user wanted to know if refusal to wear a mask at work constituted employment discrimination, a good query for parallel search would be:
“McVader's termination of Skywalker for refusal to wear a mask cannot be construed as discriminatory.”
If the user provided a name, then it's good to use the name, but if no name is given, it's ok to make one up (in this case “McVader”).
Keyword searches are bag-of-words based retrieval searches that use ranking methods such as BM-25 or TF-IDF. In these searches, it's important for queries to make exact word or phrase matches in order to get relevant results. A good query would use single words and/or short phrases with words that we would guess are likely to appear in the search corpus.
For example, if the user who wanted to know if refusal to wear a mask at work constituted employment discrimination was making a keyword search, good queries would include:
apparel workplace discrimination
employee discrimination
mask mandates workplace
religious exemption employment law
and so forth.
Finally, Keyword Searches can use terms and connectors. The purpose of using terms and connectors is less so to answer a question, but to help someone search over a corpus of documents that may be responsive to the query. Turn the user's question into three terms-and-connectors searches, including using proximity searching, “OR” and “AND” parameters, root expansion (using !), and parentheses using the following guidelines:
The terms and connectors search terms should cover all the substantive aspects of the question
Examples of good terms-and-connectors searches: ‘(reject! or refus!)/s settl!/s fail!/s mitigat!’, ‘((sexual/2 (assault! OR harass! OR misconduct))/p “first amendment”) AND (school OR university OR college)’
Given the user's original query: “{{query_text}}”,
{% if query_comprehension_text %} And given this supplemental information about the query that the user approved: {{query_comprehension_text}},{% endif %}
Generate several XML documents (bounded by the ‘<q>’ tag), with each document representing a search query. The documents must conform to the following schema:
<q>
<t>[string—the query text that you generate]</t>
<m>[the mode, must be exactly one of “kw” or “parallel”]</m>
</q>
You must provide at least two of each: parallel search, keyword search without terms and connectors, and keyword search with terms and connectors. Provide three more queries of any mode.
<|endofprompt|>
Here are the XML documents and nothing else:

The query generation prompt is transmitted to a text generation modeling system at1414. Then, a query generation prompt response message is received at1416. According to various embodiments, the query generation prompt may be transmitted to the text generation modeling system via an API request. The text generation modeling system may then complete the prompt via a text generation model implemented at the text generation modeling system, and send a response that includes the completed prompt. A database query is determined at1418based on the query generation prompt response message. In some embodiments, determining the database query may involve extracting one or more database query parameters from the query generation response message. For instance, the query generation response message may include a JSON portion that encodes a list of database query parameters.
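Where the completed prompt instead follows the XML schema of the query generation template above, the parameter extraction may be sketched as follows; the regular-expression parsing shown here is one possible approach, not a prescribed one:

import re
from typing import List, Tuple

def parse_generated_queries(completion: str) -> List[Tuple[str, str]]:
    # Extract (query text, mode) pairs from <q><t>...</t><m>...</m></q>
    # documents in a completed query generation prompt.
    queries = []
    for block in re.findall(r"<q>(.*?)</q>", completion, re.DOTALL):
        text = re.search(r"<t>(.*?)</t>", block, re.DOTALL)
        mode = re.search(r"<m>(.*?)</m>", block, re.DOTALL)
        if text and mode and mode.group(1).strip() in ("kw", "parallel"):
            queries.append((text.group(1).strip(), mode.group(1).strip()))
    return queries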
The database query parameters may then be combined with a query template to generate the database query. Alternatively, the query generation prompt response message may include a fully formed database query. According to various embodiments, the particular operations involved in determining the database query may depend in part on the type of database system employed. For example, the query structure may depend on whether the database system is a relational database system or a nonrelational database system. As another example, the query structure may depend on the structure of tables within the database system. Additional details regarding the querying of the database system are discussed with respect to the method1500shown inFIG.15. At1420, a text filter is determined based on the query generation prompt response message. In some embodiments, the text filter may include any suitable information for providing to a text generation modeling system for filtering results returned by the database query determined at1418. For example, the text filter may include one or more qualitative restrictions capable of being evaluated by the text generation modeling system. As another example, the text filter may include one or more restrictions that are not reflected by information stored in the database system. Additional details regarding the filtering of results returned by the database system are discussed with respect to the method1500shown inFIG.15. FIG.15illustrates a database system query and filter execution method1500, performed in accordance with one or more embodiments. The method1500may be performed at a text generation system such as the system200shown inFIG.2. For instance, the method1500may be performed at the text generation interface system210. A request to execute a database system query is received at1502. In some embodiments, the request may be generated automatically, for instance after a database query is generated as discussed with respect to operation1418shown inFIG.14. The request may be generated as part of a chat flow or based on an API request. In either case, the request may be generated based on interaction with a client machine in communication with the text generation interface system210via the internet. A database system query is identified at1504. According to various embodiments, the database system query may be determined as discussed with respect to operation1418shown inFIG.14. One or more query response clauses and associated documents are determined at1506. In some embodiments, the one or more query response clauses and associated documents may be determined by executing the query identified at1504against the database system. As discussed herein, for instance with respect toFIG.13, the database system may store metadata characterizing documents and portions of text from documents. Executing the query may result in the database system returning one or more documents, document portions, and/or identifiers that identify documents and/or document portions. One or more relevance prompts are determined at1508based on the one or more query response clauses. In some embodiments, a relevance prompt may be determined by combining some or all of the query results received at1506with a relevance prompt template. The relevance prompt template may include one or more fillable elements that may be filled with text. One or more of the fillable elements may be filled with some or all of the query results received at1506.
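Operations 1508 and 1510 may be understood as a chunk-by-chunk scoring loop, sketched below. The chunking helper and template renderer are hypothetical, and the JSON answer format assumed here is that of the relevance prompt template shown next:

import json
from typing import Callable, List

def filter_relevant(
    results: List[str],
    complete: Callable[[str], str],                   # hypothetical LLM call
    render_prompt: Callable[[List[str]], str],        # hypothetical renderer
    chunked: Callable[[List[str]], List[List[str]]],  # hypothetical chunker
    threshold: int = 3,
) -> List[dict]:
    relevant = []
    for chunk in chunked(results):
        # 1508: one relevance prompt per group of query response clauses.
        answer = json.loads(complete(render_prompt(chunk)))
        # 1510: retain results scored above the relevance threshold.
        relevant += [r for r in answer if r["relevance_score"] > threshold]
    return sorted(relevant, key=lambda r: r["relevance_score"], reverse=True)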
Additionally, one or more of the fillable elements may be filled with relevance information. The relevance information may include some or all of the text filter determined at1420. Alternatively, or additionally, the relevance information may include some or all of the query received at1402, the query and filter description determined at1406, and/or the database query determined at1418. In some embodiments, the relevance prompt template may also include an instruction to the text generation modeling system to evaluate and/or rank the included search result or results for relevance against the relevance information. The prompt template may also include one or more additional instructions, such as an instruction to format the text generated by the text generation model as structured text. For instance, the structured text may be implemented as a JSON list. An example of a relevance prompt template is as follows:

Evaluate whether these documents are relevant to this research request or query:
“{{text}}”
$$DOCUMENTS$$
{{documents}}
$$/DOCUMENTS$$
* Only respond with relevant documents. In order to be deemed relevant, a document must directly answer the request or query. A document should also be considered relevant if it reaches a conclusion in opposition to the research request.
* If there are no relevant documents, do not include any in your response.
* Assign a relevance score to each document, judging its relevance to the research request or query: “{{text}}”. The score should correlate to these values:
5—the document is directly on-point (i.e., it precisely responds to every aspect of the query or request, even if it is in opposition to the request, and not a similar but different issue; it fully and conclusively settles the question raised in the request either in favor or against the intention of the request, if any)
4—the document may provide a useful analogy to help answer the request, but is not directly responsive
3—the document is roughly in the same topical area as the request, but otherwise not responsive
2—the document might have something to do with the request, but there is no indication that it does in the text provided
1—the document is in no way responsive to the request
Return a JSON array of objects, each object representing a relevant case, ordered with the most relevant case first. Each object in the array will have the keys:
* \‘result_id\’—string, the result ID
* \‘reason_relevant\’—string, a description of how the document addresses the research request or query: “{user_request}”. In drafting this response, only draw from the excerpted language of the document; do not include extraneous information.
* \‘relevance_score\’—number, between 1-5, of how relevant the document is to the research request or query: “{user_request}”
* \‘quotes\’—array of strings. For each document, quote the language from the document that addresses the request. In finding these quotes, only draw from the excerpted language; do not include extraneous information. Do not put additional quotation marks around each quote beyond the quotation marks required to make valid JSON.
Only valid JSON. Quotation marks within strings must be escaped with a backslash (\‘\\\’). Examples for reason_relevant: \‘“The concept of \\“equitable tolling\\” applies in this case.”\’, \‘“The case overturns a lower court decision that found a state abortion restriction unconstitutional based on Roe v. Wade and Casey, and argues that the viability rule from those cases is not the \\“central holding.\\” This case calls into question the continued validity of Roe v. Wade.”\’
If there are no relevant documents, respond with an empty array.
<|endofprompt|>
Here's the JSON:

In some implementations, more than one relevance prompt may be determined. For instance, if many query response clauses are determined at1506, then these query responses may be divided into groups for the purpose of relevancy analysis. The size of the groups may be determined based on a chunk threshold. Additional details regarding the division of text into chunks are discussed with respect to the method600shown inFIG.6. A subset of the query response clauses that meet a relevancy threshold based on communication with a text generation modeling system are identified at1510. In some embodiments, the subset of the query response clauses may be identified by transmitting the prompt or prompts determined at1508to a remote text generation modeling system. The remote text generation modeling system may then respond with one or more completed prompts. The text generation interface system may then extract relevancy information from the completed prompts. According to various embodiments, the relevance threshold may be determined in any of various ways. For example, all results that exceed a designated relevance threshold (e.g., 3 out of a scale of 1-5 as shown in the example prompt template included above) may be identified. As another example, the most relevant results that are able to fit in a designated number (e.g., one or two) chunks may be identified. A query and filter synthesis prompt is determined at1512based on the subset of the query response clauses. In some embodiments, the query and filter synthesis prompt may be determined by combining a query and filter synthesis prompt template with information about the query and with query response clauses deemed suitably relevant at operation1510. The query information may include some or all of the query received at1402, the query and filter description determined at1406, the database query determined at1418, and/or the text filter determined at1420. An example of a query and filter synthesis prompt template in the legal context is as follows:

You are helping a lawyer research the prevailing market consensus on a given type of contract clause.
Using the following list of contract clauses, analyze the range of different terms for this type of clause in the context of this request from the lawyer: “{{text}}”
$$CONTRACT_CLAUSE_LIST$$
{{documents}}
$$/CONTRACT_CLAUSE_LIST$$
Based on these contract clauses, and in the context of the lawyer's request, prepare:
1. Range of Terms: An extensive analysis of the range of different provisions included in these clauses, following these instructions:
* List the dimensions on which the clauses differ, and explain the range of provisions along each of the dimensions.
* Focus on the range of favorability to one side or another
* Only draw from the language in this list of clauses; do not include extraneous information.
2. Average Terms: State what the average terms over the above list of contracts is over the dimensions you analyzed for question 1 above.
3. Suggested Language: Draft a contract clause that is approximately average in terms when compared to the above list of clauses.
4. List the clauses that were most relevant to your analysis, following this guidance:
* Do not include in this list any clauses that are not relevant to the request.
* If none of the clauses are relevant, return an empty array for results.
Respond with nothing but a JSON object, with the following keys:
\‘range_of_terms\’: your analysis of the range of provisions in the clause list, in the context of the lawyer's request.
\‘average_terms\’: your analysis of the average provisions over the clauses in the list, in the context of the lawyer's request.
\‘suggested_language\’: your draft clause with approximately average terms.
\‘ids\’: (array of strings), in order of relevance, the document IDs of the documents that are most relevant to the request.
Only valid JSON; check to make sure it parses, and that quotes within quotes are escaped or turned to single quotes. For the \‘answer\’ key, this could look like: “This is an answer with \\“proper quoting\\””
<|endofprompt|>
Here's the JSON:

A query and filter response message is determined at1514based on communication with the text generation modeling system. In some embodiments, determining the query and filter response message may involve transmitting the prompt determined at1512to the remote text generation modeling system. The remote text generation modeling system may then respond with one or more completed prompts. The text generation interface system may then extract information for providing the query and filter response message. The extracted information may be used as-is or may be edited, supplemented, or otherwise altered to create the query and filter response message. A query and filter response message is transmitted at1516. In some embodiments, the query and filter response message may be provided to a client machine. The message may be sent in response to an API request, transmitted via a chat session, or provided in some other way. Any of the disclosed implementations may be embodied in various types of hardware, software, firmware, computer readable media, and combinations thereof. For example, some techniques disclosed herein may be implemented, at least in part, by computer-readable media that include program instructions, state information, etc., for configuring a computing system to perform various services and operations described herein. Examples of program instructions include both machine code, such as produced by a compiler, and higher-level code that may be executed via an interpreter. Instructions may be embodied in any suitable language such as, for example, Java, Python, C++, C, HTML, any other markup language, JavaScript, ActiveX, VBScript, or Perl. Examples of computer-readable media include, but are not limited to: magnetic media such as hard disks and magnetic tape; optical media such as compact disks (CD) or digital versatile disks (DVD); flash memory; magneto-optical media; and other hardware devices such as read-only memory (“ROM”) devices and random-access memory (“RAM”) devices. A computer-readable medium may be any combination of such storage devices. In the foregoing specification, various techniques and mechanisms may have been described in singular form for clarity. However, it should be noted that some embodiments include multiple iterations of a technique or multiple instantiations of a mechanism unless otherwise noted. For example, a system may be described as using a processor in a variety of contexts but can use multiple processors while remaining within the scope of the present disclosure unless otherwise noted.
Similarly, various techniques and mechanisms may have been described as including a connection between two entities. However, a connection does not necessarily mean a direct, unimpeded connection, as a variety of other entities (e.g., bridges, controllers, gateways, etc.) may reside between the two entities. In the foregoing specification, reference was made in detail to specific embodiments including one or more of the best modes contemplated by the inventors. While various implementations have been described herein, it should be understood that they have been presented by way of example only, and not limitation. For example, some techniques and mechanisms are described herein in the context of large language models. However, the techniques disclosed herein apply to a wide variety of language models. Particular embodiments may be implemented without some or all of the specific details described herein. In other instances, well known process operations have not been described in detail in order to avoid unnecessarily obscuring the disclosed techniques. Accordingly, the breadth and scope of the present application should not be limited by any of the implementations described herein, but should be defined only in accordance with the claims and their equivalents.
131,572
11860915
DETAILED DESCRIPTION OF EMBODIMENTS The amount of content available to users in any given content delivery system can be substantial. Consequently, many users desire a form of media guidance through an interface that allows users to efficiently navigate content selections and easily identify content that they may desire. An application that provides such guidance is referred to herein as an interactive media guidance application or, sometimes, a media guidance application or a guidance application. Interactive media guidance applications may take various forms depending on the content for which they provide guidance. One typical type of media guidance application is an interactive television program guide. Interactive television program guides (sometimes referred to as electronic program guides) are well-known guidance applications that, among other things, allow users to navigate among and locate many types of content or media assets. Interactive media guidance applications may generate graphical user interface screens that enable a user to navigate among, locate and select content. As referred to herein, the terms “media asset” and “content” should be understood to mean an electronically consumable user asset, such as television programming, as well as pay-per-view programs, on-demand programs (as in video-on-demand (VOD) systems), Internet content (e.g., streaming content, downloadable content, Webcasts, etc.), video clips, audio, content information, pictures, rotating images, documents, playlists, websites, articles, books, electronic books, blogs, advertisements, chat sessions, social media, applications, games, and/or any other media or multimedia and/or combination of the same. Guidance applications also allow users to navigate among and locate content. As referred to herein, the term “multimedia” should be understood to mean content that utilizes at least two different content forms described above, for example, text, audio, images, video, or interactivity content forms. Content may be recorded, played, displayed or accessed by user equipment devices, but can also be part of a live performance. With the advent of the Internet, mobile computing, and high-speed wireless networks, users are accessing media on user equipment devices on which they traditionally did not. As referred to herein, the phrase “user equipment device,” “user equipment,” “user device,” “electronic device,” “electronic equipment,” “media equipment device,” or “media device” should be understood to mean any device for accessing the content described above, such as a television, a Smart TV, a set-top box, an integrated receiver decoder (IRD) for handling satellite television, a digital storage device, a digital media receiver (DMR), a digital media adapter (DMA), a streaming media device, a DVD player, a DVD recorder, a connected DVD, a local media server, a BLU-RAY player, a BLU-RAY recorder, a personal computer (PC), a laptop computer, a tablet computer, a WebTV box, a personal computer television (PC/TV), a PC media server, a PC media center, a hand-held computer, a stationary telephone, a personal digital assistant (PDA), a mobile telephone, a portable video player, a portable music player, a portable gaming machine, a smart phone, or any other television equipment, computing equipment, or wireless device, and/or combination of the same. In some embodiments, the user equipment device may have a front facing screen and a rear facing screen, multiple front screens, or multiple angled screens. 
In some embodiments, the user equipment device may have a front facing camera and/or a rear facing camera. On these user equipment devices, users may be able to navigate among and locate the same content available through a television. Consequently, media guidance may be available on these devices, as well. The guidance provided may be for content available only through a television, for content available only through one or more of other types of user equipment devices, or for content available both through a television and one or more of the other types of user equipment devices. The media guidance applications may be provided as on-line applications (i.e., provided on a web-site), or as stand-alone applications or clients on user equipment devices. Various devices and platforms that may implement media guidance applications are described in more detail below. One of the functions of the media guidance application is to provide media guidance data to users. As referred to herein, the phrase, “media guidance data” or “guidance data” should be understood to mean any data related to content, such as media listings, media availability, media-related information (e.g., broadcast times, broadcast channels, titles, descriptions, ratings information (e.g., parental control ratings, critic's ratings, etc.), genre or category information, actor information, logo data for broadcasters' or providers' logos, etc.), media format (e.g., standard definition, high definition, 3D, etc.), advertisement information (e.g., text, images, media clips, etc.), on-demand information, blogs, websites, director names, episode names, and any other type of guidance data that is helpful for a user to navigate among and locate desired content selections. “Media guidance data” or “guidance data” may also include athletic teams or athletes, stadium names, names of hosts, names of commentators, place names, store names, restaurants, character names, occupations, artists, band names, album titles, song titles, and other words or phrases that could be used to identify content. FIGS.1-2show illustrative display screens that may be used to provide media guidance data. The display screens shown inFIGS.1-2and9-14may be implemented on any suitable user equipment device or platform. While the displays ofFIGS.1-2and9-14are illustrated as full screen displays, they may also be fully or partially overlaid over content being displayed. A user may indicate a desire to access content information by selecting a selectable option provided in a display screen (e.g., a menu option, a listings option, an icon, a hyperlink, etc.) or pressing a dedicated button (e.g., a GUIDE button) on a remote control or other user input interface or device. In response to the user's indication, the media guidance application may provide a display screen with media guidance data organized in one of several ways, such as by time and channel in a grid, by time, by channel, by source, by content type, by category (e.g., movies, sports, news, children, or other categories of programming), or other predefined, user-defined, or other organization criteria. The organization of the media guidance data is determined by guidance application data. As referred to herein, the phrase, “guidance application data” should be understood to mean data used in operating the guidance application, such as program information, guidance application settings, user preferences, or user profile information. 
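Although guidance data may be stored and organized in many ways, its general shape can be illustrated with a brief sketch. The record layout below is hypothetical, chosen only to mirror the kinds of fields enumerated above:

from dataclasses import dataclass, field
from typing import List

@dataclass
class ProgramListing:
    # One unit of media guidance data (hypothetical shape).
    title: str
    channel: str                 # broadcast channel or other content source
    start_time: str              # broadcast time, if the content is linear
    description: str = ""
    rating: str = ""             # e.g., a parental control rating
    genres: List[str] = field(default_factory=list)

# A grid display such as the one described below maps (channel, time block)
# cells to listings of this kind.
listing = ProgramListing(
    title="Evening News", channel="7", start_time="18:00",
    rating="TV-G", genres=["news"],
)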
FIG.1shows illustrative grid program listings display100arranged by time and channel that also enables access to different types of content in a single display. Display100may include grid102with: (1) a column of channel/content type identifiers104, where each channel/content type identifier (which is a cell in the column) identifies a different channel or content type available; and (2) a row of time identifiers106, where each time identifier (which is a cell in the row) identifies a time block of programming. Grid102also includes cells of program listings, such as program listing108, where each listing provides the title of the program provided on the listing's associated channel and time. With a user input device, a user can select program listings by moving highlight region110. Information relating to the program listing selected by highlight region110may be provided in program information region112. Region112may include, for example, the program title, the program description, the time the program is provided (if applicable), the channel the program is on (if applicable), the program's rating, and other desired information. In addition to providing access to linear programming (e.g., content that is scheduled to be transmitted to a plurality of user equipment devices at a predetermined time and is provided according to a schedule), the media guidance application also provides access to non-linear programming (e.g., content accessible to a user equipment device at any time and not provided according to a schedule). Non-linear programming may include content from different content sources including on-demand content (e.g., VOD), Internet content (e.g., streaming media, downloadable media, etc.), locally stored content (e.g., content stored on any user equipment device described above or other storage device), or other time-independent content. On-demand content may include movies or any other content provided by a particular content provider (e.g., HBO On Demand providing “The Sopranos” and “Curb Your Enthusiasm”). HBO ON DEMAND is a service mark owned by Time Warner Company L.P. et al. and THE SOPRANOS and CURB YOUR ENTHUSIASM are trademarks owned by the Home Box Office, Inc. Internet content may include web events, such as a chat session or Webcast, or content available on-demand as streaming content or downloadable content through an Internet website or other Internet access (e.g. FTP). Grid102may provide media guidance data for non-linear programming including on-demand listing114, recorded content listing116, and Internet content listing118. A display combining media guidance data for content from different types of content sources is sometimes referred to as a “mixed-media” display. Various permutations of the types of media guidance data that may be displayed that are different from display100may be based on user selection or guidance application definition (e.g., a display of only recorded and broadcast listings, only on-demand and broadcast listings, etc.). As illustrated, listings114,116, and118are shown as spanning the entire time block displayed in grid102to indicate that selection of these listings may provide access to a display dedicated to on-demand listings, recorded listings, or Internet listings, respectively. In some embodiments, listings for these content types may be included directly in grid102. Additional media guidance data may be displayed in response to the user selecting one of the navigational icons120. 
(Pressing an arrow key on a user input device may affect the display in a similar manner as selecting navigational icons120.) Display100may also include video region122, advertisement124, and options region126. Video region122may allow the user to view and/or preview programs that are currently available, will be available, or were available to the user. The content of video region122may correspond to, or be independent from, one of the listings displayed in grid102. Grid displays including a video region are sometimes referred to as picture-in-guide (PIG) displays. PIG displays and their functionalities are described in greater detail in Satterfield et al. U.S. Pat. No. 6,564,378, issued May 13, 2003 and Yuen et al. U.S. Pat. No. 6,239,794, issued May 29, 2001, which are hereby incorporated by reference herein in their entireties. PIG displays may be included in other media guidance application display screens of the embodiments described herein. Advertisement124may provide an advertisement for content that, depending on a viewer's access rights (e.g., for subscription programming), is currently available for viewing, will be available for viewing in the future, or may never become available for viewing, and may correspond to or be unrelated to one or more of the content listings in grid102. Advertisement124may also be for products or services related or unrelated to the content displayed in grid102. Advertisement124may be selectable and provide further information about content, provide information about a product or a service, enable purchasing of content, a product, or a service, provide content relating to the advertisement, etc. Advertisement124may be targeted based on a user's profile/preferences, monitored user activity, the type of display provided, or on other suitable targeted advertisement bases. While advertisement124is shown as rectangular or banner shaped, advertisements may be provided in any suitable size, shape, and location in a guidance application display. For example, advertisement124may be provided as a rectangular shape that is horizontally adjacent to grid102. This is sometimes referred to as a panel advertisement. In addition, advertisements may be overlaid over content or a guidance application display or embedded within a display. Advertisements may also include text, images, rotating images, video clips, or other types of content described above. Advertisements may be stored in a user equipment device having a guidance application, in a database connected to the user equipment, in a remote location (including streaming media servers), or on other storage means, or a combination of these locations. Providing advertisements in a media guidance application is discussed in greater detail in, for example, Knudson et al., U.S. Patent Application Publication No. 2003/0110499, filed Jan. 17, 2003; Ward, III et al. U.S. Pat. No. 6,756,997, issued Jun. 29, 2004; and Schein et al. U.S. Pat. No. 6,388,714, issued May 14, 2002, which are hereby incorporated by reference herein in their entireties. It will be appreciated that advertisements may be included in other media guidance application display screens of the embodiments described herein. Options region126may allow the user to access different types of content, media guidance application displays, and/or media guidance application features. 
Options region126may be part of display100(and other display screens described herein), or may be invoked by a user by selecting an on-screen option or pressing a dedicated or assignable button on a user input device. The selectable options within options region126may concern features related to program listings in grid102or may include options available from a main menu display. Features related to program listings may include searching for other air times or ways of receiving a program, recording a program, enabling series recording of a program, setting program and/or channel as a favorite, purchasing a program, or other features. Options available from a main menu display may include search options, VOD options, parental control options, Internet options, cloud-based options, device synchronization options, second screen device options, options to access various types of media guidance data displays, options to subscribe to a premium service, options to edit a user's profile, options to access a browse overlay, or other options. The media guidance application may be personalized based on a user's preferences. A personalized media guidance application allows a user to customize displays and features to create a personalized “experience” with the media guidance application. This personalized experience may be created by allowing a user to input these customizations and/or by the media guidance application monitoring user activity to determine various user preferences. Users may access their personalized guidance application by logging in or otherwise identifying themselves to the guidance application. Customization of the media guidance application may be made in accordance with a user profile. The customizations may include varying presentation schemes (e.g., color scheme of displays, font size of text, etc.), aspects of content listings displayed (e.g., only HDTV or only 3D programming, user-specified broadcast channels based on favorite channel selections, re-ordering the display of channels, recommended content, etc.), desired recording features (e.g., recording or series recordings for particular users, recording quality, etc.), parental control settings, customized presentation of Internet content (e.g., presentation of social media content, e-mail, electronically delivered articles, etc.) and other desired customizations. The media guidance application may allow a user to provide user profile information or may automatically compile user profile information. The media guidance application may, for example, monitor the content the user accesses and/or other interactions the user may have with the guidance application. Additionally, the media guidance application may obtain all or part of other user profiles that are related to a particular user (e.g., from other web sites on the Internet the user accesses, such as www.allrovi.com, from other media guidance applications the user accesses, from other interactive applications the user accesses, from another user equipment device of the user, etc.), and/or obtain information about the user from other sources that the media guidance application may access. As a result, a user can be provided with a unified guidance application experience across the user's different user equipment devices. This type of user experience is described in greater detail below in connection withFIG.4. Additional personalized media guidance application features are described in greater detail in Ellis et al., U.S. Patent Application Publication No. 
2005/0251827, filed Jul. 11, 2005, Boyer et al., U.S. Pat. No. 7,165,098, issued Jan. 16, 2007, and Ellis et al., U.S. Patent Application Publication No. 2002/0174430, filed Feb. 21, 2002, which are hereby incorporated by reference herein in their entireties. Another display arrangement for providing media guidance is shown inFIG.2. Video mosaic display200includes selectable options202for content information organized based on content type, genre, and/or other organization criteria. In display200, television listings option204is selected, thus providing listings206,208,210, and212as broadcast program listings. In display200the listings may provide graphical images including cover art, still images from the content, video clip previews, live video from the content, or other types of content that indicate to a user the content being described by the media guidance data in the listing. Each of the graphical listings may also be accompanied by text to provide further information about the content associated with the listing. For example, listing208may include more than one portion, including media portion214and text portion216. Media portion214and/or text portion216may be selectable to view content in full-screen or to view information related to the content displayed in media portion214(e.g., to view listings for the channel that the video is displayed on). The listings in display200are of different sizes (i.e., listing206is larger than listings208,210, and212), but if desired, all the listings may be the same size. Listings may be of different sizes or graphically accentuated to indicate degrees of interest to the user or to emphasize certain content, as desired by the content provider or based on user preferences. Various systems and methods for graphically accentuating content listings are discussed in, for example, Yates, U.S. Patent Application Publication No. 2010/0153885, filed Dec. 29, 2005, which is hereby incorporated by reference herein in its entirety. Users may access content and the media guidance application (and its display screens described above and below) from one or more of their user equipment devices.FIG.3shows a generalized embodiment of illustrative user equipment device300. More specific implementations of user equipment devices are discussed below in connection withFIG.4. User equipment device300may receive content and data via input/output (hereinafter “I/O”) path302. I/O path302may provide content (e.g., broadcast programming, on-demand programming, Internet content, content available over a local area network (LAN) or wide area network (WAN), and/or other content) and data to control circuitry304, which includes processing circuitry306and storage308. Control circuitry304may be used to send and receive commands, requests, and other suitable data using I/O path302. I/O path302may connect control circuitry304(and specifically processing circuitry306) to one or more communications paths (described below). I/O functions may be provided by one or more of these communications paths, but are shown as a single path inFIG.3to avoid overcomplicating the drawing. Control circuitry304may be based on any suitable processing circuitry such as processing circuitry306.
As referred to herein, processing circuitry should be understood to mean circuitry based on one or more microprocessors, microcontrollers, digital signal processors, programmable logic devices, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc., and may include a multi-core processor (e.g., dual-core, quad-core, hexa-core, or any suitable number of cores) or supercomputer. In some embodiments, processing circuitry may be distributed across multiple separate processors or processing units, for example, multiple of the same type of processing units (e.g., two Intel Core i7 processors) or multiple different processors (e.g., an Intel Core i5 processor and an Intel Core i7 processor). In some embodiments, control circuitry304executes instructions for a media guidance application stored in memory (i.e., storage308). Specifically, control circuitry304may be instructed by the media guidance application to perform the functions discussed above and below. For example, the media guidance application may provide instructions to control circuitry304to generate the media guidance displays. In some implementations, any action performed by control circuitry304may be based on instructions received from the media guidance application. In client-server based embodiments, control circuitry304may include communications circuitry suitable for communicating with a guidance application server or other networks or servers. The instructions for carrying out the above-mentioned functionality may be stored on the guidance application server. Communications circuitry may include a cable modem, an integrated services digital network (ISDN) modem, a digital subscriber line (DSL) modem, a telephone modem, an Ethernet card, or a wireless modem for communications with other equipment, or any other suitable communications circuitry. Such communications may involve the Internet or any other suitable communications networks or paths (which are described in more detail in connection withFIG.4). In addition, communications circuitry may include circuitry that enables peer-to-peer communication of user equipment devices, or communication of user equipment devices in locations remote from each other (described in more detail below). Memory may be an electronic storage device provided as storage308that is part of control circuitry304. As referred to herein, the phrase “electronic storage device” or “storage device” should be understood to mean any device for storing electronic data, computer software, or firmware, such as random-access memory, read-only memory, hard drives, optical drives, digital video disc (DVD) recorders, compact disc (CD) recorders, BLU-RAY disc (BD) recorders, BLU-RAY 3D disc recorders, digital video recorders (DVR, sometimes called a personal video recorder, or PVR), solid state devices, quantum storage devices, gaming consoles, gaming media, or any other suitable fixed or removable storage devices, and/or any combination of the same. Storage308may be used to store various types of content described herein as well as media guidance information, described above, and guidance application data, described above. Nonvolatile memory may also be used (e.g., to launch a boot-up routine and other instructions). Cloud-based storage, described in relation toFIG.4, may be used to supplement storage308or instead of storage308.
Control circuitry304may include video generating circuitry and tuning circuitry, such as one or more analog tuners, one or more MPEG-2 decoders or other digital decoding circuitry, high-definition tuners, or any other suitable tuning or video circuits or combinations of such circuits. Encoding circuitry (e.g., for converting over-the-air, analog, or digital signals to MPEG signals for storage) may also be provided. Control circuitry304may also include scaler circuitry for upconverting and downconverting content into the preferred output format of the user equipment300. Circuitry304may also include digital-to-analog converter circuitry and analog-to-digital converter circuitry for converting between digital and analog signals. The tuning and encoding circuitry may be used by the user equipment device to receive and to display, to play, or to record content. The tuning and encoding circuitry may also be used to receive guidance data. The circuitry described herein, including for example, the tuning, video generating, encoding, decoding, encrypting, decrypting, scaler, and analog/digital circuitry, may be implemented using software running on one or more general purpose or specialized processors. Multiple tuners may be provided to handle simultaneous tuning functions (e.g., watch and record functions, picture-in-picture (PIP) functions, multiple-tuner recording, etc.). If storage308is provided as a separate device from user equipment300, the tuning and encoding circuitry (including multiple tuners) may be associated with storage308. A user may send instructions to control circuitry304using user input interface310. User input interface310may be any suitable user interface, such as a microphone, remote control, mouse, trackball, keypad, keyboard, touch screen, touchpad, stylus input, joystick, voice recognition interface, or other user input interfaces. Display312may be provided as a stand-alone device or integrated with other elements of user equipment device300. Display312may be one or more of a monitor, a television, a liquid crystal display (LCD) for a mobile device, or any other suitable equipment for displaying visual images. In some embodiments, display312may be HDTV-capable. In some embodiments, display312may be a 3D display, and the interactive media guidance application and any suitable content may be displayed in 3D. A video card or graphics card may generate the output to the display312. The video card may offer various functions such as accelerated rendering of 3D scenes and 2D graphics, MPEG-2/MPEG-4 decoding, TV output, or the ability to connect multiple monitors. The video card may be any processing circuitry described above in relation to control circuitry304. The video card may be integrated with the control circuitry304. Speakers314may be provided as integrated with other elements of user equipment device300or may be stand-alone units. The audio component of videos and other content displayed on display312may be played through speakers314. In some embodiments, the audio may be distributed to a receiver (not shown), which processes and outputs the audio via speakers314. The guidance application may be implemented using any suitable architecture. For example, it may be a stand-alone application wholly implemented on user equipment device300. In such an approach, instructions of the application are stored locally, and data for use by the application is downloaded on a periodic basis (e.g., from an out-of-band feed, from an Internet resource, or using another suitable approach). 
In some embodiments, the media guidance application is a client-server based application. Data for use by a thick or thin client implemented on user equipment device300is retrieved on-demand by issuing requests to a server remote to the user equipment device300. In one example of a client-server based guidance application, control circuitry304runs a web browser that interprets web pages provided by a remote server. In some embodiments, the media guidance application is downloaded and interpreted or otherwise run by an interpreter or virtual machine (run by control circuitry304). In some embodiments, the guidance application may be encoded in the ETV Binary Interchange Format (EBIF), received by control circuitry304as part of a suitable feed, and interpreted by a user agent running on control circuitry304. For example, the guidance application may be an EBIF application. In some embodiments, the guidance application may be defined by a series of JAVA-based files that are received and run by a local virtual machine or other suitable middleware executed by control circuitry304. In some of such embodiments (e.g., those employing MPEG-2 or other digital media encoding schemes), the guidance application may be, for example, encoded and transmitted in an MPEG-2 object carousel with the MPEG audio and video packets of a program. User equipment device300ofFIG.3can be implemented in system400ofFIG.4as user television equipment402, user computer equipment404, wireless user communications device406, or any other type of user equipment suitable for accessing content, such as a non-portable gaming machine. For simplicity, these devices may be referred to herein collectively as user equipment or user equipment devices, and may be substantially similar to user equipment devices described above. User equipment devices, on which a media guidance application may be implemented, may function as a standalone device or may be part of a network of devices. Various network configurations of devices may be implemented and are discussed in more detail below. A user equipment device utilizing at least some of the system features described above in connection withFIG.3may not be classified solely as user television equipment402, user computer equipment404, or a wireless user communications device406. For example, user television equipment402may, like some user computer equipment404, be Internet-enabled allowing for access to Internet content, while user computer equipment404may, like some television equipment402, include a tuner allowing for access to television programming. The media guidance application may have the same layout on various different types of user equipment or may be tailored to the display capabilities of the user equipment. For example, on user computer equipment404, the guidance application may be provided as a web site accessed by a web browser. In another example, the guidance application may be scaled down for wireless user communications devices406. In system400, there is typically more than one of each type of user equipment device but only one of each is shown inFIG.4to avoid overcomplicating the drawing. In addition, each user may utilize more than one type of user equipment device and also more than one of each type of user equipment device. 
In some embodiments, a user equipment device (e.g., user television equipment402, user computer equipment404, wireless user communications device406) may be referred to as a “second screen device.” For example, a second screen device may supplement content presented on a first user equipment device. The content presented on the second screen device may be any suitable content that supplements the content presented on the first device. In some embodiments, the second screen device provides an interface for adjusting settings and display preferences of the first device. In some embodiments, the second screen device is configured for interacting with other second screen devices or for interacting with a social network. The second screen device can be located in the same room as the first device, a different room from the first device but in the same house or building, or in a different building from the first device. The user may also set various settings to maintain consistent media guidance application settings across in-home devices and remote devices. Settings include those described herein, as well as channel and program favorites, programming preferences that the guidance application utilizes to make programming recommendations, display preferences, and other desirable guidance settings. For example, if a user sets a channel as a favorite on, for example, the website www.allrovi.com on his personal computer at his office, the same channel would appear as a favorite on the user's in-home devices (e.g., user television equipment and user computer equipment) as well as the user's mobile devices, if desired. Therefore, changes made on one user equipment device can change the guidance experience on another user equipment device, regardless of whether they are the same or a different type of user equipment device. In addition, the changes made may be based on settings input by a user, as well as user activity monitored by the guidance application. The user equipment devices may be coupled to communications network414. Namely, user television equipment402, user computer equipment404, and wireless user communications device406are coupled to communications network414via communications paths408,410, and412, respectively. Communications network414may be one or more networks including the Internet, a mobile phone network, mobile voice or data network (e.g., a 4G or LTE network), cable network, public switched telephone network, or other types of communications network or combinations of communications networks. Paths408,410, and412may separately or together include one or more communications paths, such as a satellite path, a fiber-optic path, a cable path, a path that supports Internet communications (e.g., IPTV), free-space connections (e.g., for broadcast or other wireless signals), or any other suitable wired or wireless communications path or combination of such paths. Path412is drawn with dotted lines to indicate that in the exemplary embodiment shown inFIG.4it is a wireless path and paths408and410are drawn as solid lines to indicate they are wired paths (although these paths may be wireless paths, if desired). Communications with the user equipment devices may be provided by one or more of these communications paths, but are shown as a single path inFIG.4to avoid overcomplicating the drawing. 
Although communications paths are not drawn between user equipment devices, these devices may communicate directly with each other via communication paths, such as those described above in connection with paths408,410, and412, as well as other short-range point-to-point communication paths, such as USB cables, IEEE 1394 cables, wireless paths (e.g., Bluetooth, infrared, IEEE 802.11x, etc.), or other short-range communication via wired or wireless paths. BLUETOOTH is a certification mark owned by Bluetooth SIG, INC. The user equipment devices may also communicate with each other indirectly through communications network414. System400includes content source416and media guidance data source418coupled to communications network414via communication paths420and422, respectively. Paths420and422may include any of the communication paths described above in connection with paths408,410, and412. Communications with the content source416and media guidance data source418may be exchanged over one or more communications paths, but are shown as a single path inFIG.4to avoid overcomplicating the drawing. In addition, there may be more than one of each of content source416and media guidance data source418, but only one of each is shown inFIG.4to avoid overcomplicating the drawing. (The different types of each of these sources are discussed below.) If desired, content source416and media guidance data source418may be integrated as one source device. Although communications between sources416and418and user equipment devices402,404, and406are shown as passing through communications network414, in some embodiments, sources416and418may communicate directly with user equipment devices402,404, and406via communication paths (not shown) such as those described above in connection with paths408,410, and412. Content source416may include one or more types of content distribution equipment including a television distribution facility, cable system headend, satellite distribution facility, programming sources (e.g., television broadcasters, such as NBC, ABC, HBO, etc.), intermediate distribution facilities and/or servers, Internet providers, on-demand media servers, and other content providers. NBC is a trademark owned by the National Broadcasting Company, Inc., ABC is a trademark owned by the American Broadcasting Company, Inc., and HBO is a trademark owned by the Home Box Office, Inc. Content source416may be the originator of content (e.g., a television broadcaster, a Webcast provider, etc.) or may not be the originator of content (e.g., an on-demand content provider, an Internet provider of content of broadcast programs for downloading, etc.). Content source416may include cable sources, satellite providers, on-demand providers, Internet providers, over-the-top content providers, or other providers of content. Content source416may also include a remote media server used to store different types of content (including video content selected by a user), in a location remote from any of the user equipment devices. Systems and methods for remote storage of content, and providing remotely stored content to user equipment are discussed in greater detail in connection with Ellis et al., U.S. Pat. No. 7,761,892, issued Jul. 20, 2010, which is hereby incorporated by reference herein in its entirety. Media guidance data source418may provide media guidance data, such as the media guidance data described above. Media guidance application data may be provided to the user equipment devices using any suitable approach.
In some embodiments, the guidance application may be a stand-alone interactive television program guide that receives program guide data via a data feed (e.g., a continuous feed or trickle feed). Program schedule data and other guidance data may be provided to the user equipment on a television channel sideband, using an in-band digital signal, using an out-of-band digital signal, or by any other suitable data transmission technique. Program schedule data and other media guidance data may be provided to user equipment on multiple analog or digital television channels. In some embodiments, guidance data from media guidance data source418may be provided to users' equipment using a client-server approach. For example, a user equipment device may pull media guidance data from a server, or a server may push media guidance data to a user equipment device. In some embodiments, a guidance application client residing on the user's equipment may initiate sessions with source418to obtain guidance data when needed, e.g., when the guidance data is out of date or when the user equipment device receives a request from the user to receive data. Media guidance may be provided to the user equipment with any suitable frequency (e.g., continuously, daily, a user-specified period of time, a system-specified period of time, in response to a request from user equipment, etc.). Media guidance data source418may provide user equipment devices402,404, and406the media guidance application itself or software updates for the media guidance application. Media guidance applications may be, for example, stand-alone applications implemented on user equipment devices. For example, the media guidance application may be implemented as software or a set of executable instructions which may be stored in storage308, and executed by control circuitry304of a user equipment device300. In some embodiments, media guidance applications may be client-server applications where only a client application resides on the user equipment device, and a server application resides on a remote server. For example, media guidance applications may be implemented partially as a client application on control circuitry304of user equipment device300and partially on a remote server as a server application (e.g., media guidance data source418) running on control circuitry of the remote server. When executed by control circuitry of the remote server (such as media guidance data source418), the media guidance application may instruct the control circuitry to generate the guidance application displays and transmit the generated displays to the user equipment devices. The server application may instruct the control circuitry of the media guidance data source418to transmit data for storage on the user equipment. The client application may instruct control circuitry of the receiving user equipment to generate the guidance application displays. Content and/or media guidance data delivered to user equipment devices402,404, and406may be over-the-top (OTT) content. OTT content delivery allows Internet-enabled user devices, including any user equipment device described above, to receive content that is transferred over the Internet, including any content described above, in addition to content received over cable or satellite connections. OTT content is delivered via an Internet connection provided by an Internet service provider (ISP), but a third party distributes the content. 
The ISP may not be responsible for the viewing abilities, copyrights, or redistribution of the content, and may only transfer IP packets provided by the OTT content provider. Examples of OTT content providers include YOUTUBE, NETFLIX, and HULU, which provide audio and video via IP packets. Youtube is a trademark owned by Google Inc., Netflix is a trademark owned by Netflix Inc., and Hulu is a trademark owned by Hulu, LLC. OTT content providers may additionally or alternatively provide media guidance data described above. In addition to content and/or media guidance data, providers of OTT content can distribute media guidance applications (e.g., web-based applications or cloud-based applications), or the content can be displayed by media guidance applications stored on the user equipment device. Media guidance system400is intended to illustrate a number of approaches, or network configurations, by which user equipment devices and sources of content and guidance data may communicate with each other for the purpose of accessing content and providing media guidance. The embodiments described herein may be applied in any one or a subset of these approaches, or in a system employing other approaches for delivering content and providing media guidance. The following four approaches provide specific illustrations of the generalized example ofFIG.4. In one approach, user equipment devices may communicate with each other within a home network. User equipment devices can communicate with each other directly via short-range point-to-point communication schemes described above, via indirect paths through a hub or other similar device provided on a home network, or via communications network414. Each of the multiple individuals in a single home may operate different user equipment devices on the home network. As a result, it may be desirable for various media guidance information or settings to be communicated between the different user equipment devices. For example, it may be desirable for users to maintain consistent media guidance application settings on different user equipment devices within a home network, as described in greater detail in Ellis et al., U.S. patent application Ser. No. 11/179,410, filed Jul. 11, 2005. Different types of user equipment devices in a home network may also communicate with each other to transmit content. For example, a user may transmit content from user computer equipment to a portable video player or portable music player. In a second approach, users may have multiple types of user equipment by which they access content and obtain media guidance. For example, some users may have home networks that are accessed by in-home and mobile devices. Users may control in-home devices via a media guidance application implemented on a remote device. For example, users may access an online media guidance application on a website via a personal computer at their office, or a mobile device such as a PDA or web-enabled mobile telephone. The user may set various settings (e.g., recordings, reminders, or other settings) on the online guidance application to control the user's in-home equipment. The online guide may control the user's equipment directly, or by communicating with a media guidance application on the user's in-home equipment. Various systems and methods for user equipment devices communicating, where the user equipment devices are in locations remote from each other, are discussed in, for example, Ellis et al., U.S. Pat. No. 8,046,801, issued Oct.
25, 2011, which is hereby incorporated by reference herein in its entirety. In a third approach, users of user equipment devices inside and outside a home can use their media guidance application to communicate directly with content source416to access content. Specifically, within a home, users of user television equipment402and user computer equipment404may access the media guidance application to navigate among and locate desirable content. Users may also access the media guidance application outside of the home using wireless user communications devices406to navigate among and locate desirable content. In a fourth approach, user equipment devices may operate in a cloud computing environment to access cloud services. In a cloud computing environment, various types of computing services for content sharing, storage or distribution (e.g., video sharing sites or social networking sites) are provided by a collection of network-accessible computing and storage resources, referred to as “the cloud.” For example, the cloud can include a collection of server computing devices, which may be located centrally or at distributed locations, that provide cloud-based services to various types of users and devices connected via a network such as the Internet via communications network414. These cloud resources may include one or more content sources416and one or more media guidance data sources418. In addition, or in the alternative, the remote computing sites may include other user equipment devices, such as user television equipment402, user computer equipment404, and wireless user communications device406. For example, the other user equipment devices may provide access to a stored copy of a video or a streamed video. In such embodiments, user equipment devices may operate in a peer-to-peer manner without communicating with a central server. The cloud provides access to services, such as content storage, content sharing, or social networking services, among other examples, as well as access to any content described above, for user equipment devices. Services can be provided in the cloud through cloud computing service providers, or through other providers of online services. For example, the cloud-based services can include a content storage service, a content sharing site, a social networking site, or other services via which user-sourced content is distributed for viewing by others on connected devices. These cloud-based services may allow a user equipment device to store content to the cloud and to receive content from the cloud rather than storing content locally and accessing locally-stored content. A user may use various content capture devices, such as camcorders, digital cameras with video mode, audio recorders, mobile phones, and handheld computing devices, to record content. The user can upload content to a content storage service on the cloud either directly, for example, from user computer equipment404or wireless user communications device406having a content capture feature. Alternatively, the user can first transfer the content to a user equipment device, such as user computer equipment404. The user equipment device storing the content uploads the content to the cloud using a data transmission service on communications network414. In some embodiments, the user equipment device itself is a cloud resource, and other user equipment devices can access the content directly from the user equipment device on which the user stored the content.
Cloud resources may be accessed by a user equipment device using, for example, a web browser, a media guidance application, a desktop application, a mobile application, and/or any combination of access applications of the same. The user equipment device may be a cloud client that relies on cloud computing for application delivery, or the user equipment device may have some functionality without access to cloud resources. For example, some applications running on the user equipment device may be cloud applications, i.e., applications delivered as a service over the Internet, while other applications may be stored and run on the user equipment device. In some embodiments, a user device may receive content from multiple cloud resources simultaneously. For example, a user device can stream audio from one cloud resource while downloading content from a second cloud resource. Or a user device can download content from multiple cloud resources for more efficient downloading. In some embodiments, user equipment devices can use cloud resources for processing operations such as the processing operations performed by processing circuitry described in relation toFIG.3. FIG.5is a block diagram of an illustrative system500for recommending a media asset to a user based on an interaction between the user and another user over a communications network. System500includes three user devices502,504, and506each connected to communications network414described above. Each of the user devices502,504, and506may be any of the user equipment devices300described above in relation toFIG.3. For example, each of the user devices502,504, and506may be one of user equipment devices402,404, or406described above in relation toFIG.4. User devices502,504, and506may be different types of user devices from each other. System500also includes a media asset recommendation system508, which is connected to communications network414as well. User device502is a first user device used by a first user. User device504is a first user device used by a second user. The first user and second user interact with each other over communications network414via their respective user devices502and504. For example, the users may communicate using email, an instant messaging service (e.g., GOOGLE TALK, FACEBOOK chat, etc.), or a voice connection (e.g., a phone call or voice over IP). In some embodiments, user device504is connected to user device502through a different network from communications network414or through a non-networked connection. The media asset recommendation system508receives data related to the interaction and, based on the interaction, identifies a media asset to recommend to the first user. For example, if the media asset recommendation system508determines from analyzing the interaction data that the second user mentioned the television program “Parks and Recreation” and the first user had not seen the program, media asset recommendation system508automatically generates a recommendation for “Parks and Recreation” to transmit to the first user. The recommendation may be in the form of a link to or bookmark of the media asset or to a webpage containing information about the media asset (e.g., the creation of a bookmark to the “Parks and Recreation” page on video streaming website HULU), an instruction to schedule a recording of the media asset, or placing the media asset in a queue associated with the user (e.g., placing the first season of “Parks and Recreation” in the user's NETFLIX queue). 
The media asset recommendation can be transmitted to the first user device used by the first user502or the second user device used by the first user506. For example, if the first user's first user device502is a computer, tablet computer, cell phone, or smart phone, and the first user's second user device506is a television, media asset recommendation system508may transmit an instruction to schedule a recording to television506. Additionally or alternatively, the media asset recommendation system508could update a web page that is accessible by the user with either first user device502or second user device506. Thus, the recommendation could be transmitted to first user device502, second user device506, or both user devices when the user accesses the web page. The media asset recommendation system508is described in further detail in relation toFIG.7. FIG.6is a block diagram of an illustrative system600for recommending a media asset to a user based on a conversation between the user and another person recorded by a user device. The system600includes two user devices602and606used by the first user; these may be similar to the user devices502and506described above. Both of the user devices602and606are connected to communications network414described above. The system600also includes a media asset recommendation system608. The first user device602includes a microphone604for detecting audio, and in particular, for detecting an interaction between the user610and another person612such as the user's friend or a family member. The interaction detected by microphone604is sent via communications network414to media asset recommendation system608. Media asset recommendation system608is configured to convert the audio signal into text and then analyze the text of the interaction or analyze the audio of the interaction directly. After media asset recommendation system608has analyzed the interaction to generate a recommendation, it recommends media assets to the first user using the first user device602or the second user device606as described above. A more detailed system for recommending a media asset based on a detected conversation is described in relation toFIG.8. FIG.7is a block diagram of an illustrative system700showing details of the media asset recommendation system508fromFIG.5.FIG.7also shows data flow between various elements of the system700for generating a media asset recommendation for a user based on an interaction between the user and another user over a communications network. As in system500ofFIG.5, system700includes a first user device502that is interacting with a second user device504over communications network414. System700further includes a social network server710connected to communications network414. Media asset recommendation system508includes a messaging server702, a text analytics module704, a media asset database706, a media asset recommendation module708, and a user media profile712. Messaging server702supports messaging, emailing, or chatting over communications network414. Messaging server702receives text sent by first user device502to second user device504and/or by second user device504to first user device502. This text forms an interaction between a first user of first user device502and a second user of second user device504. Messaging server702sends the text of the interaction to text analytics module704.
In some embodiments, messaging server702is not part of media asset recommendation system508, but may be a third-party system from which media asset recommendation system508receives the text of the interaction. Alternatively, first user device502may send the text of the interaction to a media asset recommendation system508that does not include a messaging server702; for example, the text of the interaction may be sent directly from first user device502to text analytics module704. Text analytics module704analyzes the received text of the interaction to identify content mentioned or discussed in the interaction. Text analytics module704is in communication with media asset database706, which contains information related to content, such as any of the guidance data mentioned above and other words or phrases, or abbreviations or shorthand forms of words or phrases (e.g., abbreviations or shorthand forms of any of the guidance data mentioned above), that could be used to identify content. Text analytics module704uses the data in media asset database706to determine a media asset, if any, mentioned or discussed in the interaction. In some embodiments, text analytics module704uses a computerized predictive model to predict the probability that the interaction relates to content. The computerized predictive model may have been trained on a set of training data, i.e., interactions for which the discussed media assets are known. In some embodiments, text analytics module704may use fuzzy matching techniques to determine the similarity between an interaction and possible content to which the interaction may relate. The fuzzy matching may compare words or phrases in the interaction to media guidance data stored in media asset database706. In some embodiments, a fuzzy matching score is computed for each media asset in the media asset database, and the media asset with the highest score is selected. In order to identify a media asset, text analytics module704may require the fuzzy matching score for the media asset to exceed a predetermined threshold level. Fuzzy matching methods are described in further detail in Melnychenko, U.S. application Ser. No. 13/537,664, filed Jun. 29, 2012. Text analytics module704may use natural language processing techniques to organize the text. Text analytics module704may filter stop words, such as articles and prepositions, from the text. In some embodiments, text analytics module704may only retain words of a certain part of speech, such as nouns and/or verbs. The remaining words may be reduced to their stem, base, or root form using any stemming algorithm. Additional processing of the text may include correcting spelling errors, identifying synonyms or related words, performing coreference resolution, and performing relationship extraction. Once the words have been processed, they may be counted and assigned word frequencies or ratios. The computerized predictive model of text analytics module704mentioned above can be trained to classify an interaction as indicative of one or more media assets based on, for example, the word count or word frequency. Because of the large amount of data and large number of potential media assets, Bayesian classifiers, such as Naïve Bayes classifiers and hierarchical Bayesian models, may be used. The text of the interaction can be viewed as a mixture of various topics, and learning the topics, their word probabilities, topics associated with each word, and topic mixtures of interactions is a problem of Bayesian inference.
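By way of illustration only, the stop-word filtering, word counting, and fuzzy matching steps described above might be sketched as follows. Nothing in this sketch is taken from a described implementation: the stop-word list, the n-gram comparison strategy, and the 0.75 threshold are assumptions, and Python's difflib.SequenceMatcher merely stands in for whichever fuzzy matching technique text analytics module704actually employs.

```python
from collections import Counter
from difflib import SequenceMatcher

# Illustrative stop-word list; a real system would use a fuller list and
# part-of-speech filtering, as described above.
STOP_WORDS = {"a", "an", "the", "of", "on", "in", "to", "is", "it", "that"}

def word_frequencies(text):
    """Filter stop words and count the remaining words."""
    tokens = [w.strip(".,!?'\"").lower() for w in text.split()]
    return Counter(t for t in tokens if t and t not in STOP_WORDS)

def identify_asset(text, asset_titles, threshold=0.75):
    """Fuzzy-match phrases of the interaction against known asset titles;
    return the best match only if its score exceeds the threshold."""
    tokens = [w.strip(".,!?'\"").lower() for w in text.split()]
    best_title, best_score = None, 0.0
    for title in asset_titles:
        n = len(title.split())
        # compare every n-word window of the interaction to the title
        for i in range(len(tokens) - n + 1):
            phrase = " ".join(tokens[i:i + n])
            score = SequenceMatcher(None, phrase, title.lower()).ratio()
            if score > best_score:
                best_title, best_score = title, score
    return best_title if best_score >= threshold else None

print(word_frequencies("The Bachelorette sounds amazing! The Bachelorette!"))
# -> Counter({'bachelorette': 2, 'sounds': 1, 'amazing': 1})

# "Parks and Rec", a shorthand form, still scores ~0.79 against the full title:
print(identify_asset("Parks and Rec was so funny last night",
                     ["Parks and Recreation", "Buffy the Vampire Slayer"]))
# -> 'Parks and Recreation'
```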
Other suitable statistical classification methods include random forests, random naïve Bayes, Averaged One-Dependence Estimators (AODE), Monte Carlo methods, concept mining methods, latent semantic indexing, k-nearest neighbor algorithms, or any other suitable multiclass classifier. The selection of the classifier can depend on the size of the training data set, the desired amount of computation, and the desired level of accuracy. When additional media assets are added to media asset database706, the computerized predictive model may be updated and/or subject to additional training. If text analytics module704identifies a media asset in an interaction, text analytics module704sends an identifier for the discussed media asset to media asset recommendation module708. Media asset recommendation module708determines whether to recommend the media asset to the user. Media asset recommendation module708may consider, for example, the number of times the media asset was identified, the number of people who discussed or mentioned the media asset, the user's interest in the media asset, the identity of the contact, the contact's attitude toward the media asset, or any other additional factor or combination of factors when determining whether to recommend the media asset to the user. For example, media asset recommendation module708may count the number of times a particular media asset was mentioned in an interaction and determine whether to recommend the media asset based on whether its number of mentions has passed a given threshold. Further, media asset recommendation module708may count the number of interactions in which a particular media asset was identified and determine whether to recommend the media asset based on whether its number of identifications has passed a given threshold. Media asset recommendation module708may additionally or alternatively count the number of contacts who mentioned a particular media asset and determine whether to recommend the media asset based on whether the number of contacts has passed a given threshold. In some embodiments, interactions with certain contacts are weighted more heavily than interactions with others. For example, users can choose to weight heavily media assets mentioned or recommended by close contacts or by contacts they know have tastes in music, movies, television programs, etc. similar to their own. Similarly, users can weight lightly or ignore media assets mentioned by or recommended by more distant contacts or contacts they know have different tastes from their own. In some embodiments, media asset recommendation module708can monitor the user's response to the recommendations and, based on the response, determine contacts whose recommendations the user is more responsive to and contacts whose recommendations the user is less interested in. In addition to identifying a media asset, text analytics module704may also analyze the text to determine the user's interest or attitude towards the identified media asset and/or the contact's interest or attitude towards the identified media asset. When analyzing interests or attitudes, text analytics module704may focus on the text near the text relating to the identified media asset. Text analytics module704may use this data to determine whether to send the media asset identifier to media asset recommendation module708, or text analytics module704may send data describing the user's and/or contact's attitude to media asset recommendation module708.
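The mention-counting thresholds and contact weights described above can be captured in a short sketch. The weight values, the threshold of 3.0, and the data shapes are assumptions chosen for illustration; media asset recommendation module708may combine many more factors.

```python
from collections import defaultdict

def assets_to_recommend(identifications, contact_weights, threshold=3.0):
    """identifications: one (asset_id, contact) pair per time an asset was
    identified in an interaction. Contacts absent from contact_weights get a
    neutral weight of 1.0; ignored contacts can be given a weight of 0."""
    weighted_counts = defaultdict(float)
    for asset_id, contact in identifications:
        weighted_counts[asset_id] += contact_weights.get(contact, 1.0)
    return [asset for asset, count in weighted_counts.items()
            if count >= threshold]

ids = [("jets_v_patriots", "alice"), ("jets_v_patriots", "bob"),
       ("jets_v_patriots", "alice"), ("buffy_s6", "carol")]
# alice is a favored contact; carol is an ignored contact:
print(assets_to_recommend(ids, {"alice": 1.5, "carol": 0.0}))
# -> ['jets_v_patriots']  (weighted count 1.5 + 1.0 + 1.5 = 4.0 >= 3.0)
```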
When text analytics module704sends attitude data rather than, or in addition to, the media asset identifier, media asset recommendation module708can determine whether to recommend a media asset based on the user's and/or contact's attitude or level of interest. Media asset recommendation module708may assign a level of interest and compare it to an upper threshold above which the media asset is recommended or a lower threshold below which the media asset will not be recommended. For example, if the text from the user in the interaction indicates that the user is highly interested in the media asset (e.g., “The Bachelorette sounds amazing! I'll definitely check it out this weekend.”), media asset recommendation module708may decide to recommend the media asset to the user. Alternatively, if the text from the user indicates that the user has a negative attitude about the media asset (e.g., “The Bachelorette sounds terrible! I can't believe you watch that garbage.”), media asset recommendation module708may decide not to recommend the media asset to the user. In some embodiments, media asset recommendation module708may allow a user's statements to outweigh or override a contact's recommendation. For example, if the contact speaks highly of the media asset (e.g., the contact says “I can't come to your birthday party—The Bachelorette is on then, and I simply cannot miss it.”), and the user responds negatively (e.g., “That is a lousy excuse! The Bachelorette sounds terrible.”), media asset recommendation module708may refrain from recommending the media asset. In other embodiments, media asset recommendation module708may weigh the user's statements and the user's contact's statements equally, or media asset recommendation module708may put greater weight on the user's contact's statements. In some embodiments, media asset recommendation module708may determine an aggregate level of interest in a media asset that is based on the levels of interest of the user and/or one or more contacts towards the media asset. Media asset recommendation module708may then determine whether to recommend the media asset based on the aggregate level of interest. For example, media asset recommendation module708may compare the aggregate level of interest to one or more thresholds to determine whether to recommend the media asset, not recommend the media asset, or await further information before deciding whether to recommend the media asset. In some embodiments, media asset recommendation module708determines whether to recommend a media asset based on a level of interest similarity between the user and a contact who mentioned the media asset. Media asset recommendation module708may access data from a social network server710that stores information related to interests of the user and the contact. Methods for identifying interests of a user of a social network based on data stored on a social network server are described in detail in Alcala, U.S. patent application Ser. No. 13/453,506, filed Apr. 23, 2012. Media asset recommendation module708compares the interests of the user to the interests of the contact to determine a level of similarity or correlation, since users with more similar interests may be more likely to enjoy similar media content. In some embodiments, media asset recommendation module708may only consider interests in media or certain types of media. Alternatively, media asset recommendation module708may weight more heavily the level of similarity of interest of media assets than the level of similarity of other types of interests.
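A minimal sketch of the aggregate-interest decision described above, under assumed conventions: interest levels are scores in [-1, 1], the user's own statements carry extra weight (reflecting the override behavior described above), and the two thresholds are invented for illustration.

```python
def interest_decision(user_interest, contact_interests, user_weight=2.0,
                      lower=-0.25, upper=0.5):
    """Combine per-speaker interest levels (each in [-1, 1]) into a single
    aggregate and compare it to the upper and lower thresholds."""
    total = user_interest * user_weight + sum(contact_interests)
    aggregate = total / (user_weight + len(contact_interests))
    if aggregate >= upper:
        return "recommend"
    if aggregate <= lower:
        return "do not recommend"
    return "await further information"

# A strongly negative user reaction (-0.9) outweighs a contact's glowing
# mention (+0.9): aggregate = (-1.8 + 0.9) / 3 = -0.3
print(interest_decision(-0.9, [0.9]))  # -> 'do not recommend'
```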
Alternatively, media asset recommendation module708may weight more heavily the level of similarity of interest in certain types of media assets. In some embodiments, social network server710may be part of media asset recommendation system508. In other embodiments, media asset recommendation module708may receive interest data from another type of source or multiple sources. In some embodiments, media asset recommendation module708may identify one or more candidate media assets that are likely to be mentioned in an interaction or that the user may likely be interested in. For example, media asset recommendation module708may determine one or more candidate media assets based on interests in the user's and/or contact's social network profile or information within a user profile, such as identifiers of previously accessed media assets or demographic information. In particular, media asset recommendation module708may identify, as candidate media assets, media assets that the user and/or the contact has indicated an interest in or has watched previously. Additionally or alternatively, media asset recommendation module708may identify candidate media assets that are similar to the media assets that the user and/or contact has indicated an interest in or has watched previously (e.g., the candidate media asset has an actor, director, or genre in common with a previously viewed media asset). Additionally or alternatively, media asset recommendation module708may identify candidate media assets that are popular among the demographic of the user and/or the contact. Media asset recommendation module708may sort or rank the identified media assets based on, for example, how recently they have been accessed or how recently they have been made available. In some embodiments, a list of candidate media assets is determined by a different module and accessed by media asset recommendation module708. Once the candidate media assets have been determined by the other module or by media asset recommendation module708, text analytics module704may access the list of candidate media assets. Text analytics module704may then access information related to the candidate media assets from media asset database706and determine a media asset mentioned or discussed in an interaction using only the information related to the candidate media assets based on any of the techniques described above. In some embodiments, text analytics module704uses the list of candidate media assets instead of media asset database706for determining a media asset mentioned or discussed in the interaction. In other embodiments, text analytics module704may first use the list of candidate media assets for determining a media asset mentioned or discussed in the interaction, and, if none of the candidate media assets were identified, text analytics module704may then use the full media asset database706for determining a media asset mentioned or discussed in the interaction. Either of these approaches may reduce the processing required by reducing the amount of information against which the text of the interaction is compared. Once media asset recommendation module708has decided to recommend the media asset to the user, it sends an identifier of the media asset to user media profile712.
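Before turning to user media profile712, the candidate-first lookup with fallback described above fits in a few lines. Here identify_asset is any matcher like the hypothetical one sketched earlier, i.e., a function taking interaction text and a collection of titles and returning a best match or None.

```python
def identify_with_fallback(text, candidate_titles, full_database, identify_asset):
    """Try the short candidate list first; consult the full media asset
    database only when no candidate matches. Either way, the comparison
    set is kept as small as possible."""
    return (identify_asset(text, candidate_titles)
            or identify_asset(text, full_database))
```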
User media profile712may be associated with, for example, a television subscription, a DVR, an Internet television service, an over-the-top subscription service, a website, a streaming media service, a DVD-by-mail service, or any other means through which a user can access media and/or media recommendations. Sending the identifier of the media asset to user media profile712may result in, for example, the identified media asset being added to a list or queue of media assets to be transmitted to the user, the identified media asset being added to a list of media assets recommended to the user, a bookmark to the identified media asset being created, a link to the identified media asset being created, or a recording of the identified media asset being scheduled on a device associated with the user. The user may be able to access his user media profile712and/or a list of recommendations associated with the user media profile712using the first user device502or another user device, such as the first user's second user device506ofFIG.5. The user may access user media profile712over communications network414. In some embodiments, user media profile712is not part of media asset recommendation system508as shown. In such embodiments, user media profile712may be stored on social network server710or on a separate server. FIG.8is a block diagram of an illustrative system800showing details of the media asset recommendation system608fromFIG.6.FIG.8also shows data flow between various elements of the system800for generating a media asset recommendation for a user based on a conversation between the user and another person recorded by a user device, as shown inFIG.6. As in system600ofFIG.6, system800includes a user device602having a microphone604and connected to communications network414. Microphone604records a conversation between the user and another person (not shown). Microphone604can begin recording the conversation when a given sound threshold is detected, when the user's voice is identified, when the user indicates that the conversation should be recorded, or according to other conditions. Alternatively, microphone604can be configured to continuously record its environment. System800further includes a social network server810connected to communications network414; social network server810is similar to social network server710described above. Media asset recommendation system608includes an audio to text module814, a voice recognition module816, a text analytics module804, a media asset database806, a media asset recommendation module808, and a user media profile812. Audio to text module814receives an audio signal from the microphone604. Audio to text module814may receive the audio signal over communications network414, or in other embodiments, audio to text module814may be part of user device602. Audio to text module814processes the received audio signal to convert it to text using any known speech recognition process. Audio to text module814sends the text of the interaction to text analytics module804. Text analytics module804is similar to text analytics module704, described above. Text analytics module804receives data from media asset database806, which is similar to media asset database706, described above. Text analytics module804determines a media asset discussed in the conversation based on the text of the conversation received from audio to text module814. Text analytics module804communicates an identifier of the identified media asset to media asset recommendation module808.
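The profile actions listed above (queueing, recommending, bookmarking, scheduling a recording) amount to a small dispatch. The sketch below assumes a plain-dictionary profile and invented action names; a real user media profile712would sit behind a subscription service or DVR interface.

```python
def apply_recommendation(profile, asset_id, action):
    """Record a recommended asset in the user's media profile according to
    the requested action."""
    if action == "queue":
        profile.setdefault("queue", []).append(asset_id)
    elif action == "recommendations":
        profile.setdefault("recommendations", []).append(asset_id)
    elif action == "bookmark":
        profile.setdefault("bookmarks", []).append(asset_id)
    elif action == "record":
        profile.setdefault("scheduled_recordings", []).append(asset_id)
    else:
        raise ValueError(f"unknown action: {action}")

profile = {}
apply_recommendation(profile, "buffy_s6", "queue")
print(profile)  # -> {'queue': ['buffy_s6']}
```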
In some embodiments, rather than using text analytics module804to analyze the interaction, an audio analytics module analyzes the audio signal directly to identify discussed media assets without converting the audio signal to text. In such embodiments, there may be no audio to text module814. In some embodiments, the audio signal of the interaction is analyzed to determine a tone of the user, the contact, or the interaction relating to the media asset to gauge the user's and/or contact's interest in the media asset. Media asset recommendation module808is similar to media asset recommendation module708described above and may use similar factors in determining whether to recommend a particular media asset to the user. Media asset recommendation module808may receive social network data indicative of user interests from social network server810, which is similar to social network server710, described above. A recommendation generated by media asset recommendation module808is communicated to user media profile812, which is similar to user media profile712, except as shown inFIG.8, user media profile812is outside of media asset recommendation system608. In other embodiments, user media profile812may be stored on social network server810or in media asset recommendation system608. The form of the recommendation can be similar to the recommendations described above. Audio to text module814may transmit the audio signal to voice recognition module816to perform voice recognition. In other embodiments, voice recognition module816may receive the audio signal from user device602via communications network414, or voice recognition module816may be part of user device602. Voice recognition module816identifies the user's voice and determines the identity of the other person or people with whom the user is speaking. Voice recognition module816may be trained based on previously recorded interactions in which the contact was identified by the user, or may be trained on spoken conversations over communications network414in which the other speaker, using another user device, can be identified. For example, voice recognition module816may be trained to identify a user's contacts based on their voices using recorded calls between cellular phones or voice over IP (VoIP) calls. If the interaction occurs over communications network414, rather than between two people in the same location, the one or more other speakers can be identified based on IP address, telephone number, contact information stored on user device602, or other identifying information, rather than by voice recognition module816. A user can arrange to receive automated recommendations by, for example, downloading an application or software module onto a user device300(e.g., user device502or602) that allows interactions taking place using the user device300or near the user device300to be monitored. The application or software module may include a setup process for receiving instructions from the user indicating how the user would like to receive recommendations and how to determine when to supply a recommendation. Exemplary setup options are shown inFIGS.9through12. FIG.9is an illustrative display screen900of a user device300showing selectable automated recommendation setup options providing actions to take when recommending a program. The display screen900asks the user what type of action should be taken when the user is engaged in a conversation that discusses a media asset that the user may be interested in consuming.
The user device300may allow the user to select from a list of options902including Set a Reminder, Record the Program, Add it to my Queue, Bookmark the Program, Send me a Link, and Add it to a List of Recommendations. As shown inFIG.9, the user has selected the “Add it to my Queue” option. Some of the recommendation options may request additional information, e.g., where to send a link, how to configure the reminder, where in the queue to place the media asset, etc., in subsequent display screens or overlays. In some embodiments, the user may select multiple actions to take when recommending a program. After the user has selected the desired actions, he selects the “Next” button904, which presents another setup option or finalizes the settings. FIG.10is an illustrative display screen1000of user device300showing a selectable automated recommendation setup option for providing a recommendation threshold. User device300may present display screen1000after receiving a selection of the “Next” button in display screen900. The display screen1000asks the user for a recommendation threshold, i.e., the number of times the media asset recommendation system508or608should identify a media asset in a user's interactions before recommending it. The recommendation threshold may be defined as the number of times that the media asset was identified within one or more interactions. For example, over two interactions, the media asset recommendation system508or608may detect the phrase “Parks and Recreation” or “Parks and Rec” (a common shorthand name for the show “Parks and Recreation”) used eight times; thus, the media asset recommendation system508or608identified the media asset eight times, and this can be compared to the threshold input by the user. Media asset recommendation system508or608may check for a set distance between mentions in order to consider the mentions part of two different detections; a sketch of this grouping rule appears below. The distance may be, for example, a number of lines of text, a number of words, or a length of time. Alternatively, in other embodiments, the number of interactions in which the media asset was identified is monitored. For example, the media asset recommendation system508or608may identify three different interactions (e.g., interactions with different people, or distinct interactions including one or more of the same person) in which the upcoming football game between the NEW YORK JETS and the NEW ENGLAND PATRIOTS is mentioned. The media asset recommendation system508or608may determine that an interaction discusses the JETS v. PATRIOTS game based on, for example, detection of some combination of “New York”, “New England”, “Jets”, “Patriots”, “Pats” (a common abbreviation for “Patriots”), Sanchez (a player for the NEW YORK JETS), Brady (a player for the NEW ENGLAND PATRIOTS), etc. The display screen1000can be customized based on a user's previous response to setup options. For example, since the user had previously indicated inFIG.9that he wants a recommended program to be added to his queue, the user device300asks “In general, how many times should you discuss the program before I add it to your queue?” FIG.11is an illustrative display screen1100showing a selectable automated recommendation setup option for providing preferred contacts for making recommendations. As discussed above in relation toFIG.7, a user may choose to weight heavily the mention of media assets by certain contacts, such as close contacts or contacts they know have tastes in music, movies, television programs, etc. similar to their own.
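The grouping rule referenced above, by which nearby mentions collapse into a single detection, might look like the following; the 50-word gap and the word-offset representation are illustrative assumptions (the distance could equally be lines of text or elapsed time).

```python
def count_detections(mention_positions, min_gap=50):
    """mention_positions: word offsets at which the asset was mentioned.
    Mentions closer than min_gap words to the previous mention are folded
    into the same detection; returns the number of distinct detections."""
    detections, last = 0, None
    for pos in sorted(mention_positions):
        if last is None or pos - last >= min_gap:
            detections += 1
        last = pos
    return detections

# Eight mentions, but only three clusters separated by at least 50 words:
print(count_detections([10, 12, 15, 120, 125, 130, 400, 410]))  # -> 3
```

The resulting count would then be compared against the user-supplied threshold from display screen1000.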
FIG.11is an illustrative display screen1100showing a selectable automated recommendation setup option for providing preferred contacts for making recommendations. As discussed above in relation toFIG.7, a user may choose to weight heavily the mention of media assets by certain contacts, such as close contacts or contacts the user knows to have similar tastes in music, movies, television programs, etc. To obtain this information, user device300displays screen1100asking the user if there are any contacts it should favor when making recommendations. Favored contacts1102are displayed and can be removed from the favored contacts list by selecting the “X” next to their names. User device300can receive additional contacts via the user's selection of a contact in drop down menu1104. The contacts available in drop down menu1104can be populated by, for example, social network contacts, mobile device contacts, email contacts, or messaging contacts. When user device300receives a selection of a contact in drop down menu1104, it can display that contact in the list of favored contacts1102and allow the user to select a new contact using drop down menu1104. Once the user has finished selecting favored contacts, he selects the “Next” button1106.

Upon receiving a selection of “Next” button1106, user device300may present display screen1200ofFIG.12. Display screen1200includes a selectable automated recommendation setup option for providing contacts to ignore when making recommendations. As discussed above in relation toFIG.7, users can weight lightly or ignore certain contacts, such as more distant contacts or contacts they know have different tastes from them. To obtain this information, user device300asks the user whether there are any contacts it should ignore when making recommendations. Ignored contacts1202are displayed and can be removed from the ignored contacts list by selecting the “X” next to their names. User device300can receive additional contacts via the user's selection of a contact in drop down menu1204. The contacts available in drop down menu1204can be populated by, for example, social network contacts, mobile device contacts, email contacts, or messaging contacts. When user device300receives a selection of a contact in drop down menu1204, it can display that contact in the list of ignored contacts1202and allow the user to select a new contact using drop down menu1204. Once the user has finished selecting contacts to ignore, he selects the “Next” button1206to complete automated recommendation setup.
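One hypothetical way to apply the favored and ignored contact lists collected inFIGS.11and12is to weight each contact's mentions before they are counted toward the recommendation threshold. The weights and helper function below are illustrative assumptions, not the disclosed implementation:

    # Hypothetical weighting of a contact's mentions using the favored and
    # ignored lists gathered during setup (FIGS. 11 and 12).
    FAVORED_WEIGHT = 2.0   # count a favored contact's mention double
    DEFAULT_WEIGHT = 1.0

    def mention_weight(contact, favored, ignored):
        if contact in ignored:
            return 0.0     # mentions by ignored contacts are discarded
        if contact in favored:
            return FAVORED_WEIGHT
        return DEFAULT_WEIGHT

    # Example: a weighted detection score toward the threshold could be
    # score = sum(mention_weight(c, favored, ignored) for c in mentioning_contacts)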
FIG.13is an illustrative display screen showing an automated recommendation generated by media asset recommendation system508or608. As shown, media asset recommendation system508or608has identified from the user's interaction that the user may be interested in watching the live football game between the NEW YORK JETS and the NEW ENGLAND PATRIOTS scheduled for Sunday, October 21 at 4:25 pm. For example, microphone604of user device602may have recorded the user and a friend placing bets on the JETS v. PATRIOTS game (e.g., the friend says “I bet you $50 that Sanchez will throw for more yards than Brady this Sunday” followed by the user saying “You're on!”). Media asset recommendation system608will identify that the user is interested in the JETS v. PATRIOTS game, and user device602, upon receiving identifying information for this media asset from media asset recommendation system608, or upon accessing the recommendation from user media profile812, asks the user whether it should take any of the options1302available for a live-broadcast program: set a reminder, record the program, bookmark the program, and/or send the user a link to the program. Additional or alternative options available for live content may be included in options1302. The user can select one or more of these options and, after he has completed his selection(s), select the “Done” button1304. User device602then processes the selection(s) and takes actions as required to implement the user's selected action(s). Alternatively, if the user is not interested in watching the JETS v. PATRIOTS game or does not want any of the actions1302to be taken, the user may select the “No, thanks” button1306. In that case, user device602would not take any of the actions1302.

FIG.14is an illustrative display screen showing a second automated recommendation generated by media asset recommendation system508or608. As shown, media asset recommendation system508or608has identified from the user's interaction that the user may be interested in watching BUFFY THE VAMPIRE SLAYER, a television program which is no longer regularly broadcast but is available in non-broadcast forms, such as through over-the-top content providers, on-demand providers, Internet providers, or streaming media providers. For example, a user of user device502may have a conversation with a friend using user device504over an instant messaging service, such as GOOGLE TALK. The following conversation is analyzed by the media asset recommendation system:

Friend: Season 6 of Buffy is getting really exciting

User: I don't think that I made it that far—I stopped watching after season 5

Friend: You missed out on the best part of the series!

User: I should really go back and watch it one of these days.

Based on this interaction, media asset recommendation system508will identify that the user is interested in watching season 6 of BUFFY THE VAMPIRE SLAYER and add this as a recommendation. Upon receiving information identifying BUFFY THE VAMPIRE SLAYER Season 6 from media asset recommendation system508, or upon accessing user media profile712, user device502or506asks the user whether it should take any of the options1402available for an on-demand program: add the first season of the program to the user's queue, bookmark the program, or send the user a link to the program. Additional or alternative options available for on-demand content may be included in options1402. The user can select one or more of these options and, after he has completed his selection(s), select the “Done” button1404. User device300then processes the selection(s) and takes actions as required to implement the user's selected action(s). Alternatively, if the user is not interested in watching BUFFY THE VAMPIRE SLAYER or does not want any of the actions1402to be taken, the user may select the “No, thanks” button1406. In that case, user device502or506would not take any of the actions1402.
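As an illustration of how an exchange like the one above might be mapped to a specific media asset, the sketch below matches a hypothetical alias table against a message and extracts a season number when present. The alias table, regular expression, and function name are assumptions for illustration:

    import re

    # Hypothetical sketch: spot a series title and season number in a
    # message such as "Season 6 of Buffy is getting really exciting".
    SERIES_ALIASES = {"buffy": "BUFFY THE VAMPIRE SLAYER"}

    def identify_series_mention(message):
        text = message.lower()
        for alias, canonical in SERIES_ALIASES.items():
            if alias in text:
                season = re.search(r"season\s+(\d+)", text)
                return canonical, int(season.group(1)) if season else None
        return None

    # identify_series_mention("Season 6 of Buffy is getting really exciting")
    # returns ("BUFFY THE VAMPIRE SLAYER", 6)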
FIG.15shows an illustrative process for automatically generating a media asset recommendation based on a user's interaction. First, media asset recommendation system508or608processes verbal data received during an interaction, e.g., the audio signal recorded during a user's interaction, as described in relation toFIGS.6and8(step1502). Media asset recommendation system608, upon receiving the audio of a user's interaction, may convert the audio to text, as described in relation toFIG.8. Media asset recommendation system508or608analyzes the verbal interaction (step1504). Based on the analysis, media asset recommendation system508or608identifies a media asset that was referred to during the interaction (step1506). The identified media asset is added to a list of media assets associated with the user, such as a list of recommendations in user media profile712or812(step1508). Media asset recommendation system508or608or user media profile712or812then transmits data identifying at least one media asset in the list of recommended media assets to a user device associated with the user (step1510).

FIG.16shows an illustrative process for automatically generating media asset recommendations based on a user's interaction and various user inputs and interests. First, media asset recommendation system508or608receives the text of a user's interaction, as described in relation toFIGS.5and7, or the audio of a user's interaction, as described in relation toFIGS.6and8(step1602). Based on the received data, media asset recommendation system508or608identifies one or more of the user's contacts involved in the interaction, as described in relation toFIG.7or8(step1604). Based on the identity of the contact, media asset recommendation system508or608determines whether the media assets mentioned by the contact should be ignored, as described, for example, in relation toFIG.12(decision1606). If the contact is to be ignored, media asset recommendation system508or608continues monitoring the user's interactions or receiving data from monitored conversations. If the contact is not to be ignored, media asset recommendation system508or608accesses the interests of the user and the contact, for example from social network server710or810(step1608), and determines a level of interest similarity based on the identified interests (step1610), as described in relation toFIG.7. Media asset recommendation system508or608also compares the data from the interaction (text or audio) to data from a media asset database, e.g., by using a computerized predictive model trained with data from the media asset database, as described in relation toFIG.7(step1612). Based on this comparison, media asset recommendation system508or608identifies a media asset that was mentioned in the interaction (step1614). Media asset recommendation system508or608analyzes the text or audio of the interaction to determine the user's interest in the media asset (step1616). Based on this analysis, media asset recommendation system508or608determines whether the user has low or no interest in the media asset (decision1618). If the user has little or no interest in the media asset, media asset recommendation system508or608continues monitoring the user's interactions or receiving data from monitored conversations. Otherwise, media asset recommendation system508or608determines whether the user has a high interest in the media asset (decision1620). If the user has a high interest in the media asset, media asset recommendation system508or608may immediately recommend the media asset. To do this, media asset recommendation system508or608accesses the user's recommendation preferences, received as described in relation toFIG.9(step1624). Based on the user's recommendation preferences, media asset recommendation system508or608takes the desired action for recommending the media asset (step1626). If the user does not have a high interest in the program (e.g., the user has a moderate interest in the program, or the level of interest was not determined), media asset recommendation system508or608determines whether to recommend the program based on the interest similarity determined in step1610and the previous identifications of the media asset (decision1622). For example, media asset recommendation system508or608may compare the number of times the media asset was identified to a threshold received from the user as described in relation toFIG.10. If media asset recommendation system508or608determines in decision1622to recommend the media asset, it accesses the user's recommendation preferences and takes the desired action for recommending the media asset based on those preferences (steps1624and1626). Otherwise, media asset recommendation system508or608continues monitoring the user's interactions or receiving data from monitored conversations.
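The decision logic ofFIG.16can be summarized in hypothetical, condensed form as follows. The inputs are assumed to have been produced by the identification and analysis steps described above, and the weighting of the detection count by interest similarity is one possible reading of decision1622, not the disclosed implementation:

    # Condensed, illustrative rendering of decisions 1606-1622 of FIG. 16.
    def decide_recommendation(contact_ignored, interest, detections,
                              similarity, threshold):
        if contact_ignored:                        # decision 1606
            return "keep monitoring"
        if interest == "low":                      # decision 1618
            return "keep monitoring"
        if interest == "high":                     # decision 1620
            return "recommend now"                 # steps 1624-1626
        # Moderate or undetermined interest: weight the detection count
        # by interest similarity before comparing to the user's threshold
        # (an assumed interpretation of decision 1622).
        if detections * similarity >= threshold:   # decision 1622
            return "recommend"
        return "keep monitoring"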
It should be understood that the above steps of the flow diagrams ofFIGS.15and16may be executed or performed in any order or sequence and are not limited to the order and sequence shown and described in the figures. Also, some of the above steps of the flow diagrams ofFIGS.15and16may be executed or performed substantially simultaneously, where appropriate, or in parallel to reduce latency and processing times.

The above-described embodiments of the present disclosure are presented for purposes of illustration and not of limitation, and the present disclosure is limited only by the claims which follow.